Interview with Olivier DUCHENNE & Sophie EOM
Conducted by Jooyong JUN
C&S: Please make a summary presentation about Solidware.
Olivier DUCHENNE & Sophie EOM: We (Solidware) build machine learning-based predictive models for finance companies. They need predictive models because their business involves many kinds of risk: in underwriting, product offering, and customer retention, to name just a few.
Many financial companies already hold enormous amounts of valuable data. Their risk management systems, however, have mostly been based on simple models plagued with human assumptions and biases, which hinders their ability to fully utilize that data. As a result, financial firms fail to maximize their revenues while consumers pay more for loans and insurance premiums.
With our machine learning-based data analytics solution, DAVinCI LABS, we analyze clients' data and find the best combination of different machine learning algorithms, such as deep learning, to generate the most accurate risk predictions possible without wasting any information in the data. In short, we help find and minimize risks, and eventually generate significant additional value.
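The idea of combining several algorithms into one risk model can be sketched as follows. This is an illustrative toy example using scikit-learn on synthetic data, not Solidware's actual DAVinCI LABS pipeline; the models chosen and the soft-voting combination are assumptions for the sake of illustration.

```python
# Sketch: combine several learning algorithms into one risk model,
# so no single model's assumptions dominate the prediction.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a client's labelled risk data (e.g. default = 1).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A soft-voting ensemble averages each member's predicted probabilities.
ensemble = VotingClassifier(
    estimators=[
        ("logit", LogisticRegression(max_iter=1000)),           # simple baseline
        ("forest", RandomForestClassifier(random_state=0)),     # tree ensemble
        ("nnet", MLPClassifier(max_iter=500, random_state=0)),  # neural-network stand-in
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print(f"held-out accuracy: {ensemble.score(X_test, y_test):.2f}")
```

In practice, which combination works best depends entirely on the client's dataset, which is why the search over combinations is the core of the work.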
Much has been said about machine learning (ML) in FinTech, but few success stories have been observed so far. Which area of finance do you think is best suited for machine learning (e.g. credit information discovery, robo-advisors, failure or default prediction), and which is not?
(Sophie) At this point, credit scoring is best suited for ML. Robo-advisors do not necessarily work well because, more often than not, the amount of data is insufficient. It is also far harder to predict the market.
How far can ML replace the implicit knowledge that banks have and use in relational banking? What do you think the role of human traders, investors, and analysts will be, if any, once ML is widespread in finance?
(Sophie) Humans will be able to spend more time building better "strategies" based on the insights extracted through machine learning, rather than spending time extracting those insights themselves.
If every financial firm and/or investor uses FinTech services based on similar ML algorithms, would it not homogenize the financial system and increase systemic risk?
(Sophie) I want to emphasize the "no free lunch" theorem: there is no single algorithm that works best in all cases. Moreover, different datasets will give different results.
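A toy illustration of this point: the same two algorithms rank differently on two different datasets. This is a deliberately simple sketch (a linear model versus a decision tree, on synthetic data), not a statement about any specific production system.

```python
# "No free lunch" in miniature: which algorithm wins depends on the data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Dataset A: linearly separable labels -> a linear model is a natural fit.
X_lin = rng.normal(size=(400, 2))
y_lin = (X_lin[:, 0] + X_lin[:, 1] > 0).astype(int)

# Dataset B: XOR-style interaction -> no linear boundary can separate it.
X_xor = rng.normal(size=(400, 2))
y_xor = ((X_xor[:, 0] > 0) != (X_xor[:, 1] > 0)).astype(int)

for name, X, y in [("linear", X_lin, y_lin), ("xor", X_xor, y_xor)]:
    logit = LogisticRegression().fit(X, y)
    tree = DecisionTreeClassifier(random_state=0).fit(X, y)
    print(name, f"logit={logit.score(X, y):.2f}", f"tree={tree.score(X, y):.2f}")
```

On the linear data both models do well; on the XOR data the logistic regression collapses to roughly chance accuracy while the tree fits it easily. Swap the datasets and the ranking of "best algorithm" swaps with them.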
From your experience, what do you think would be the value of social networks' data or publicly available data, which are claimed to be used by many FinTech lending firms for the credit evaluation of an individual at this point? What do you think it would be like in the future? Is there any difference between individuals and firms?
(Sophie) Social Network Service (SNS) data is biased, incomplete, and usually difficult to match with the target variables, such as default or fraud probability, that financial companies are interested in. In my opinion, it is not really worth spending time and resources on SNS data for our business at this point.
If most financial market participants use ML, decision-making may become more homogeneous. Would that not increase the probability of systemic risks such as a bank run?
(Olivier) I think that if everyone had the same data, adopting ML would increase the correlation of decisions. However, individuals, companies, and organizations already hold huge amounts of private data and will collect more in the future. As we said before, machine learning applied to different datasets will not lead to homogeneous behavior among financial market participants.
Many economists are still reluctant to adopt ML and big data because ML finds correlations in big data but does not identify causality, which is important for policymaking. In your opinion, how can we use ML for policymaking decisions?
(Olivier) Well, I am not very familiar with the application of ML to social science and policymaking. In my opinion, in many cases the data may not be "big" enough to justify the use of ML. Those applications require supervised machine learning, which implies that you need to already know which option is "more correct" than the others. Still, ML may be useful if we know the underlying mechanism, the policymaking decision is very specific (e.g. fine-tuning a sales tax rate or an interest rate), and sufficiently big data exist for the process.
What are the barriers which may handicap the ML usage in banks and financial companies (potential impacts on the employment, regulation, etc.)?
(Sophie) I think one of the barriers is that banks and financial companies have to "explain" the results of their predictions to their customers and to regulatory authorities. For example, if a bank refuses to grant a loan to a certain individual, that individual will ask the bank why, and if the bank does not provide a clear answer, the customer may file a complaint with the government, which would be big trouble for the bank. However, ML is like a black box, and it is difficult to explain the logic behind its predictions.
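One common way to attach "reasons" to a model's decision is to use a linear scorecard, where each feature's contribution to an applicant's risk score is its coefficient times the applicant's standardised value. The sketch below illustrates this on synthetic data; the feature names and the data-generating assumptions are invented for illustration, and real explainability tooling (e.g. attribution methods for non-linear models) goes well beyond this.

```python
# Sketch: per-feature "reason codes" from a linear credit-risk model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "late_payments", "account_age"]  # invented names

# Synthetic applicants: default risk driven by debt ratio and late payments.
X = rng.normal(size=(1000, 4))
y = ((X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=1000)) > 0).astype(int)

X_std = StandardScaler().fit_transform(X)
model = LogisticRegression().fit(X_std, y)

# Each feature's contribution to one applicant's risk score (log-odds),
# sorted by magnitude: the top entries are the "reasons" for the decision.
applicant = X_std[0]
contrib = model.coef_[0] * applicant
for name, c in sorted(zip(features, contrib), key=lambda t: -abs(t[1])):
    print(f"{name:>14}: {c:+.2f}")
```

This transparency is exactly what more accurate black-box models tend to sacrifice, which is the trade-off described above.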
What is the difference between desired objectives in computer science and in finance when it comes to applying machine learning?
(Sophie) In computer science, it is about finding an algorithm that beats the current state-of-the-art, world-best one. In finance, on the other hand, it is about finding an explainable algorithm that beats the incumbent models while running fast enough.
You had a choice to start your business either in France or in Korea. What factors affected your decision making?
(Olivier) First, compared with France and other EU countries, starting a company in Korea requires less complicated processes. Taxes and other regulatory burdens are also lighter in Korea. Second, compared with the US and the EU, compensation costs for engineers in Korea are lower. Third, the level of competition in the Korean market is also lower.
What are the points that you emphasize in managing a company, like Solidware, with people from different countries (France, Korea, Russia, and Sweden, in alphabetical order) and cultures?
(Sophie) The common language must always be English (this is very important). All official documents and all discussions are in English. We also emphasize task-based management, with no hierarchy. (Olivier) Some of our engineers are not very comfortable speaking English, but they can communicate well in writing.
Do you want to stay specialized in the financial sector? For example, is there any room in your vision for a business model in which you would license your ML technology to banks and insurers? Who is your most relevant competitor?
(Sophie) We're trying to be more vertically integrated in the financial sector. That is, we are trying to adapt our solution to the specific forms of datasets that financial companies often use. Potential competitors may be the financial companies that are our clients now, if they try to build their own machine learning systems.
How do you see the future of your company?
(Sophie) Bright! (Olivier) First, the market for ML applications in finance is growing fast in Korea, and so is our revenue. Second, big names in ML such as IBM and Palantir are in the market, but their performance here is not on par with their reputation; we can cover our customer companies' more tailored and specific needs. As Sophie mentioned, potential competitors may be financial or credit information companies that want to hold both data and technology in-house. Personally, I think it will take them quite a long time to have both.
Interview with Jean-Hervé LORENZI
Chairman of the Pole of Competitiveness "Finance Innovation"
Conducted by Yves GASSOT, CEO of IDATE DigiWorld, and Maximilien NAYARADOU, Director of R&D projects,
Pôle de Compétitivité Mondial Finance Innovation
DW Economic Journal: Finance vs. FinTech, where are we seeing innovation today?
Jean-Hervé LORENZI: Without a doubt, it is new entrants in what is commonly referred to as FinTech that are driving innovation today. FinTech start-ups are the myriad micro, small and medium businesses that combine information technologies and finance, and are coming to disrupt a sector that has long been protected by regulation. FinTech innovations range from new payment systems that make it possible to decrease the cost of digital transactions, to new financing platforms: crowdfunding of course, but also seed capital, stocks, robo-advisors that digitise financial consulting, blockchains that lower the cost of certifying transactions by decentralising the process, and of course the plethora of digital services that FinTech companies are ushering in: account aggregation, generating coupons based on individual shopping habits, etc. Added to this are the myriad possibilities opened up by big data: extra-financial analysis that, at last, selects financially-relevant variables and helps expand the range of enterprises that have access to financing.
But it should also be said that, even if FinTechs have the momentum on their side, traditional finance industry players are also innovating, albeit at a slower pace: the pace of private bureaucracies and as a defensive measure, but innovating nonetheless. Regulatory pressure is stepping up, and it seems safe to predict that the rate of innovation inside banks and insurance companies will accelerate… naturally with the help of FinTechs.
The Finance Innovation competition cluster, of which I am the president, is at the very heart of these changes in the financial sector. Twice a year, we give our seal of approval to 50 innovative projects, most of which are FinTech projects, and so coming from micro, small and medium businesses, but we also give our seal to projects from the sector's larger enterprises that are seeking to promote new products that are innovative, strategic and reliable. Lastly, we extend our seal to collaborative projects driven by FinTechs, large corporations and academics, projects eligible for public subsidies aimed at encouraging players to work together to innovate. The Finance Innovation competition cluster is the only structure in France that centralises finance-related innovations of all kinds – technological or service-centric – and regardless of the entity behind the endeavour: FinTechs of course, but also large corporations and academics.
What are the different views on competition between FinTech companies and veteran market players? Is FinTech not synonymous with disintermediation?
Veteran players have their own set of assets, including their market power and immense size, especially in France, which enable economies of scale and create real barriers to entry. FinTechs are small and in some cases tiny companies. But veteran players' strengths are also their weaknesses: their large size also means a heavy bureaucracy that paralyses initiative and agility.
We should also point out that, when competing with incumbents, FinTechs have the prevailing wind of financial disintermediation in their sails. Of course, these are two separate phenomena, but they do feed and foster one another. The high-speed digitisation of the financial sector is helping to bring down market entry costs for newcomers, and the cost of disintermediation. Disintermediation allows asset managers and insurance companies to finance companies directly, without having to go through the banks, and allows crowd-funders to do the same. Disintermediation opens up the market for FinTechs, and digitisation makes it possible to roll out a solvent product with very little capital, in contrast to insurance companies and asset managers. FinTechs are also entering the realm of shadow banking, the non-banking form of finance that is developing and, when properly regulated, contributing to funding the economy: crowdfunding, seed capital platforms and online factoring are all part of the shadow banking phenomenon.
Does Europe lag behind in the area of FinTech and innovative financial solutions? What differences do you see between the situation in France and Europe from the one in the United States or in Asia?
It is not Europe that is lagging behind, but rather the Eurozone. The United Kingdom is absolutely not lagging behind: prior to the Brexit vote, London was Europe's FinTech capital. Compared to the US and even the UK, fundraising levels in France are still quite meagre, despite a significant increase in both frequency and volume since the end of 2014: €1.2 million on average, compared to €5 million in the UK and well over that in the US. Of course, this can be explained by the very limited development of investment capital in France, compared to English-speaking countries.
Next, in the US and the UK, government intervention in start-ups and innovative companies is very efficient: the public sector takes far more risk of losing money by financing businesses with little or no funds of their own, so public subsidies have a far greater impact there than in France. Added to which, Anglo-Saxon governments and regulatory authorities are very FinTech-friendly: the Bank of England has an office dedicated to FinTechs which helps remove the regulatory barriers to their entry into the market. A FinTech bureau was created in France as well, but several years after the one in Britain. Another very important example is that the UK equivalent of BPI France (France's public investment bank) has financed crowdfunding platforms so that they might distribute funding to SMBs, which gave the sector an enormous boost. Plus, in both the US and the UK, relations between SMBs, large corporations and the State are regulated, and a percentage of the federal government's (under the Small Business Act in the US) and big businesses' procurements must be from small businesses, which guarantees a minimum set of opportunities for start-ups. In the financial sector, the banks thus have a very practical incentive to work with start-ups. The positive and pragmatic ecosystem that we find in English-speaking countries made investments in FinTechs profitable much more quickly, so investors were quicker to invest heavily, which helped perpetuate a virtuous circle.
We should nevertheless point out that, inside the Eurozone, France in general (thanks to French Tech) and the Paris exchange in particular are in an especially strong position with respect to FinTech, compared to Germany or Italy. For instance, France has had crowdfunding legislation in place since 2014, which provides the sector with a secure framework and is allowing it to develop in a healthy, controlled fashion, which is not the case in the other major Eurozone nations. Moreover, France and the Paris exchange in particular have a sizeable advance in terms of R&D; France is a global leader in the areas of Big Data (France was the birthplace of data mining, Big Data's predecessor, back in the 1970s) and of artificial intelligence. Not only do France and the Paris exchange have considerable R&D assets, but France in general and the Paris region have a concentration of FinTech entrepreneurs, and a tremendous intensity of entrepreneurial creativity. Lastly, listings on the Paris exchange include the largest banks, asset managers and insurance companies in Europe. If they are quick to embrace the digital transition and learn to work with FinTechs, the Paris exchange will have all the assets needed to catch up to London.
Is banking and financial regulation an impediment to FinTech's development in France? And, looking at it from another angle, could FinTech weaken a financial system that public authorities and market players have been working to strengthen since 2008?
Regulation in France, which is very strict when it comes to protecting investors and consumers, increases the cost of entering the market and, as a result, favours incumbents. It took several years for France's financial market regulators, the ACPR and the AMF, to decide to open up a dedicated FinTech office. Added to which, once open, the FinTech bureau did not follow the more FinTech-friendly sandbox approach taken by regulators in English-speaking countries. The sandbox approach consists of relaxing regulation temporarily to be able to test the relevance of a given innovation and, at the same time, of existing regulation. France's FinTech regulator refused the sandbox approach, which means that FinTechs are not exempt from existing regulations, even when testing new products.
FinTechs are still too small to upset the balance of the financial system, but we can also point out that it is the Basel III, Solvency II, MiFID et al. regulations introduced since 2008 that enable FinTechs to emerge as alternatives for the financial sector's clientele. The capital constraints that have been mandatory since 2008, and which limit the banks' leverage, have paved the way for solutions such as peer-to-peer lending and crowdfunding to develop.
What scenario does the prospect of no more cheques and especially no more cash evoke for you?
This, in fact, means the end of paper money, since blockchain technology allows us to imagine the existence of digital cash, in other words a digital currency, but one that is anonymous and untraceable like cash. Bitcoin is to some extent a form of digital cash, but it carries the baggage of a bad reputation due to its use on the Dark Net (i.e. the non-public corner of the Web). There will always be a demand for a portion of transactions to remain anonymous, without implying criminal activity. When central banks start to use blockchain, digital sovereign currencies will emerge: a sort of Bitcoin, but one tied to a central bank, with an exchange rate like any sovereign currency's.
Can you tell us a bit about the Finance Innovation competition cluster? And what the cluster believes are the key issues facing financial market innovation today?
The Finance Innovation cluster has over 350 members: FinTechs, major banks, insurance companies and asset management companies, as well as academics, working together to disseminate a culture of innovation within the financial sector, and to accelerate the development of innovative projects in the sector that take on economic, societal and environmental issues, in the service of growth and job creation.
Finance Innovation holds two seal of approval ceremonies a year, recognising innovative FinTech start-up projects – although not confined to start-ups, as the seal can also be awarded to innovative projects from large corporations in the sector, as well as collaborative projects between corporations and academia. A total of around 100 projects are awarded the seal of approval each year, through these two ceremonies. The goal is to obtain private financing (private fundraising) or public financing (BPI France, regional financing, innovation clusters for the trades (PIA), the Single inter-ministry fund (FUI), etc.), and to promote innovative solutions through the cluster's YouTube channel, articles published on Hello Finance, social media, etc.
The Finance Innovation cluster is also synonymous with experimenting, testing, disseminating and promoting innovative financing solutions for micro, small and medium businesses, within the sector itself across the whole of France, and in other sectors through 70 other competition clusters.
Lastly, the Finance Innovation cluster means roadmaps for finance industry innovation. To establish these roadmaps, which are published in the form of White Papers, we work in concert with large corporations, FinTech start-ups and academics to determine which areas of innovation are priorities, and to identify future catalysts of growth. These White Papers allow us to structure innovation in our domains of expertise, and to provide the State with tools for selecting the innovative projects to subsidise. In 2016, we are publishing a White Paper on innovation in retail banking, which will be followed by two more in 2017: one on innovation in the accounting and consulting professions, and one on innovation in e-health and prevention.
Innovation in the finance industry today is coming from FinTech, in other words from technologies that are enabling the creation of new innovative and high value-added services. And let us not forget that FinTech also encompasses InsurTech, which adds connected objects, on-board systems and security issues to the mix. It is this entirely new framework that is revamping the insurance sector, as new data are available and challenging actuaries' classic risk models. The public at large tends to focus more on FinTech and the banking sector even though, thanks to InsurTech, the insurance sector is in the throes of an equally dramatic upheaval.
What do economists believe are the real stakes of the financial sector's digital transformation, in terms of economies' competitiveness, growth and job creation?
English-speaking countries have fully embraced FinTech: a shift that will allow the sector to enjoy gains in productivity, and create economies that are more competitive in terms of financing. FinTechs expand the range of what can be financed, which is something that States and the sector's regulators need to understand. On the other hand, we should not have any illusions: the prospects for job creation are strong, but so are the prospects for job destruction. Thanks to the use of digital technology, FinTechs will enjoy enormous productivity gains: over the long term, thousands of back-office jobs in banks, teller jobs and financial consultant jobs will be destroyed. The banking sector is tomorrow's steel industry. We find the classic dilemma of Schumpeterian creative destruction: a great many jobs will be lost and a great many created, but which will outnumber the other? The jobs that will be lost will be low-skilled ones, while the ones that will be created will be jobs for the highly skilled: engineers, doctors, data scientists… This could further exacerbate inequalities, as unskilled workers will have trouble finding a new job in the digital economy. So, to meet the challenges of training, upgrading skills and making the transition to the digital economy, substantially more public money will need to be invested in these areas, to limit the negative impact of increased inequality and the difficulty of vocational reconversion for the least skilled workers.
Jean-Hervé LORENZI, Major at the Agrégation des facultés de droit et sciences économiques (Faculties of Law and Economics) in 1975, is Chairman of the Cercle des économistes (the famous circle of French economists), holder of the Chair "Demographic Transition, Economic Transition" within the Fondation du Risque (Risk Foundation) and Chairman of the Pôle de Compétitivité (Pole of Competitiveness) "Finance Innovation". He is a member of the Board of Directors of the Edmond de Rothschild France Group, of the supervisory board of Euler Hermes, and of the Boards of Directors of the Médéric Alzheimer Foundation, IDATE and BNP Paribas Cardif. He was a professor at Université Paris-Dauphine and a member of the Conseil d'analyse économique (Council of Economic Analysis). He has notably published: Un monde de violences. L'économie mondiale 2015-2030, Paris, Eyrolles, 2014; Rajeunissement et vieillissement de la France (with J. Pelletan and A. Villemeur), Paris, Descartes & Cie, 2012; Droite contre gauche (with O. Pastré), Paris, Fayard, 2012; Le fabuleux destin d'une puissance intermédiaire, Paris, Grasset, 2011; Le choc des populations: guerre ou paix (in collaboration with P. Dockès), Paris, Fayard, 2010.
"Mobile dynamics: the path to 5G"
DigiWorld Economic Journal n°102
Interview with Martin FRANSMAN
Professor of economics, Founder-director of the Institute for Japanese-European Technology Studies, University of Edinburgh, United Kingdom
Conducted by Anders HENTEN, Aalborg University, Ballerup, Denmark
DW Economic Journal: What do you mean by an "Innovation Ecosystem"?
Martin FRANSMAN: By an "Innovation Ecosystem" I mean a group of players who through their symbiotic interactions (both cooperative and competitive) make innovation happen and, by so doing, coevolve over time.
How may the idea of an Innovation Ecosystem be applied?
The key point to bear in mind is that an "Innovation Ecosystem" is not an observable object. Rather it is a conceptual construct which serves a particular purpose. This important point requires some elaboration.
As Edith PENROSE has pointed out, "a firm is by no means an unambiguous clear-cut entity; it is not an observable object physically separable from other objects, and it is difficult to define except with reference to what it does or what is done within it". She goes on to observe that "Herein lies a potential source of confusion". The same is true of an "Innovation Ecosystem".
This becomes clear in reflecting on my definition of an Innovation Ecosystem as a group of players who make innovation happen. This raises the question of which players should be included in the ecosystem and which excluded. This is the question of the appropriate boundary of the Innovation Ecosystem being conceived. How far "back" is it necessary to go in conceiving of an Innovation Ecosystem?
If we are using the concept of Innovation Ecosystems to understand how innovation happens in the Mobile Telecommunications sector, for example, where should this boundary be drawn? It may be readily agreed that players such as final consumers, telecoms operators, telecoms equipment suppliers, and regulators should be included in the ecosystem. And for some purposes this definition may suffice. However, if the purpose is to understand the main determinants of the innovation process in this sector the net should obviously be considerably widened to include, for instance, universities and government research institutes who not only do relevant research but also provide important training. Other players may also merit inclusion in order to achieve the purpose.
This example makes it clear that an appropriate conceptualisation of an Innovation Ecosystem depends on the purposes and questions asked in the investigation. But complications may go even further than this. For example, even if different analysts can agree on the purposes and questions they may differ regarding which players should necessarily be included.
In view of problems such as these it is necessary to exercise more caution than is usually done in defining an innovation ecosystem. At the very least it is important to make explicitly clear the purposes and questions that are being pursued as well as the reasons for particular boundary decisions.
What in your view is the difference between an "Innovation System" and an "Innovation Ecosystem" and why did you choose to use the latter concept in your work?
The literature on innovation in this area tends to fall into two groups. The Innovation System Group, which is more homogeneous, is made up primarily of heterodox economists such as Chris FREEMAN, Dick NELSON and Stan METCALFE. They all acknowledge intellectual inspiration from the work of Joseph SCHUMPETER. Having originally trained as economists, they all came to believe that the various approaches to economic growth adopted by mainstream economics do not provide a sufficiently robust explanation of how economic growth happens and why different countries often exhibit different growth patterns. They also share a common belief that innovation is the most important driver of economic growth and that mainstream economics does not have an adequate understanding of how innovation happens and who makes it happen. The concept of an "Innovation System", originally proposed by Chris FREEMAN in his book on Japan, is put forward as an alternative way of explaining growth. Central to this concept, and explicit in their definitions of "innovation system", is the role played by institutions, understood not only in the Douglass NORTH sense of rules of the game but also as non-firm determinants that help (and perhaps hinder) innovation and therefore economic growth.
The Innovation Ecosystem literature, in contrast, is far more heterogeneous. It tends to come from scholars with a background in business studies. A notable example is the iconic book by IANSITI & LEVIEN from Harvard Business School, The Keystone Advantage: what the new dynamics of business ecosystems mean for strategy, innovation, and sustainability. A central concern in this literature is the cooperative networks created by complementary businesses which both individually and jointly create value for customers. The common belief (whether tacit or explicit) is that the truth lies in the constellation of businesses, rather than in individual businesses taken alone. This has important implications for dealing with topics such as business strategy and sustainability.
In contrast, my own use of the terms "ecosystem" and "innovation ecosystem" is inspired not so much by business behaviour as by the example of biological ecosystems, with their populations of interacting organisms and species. As Alfred MARSHALL, the nineteenth-century economist, said, "The Mecca of the economist lies in economic biology rather than in economic dynamics". This analogy, however, should not be pushed too far, and I insist that the basic units that make up the "players" in my ecosystem are purposive and conscious individuals whose decisions and actions imply necessary complications, such as beliefs, mistakes, and expectations, which are not pre-determined in any meaningful sense of the word. Whilst there is significant overlap between my "ecosystem" and the concepts of Innovation System and Innovation Ecosystem, perhaps the main difference is the emphasis I give to the dilemmas involved in interacting individuals, albeit in populations, understanding and acting in the uncertain world that is ours.
Can the concept of Innovation Ecosystem contribute to our understanding of leadership in an area such as mobile telecommunications?
The first problem in answering this question is to agree on what should be understood in this context by "leadership". Both countries and companies may lead, the former, for example, in performance of infrastructure and services, and the latter, for instance, in terms of indicators such as revenue growth, market capitalisation, and market share.
Having agreed on who leads the next problem is to explain why this leader has been able to lead. It is here that the concept of an Innovation Ecosystem as defined earlier potentially becomes useful. Let us take several examples to illustrate.
The first example is the lead by "Europe" in 2G mobile. Not only were the main European telecoms operators able to introduce world-leading 2G mobile infrastructure and services, the key European mobile equipment providers, notably Ericsson and Nokia, were able to become globally dominant players. Why did this happen?
Whilst the answer to this question clearly requires that we understand the strengths (and also weaknesses) of these two groups of company players, there were other important determinants without which their global leadership would have been, if not impossible, then far less likely. These include, notably, the prior establishment of an agreed set of Nordic mobile standards and systems, initially meant to facilitate inter-country mobile communications within the Nordic region, as well as the establishment and functioning of a set of European institutions that enabled the emergence of the GSM standards. These events required the intervention of other players, including policy-makers, regulators, and researchers. By following this kind of reasoning we are able to identify both the relevant players and the ecosystem of symbiotic interactions that facilitated the eventual global success of GSM.
The second example is the remarkable rise of Huawei as a leading player not only in telecoms equipment but also, more recently, in smartphones. Once again, a key part of the explanation must involve an account of the emerging capability inside this company to innovate successfully. This success was dramatically illustrated by the company's entry as a supplier to some of Europe's major telecoms operators, in the face of very strong and long-standing competition from the key European telecoms equipment providers. Crucially, this entry depended not only on a cost benefit rooted in China's comparative advantage but also on Huawei's ability to address some of the important problems raised by the operators.
But reflection soon reveals that there is more to this success story than only what happened within Huawei. Also significant was Huawei's membership of the Chinese Innovation Ecosystem. Although at first a Chinese outsider that depended as much on other emerging countries as it did on the poorer Chinese regions for sale of its equipment, Huawei, with adept leadership, soon developed sufficiently strong capabilities to become a domestic supplier of growing importance able to both contribute to and benefit from the rapid growth of China and its telecoms infrastructure. Fleshing this story out requires an account of the key players (including, for instance, Chinese universities and other organisations) in the Chinese Innovation Ecosystem whose interactions made important contributions.
The third example is the central role of the US in smartphone developments. Here too a discussion is needed of the key telecoms operators and equipment providers as well as other important players such as policy-makers, regulators, universities and other research organisations. But also of crucial importance is the direction taken by the evolution of the mobile telecoms sector itself. More specifically, it is important to understand the convergence of the mobile telephone and computing subsectors that, until the advent of the smartphone with its own operating system, were largely distinct. This convergence gave a huge opportunity to the US, which had dominated the field of computing from its origins. Once the phone became, in effect, a computer that added many other functionalities, US players, incumbents and new entrants alike, were able to leverage the superior computing capabilities that they and their ecosystem possessed in this area. This was also a significant contributor to US dominance.
As these three examples illustrate, the idea of Innovation Ecosystems can make a significant contribution to our ability to understand and explain these cases of leadership. However, the conceptual caveats mentioned in the answer to the first question must be kept in mind in deploying this idea.
Does the idea of Innovation Ecosystems have any positive implications for a European attempt to regain global leadership in the field of mobile telecommunications?
We must be careful not to slip into the voluntaristic error, i.e. "create the correct Innovation Ecosystem and all will be well!". The reason is that there are always some given constraints that remain binding. Examples are the historically inherited stock of capabilities, whether one's own or those of competitors; the given institutional framework; etc. One apposite example is the demise of Nokia as one of the global mobile industry's foremost pioneers and leaders.
From my personal discussions with some of Nokia's most important leaders I have no doubt that in its last years the company and its key decision-makers had an excellent understanding of what here is called the idea of an Innovation Ecosystem. Indeed, many of the company's key documents, both private and public, were formulated using the terminology of Innovation Ecosystems. There is every reason to suppose that this made both thinking and strategy formulation in Nokia better than it would have been without these conceptualisations.
However, the fact of the matter, sadly and regretfully, was that Nokia was significantly constrained by its historical path-dependence. More specifically, the company was substantially impeded by the Symbian operating system that it had inherited from the era before the smartphone became, in effect, a computer. Not only did this operating system have defects from the point of view of application development, a key requirement for competitiveness; relative to the operating systems of the main competitors it also suffered important shortcomings. No amount of perceptive Ecosystems thinking could, in the time allowed by competition, suffice to stay the company's execution at the hands of unforgiving market forces. The same goes for the company's new leaders, brought in to try and stay this execution.
The Nokia example has important implications for policy-making that uses the concept of Innovation Ecosystems. The main lesson, to repeat, is to avoid voluntaristic errors by coming to better understand what can, and what cannot, be changed by purposeful action.
How useful is the idea of Company Innovation Ecosystems?
Paradoxically, very little scholarly work has been done on how innovation happens, and who makes it happen, within purposefully created Company Innovation Ecosystems. Even the book by IANSITI & LEVIEN referred to earlier, despite the word "innovation" in its subtitle, does not delve into these questions, preferring to devote only a little attention to the incentive to create organisational innovation that benefits the business network/ecosystem as a whole. Accordingly, these questions unfortunately remain unaddressed.
The "open innovation" literature does not do justice to these questions either. Although the issue of innovation players outside the focal firm is explicitly addressed the questions of how all the players in the company's Innovation Ecosystem make innovation happen and who can and should make it happen are not discussed. Yet these questions are crucial for any company or other organisation wanting to improve performance through innovation.
What kind of guidance can be given to the leaders of companies who would like to make use of the idea of Company Innovation Ecosystems in order to improve their performance? This question is currently occupying a good deal of my attention.
Published in DigiWorld Economic Journal DWEJ No. 102
"Mobile dynamics: the path to 5G"
Interview with Wassim CHOURBAJI
Vice-President, Public Policy and Government Affairs, Europe, Middle East and North Africa, Qualcomm
Conducted by Denis LESCOP, Télécom Ecole de Management, Evry, France
DW Economic Journal: "What do you really mean by 5G from a technology perspective?"
Wassim CHOURBAJI: As we did with 3G and 4G, Qualcomm is leading development of technologies for 5G. We are designing a unified, more capable 5G platform to meet expanded and radically diverse requirements. 5G will be much more than just a new generation with faster peak rates. We are building a 5G platform to connect new industries, enable new services and empower new user experiences in the next decade and beyond. The foundation of this platform is a new OFDM-based 5G Unified Air Interface that is scalable across all services and spectrum. 5G will usher in the next era of enhanced mobile broadband experience with more uniform high data rates everywhere, lower latency and lower cost per bit. It will connect massive numbers of things through the ability to scale down in data rates, power and mobility. It will enable new mission critical services with ultra-reliable low latency links. It will provide edgeless connectivity with new ways for devices and things to connect and interact. 5G will be also a platform for all spectrum bands and types, designed for licensed spectrum from below 1 GHz for coverage to mmWave for extreme bandwidth as well as for unlicensed and shared spectrum.
How will 5G impact the everyday life of people?
Wireless connectivity transformed human communication. With 5G, we're extending its reach and adding intelligence to transform everything else. 3G and 4G have enabled people to experience broadband on their smartphones and tablets, wherever they are, indelibly changing the way we communicate with one another. We take this for granted now, but it was actually science fiction less than two decades ago. The next step, which is quintessential to the long-term realisation of 5G, is the massive social and economic impact of the tens of billions of devices and things that will get connected to each other, to the cloud and to people, unlocking greater efficiencies, personalized services and new user experiences. This will profoundly change our lives.
Where devices such as smartphones and tablets are now still the endpoint of communication, countless methods of connectivity and interaction will emerge in homes, cars, cities, healthcare and more. Where data services are now limited to certain providers and insights, there will be near-unlimited insight available thanks to a broad expansion of all kinds of discovery services. It will not just be devices that will be "smart", it will be the connectivity itself. Intelligence will be found at the place where interactions are happening and will no longer be buried in the data centre or confined to a walled garden – it will make those interactions more intuitive, immersive and secure for people.
How will 5G impact the everyday life of enterprises? Can we say that 5G will open tremendous business opportunities?
The transition that businesses will experience towards 5G will be as sweeping as that experienced by consumers, and arguably even more so because the stakes in terms of competitiveness, economic growth and job creation are extremely high. I think it is fair to say that there are tremendous opportunities for businesses big and small, but the value created by spurring technological innovation with 5G will strongly depend on the policies under which industry at large will digitize and evolve.
Businesses have so far had to adapt to a changing environment where the internet has expanded to cover most, although not all, processes related to selling and distributing goods and content. To name but two obvious examples: e-commerce has metamorphosed retail and wholesale distribution operations; and the web has completely revolutionised publishing and journalism. These changes were basically driven by the fact that people could suddenly buy things and access content online. It's a process that started in the early days of landline internet connectivity, but which has really been boosted by mobile thanks to anywhere, anytime connectivity.
But as I said, with 5G the change will not simply be about connecting people to the Internet – more people, in more places and at faster speeds – but crucially about bringing intelligent connectivity to everything. So it is not just the sale and distribution of goods that will come into play – it is the very products you are developing as a business that will be affected. You used to be a company that was top-notch at designing and manufacturing this great product, but now you need to think in terms of your connected product – what you want to do in this new environment is deliver greater efficiencies, personalised services and new user experiences. You need to stay relevant to the user or people will be drawn elsewhere. You need to be skilful in doing that because there are many other companies out there which will take any opportunity they have to disrupt your market.
How are regulators – and especially the European Union – supporting (or not) the emergence of 5G?
I think the European Commission has really embraced the vision of 5G as a cornerstone of Europe's competitiveness. In April, the Commission earmarked 5G as a technology standards priority. The fact that Europe has leadership positions in so many key industrial sectors and that European industry needs to take advantage of the business opportunities that will potentially be enabled by 5G connectivity is not lost on Vice President ANSIP, Commissioner OETTINGER and the Commission as a whole.
I see the Digital Single Market as essentially a statement that Europe cannot afford to waste this opportunity. I like the fact that it goes back to the concept of the Single Market, one of the greatest achievements not just for Europe but arguably for humanity – there is no other place that equals Europe's level of social and economic unity, imperfect though it may be, between peoples, countries and interests that used to be so disparate. Implicitly, what it says is that the key to making the Single Market stronger for Europe and the world in the 5G digital era is to stay true to its core values of integrating differences.
When you transpose integration from different countries to different industrial players, the process is actually not that different. And when you translate integration into digital terms, you are talking about interoperability. That is why I think we see a strong emphasis on facilitating more cross-sector partnerships in the European Commission's recent Communication on ICT Standardisation Priorities for the Digital Single Market. We need more collaboration and strategic vision to bring together "traditional" non-ICT industries, the telecoms industry and the rest of the value chain to deliver on the promise of interoperable 5G connectivity. Europe can turn its apparent complexity into an asset.
There are a lot of initiatives that the Commission is facilitating with a view to 5G, such as the Alliance for Internet of Things Innovation (AIOTI) and the 5G Action Plan. Where I think Europe needs to act more quickly is spectrum and the review of the regulatory framework. Notably, I think Europe should decide fast and by 2017 on a list of "pioneering" 5G bands in the low, mid and high ranges, as well as a roadmap for the harmonisation and coordinated release of these bands across Europe. This will help industry players to invest and develop interoperable 5G standards globally and pave the way for commercial deployment in 2020. In Europe, there is a lot of potential in bands such as 700 MHz and 3.4-3.8 GHz, which are suited for IoT and "Industry 4.0"-type deployments, as well as in the 24 GHz and 31 GHz bands, which can deliver extreme mobile broadband bandwidth.
How should the framework be modified to better support 5G initiatives?
In terms of policy direction, I think we need to be aware of the paradigm shift between the old Digital Agenda for Europe and the new Digital Single Market strategy, which should very much reflect the shift from connecting people to connecting everything that I've talked about earlier.
We are used to having Digital Agenda targets that are linked exclusively to "fast internet access for all", called broadband objectives, such as 30 Mbps for 99% of people. That is good if you are trying to connect more people, in more places and at faster speeds, but if our aim is to bring, with 5G, intelligent, reliable and secure connectivity to new industries, which have different kinds of requirements, then these targets are no longer sufficient and we need new ones. These new targets should also address the vicious circle of the three "lows" the mobile industry is facing in Europe, which I and others have talked about extensively: low revenues, low use and thus low investment. The current targets solely address the supply side, with network coverage and speed obligations; I think new targets should also address the demand side. This is key for take-up and revenues, bringing the mobile industry and verticals together.
So I think the Digital Single Market targets should be specified, for example, as 1-Gigabit connectivity by 2030; 70% penetration of connected vehicles by 2025 and 100% by 2030; 100% road coverage by 2025; 60% penetration of remote monitoring for chronic patients by 2025; and 100% of low latency, very high data rate cloud access by the same date. I think these targets are far more meaningful from both a societal and economic perspective. I believe there are ways to incorporate these new elements in the upcoming review of the EU telecoms regulatory framework to make it futureproof and 5G-ready.
Does 5G raise issues pertaining to standardisation?
Yes, the main issue being that we'll need standardisation like we've never needed it before. As you expand the need for connectivity beyond people to literally everything, you can easily imagine that there is going to be a need to invest billions and billions of euros to create and evolve interoperable solutions that can cater to the many different requirements coming from the different sectors. The 5G platform is expected to be introduced with 3GPP release 15, forecast to be complete in 2018 for 5G commercial launches in the 2020 timeframe.
We will need high-performance standards, incorporating intensive levels of interoperability. If we only end up with extremely basic functionality incorporated in standards, we'll see much less interoperability, follow-on innovation and competition along the value chain. The bulk of the technology that consumers will be interested in may end up being developed by one or a very limited number of players that will control it in full. That is going to be bad for consumers and the rest of the market.
What this means is that standardisation needs to remain a priority for Europe. As I said, given that 5G will be about complexity, thanks to its leadership in standards Europe has a real chance of turning what many perceive as a weakness – the need to intermediate between contrasting interests, be it Member States or industrial sectors in our case – into an asset. So I welcome the Commission's intention of facilitating cross-sector partnerships for standardisation – I think this initiative can unlock situations where market players aren't naturally inclined to sit together at the table, which results in them losing commercial opportunities and the entire market not moving forward.
At the same time, one cannot forget that the investment needed to develop and evolve highly interoperable standards will come first and foremost from industry players. If standardisation is not an appealing option for them, they will not participate and we won't have the standards we need. And as one can easily imagine, fair return on investment is a top priority for businesses, including when the decision has to be made as to whether or not they want to contribute their inventions to standards and thus allow access to those inventions. There is always the option of going proprietary if participating in standardisation is not generating fair value for you. And, as I said, this would represent a risk for society in that it would lead to less interoperability, less follow-on innovation and less competition.
How are actors positioning themselves around the question of standardisation and intellectual property?
Balanced and effective intellectual property rules are essential, on the one hand, to incentivise companies to contribute their technology to standards and, on the other, to enable access to standardised technology. It is a balance that we absolutely need to get right as there is too much at stake.
I think the dynamics of the IP and standards debate haven't fundamentally changed in the last few years. Repeatedly, concerns are raised about Standard-Essential Patents (SEPs) and Fair, Reasonable, and Non-Discriminatory (FRAND) licensing. These concerns, which took centre stage during the ill-famed "smartphone wars", have proven to be tragically unfounded when it comes to smartphones and tablets. As mobile communications standards have improved and included more and more patented technology in the various iterations of 3G and 4G, average device prices have been falling dramatically and we have witnessed a proliferation of new products with new features. Irrespective of any theoretical debate about "patent thickets" and "royalty stacking", it is quite clear we simply haven't seen any thickets or stacking in the actual market, which on the contrary has been incredibly successful in achieving innovation, competition and consumer choice.
That being said, there are now what I think are valid discussions about standards and intellectual property in the new context of the IoT. As the number of players who will need to implement standards in their different industrial products grows, including SMEs, there is a need to simplify access to standards for them. In this context, the Commission has announced plans in its Communication on ICT Standardisation Priorities to facilitate fast, predictable and efficient access that can keep in place the right incentives for companies to contribute technology to standards. We welcome this approach, which I think is shared among the major standards contributors, and we look forward to working with the Commission and other stakeholders to this end.
Key to a balanced environment for investment in and access to IoT and 5G standards is flexibility. The IoT and 5G are going to be new markets, and the different parts of the value chain are still in the process of figuring out how best to structure new business models and how to create and reward value. The proverbial "one size fits all" will really not work here. However, some vested interests are promoting inflexible interpretations of FRAND that would force companies to license their technology to lower parts of the value chain or at the level of the smallest-saleable unit. This would for sure devalue standardisation – what it amounts to is guaranteed destruction of value for technology contributors to standards. And as I said earlier, companies will not contribute their technology to standards if standardisation is not generating fair value for them. If we in Europe care about interoperability, we really shouldn't go down that route.
Wassim CHOURBAJI is Vice President and head of Government Affairs for Europe, the EU and MENA. He is the Managing Director of the EU Brussels Office and oversees Qualcomm's public policy, regulatory affairs and senior government relations. Wassim is a member of the Qualcomm Europe leadership team. He leads an EMENA-wide senior team responsible for innovation, technology, intellectual property, telecoms & digital economy, spectrum, standardisation, security, data protection and antitrust policy. Wassim is chairman of the Communication Policy Council of TechUK, the policy arm of the UK digital industry. He was previously chairman of the spectrum group at DigitalEurope, the Brussels-based EU industry association, and chairman of the European spectrum group at the GSMA. Prior to joining Qualcomm in 2006, Wassim was the head of spectrum for the France Telecom Group, overseeing the group's fixed, mobile and satellite spectrum strategy across its operating companies. He was also designated by European administrations as lead coordinator on 4G spectrum for Europe at the ITU World Radio Conference. Previously, he served as regulatory manager for SkyBridge, Alcatel Space's global Internet satellite project. He started his career as a spectrum engineer at French mobile operator Bouygues Telecom. Wassim holds a master's degree in wireless communications and is a graduate engineer from Supelec, France.
More information on DigiWorld Economic Journal No. 102, "Mobile dynamics: the path to 5G", is available on our website.
Published in DigiWorld Economic Journal DWEJ No. 101 "Towards a single digital audiovisual market?"
Interview with Adam MINNS
Executive Director, COBA, London
Conducted by Sally BROUGHTON MICOVA
The Commercial Broadcasters Association (COBA) is an industry association whose members include digital, cable and satellite broadcasters, both linear and on-demand. The association is active on policy and regulatory issues primarily in the UK, and also in Europe.
DW Economic Journal: When the Audiovisual Media Services directive was drafted it was designed to be platform neutral, maintaining a distinction only between linear and on-demand services with the intention of future-proofing it for potential changes in technology and markets. To what extent has that held up?
Adam MINNS: The European broadcasting sector is a success story, worth more than 74.6 billion euros annually, according to the European Audiovisual Observatory. Audiences have more choice than ever before, with the number of linear channels growing across the EU and the gradual emergence of on-demand services (a recent study by the European Audiovisual Observatory put the number of on-demand audiovisual services established in Europe at 2,563).
We therefore see no need to tamper with the fundamental principles of the directive, i.e. a technology-neutral approach that applies varying levels of regulation according to consumer expectations and the nature of different services. Indeed, radical change creates a risk of damaging the successful growth of the European audiovisual sector. That said, there is a case for a moderate level of reform regarding certain, specific aspects of the rules for commercial communications for linear services. In some areas, these are overly prescriptive and it is difficult to see the consumer purpose these are serving in a world of rapidly changing behaviour and the ability to access content from a multitude of different devices and services.
Does it still make sense to regulate linear and on-demand differently?
Yes. The directive's two tier approach to regulation has helped underpin this growth and innovation. In comparison with linear channels, non-linear services, while growing, generate a relatively small amount of revenue for COBA members, and the regulatory burden must reflect this if it is not to dampen investment. Many "Catch-Up" VoD services are loss leaders, for example, and are provided to viewers at no additional cost.
In addition, one of the directive's guiding principles, that consumers exercise more control in regard to non-linear services and therefore a lower level of regulation is appropriate, holds true today.
There have been calls to revisit the "country of origin" principle that is at the core of how audiovisual media services in Europe are regulated. How important is that principle to the business of commercial broadcasters?
Few pieces of regulation are more important for our members' businesses than the Country of Origin principle set out in the Audiovisual Media Services directive – but the key point I would like to make is the benefit to EU audiences.
For the avoidance of doubt, I am referring throughout this piece to the principle set out in the AVMS directive, not to any other directive. The AVMSD's Country of Origin rule enables a broadcast or on-demand service – licensed in one EU Member State – to be made available in another country without having to separately obtain another licence at the service's destination. Where, for example, costs and content can be shared amongst channels tailored to multiple Member States, because they comply with a single set of rules, a channel is viable for a more niche audience in each market. This creates more choice for audiences, and supports media pluralism and freedom of expression.
For example, the British Sign Language and Broadcasting Trust (BSLBT) is an organisation in the UK that, supported by broadcasters, provides sign-presented content to the deaf community. It makes a range of signed content available on its on-demand service to viewers in Member States across Europe. Deaf communities in Germany, France, Estonia, Spain and many more countries are watching this content, which is made available under a UK-based notification under the Country of Origin principle in the AVMSD.
The example of the BSLBT is from an independent report COBA recently commissioned on the AVMSD Country of Origin principle from Olsberg SPI. Olsberg are still finalising the report, but their clear conclusions are that the AVMSD's Country of Origin principle has supported the growth of the European broadcasting sector and is critical for unlocking the potential of European non-linear services. Testifying to this, some 41% of linear channels established in Europe are available under the Country of Origin rule, and 34% of on-demand services (this excludes services licensed from outside the EU).
So-called Catch-Up VoD services are particularly dependent on the Country of Origin rule. These are provided by broadcasters to give their audiences on-demand access to their programming for a given period after the original transmission. These are some of the most popular VoD services in Europe (accounting for 29% of all VoD services), but are in general provided to viewers at no additional charge, so there is a real need to keep the costs of providing them down. As you would expect, they are nearly always licensed (or notified) in the same Member State as their parent channel so they can re-use content complied for the linear channel.
Around a third of these (nearly 300 services) are made available under the Country of Origin (mirroring their parent linear services). In a situation where non-linear services were not able to benefit from the Country of Origin rule, these services would clearly be at risk.
As you might also expect, smaller Member States in particular stand to be harmed by the loss of the Country of Origin principle. According to Olsberg's analysis, 41% of linear channels across the EU operate under non-domestic licences supported by the AVMSD's Country of Origin principle. In the ten smallest markets (by population), however, that rises to 75%, reflecting the greater need for economies of scale in markets that might not be able to support a stand-alone channel. To give you an idea of the kind of range and choice these channels offer, in some smaller markets the only children's channels available are provided under non-domestic licences.
COBA's view is that the AVMSD's Country of Origin principle has underpinned economic growth, consumer choice and media plurality in the European audiovisual sector to date, and for the same reasons is set to be pivotal in the on-demand era.
What do you think are the prospects for creating a single market for audiovisual media service in Europe? Is it even desirable?
I would say that it depends on how you define single market. The AVMSD has successfully enshrined an important set of European values, providing for a minimum level of standards and protection for consumers and, through the Country of Origin principle, safeguarding freedom of speech and media plurality, and supporting innovation and the growth of Europe's creative industries, as I have outlined above. At the same time, Member States rightly have the flexibility to prioritise according to national sensibilities. The current balance seems right.
In some of your recent policy papers and consultation responses you have reported impressive growth in the investments of your members in UK original content. As some of your members are large transnational players that operate in multiple European Countries, to what extent is that trend mirrored in the rest of Europe?
These are hugely exciting times for European television content. It's almost a cliché now but television has become the new film, with a range of players all investing in ambitious, high quality original content. Funding has become more fragmented than ever before, flowing from broadcasters, on-demand services, and the production companies, not to mention public support, but that is the new reality.
The most important factor to remember is that it is a mixed ecology. Many COBA members are multi-national, but others are focused on the UK, and some are relatively small. All are investing in different ways, and that mixed approach builds strength into the overall ecosystem, which is less reliant on any one funding stream. QVC, for example, is a shopping channel that creates 17 hours a day of live television. That high volume of production provides an exceptional training ground for crews and technical staff who go on to work across the industry. It is all part of a mixed ecology, continually building critical mass.
Our analysis of content investment has been focused on the UK, so I don't have detailed figures for other Member States. But you can see that investment growing across other markets. Take the recent European Film Market at the Berlin Film Festival, which held a television drama event to promote investment in European production. At that one event we saw announcements from HBO Europe, Sony Pictures Entertainment and Sky Deutschland involving production in Scandinavia, Germany, Italy and the UK. And there is a lot more.
What can be done to boost investment by transnational commercial broadcasters in original content in Europe?
Again, I am not just referring to transnational broadcasters, but to commercial sector broadcasters generally. For COBA, there are two key factors in encouraging investment, and both take time. Firstly, encourage a mixed ecology, where a genuine range of players can grow. That increases creative competition, plurality in commissioning and strengthens the sector as a whole by diversifying funding streams. Frankly, in the world today, where so many different players are investing in content, and production more than ever relies on a patchwork of funding sources, fostering such a mixed ecology seems like common sense.
The second point I would make is to allow the industry to make content that audiences want to watch. That sounds obvious, but it doesn't always happen when companies are forced into quotas or other relatively blunt regulatory instruments. In the UK we have recently experienced something of a transformation, with non-domestic European drama now appearing on our screens in prime time slots, backed by significant marketing. Most importantly, they are achieving record audiences – most recently, German drama Deutschland '83 went out in prime time on Sunday evening and was watched by more than 2 million people.
This didn't happen to fulfil a quota; it is the result of a steady stream of high quality European dramas like Gomorrah on Sky, The Killing on BBC, and The Returned on Channel 4 – broadcast on a range of channels, both commercial and public interest – breaking down UK audiences' preconceptions about foreign-language content.
Of course, it takes time – a lot of time – to develop an industry capable of making shows that resonate with audiences on any consistent basis. I don't mean the funding, which is perhaps more available than ever now, but the creative skills. I found it fascinating, for example, that Denmark has consciously reproduced the American model of the "writer's room" and the primacy of the writer/creator, with of course its own vision. As much as anything, that creative process has established Denmark as one of Europe's key creators of high quality drama, and in the process done far more to promote Danish and European culture abroad than a quota would ever achieve.
And of course, underlying these points, the principle of territoriality is still an absolute cornerstone of how production is financed, so it needs to be maintained. Undermining the ability of rights owners to tailor how they license their rights from market to market would harm their ability to generate a return, and so reduce the incentive to invest in creating that content in the first place.
Adam MINNS is Executive Director of the Commercial Broadcasters Association (COBA), the trade association for UK multichannel broadcasters and on-demand services. He leads COBA's work on a range of UK and European legislative and regulatory matters, reporting to COBA's board. He joined from Pact, the trade association for UK independent production companies, where he was Director of Policy and played a key role in Pact's work on the Terms of Trade and a range of other UK and European issues. Prior to Pact, Adam was UK film editor of Screen International, the film business publication, covering the British and European film industries. He has written for the Financial Times and the Independent on Sunday.
More information on DigiWorld Economic Journal No. 101 "Towards a single digital audiovisual market" on our website
Published in DigiWorld Economic Journal DWEJ No. 101
Interview with Nicolas CURIEN & Nathalie SONNAC
Commissioners, Conseil supérieur de l'audiovisuel (CSA) (*)
Conducted by Alexandre JOLIN
(*) This interview only reflects the views of the contributors, not the CSA's official positions.
C&S: Since the late 70's, the European Commission has aimed to harmonize the regulatory landscape for audio-visual media in Europe. The TVWF and then the AVMS directives created a legal framework allowing the circulation of linear TV and on-demand audio-visual media services in Europe. As part of the European Commission's Digital Single Market strategy, a review of the AVMSD began in 2015 and legislative proposals are due to be set out in 2016. As the regulatory body for France, a member state, how is the CSA involved in those consultations? In your view, which issues should be resolved first?
Nicolas CURIEN & Nathalie SONNAC: Seeking to bring its viewpoint as a regulator and its expertise in the practice of regulation, the CSA contributed to the European Commission's consultation on the review of the AVMS directive, entitled "A framework for the audiovisual media in the 21st century". The CSA also participated in the cross-ministerial preparation of the French authorities' positions and provided a contribution to the French answer to the AVMS consultation. Above all, the CSA plays a very active role in the European Regulators Group for Audiovisual Media Services (ERGA), which was chaired, during its first two years of existence (2014-2015), by Olivier Schrameck, the chairman of the CSA. Created in February 2014 by the European Commission as an advisory body examining issues related to media services, the ERGA now stands as a key institutional innovation, pushing forward European audio-visual policy matters. For us, as national regulators, working together within this structure represents a strong opportunity to carry out an in-depth, forward-looking analysis of the audio-visual sector and to stimulate the emergence of common initiatives. The ERGA is in charge of assisting the Commission in the revision of its legislative instruments, as is now the case for the AVMS directive.
Audiovisual services have changed drastically since the adoption of the previous directive in 2007. The present situation, resulting from the dynamics of "convergence", was not anticipated at the outset, and it calls for several substantial adjustments in order to take into account the development of on-demand non-linear services, of interactivity, and of the usage of associated data. Moreover, the arrival in the French and European audio-visual markets of large, worldwide OTT players, such as Netflix or Google, raises a new kind of issue, which must be addressed at the European scale. As regards the revision of the AVMS directive specifically, the ERGA produced three reports, published in January 2016, on the independence of national audio-visual regulatory authorities, on the possible extension of the directive to new online players, and on the protection of minors. These reports include recommendations which were unanimously approved by the regulators of the European Union's 28 Member States. The ERGA thus invites the Commission to incorporate its proposals in the revised directive. An additional report on the territorial competency of regulators will be issued in the course of spring 2016.
One of the proposals on the table is to apply the same obligations placed on TV broadcasters and on-demand TV-like services to online video sharing platforms as well. Is this a realistic solution to complement the existing film and audio-visual financing system?
This issue goes well beyond the particular case of video sharing platforms, as it also concerns all digital intermediaries commonly designated under the generic term of "platforms": content distributors, content aggregators, providers of applications, sharing platforms and suppliers of devices – that is, all players which hold a position between content and usage, making them gatekeepers of access to content. All actors who develop a strategy around content and/or are involved in the exposure and promotion of content, especially through algorithms, are concerned. Since these new operators orient consumers and make recommendations to them, they doubtless play an editorial role which is similar, to a certain extent, to that played by traditional audio-visual editors. It therefore seems both sensible and in line with the driving principles of audio-visual regulatory policy to set up an adapted regime of obligations for these new players. Such a regime should not, of course, ignore the necessity of sustaining the pace of innovation: when contemplating a new deal and a new toolkit for audio-visual regulation, one must not at the same time slow down the growth of innovative services which contribute greatly to widening the exposure of works and constitute a major source of creativity in the audio-visual sector.
One size does not fit all, and not all platforms should be subject to the same degree of regulation: a small platform should not be treated like YouTube. Proportionality should thus be set as a guideline, and the regulator should focus in priority on platforms which have a significant impact on the market. Moreover, as it would clearly prove inefficient to impose local obligations on global players, a common harmonized framework has to be defined within the European Union. Achieving proportionality within a renewed regulatory scheme designed for digital intermediaries also requires that the rules existing for traditional editors be adapted in order to reach a satisfactory match between obligations and the specific characteristics of the new actors. More generally, traditional regulation should not be transposed unchanged onto the digital world, a world in which the speed of evolution is very high, in which some players are active at an international scale, and in which business models greatly differ from classical ones. Accordingly, an effective regulation should be based on a triptych associating public policy, users and operators, and could mainly rely upon co-regulation and self-regulation. Such a perspective is precisely consistent with ERGA's present undertakings, which consist in identifying audio-visual-centric platforms, rather than all platforms, with the objective of aligning their behaviour with the traditional goals of audio-visual public policy, albeit under a proportionate regulatory approach. Indeed, the public policy goals which underlie the existing obligations set for traditional actors, such as the protection of minors, copyright enforcement, investment in creation, or fair competition, still prevail for digital platforms. In the Digiworld, goals remain the same; modalities may differ!
With the rise of international OTT services and the ongoing consolidation of the European content industry, how can policymakers best safeguard and promote cultural diversity across Europe?
Reaching a critical size through consolidation is a necessary step to preserve a model of diversified content in Europe. This does not amount to geographic confinement, but rather calls for a more extensive and international approach, strongly based upon European cultural specificities. This global strategy should concern production, traditional broadcasting and new digital platforms alike. Europe holds a solid position in terms of local content production and must derive benefit from it. However, the momentum has to be generated through a coordinated policy, as it cannot result from the separate actions of isolated national players. In this regard, regulators too have a part to play, and they must rapidly move towards a more inter-institutional approach.
In their efforts to promote the diversity of content, European broadcasters should use linear TV, which is still by far the dominant mode of consumption, as a kind of "factory" to produce pieces of original content destined to become brands in their own right and to move towards non-linear usage on electronic platforms, after a first lifetime spent in the schedules of linear TV in order to gain recognition. As access through networks is a necessary condition for access to content, synergies between media companies and telcos should also be considered in order to extend the scope of content distribution and to reduce its cost. Moreover, promoting diversity depends heavily on the ability of creators to finance their content and make it available to consumers. In this respect, fair access to all distribution channels, especially online platforms, stands as a key enabling factor: hence regulators' strong attention to net neutrality and content visibility issues.
Today, the OTT video industry is mainly driven by non-European players such as Netflix, Apple or Liberty Global, which, despite its British base, is controlled by a US holding company. In your view, what could be done to ensure the development of strong European OTT players and the sustainability of the traditional broadcasting market?
This question relates in part to the issue of the territoriality of rights. A right balance has to be found between the two conflicting objectives of maximizing the monetization of rights, on the one hand, and extending the exposure of content, on the other, in a fast-moving context where the growth of digital platforms makes territorial enclosure unsustainable in the face of bypass or piracy. Since reaching such a balance likely means substantial change in present contractual arrangements, a concerted sector-wide process is needed, gathering together rights holders, publishers and distributors.
At the very least, large national players should form partnerships and launch joint pan-European digital services with a strong identity. As already mentioned above, these developments cannot take place at a national scale while the main international competitors, such as Netflix, operate worldwide, offer worldwide content, and are less and less subject to territorial constraints; this is especially the case for TV series available on SVOD services, such as House of Cards, exploited under a "free" regime. In this revolutionary context, where the historical category of the TV channel might sooner or later be replaced by the emerging category of brand-content, the sustainability of traditional players is clearly conditional on their ability and willingness to co-design adaptive and cooperative ways of deriving as much value as possible from their content.
On-demand video services are currently regulated in their "country of origin". Some players denounce this as a distortion of competition, because legal obligations can differ widely from one Member State to another. As was already done for VAT last year, would it be advisable or possible to apply a "user-centric" approach, setting the focal point on the end-user instead of the service publisher?
The country of origin principle certainly helped to create a common audio-visual market, as it facilitated the cross-border circulation of services and guaranteed legal certainty to broadcasters. In practice, however, this principle proves insufficient to establish the conditions of fair competition across service providers, since the AVMS directive is a framework for coordination, not harmonization, and some Member States chose to adopt stricter rules than those prescribed in the directive. This may lead to a particularly critical situation whenever a service is explicitly directed towards a given State within the Union although it is established in another one: as currently worded in the directive, the present procedures do not actually allow a Member State to apply its possibly stricter rules to a foreign service aiming to reach its citizens. As a consequence, a severe imbalance is potentially created across operators competing in the same local market, some being subject to stronger obligations than others. In order to avoid damaging "regulatory shopping" strategies, fair and effective competition across all European operators must therefore be guaranteed. In this regard, it is proposed that the European regulation be modified by introducing an exception to the country of origin principle, which would allow a given destination country to apply its own rules to those services which specifically address its population. This proposal does not intend to abolish the country of origin framework, which would remain the general rule, but simply to amend it at the margin, to deal with circumstances where its application would obviously result in a harmful distortion in the marketplace.
The European Commission has also made a legislative proposal to change the copyright framework to allow cross-border portability of online video services, ensuring that consumers can access content they have bought when they travel to other EU countries. Could content portability be a structural threat to national TV industries? What could be the right balance between protecting right holders' revenues and guaranteeing access for consumers?
The European ruling on portability, issued last December, is a most appropriate initiative and brings very good news to all European citizens, who will have access to their national offers of digital content when they travel abroad within the Union. Granting such a significant benefit to travelling and nomadic citizens should nevertheless not threaten the principle of the territoriality of rights, which remains a very important piece of the framework for preserving fair remuneration of authors. Nor should the application of rights' portability hinder the commercial development of European players. The precise conditions of portability therefore now have to be carefully designed, through a clear specification of the criteria characterizing temporary versus permanent residence. Finally, a realistic time frame, one that is not too short, should be set in order to ease operational implementation by operators.
Over the last few years, linear TV revenue growth has tended to stagnate in Western Europe, while on-demand services, mainly SVOD, have been generating increasing traffic with low monetization rates. At the same time, traditional broadcasters currently face stricter rules than on-demand video services in some areas, such as promoting European cultural works. In your view, what would be the right balance between promoting European OTT players and protecting the traditional broadcasting market?
Seeking a "right" balance here is maybe not fully appropriate, since consumers do not share a single profile of usage. Indeed, consumption practices vary greatly, especially according to age and social class, which leads to a wide range of expectations in terms of kind of content, modality of usage and type of viewing device: television, tablet or smartphone. Linear TV and OTT services are more likely complements than substitutes, since they do not address the same audience and are operated under different business models. Therefore, the relevant issue is less that of balancing efforts between online and traditional supply than that of designing tailored offers, well fitted to contrasting individual needs, and of identifying efficient synergies as regards, for instance, works' circulation and cross-promotion. In this direction, a major difficulty must be overcome: market prices of online services are established at a low level, those of SVOD lying around €10 per month, such that they do not enable a single player to make the substantial investment required to produce attractive, competitive and self-sustaining content. Hence, a consolidation of means at the European scale appears to be a necessity. Finally, demand must be stimulated as well as supply and, in this respect, education in media and in European culture is a key factor of success.
Is there any need for concentration in both the service publishing and distribution sectors in order to enable European champions to emerge? Should this solution be supported by national regulators?
A process of concentration across players located at different links of the audio-visual value chain, or even between actors within that chain and outsiders, can already be observed in France, just as in other European countries. Major recent examples in France are the merger of Numericable and SFR, the agreement between Altice and NextRadioTV, the acquisition of Newen by TF1, and the integration of Canal+ within Vivendi. Public policy should of course encourage all industrial strategies which favour a cultural rebalancing, enhance the exposure of the French and European cultural patrimonies and increase their value. Regulators should nevertheless be most attentive in ensuring that major transformations in the audio-visual industry do not pose a threat to fundamental ethical principles, such as freedom of expression, editorial freedom and the independence of information.
Nicolas CURIEN, a member of the Corps des Mines, has sat on the board of the French Regulatory Body for Radio and Television (CSA) since 2015. He is also Emeritus Professor at the Conservatoire National des Arts et Métiers, where he held the chair in "Telecommunications Economics and Policy" from 1992 to 2011, and served as Commissioner of the French Regulatory Body for Telecommunications and Post from 2005 to 2011. An expert in digital economics, he taught at École Polytechnique from 1985 to 2007 and is a founding member of the French National Academy of Engineering.
Nathalie SONNAC (Doctor of Economics) chaired the Information and Communication Department of Paris 2 from 2009 to 2015 and was in charge of the professional Master 2 "Media & Public". An expert in the economics of media, culture and digital technology, she is the author of numerous scientific books and articles in this field. More specifically, she analyses the issues of competition and regulation in the digital age, market interaction, new business models, and the monetization of digital content. She was appointed Commissioner at the Conseil supérieur de l'audiovisuel by the President of the French National Assembly on January 5, 2015 for a six-year mandate.
Interview with Lorena Boix Alonso, EC-DG Connect, Brussels
Conducted by Sally BROUGHTON MICOVA
DW Economic Journal: You recently completed a comprehensive consultation on audiovisual media services with a view to possible revision of the EU's regulatory framework in this area. How much of a call for change is there from stakeholders?
Lorena BOIX ALONSO: The Audiovisual Media Services Directive (AVMSD) was adopted in 2007 and replaced the Television Without Borders Directive of 1989.
Since 2007 – let alone since the '80s – the audiovisual media landscape has changed significantly, in particular due to the phenomenon of media convergence. In light of these changes, we are currently reviewing the Directive and assessing its regulatory fitness, with a view to presenting a new legislative proposal later this year.
The public consultation we organised last year is an important part of this exercise and informs our future actions.
Currently, the AVMSD regulates television broadcasts and on-demand services. It applies to programmes that are "TV-like" and for which providers have editorial responsibility. The preliminary trends of the consultation show some convergence of stakeholders' views on the need to revise the scope of application of the rules. However, respondents are not always clear as to how to do this, what new services should be involved and to what type of rules they should be subject. The main concern seems to be viewers' protection, including minors.
A crucial pillar of the Directive is the so-called country of origin principle. Thanks to this principle, service providers only need to abide by the rules of a Member State rather than of multiple countries - making things simpler for businesses, especially those wishing to develop cross-border. Quite unsurprisingly, most of the respondents to the consultation want to maintain the country of origin principle.
De facto, the country of origin principle has facilitated the growth and proliferation of those services. As of end-2013, 5,141 TV channels were established in the EU, and almost 2,000 of them targeted foreign markets. This share increased from 28% in 2009 – the year of the Directive's implementation – to 38% in 2013 (and from 45% to 68% for the UK). As far as VoD services are concerned, in 2015, on average across Member States, 31% of the VoD services available were established in another EU country.
Another subject on which we observed a clear trend in the responses to the public consultation is the importance of ensuring the independence of national audiovisual regulators.
We have however observed less clear trends regarding other areas covered by the Directive, for example on the way forward for the rules on protection of minors, commercial communications and promotion of EU works.
The independence of regulatory authorities has historically been a touchy subject for some Member States, and it was not really dealt with in either the current Directive or the one before it. However, things seem to be different this time around, particularly with the regulators themselves taking a stand on the issue. Why have things changed, and what exactly is on the table?
The independence of audiovisual regulatory bodies is key to implementing legislation in an impartial manner (i.e., free from influence by political players or industry). When regulatory bodies lack independence, this has a direct impact on the effective transposition and application of EU legislation. This is why many EU regulatory frameworks in other domains (e.g. telecoms, gas, electricity, postal services, personal data protection) require regulatory independence from Member States. In the field of media, regulatory independence is also important for the preservation of a free and pluralistic media system.
However, the Audiovisual Media Services Directive does not impose an explicit obligation on the Member States to create an independent regulatory body.
The ongoing review of the AVMSD is assessing whether the Directive should be reinforced by explicitly requiring Member States to ensure the independence of audiovisual regulatory bodies. As I said, the preliminary results of the public consultation indicate that the majority of respondents would support this position.
The Commission has established the European Regulators Group for Audiovisual Media Services (ERGA), which is – among other tasks – looking precisely into the issue of independence. And yes – in particular following the newly approved amendments to the Polish media law – the Group has recently pointed to the importance of independence.
ERGA called "upon all Member States of the European Union to act to uphold the principle of independence of the media across all European Member States." The Group also called on the Commission "to continue to actively monitor developments and to take all necessary steps to support a free and independent media, including the taking of firm action against the weakening of the necessary institutional arrangements".
How does what your team is working on in relation to audiovisual media services interact with other elements of the Digital Single Market plans such as copyright reform and addressing online intermediaries?
The Digital Single Market (DSM) strategy for Europe calls for a modernisation of the AVMSD to reflect market, consumption and technological changes. It requires the Commission to focus on the scope of the AVMSD and on the nature of the rules applicable to all market players, in particular those for the promotion of European works, the protection of minors, and advertising.
The overall vision of the DSM strategy is to create an internal market for digital content and services and ensure that Europe is a leader in the global digital economy. To meet this objective, the DSM puts forward a range of initiatives beyond the AVMSD review.
The AVMSD review is being coordinated with these other DSM initiatives such as the assessment of the role of online platforms and intermediaries as well as the evaluation of the telecoms framework. Besides, the Commission continues to work on the modernisation of the copyright framework as well as on the implementation of a set of support measures accompanying these legislative changes in order to facilitate cross border access to European content within the digital single market.
What can we do about "the Netflix problem"? Have any good ideas come to light in your consultations in relation to OTT audiovisual services?
We are well aware of the concerns, raised by some in the public consultation, related to the lack of a level playing field, resulting from the different level of requirements introduced by Member States. This relates particularly to the field of promotion of European works.
New players are starting to invest in new content. This is already a trend in the US. US players active on the EU market, e.g. Netflix and Amazon, are also starting to invest in European productions. European VoD players, too, are increasingly financing European content, often in the form of co-financing.
However, it is true that these players do not contribute to the financing of new European content to the same extent as traditional players (television and cinema) do.
All these aspects are being considered in the context of the AVMSD review. With that in mind, and even though all options remain open at this stage, during our assessment we are looking in particular into the best ways to ensure the promotion of European works in on-demand services.
How do you think we are going to be able to encourage European content production and distribution in the future?
The promotion of European works is a key value of the Directive. Its current provisions have contributed to cultural diversity in Europe through the production and distribution of valuable European content. For instance, the 66th Berlinale film festival that took place in February, with a new attendance record, was a very good example of the creative power and diversity of cinema. I believe we can celebrate the fruits of the work of the European audiovisual and film industry, of which we can all be very proud.
However, it is undeniable that the market and viewing habits have changed since the last review of the Directive, in particular with the rapid development of video on demand. Young people increasingly consume audiovisual content online. People want access to audiovisual content whenever and wherever they are, on the device of their choice. Technology has made this possible.
I believe this can be a great opportunity to increase the production and circulation of European films. The Commission is very much keeping this objective in mind in the revision of the AVMSD rules on promoting European works, as well as in the implementation of the Creative Europe MEDIA programme. In addition, the Commission is launching other coordinated initiatives to exploit all available synergies to increase the attractiveness of European films. This requires measures in various areas, on which the Commission is working together with all interested parties, including the audiovisual sector (film producers, authors, distributors, sales agents, VoD services, broadcasters, etc.) as well as public authorities and film funds, in the framework of the European Film Forum.
In December 2015, the Commission adopted the Copyright Communication "Towards a modern, more European copyright framework", which sets out an agenda of non-legislative measures meant to accompany the legislative agenda in order to ensure wider access to audiovisual content across borders. The rationale for these measures is that audiovisual works and films require investment in order to really benefit from the DSM and to be widely accessible. They need to be available in formats and catalogues ready for use, and to be understandable (the issue of language versions).
Finally, the Commission is also deeply engaged in the Creative Europe MEDIA programme, which this year celebrates its 25th anniversary. Through this programme the EU invests roughly €100 million per year in the European film and audiovisual industries and supports projects aimed at enhancing the prominence of European films on VOD platforms.
Lorena BOIX ALONSO has been Head of the Converging Media and Content Unit, Directorate General for Communications Networks, Content and Technology, since July 2012. Formerly, she was Deputy Head of Cabinet of Vice President Neelie Kroes, European Commissioner for the Digital Agenda. During Ms Kroes' mandate as Commissioner for Competition, Lorena Boix Alonso joined her Cabinet in October 2004 and became Deputy Head of Cabinet in May 2008. She holds a Master of Laws, with a focus on Antitrust Law and Intellectual Property, from Harvard Law School. She graduated in Law from the University of Valencia (Spain) and then obtained a Licence Spéciale en Droit Européen from the Université Libre de Bruxelles. She joined the European Commission Directorate-General for Competition in 2003. Prior to that, she worked for Judge Rafael García Valdecasas at the European Court of Justice, served as Deputy Director and Legal Coordinator of the IPR-Helpdesk Project, and worked in private practice in Brussels.
More information on DigiWorld Economic Journal No. 101 "Towards a single digital audiovisual market" on our website
Published in DigiWorld Economic Journal (DWEJ) No. 100
Interview with Mark T. BOHR
Intel Senior Fellow, Technology and Manufacturing Group Director, Process Architecture and Integration Conducted by Gilbert CETTE & Yves GASSOT
C&S: Moore's Law is turning 50. Can you comment on and characterise the progress so far? How important is this in the amazing digital development that we're witnessing?
Mark T. BOHR:
Moore's Law is a driving force of technological, economic and social change and is a foundational force in modern life. While most people have never seen a microprocessor, we use countless devices every day that are made possible by microprocessors and Moore's Law. Microprocessors and related technologies have become so integrated into daily life that they've become indispensable, yet nearly invisible.
Against the regular predictions of its demise, Moore's Law endures and remains essential to today's generation, which has come to expect and enjoy the experiences and opportunities defined by the observation.
Moore's Law will enable us to continuously shrink technology and make it more power efficient, allowing Intel and the industry to rethink where – and in what situations – computing is possible and desirable. Computing can disappear into the objects and spaces that we interact with – even the fabric of our clothes or shoes. New devices can be created with powerful, inexpensive technology; combined with the ability to pool and share more information, this makes new experiences possible.
Moore, in a recent interview, said he thought that in the coming 5 to 10 years his "law" would remain valid… Other observers think it saw a period of acceleration in the decade after 1990, followed by a sharp slowdown in the 2000s. Do you share this view? How do you account for the different analyses? Do you think Moore's Law has been slowed down because of the physical limitations on increasing the number of transistors per chip? Because of the 'diversion' of some R&D spending by chip producers toward the fight against heat generation? Because of the exponential and hence unsustainable increase in the R&D spending it would take to extend Moore's Law? Or for other reasons?
The demise of Moore's Law has been predicted many times. Continuing Moore's Law is getting tougher, but we believe we have a lead versus our competitors. We remain confident in our ability to deliver Moore's Law and expect to continue true cost reduction through leading-edge process technology and generating real product improvements that apply across our product portfolio.
What other constraints might contribute to questioning the validation of Moore's Law?
We can't speak for others in the industry. Intel recognizes that the continuation of Moore's Law provides us with a competitive differentiator and the ability to bring higher-performance and lower-cost technologies to market quicker than our competition. Over the last several decades, we've said that we can see Moore's Law continuing for the next 10 years, and that is still the case.
Faced with these difficulties, what are the various alternative options (3-tier architecture, superconductivity technologies, biochips...) that researchers are working on? Which ones do you find the most promising?
In addition to making the features on a chip smaller, Intel is exploring numerous technologies, including:
1) Heterogeneous integration, in which elements such as radios and sensors are integrated onto one piece of silicon or into one package;
2) Three-dimensional manufacturing with multiple layers of transistors;
3) Approaches beyond traditional CMOS including high mobility materials and new transistor structures with improved electrostatics;
4) New ways of computing including neuromorphic, or brain-inspired, computing and in-memory computing.
In 1966, the cost of constructing a plant for a new chip was $14 million. In 1995, it took $1.5 billion. Today we talk in terms of $10 billion… What is the justification for this cost explosion? Will the trend become established? What impact will this have on the price of components?
Pursuing Moore's Law is getting more expensive in part because the job is getting more difficult. For Intel, the fundamental rationale of Moore's Law continues – even though it's more expensive overall, the price-per-transistor for Intel continues to decrease with each new generation. Intel will continue investing as long as we see a positive return and a competitive advantage.
Intel and some other U.S. firms dominate the microprocessor industry… how do you explain the continued U.S. leadership in this area?
The semiconductor industry started in the U.S. but it certainly isn't a U.S.-only industry today. Intel's chip-making plants can be found in the U.S., Europe, Israel and China and large manufacturers – Samsung and TSMC – are headquartered in Asia. It's a competitive industry, and we're proud that Intel is the world's largest chip company by revenue and is recognized as the leader in the pursuit of Moore's Law.
Mark T. BOHR is an Intel Senior Fellow and director of Process Architecture and Integration at Intel Corporation. He is a member of Intel's Logic Technology Development group located in Hillsboro, Oregon, where he is responsible for directing process development activities for Intel's advanced logic technologies. He joined Intel in 1978 and has been responsible for process integration and device design on a variety of process technologies for memory and microprocessor products. He is currently directing development activities for Intel's 7 nm logic technology. BOHR is a Fellow of the Institute of Electrical and Electronics Engineers and was the recipient of the 2012 IEEE Jun-ichi Nishizawa Medal and 2003 IEEE Andrew S. Grove award. In 2005 he was elected to the National Academy of Engineering. He holds 73 patents in the area of integrated circuit processing and has authored or co-authored 49 published papers.
More information on DigiWorld Economic Journal No. 100 "Digital innovation vs. secular stagnation?" on our website
Published in DigiWorld Economic Journal (DWEJ) No. 100
Interview with Philippe AGHION
Collège de France, London School of Economics
Conducted by Gilbert CETTE & Yves GASSOT
C&S: Is more competition always favourable to boost innovation? Many representatives of the telecom industry are arguing that the innovation and investment in this sector is badly impacted by the intensity of competition, do you share this analysis?
Philippe AGHION: My work with Richard Blundell and co-authors shows that competition boosts innovation for firms that are close to the technological frontier (the escape-competition effect), whereas it may discourage innovation in firms far below the technological frontier (the discouragement effect). Overall, the effect of competition on innovation follows an inverted U: innovation increases with competition at low initial levels of competition and decreases with competition at high initial levels.
Productivity has slowed down in the U.S. and in the main developed countries since the mid-2000s. How do you explain this slowdown, given the dramatic momentum of the digital economy? Are you optimistic about a new productivity surge in the near future?
Part of the slowdown in the U.S. may be due to the fact that the ICT wave has partly run out of steam. But I also believe that innovation is not properly taken into account when measuring productivity growth, and this is particularly true in sectors that experience a high degree of firm turnover and where innovations are made by newcomers to the market. In the long run I am optimistic for at least two reasons. First, the ICT revolution has improved the technology for producing new ideas. Second, with the advent of globalization, the returns to innovation have greatly increased.
Are ICTs the main driver for innovation allowing for a productivity surge in the future?
I think that with 3D printing and the cloud, the ICT sector still has glorious days ahead. But I also anticipate breakthroughs in other sectors, for example in renewable energy and in the health/biotech sector.
In your view, is innovation a factor in increasing inequality?
My recent work shows that innovation contributes to increasing the fraction of income earned by the richest 1% or 0.1%. But this inequality is temporary, as innovation rents are eroded by imitation and disappear when current innovations are eventually replaced by newer innovations (the Schumpeterian process of "creative destruction"). Moreover, my co-authors and I show that innovation does not increase overall inequality and that it enhances social mobility (again as a result of creative destruction).
Philippe AGHION is a Professor at the Collège de France and at the London School of Economics, and a fellow of the Econometric Society and of the American Academy of Arts and Sciences. His research focuses on the economics of growth. With Peter HOWITT, he pioneered the so-called Schumpeterian Growth paradigm, which was subsequently used to analyze the design of growth policies and the role of the state in the growth process. Much of this work is summarized in their joint books Endogenous Growth Theory (MIT Press, 1998) and The Economics of Growth (MIT Press, 2009), in his book with Rachel GRIFFITH on Competition and Growth (MIT Press, 2006), and in his survey "What Do We Learn from Schumpeterian Growth Theory" (joint with U. AKCIGIT & P. HOWITT). In 2001, Philippe Aghion received the Yrjö Jahnsson Award for the best European economist under age 45, and in 2009 he received the John von Neumann Award.
More information on DigiWorld Economic Journal No. 100 "Digital innovation vs. secular stagnation?" on our website
Published in DWEJ No. 100
Interview with Joel MOKYR
Professor of Arts and Sciences and Professor of Economics and History, Northwestern University, USA
Sackler Professor (by spec. appt.), Tel Aviv University, Israel
Conducted by Gilbert CETTE & Yves GASSOT
C&S: As a well-known economic historian, you have done extensive work and research on industrial revolutions and the conditions for the emergence of British leadership in the 19th century. This could have led you, like your colleague and friend from Northwestern University – Robert Gordon – to relativize digital innovation, with the fear that in the absence of breakthrough inventions, the world is returning to a long period of stagnation. But this isn't the case. And while some people recognize the power of the digital transformation yet tend to focus on the damage and suffering it can cause, in your own case, while you don't deny the short-term consequences, you see the typical characteristics of the creative destruction so dear to Schumpeter.
How do you justify your optimism about the digital revolution at a time when productivity has been slowing down in all developed countries since the early 2000s, and the pace of productivity growth is very low? To what extent can this slowdown be accounted for by the deficits of our statistical system (the limits of what is taken into account by GDP)? By the delay in spreading digital innovation throughout the various sectors? By the delay in adapting and training the workforce? Or by the fact that digital innovation potential (AI, 3D printing, ...) will essentially be realized in the future?
Joel MOKYR: To start off, I don't see the future of technological progress as merely defined by the "digital revolution." AI, robots, 3D printing and such will be an important part of our technological future, but I see progress on a much broader front. Technology will continue to develop at an ever faster rate. But much of that will be necessary to repair the damage that previous innovation has caused. Climate change is only the best known of a whole array of phenomena in which past advances have had unknown and hidden costs that now have to be paid. These costs will be lower if we get better technology, but then that technology will have unintended and unpredicted consequences. And so on. There is progress, of course, but it is not linear, it is not even monotonic. If we knew precisely in advance what every innovation implied, it would not be much of an innovation.
You have on occasion emphasized the interactions between the progress of instruments, breakthrough innovation in technology and scientific invention. How would you apply the analyses you developed for the 18th and 19th century to the components of the digital revolution today?
Compared to the tools we have today for scientific research, Galileo's and Pasteur's look like stone age tools. Yes, we build far better microscopes and telescopes and barometers today, but digitalization has penetrated every aspect of science. It has led to the re-invention of invention. It is not just "IT" or "communications." Huge searchable databanks, quantum chemistry simulation, and highly complex statistical analysis are only some of the tools that the digital age places at science's disposal. Vastly more sophisticated tools – just think of the Betzig-Hell nanoscopes for which the inventors earned a Nobel Prize last year – will allow us to work at smaller and smaller levels of both materials and living things.
Materials are at the core of our production. The terms Bronze and Iron Ages signify their importance; the great era of technological progress between 1870 and 1914 was wholly dependent on cheap and ever-better steel. But what is happening to materials now is nothing short of a sea change, with new resins, ceramics, and entirely new solids designed in silico and developed at the nano-technological level. These promise materials nature never dreamed of, delivering custom-ordered properties in terms of hardness, resilience, elasticity, and so on. New resins, advanced ceramics, carbon nanotubes and other new solids have all come online. Graphene, the new super-thin wonder material, is another substance that promises to revolutionize production in many lines. The new research tools of the digital age have transformed materials science.
Of perhaps even more revolutionary importance is the powerful technology developed by Stanley Cohen and Herbert Boyer in the early 1970s, in which they succeeded in creating transgenic organisms through the use of micro-organisms. Genetic selection is an old technology: nature never intended to create poodles. But genetic engineering is to artificial selection what a laser driven fine-tuned surgical instrument is to a meat-axe. The potential economic significance of genetic engineering is simply staggering, as it completely changes the relationship between humans and all other species on the planet. Ever since the emergence of agriculture and husbandry, people have "played God" and changed their biological and topographical environment, creating new phenotypes in plants and animals. Genetic engineering means we are just far better at it.
Do you think that in the long-term future, productivity gains will be mainly driven by breakthrough innovations like the creation of new microprocessors with enhanced performance or the implementation of existing innovations in several areas? And in the latter case, isn't there a risk that the induced productivity gains will gradually dwindle?
I don't believe they will ever dwindle. But I think that productivity growth as traditionally measured will become largely irrelevant in describing what is really going on. Such techniques were designed to measure process innovation, which allowed firms to produce wheat and steel with fewer inputs. They are much harder to use to measure quality improvements, many of them subtle and often hard to quantify (e.g. the introduction of airbags into cars or more sophisticated diagnostic machinery). It is even harder for traditional NIPA to deal with entirely new products such as anesthesia or microwave ovens or online encyclopedias.
For some, the collaborative economy is one of the most fruitful products of the internet. Should we see this primarily as an illustration of the capacity of digital to reduce transaction costs or as the sign of a possible surpassing of the market economy?
Technology will change the market economy. The "share economy" (now already known to some as the "uber-economy") has transformed urban transportation, and airbnb is transforming tourism. But these will be dwarfed by the impact of digital technology on the labor market, as already illustrated by taskrabbit handymen, upcounsel on-demand attorneys, urbansitter for babysitting and healthtap for on-line doctors. But this is just scratching the surface. Digital technology will change the labor market as much as the factory did during the Industrial Revolution. The factory eventually replaced the home as the main location where production took place. That pendulum may swing back, especially if mass customization through home manufacturing (somewhat misleadingly called 3D printing) starts spreading. If both Robert Reich and Jeremy Rifkin are panicking about this, it cannot be all bad.
Your work has been partly guided by the question as to why the industrial revolution primarily took place in the UK rather than in Germany or France? Can you draw a parallel with the North American domination that we are seeing today in microprocessors, software and the internet? What conditions have favored this supremacy? What factors could threaten it? What priority changes could enable Europe to acquire the necessary conditions to compete with the digital domination of the US?
I am not sure that I am still all that overawed by the question of "why Britain first". The parallel is the putative "domination" of Americans today in high-tech. Rather than seeing the leader as the locomotive that pulls the entire train forward, I think of this as an electric train, in which the motive power is external and the lead car is there more or less by accident. Technology today is the result of a multinational effort in which boundaries mean less and less. Finland led in cellphones, Israel in flash storage, France in nuclear power – so what? Does that mean they alone can use it? Let's face it, in today's world, if an invention is made somewhere, it is made everywhere. Silicon Valley is in the US, but half of the people working there are foreign-born. They could be anywhere (as long as they are together). Of course, if a country has really terrible institutions, such as Putin's Russia or Khamenei's Iran, it is not only unlikely to generate new technology but may even find it hard to absorb it. But nations such as Norway or Switzerland will always be at the frontier even if they contribute relatively little to pushing it out.
Many observers agree that the 21st century will be marked by the emergence of China in the forefront of the global economy. Do you think this country has the necessary conditions or is developing the conditions to establish its supremacy with new leadership in digital technology sectors?
No. Their institutions are not quite as bad as Russia's or Nigeria's, which are corrupt to the core and where a small kleptocracy extinguishes entrepreneurship. But to have technological progress, and not just a thriving, well-functioning market economy, more is needed. What you need is not only the rule of law, respect for property and human rights, and the enforcement of contracts. What you need is pluralism, tolerance, and freedom of expression and association. You need political competition and decentralization, in which the ruling elite is held accountable and the government is constrained in what it can do to its citizens. We need to keep in mind that innovators were and are deviants, people who in some way are different and abnormal, eccentric perhaps, and in conformist societies such people are in some way suppressed. Europe's advances started in earnest when those who thought "outside the box" no longer feared being accused of "black magic" or heresy. Chinese history is a fascinating story of how incredible creativity and sophistication were essentially wasted after the Song dynasty, and China fell behind the West. Mutatis mutandis, the same is true of the Soviet Union. The potential of Soviet Russia was huge, but bad institutions channeled its creativity into Sputniks, MiGs and Katyushas and little else.
Joel MOKYR is the Robert H. Strotz Professor of Arts and Sciences and Professor of Economics and History at Northwestern University and Sackler Professor (by special appointment) at the Eitan Berglas School of Economics at the University of Tel Aviv. He specializes in economic history and the economics of technological change and population change. He is the author of Why Ireland Starved: An Analytical and Quantitative Study of the Irish Economy, The Lever of Riches: Technological Creativity and Economic Progress, The British Industrial Revolution: An Economic Perspective, The Gifts of Athena: Historical Origins of the Knowledge Economy, and The Enlightened Economy. His most recent book is A Culture of Growth, to be published by Princeton University Press in 2016. He serves as editor in chief of a book series, the Princeton University Press Economic History of the Western World. He serves as chair of the advisory committee of the Institutions, Organizations, and Growth program of the Canadian Institute of Advanced Research. Prof. Mokyr has an undergraduate degree from the Hebrew University of Jerusalem and a Ph.D. from Yale University. He has taught at Northwestern since 1974, and has been a visiting Professor at Harvard, the University of Chicago, Stanford, the Hebrew University of Jerusalem, the University of Tel Aviv, University College of Dublin, and the University of Manchester. He is a fellow of the American Academy of Arts and Sciences, a foreign fellow of the Royal Dutch Academy of Sciences, the Accademia Nazionale dei Lincei and a Fellow of the Econometric Society and the Cliometric Society. His books have won a number of important prizes, and in 2006 he was awarded the biennial Heineken Prize by the Royal Dutch Academy of Sciences for a lifetime achievement in historical science. In 2015 he was awarded the Balzan Prize for Economic History.