Published in DigiWorld Economic Journal DWEJ No. 101
Interview with Nicolas CURIEN & Nathalie SONNAC
Commissioners, Conseil supérieur de l'audiovisuel (CSA) (*)
Conducted by Alexandre JOLIN
(*) This interview only reflects the views of the contributors, not the CSA's official positions.
C&S: Since the late 1970s, the European Commission has aimed to harmonize the regulatory landscape for audio-visual media in Europe. The TVWF and then the AVMS directives created a legal framework allowing the circulation of linear TV and on-demand audio-visual media services in Europe. As part of the European Commission's Digital Single Market strategy, a review of the AVMSD began in 2015 and legislative proposals are due to be set out in 2016. As the regulatory body for France, a Member State, how is the CSA involved in these consultations? In your view, which issues should be resolved as a priority?
Nicolas CURIEN & Nathalie SONNAC: Intending to bring its viewpoint as a regulator and its expertise in the practice of regulation, the CSA contributed to the European Commission's consultation on the review of the AVMS directive, entitled "A framework for the audiovisual media in the 21st century". The CSA also participated in the cross-ministerial preparation of the French authorities' positions and provided a contribution to the French answer to the AVMS consultation. Above all, the CSA plays a very active role in the European Regulators Group for Audiovisual Media Services (ERGA), which was chaired during its first two years of existence (2014-2015) by Olivier Schrameck, the chairman of the CSA. Created in February 2014 by the European Commission as an advisory body examining issues related to media services, ERGA now stands as a key institutional innovation, pushing forward European audio-visual policy matters. For us, as national regulators, working together within this structure represents a strong opportunity to carry out an in-depth, forward-looking analysis of the audio-visual sector and to stimulate the emergence of common initiatives. ERGA is in charge of assisting the Commission in the revision of its legislative instruments, as is now the case for the AVMS directive.
Audio-visual services have changed drastically since the adoption of the previous directive in 2007. The present situation, resulting from the dynamics of "convergence", was not anticipated and calls for several substantial adjustments in order to take into account the development of on-demand non-linear services and of interactivity, as well as the usage of associated data. Moreover, the arrival into the French and European audio-visual markets of large, worldwide OTT players, such as Netflix or Google, raises a new kind of issue, one which must be addressed at the European scale. As specifically regards the revision of the AVMS directive, ERGA produced three reports, published in January 2016, respectively on the independence of national audio-visual regulatory authorities, on the possible extension of the directive to new online players, and on the protection of minors. These reports include recommendations which were unanimously approved by the 28 regulators of the European Union's Member States. ERGA thus invites the Commission to incorporate its proposals in the revised directive. An additional report on the territorial competency of regulators will be issued in the course of spring 2016.
One of the proposals on the table is to extend to online video-sharing platforms the same obligations placed on TV broadcasters and on-demand TV-like services. Is this a realistic way to complement the existing film and audio-visual financing system?
This issue goes well beyond the particular case of video-sharing platforms, as it also concerns all digital intermediaries commonly designated under the generic term of "platforms": content distributors, content aggregators, providers of applications, sharing platforms or suppliers of devices; that is, all players which hold a position between content and usage, making them gatekeepers of access to content. All actors who develop a strategy around content and/or are involved in the exposure and promotion of content, especially through algorithms, are concerned. Since these new operators orient consumers and deliver prescriptions to them, they doubtless play an editorial role which is similar, up to a certain extent, to that played by traditional audio-visual editors. It therefore seems both sensible and in line with the driving principles of audio-visual regulatory policy to set up an adapted regime of obligations for new players. Such a regime should of course not ignore the necessity of sustaining the pace of innovation: when contemplating a new deal and a new toolkit for audio-visual regulation, one must not at the same time slow down the growth of innovative services which contribute greatly to widening the exposure of works and constitute a major source of creativity in the audio-visual sector.
One size does not fit all, and not all platforms should be subject to the same degree of regulation: a small platform should not be treated like YouTube. Proportionality should thus be set as a guideline, and the regulator should focus primarily on platforms which have a significant impact on the market. Moreover, as it would clearly prove inefficient to impose local obligations on global players, a common harmonized framework has to be defined within the European Union. Achieving proportionality within a renewed regulatory scheme designed for digital intermediaries also requires that rules existing for traditional editors be adapted, in order to reach a satisfactory match between obligations and the specific characteristics of the new actors. More generally, traditional regulation should not be transposed unchanged onto the digital world, a world in which the speed of evolution is very high, in which some players are active at an international scale, and in which business models differ greatly from classical ones. Accordingly, effective regulation should be based on a triptych associating public policy, users and operators, and could rely mainly upon co-regulation and self-regulation. Such a perspective is precisely consistent with ERGA's present undertakings, which consist in identifying audio-visual-centric platforms, rather than all platforms, with the objective of aligning their behaviour with the traditional goals of audio-visual public policy, although under a proportionate regulatory approach. Indeed, the public policy goals which underlie the existing obligations set for traditional actors, such as the protection of minors, copyright enforcement, investment in creation, or fair competition, still prevail for digital platforms. In the Digiworld, goals remain the same; modalities may differ!
With the rise of international OTT services and the ongoing consolidation of the European content industry, how can policymakers best safeguard and promote cultural diversity across Europe?
Reaching a critical size through consolidation is a necessary step to preserve a model of diversified content in Europe. This does not amount to geographic confinement, but rather calls for a more extensive and international approach, strongly based upon European cultural specificities. This global strategy should concern production, traditional edition and new digital platforms alike. Europe holds a solid position in terms of local content production and must derive benefit from it. However, the momentum has to be generated through a coordinated policy, as it cannot result from the separate actions of isolated national players. In this regard, regulators also have a role to play, and they must rapidly move towards a more inter-institutional approach.
In their efforts to promote the diversity of content, European editors should use linear TV, which is still by far the dominant mode in consumers' practice, as a kind of "factory" for producing pieces of original content destined to become brands in their own right and to move towards non-linear usage on electronic platforms, after a first lifetime spent inside the grids of linear TV in order to gain recognition. As access through networks is a necessary condition for access to content, synergies between media companies and telcos should also be considered in order to extend the scope of content distribution and to reduce its cost. Moreover, promoting diversity heavily depends on the ability of creators to finance their content and make it available to consumers. In this respect, fair access to all distribution channels, especially online platforms, stands as a key enabling factor: hence the strong attention of regulators to the issues of net neutrality and content visibility.
Today, the OTT video industry is mainly driven by non-European players such as Netflix, Apple or Liberty Global, which, despite its British base, is controlled by a US holding company. In your view, what could be done to ensure the development of strong European OTT players and the sustainability of the traditional broadcasting market?
This question relates in part to the issue of rights' territoriality. The right balance has to be found between the two conflicting objectives of maximizing rights' monetization, on the one hand, and extending the exposure of content, on the other, in a fast-moving context where the growth of digital platforms makes territorial enclosure unsustainable against bypass or piracy. Since reaching such a balance likely means substantial change in the present contractual arrangements, a concerted sector-wide process is needed, gathering together rights holders, editors and distributors.
At the very least, large national players should form partnerships and jointly launch pan-European digital services with a strong identity. As already mentioned above, these developments cannot take place at a national scale while the main international competitors, such as Netflix, operate worldwide, offer worldwide content, and are less and less subject to territorial constraints; this is especially the case as regards TV series available on SVOD services, such as House of Cards, exploited under a "free" regime. In this revolutionary context, where the historical category of the TV channel might sooner or later be replaced by the upcoming category of brand-content, the sustainability of traditional players clearly depends on their ability and willingness to co-design adaptive and cooperative ways of deriving as much value as possible from their content.
On-demand video services are currently regulated in their "country of origin". Some players denounce this as a distortion of competition, because legal obligations can differ greatly from one Member State to another. As was already done for VAT last year, would it be advisable or possible to apply a "user-centric" approach, setting the focal point on the end-user instead of the service publisher?
The country-of-origin principle certainly helped to create a common audio-visual market, as it facilitated the cross-border circulation of services, guaranteeing legal certainty to broadcasters. In practice, however, this principle proves insufficient to set the conditions of fair competition across service providers, since the AVMS directive is a framework for coordination, not harmonization, and some Member States chose to adopt stricter rules than those prescribed in the directive. This may lead to a particularly critical situation whenever a service is explicitly directed towards a given State within the Union although it is established in another one: as currently worded in the directive, the present procedures do not actually allow a Member State to apply its possibly stricter rules to a foreign service aiming to reach its citizens. As a consequence, a severe imbalance is potentially created across operators competing in the same local market, some being subject to stronger obligations than others. In order to avoid damaging "regulatory shopping" strategies, fair and effective competition across all European operators must therefore be guaranteed. In this regard, it is proposed that the European regulation be modified by introducing an exception to the country-of-origin principle, which would allow a given destination country to apply its own rules to those services which specifically address its population. This proposal does not intend to abolish the country-of-origin principle, which would remain the general rule, but merely to amend it at the margin, to deal with circumstances where its application would obviously result in a harmful distortion in the marketplace.
The European Commission has also made a legislative proposal to change the copyright framework to allow cross-border portability of online video services, ensuring that consumers can access content they have bought when travelling in other EU countries. Could content portability be a structural threat to national TV industries? What could be the right balance between protecting right holders' revenues and guaranteeing access for consumers?
The European ruling on portability, issued last December, is a most appropriate initiative and brings very good news to all European citizens, who will have access to their national offers of digital content when they travel abroad within the Union. Yielding such a significant benefit to travelling and nomadic citizens should nevertheless not threaten the principle of rights' territoriality, which remains a very important piece of the framework for preserving the fair remuneration of authors. The application of rights' portability should also not hinder the commercial development of European players. Therefore, the precise conditions of portability now have to be carefully designed, through a clear specification of the criteria characterizing temporary versus permanent residence. Finally, a realistic time frame should be set, not too short a one, in order to ease operational implementation by operators.
Over the last few years, linear TV revenue growth has tended to stagnate in Western Europe, while on-demand services, mainly SVOD, have been generating increasing traffic with low monetization rates. On the other hand, traditional broadcasters currently face stricter rules than on-demand video services in some areas, such as promoting European cultural works. In your view, what would be the right balance between promoting European OTT players and protecting the traditional broadcasting market?
Seeking a "right" balance here is maybe not fully appropriate, for consumers do not all share one and the same profile of usage. Consumption practices indeed vary greatly, especially according to age and social class, which leads to a wide scope of expectations in terms of kind of content, modality of usage and type of viewing device: television, tablet or smartphone. Linear TV and OTT services are more likely complements than substitutes, since they do not address the same audience and are operated under different business models. Therefore, the relevant issue is less that of balancing efforts between online and traditional supply than that of designing tailored offers, well fitted to contrasting individual needs, and identifying efficient synergies as regards, for instance, the circulation and cross-promotion of works. In this direction, a major difficulty must be overcome: market prices of online services are established at a low level, those of SVOD lying around €10 per month, so that they do not enable a single player to make the substantial investment required to produce attractive, competitive and self-sustaining content. Hence, a consolidation of means at the European scale appears a necessity. Finally, demand must be stimulated as well as supply and, in this respect, education in media and in European culture is a key factor of success.
Is there any need for concentration in both the service publishing and distribution sectors in order to allow European champions to emerge? Should this solution be supported by national regulators?
A process of concentration across players located at different links within the audio-visual value chain, or even between actors present within that chain and outsiders, may already be observed in France, just as in other European countries. In France, major recent examples are the merger of Numericable and SFR, the agreement between Altice and NextRadioTV, the acquisition of Newen by TF1, and the integration of Canal+ within Vivendi. Public policy should of course encourage all industrial strategies which favour a cultural rebalancing, enhance the exposure of the French and European cultural heritage and increase its value. Regulators should nevertheless be most attentive in ensuring that major transformations in the audio-visual industry do not threaten fundamental ethical principles, such as freedom of expression, editorial freedom and the independence of information.
Nicolas CURIEN, a member of the Corps des Mines, has sat on the board of the French Regulatory Body for Radio and Television (CSA) since 2015. He is also an emeritus professor at the Conservatoire National des Arts et Métiers, where he held the chair "Telecommunications Economics and Policy" from 1992 to 2011, and was a Commissioner in the French Regulatory Body for Telecommunications and Post from 2005 to 2011. An expert in digital economics, he taught at École Polytechnique from 1985 to 2007 and is a founding member of the French National Academy of Engineering.
Nathalie SONNAC (Doctor of Economics) chaired the Information and Communication Department of Paris 2 from 2009 to 2015 and was in charge of the professional Master 2 "Media & Public". An expert in the economics of media, culture and digital technology, she is also the author of numerous scientific books and articles in this field. More specifically, she analyses the issues of competition and regulation in the digital age, market interaction, new business models, and the monetization of digital content. She was appointed Commissioner at the Conseil supérieur de l'audiovisuel by the President of the French National Assembly on January 5, 2015 for a six-year mandate.
More information on DigiWorld Economic Journal No. 101 "Towards a single digital audiovisual market" on our website
Interview with Lorena Boix Alonso, EC-DG Connect, Brussels
Conducted by Sally BROUGHTON MICOVA
DW Economic Journal: You recently completed a comprehensive consultation on audiovisual media services with a view to possible revision of the EU's regulatory framework in this area. How much of a call for change is there from stakeholders?
Lorena BOIX ALONSO: The Audiovisual Media Services Directive (AVMSD) was adopted in 2007 and replaced the Television Without Borders Directive of 1989.
Since 2007 – let alone since the 1980s – the audiovisual media landscape has changed significantly, in particular due to the phenomenon of media convergence. In light of these changes, we are currently reviewing the Directive and assessing its regulatory fitness, with a view to presenting a new legislative proposal later this year.
The public consultation we organised last year is an important part of this exercise and informs our future actions.
Currently, the AVMSD regulates television broadcasts and on-demand services. It applies to programmes that are "TV-like" and for which providers have editorial responsibility. The preliminary trends of the consultation show some convergence of stakeholders' views on the need to revise the scope of application of the rules. However, respondents are not always clear as to how to do this, what new services should be involved and to what type of rules they should be subject. The main concern seems to be viewers' protection, including minors.
A crucial pillar of the Directive is the so-called country of origin principle. Thanks to this principle, service providers only need to abide by the rules of a single Member State rather than those of multiple countries - making things simpler for businesses, especially those wishing to develop cross-border. Quite unsurprisingly, most of the respondents to the consultation want to maintain the country of origin principle.
De facto, the country of origin principle has facilitated the growth and proliferation of those services. As of end-2013, 5,141 TV channels were established in the EU, and almost 2,000 of them targeted foreign markets. This share increased from 28% in 2009 - the year of implementation of the Directive - to 38% in 2013 (from 45% to 68% for the UK). As far as VoD services are concerned, in 2015, on average across Member States, 31% of the VoD services available were established in another EU country.
Another subject on which we observed a clear trend in the responses to the public consultation is the importance of ensuring the independence of national audiovisual regulators.
We have however observed less clear trends regarding other areas covered by the Directive, for example on the way forward for the rules on protection of minors, commercial communications and promotion of EU works.
The independence of regulatory authorities has historically been a touchy subject for some Member States, so it was not really dealt with in the current Directive or in the one before it. However, things seem to be different this time around, particularly with the regulators themselves taking a stand on the issue. Why have things changed, and what exactly is on the table?
The independence of audiovisual regulatory bodies is key to the impartial implementation of legislation (i.e., free from influence by political players or industry). When regulatory bodies lack independence, this has a direct impact on the effective transposition and application of EU legislation. This is why many EU regulatory frameworks in other domains (e.g. telecoms, gas, electricity, postal services, personal data protection) require regulatory independence from Member States. In the field of media, regulatory independence is also important for the preservation of a free and pluralistic media system.
However, the Audiovisual Media Services Directive does not impose an explicit obligation on the Member States to create an independent regulatory body.
The ongoing review of the AVMSD is assessing whether the Directive should be reinforced by explicitly requiring Member States to ensure the independence of audiovisual regulatory bodies. As I said, the preliminary results of the public consultation indicate that the majority of respondents would support this position.
The Commission has established the European Regulators Group for Audiovisual Media Services (ERGA), which is – among other tasks – looking precisely into the issue of independence. And yes – in particular following the newly approved amendments to the Polish media law – the Group has recently pointed to the importance of independence.
ERGA called "upon all Member States of the European Union to act to uphold the principle of independence of the media across all European Member States." The Group also called on the Commission "to continue to actively monitor developments and to take all necessary steps to support a free and independent media, including the taking of firm action against the weakening of the necessary institutional arrangements".
How does what your team is working on in relation to audiovisual media services interact with other elements of the Digital Single Market plans such as copyright reform and addressing online intermediaries?
The Digital Single Market (DSM) strategy for Europe calls for a modernisation of the AVMSD to reflect market, consumption and technological changes. It requires the Commission to focus on the scope of the AVMSD and on the nature of the rules applicable to all market players, in particular those for the promotion of European works, the protection of minors, and advertising.
The overall vision of the DSM strategy is to create an internal market for digital content and services and ensure that Europe is a leader in the global digital economy. To meet this objective, the DSM puts forward a range of initiatives beyond the AVMSD review.
The AVMSD review is being coordinated with these other DSM initiatives such as the assessment of the role of online platforms and intermediaries as well as the evaluation of the telecoms framework. Besides, the Commission continues to work on the modernisation of the copyright framework as well as on the implementation of a set of support measures accompanying these legislative changes in order to facilitate cross border access to European content within the digital single market.
What can we do about "the Netflix problem"? Have any good ideas come to light in your consultations in relation to OTT audiovisual services?
We are well aware of the concerns, raised by some in the public consultation, related to the lack of a level playing field resulting from the differing levels of requirements introduced by Member States. This relates particularly to the field of the promotion of European works.
New players are starting to invest in new content. This is already a trend in the US. US players active on the EU market, e.g. Netflix and Amazon, are also starting to invest in European productions. European VoD players, too, are increasingly financing European content, often in the form of co-financing.
However, it is true that these players do not contribute to the financing of new European content to the same extent as traditional players (television and cinema) do.
All these aspects are being considered in the context of the AVMSD review. With that in mind, and even though all options remain open at this stage, during our assessment we are looking in particular into the best ways to ensure the promotion of European works in on-demand services.
How do you think we are going to be able to encourage European content production and distribution in the future?
The promotion of European works is a key value of the Directive. The current provisions of the Directive have contributed to cultural diversity in Europe through the production and distribution of valuable European content. For instance, the 66th Berlinale film festival that took place in February was a very good example of the creative power and diversity of cinema, with a new attendance record. I believe we can celebrate the fruits of the work of the European audiovisual and film industry, of which we can all be very proud.
However, it is undeniable that the market and viewing habits have changed since the last review of the Directive, in particular with the rapid development of video on demand. Young people increasingly consume audiovisual content online. People want access to audiovisual content whenever and wherever they are, on the device of their choice. Technology has made this possible.
I believe this can be a great opportunity to increase the production and circulation of European films. The Commission is very much keeping this objective in mind in the revision of the AVMSD rules on promoting European works, as well as in the context of the implementation of the Creative Europe MEDIA programme. In addition, the Commission is launching other coordinated initiatives to exploit all available synergies to increase the attractiveness of European films. This requires measures in various areas on which the Commission is working together with all interested parties, including the audiovisual sector (film producers, authors, distributors, sales agents, VoD services, broadcasters, etc.) as well as public authorities and film funds, within the framework of the European Film Forum.
In December 2015, the Commission adopted the Copyright Communication "Towards a modern, more European copyright framework", which sets an agenda of non-legislative measures meant to accompany the legislative agenda in order to ensure wider access to audiovisual content across borders. The rationale for these measures is that audiovisual works and films require investment in order to really benefit from the DSM and to be widely accessible. Audiovisual works and films need to be available in formats and catalogues ready for use and to be understood (the issue of language versions).
Finally, the Commission is also deeply engaged in the Creative Europe MEDIA programme, which this year celebrates its 25th anniversary. Through this programme the EU invests roughly €100 million per year in European films and audiovisual industries and supports projects aimed at enhancing the prominence of European films on VoD platforms.
Lorena BOIX ALONSO has been Head of the Converging Media and Content Unit in the Directorate-General for Communications Networks, Content and Technology since July 2012. Formerly, she was Deputy Head of Cabinet of Vice-President Neelie Kroes, European Commissioner for the Digital Agenda. During Ms Kroes' mandate as Commissioner for Competition, Lorena Boix Alonso joined her Cabinet in October 2004 and became Deputy Head of Cabinet in May 2008. She holds a Master of Laws, with a focus on antitrust law and intellectual property, from Harvard Law School. She graduated in Law from the University of Valencia (Spain) and then obtained a Licence Spéciale en Droit Européen from the Université Libre de Bruxelles. She joined the European Commission's Directorate-General for Competition in 2003. Prior to that, she worked for Judge Rafael García Valdecasas at the European Court of Justice, served as Deputy Director and Legal Coordinator of the IPR-Helpdesk Project, and was in private practice in Brussels.
Published in DigiWorld Economic Journal DWEJ No. 100
Interview with Mark T Bohr
Intel Senior Fellow, Technology and Manufacturing Group Director, Process Architecture and Integration
Conducted by Gilbert CETTE & Yves GASSOT
C&S: Moore's Law is turning 50. Can you comment on and characterise the progress so far? How important is this in the amazing digital development that we're witnessing?
Mark T. BOHR: Moore's Law is a driving force of technological, economic and social change and is a foundational force in modern life. While most people have never seen a microprocessor, we use countless devices every day that are made possible by microprocessors and Moore's Law. Microprocessors and related technologies have become so integrated into daily life that they've become indispensable, yet nearly invisible.
Despite regular predictions of its demise, Moore's Law endures and remains essential to today's generation, which has come to expect and enjoy the experiences and opportunities defined by the observation.
Moore's Law will enable us to continuously shrink technology and make it more power efficient, allowing Intel and the industry to rethink where – and in what situations – computing is possible and desirable. Computing can disappear into the objects and spaces that we interact with – even the fabric of our clothes or shoes. New devices can be created with powerful, inexpensive technology and combining this with the ability to pool and share more information, new experiences become possible.
Moore, in a recent interview, said he thought that in the coming 5 to 10 years his "law" would remain valid… Other observers think it saw a period of acceleration in the decade after 1990, followed by a sharp slowdown in the 2000s. Do you share this view? How do you account for the different analyses? Do you think Moore's Law has slowed down because of the physical limitations to increasing the number of transistors per chip? Because of the 'diversion' of some R&D spending on the part of chip producers toward the fight against heat generation? Because of the exponential, and hence unsustainable, increase in the R&D spending it would take to extend Moore's Law? Or for other reasons?
The demise of Moore's Law has been predicted many times. Continuing Moore's Law is getting tougher, but we believe we have a lead versus our competitors. We remain confident in our ability to deliver Moore's Law and expect to continue true cost reduction through leading-edge process technology and generating real product improvements that apply across our product portfolio.
What other constraints might call the continued validity of Moore's Law into question?
We can't speak for others in the industry. Intel recognizes that the continuation of Moore's Law provides us with a competitive differentiator and the ability to bring higher-performance and lower-cost technologies to market quicker than our competition. Over the last several decades, we've said that we can see Moore's Law continuing for the next 10 years, and that is still the case.
Faced with these difficulties, what are the various alternative options (3D architectures, superconductivity technologies, biochips...) that researchers are working on? Which do you find the most promising?
In addition to making the features on a chip smaller, Intel is exploring numerous technologies, including:
1) Heterogeneous integration, in which elements such as radios and sensors are integrated onto a single piece of silicon or into a single package;
2) Three-dimensional manufacturing with multiple layers of transistors;
3) Approaches beyond traditional CMOS including high mobility materials and new transistor structures with improved electrostatics;
4) New ways of computing including neuromorphic, or brain-inspired, computing and in-memory computing.
In 1966, the cost of constructing a plant for a new chip was $14 million. In 1995, it took $1.5 billion. Today we talk in terms of $10 billion… What is the justification for this cost explosion? Will this trend continue? What impact will it have on the price of components?
Pursuing Moore's Law is getting more expensive in part because the job is getting more difficult. For Intel, the fundamental rationale of Moore's Law continues – even though it's more expensive overall, the price-per-transistor for Intel continues to decrease with each new generation. Intel will continue investing as long as we see a positive return and a competitive advantage.
Intel and some other U.S. firms dominate the microprocessor industry… how do you explain the continued U.S. leadership in this area?
The semiconductor industry started in the U.S. but it certainly isn't a U.S.-only industry today. Intel's chip-making plants can be found in the U.S., Europe, Israel and China and large manufacturers – Samsung and TSMC – are headquartered in Asia. It's a competitive industry, and we're proud that Intel is the world's largest chip company by revenue and is recognized as the leader in the pursuit of Moore's Law.
Mark T. BOHR is an Intel Senior Fellow and director of Process Architecture and Integration at Intel Corporation. He is a member of Intel's Logic Technology Development group located in Hillsboro, Oregon, where he is responsible for directing process development activities for Intel's advanced logic technologies. He joined Intel in 1978 and has been responsible for process integration and device design on a variety of process technologies for memory and microprocessor products. He is currently directing development activities for Intel's 7 nm logic technology. BOHR is a Fellow of the Institute of Electrical and Electronics Engineers and was the recipient of the 2012 IEEE Jun-ichi Nishizawa Medal and 2003 IEEE Andrew S. Grove award. In 2005 he was elected to the National Academy of Engineering. He holds 73 patents in the area of integrated circuit processing and has authored or co-authored 49 published papers.
More information on DigiWorld Economic Journal No. 100 "Digital innovation vs. secular stagnation?" on our website.
Published in DigiWorld Economic Journal DWEJ No. 100
Interview with Philippe AGHION
College de France, London School of Economics
Conducted by Gilbert CETTE & Yves GASSOT
C&S: Is more competition always favourable to innovation? Many representatives of the telecom industry argue that innovation and investment in this sector are badly impacted by the intensity of competition. Do you share this analysis?
Philippe AGHION: My work with Richard Blundell and co-authors shows that competition boosts innovation for firms that are close to the technological frontier (this is the escape competition effect) whereas it may discourage innovation in firms far below the technological frontier (this is the discouragement effect). Overall, the effect of competition on innovation is an inverted-U: innovation increases with competition at low levels of competition and it decreases with competition at high initial levels of competition.
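The inverted-U relationship described above can be illustrated with a minimal quadratic sketch (notation ours, purely illustrative; this is not Aghion & Blundell's own specification):

```latex
% Illustrative only: innovation intensity I as a function of
% competition intensity c, with parameters a, b > 0.
I(c) = a\,c - b\,c^{2}
% Innovation rises with competition up to a turning point and falls beyond it:
\frac{dI}{dc} = a - 2b\,c = 0 \;\Longrightarrow\; c^{*} = \frac{a}{2b}
% For c < c^*, the escape-competition effect dominates;
% for c > c^*, the discouragement effect dominates.
```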
Productivity has slowed down in the U.S. and in the main developed countries since the mid-2000s. How do you explain this slowdown, given the dramatic momentum of the digital economy? Are you optimistic about a new productivity surge in the near future?
Part of the slowdown in the U.S. may be due to the fact that the ICT wave has partly run out of steam. But I also believe that innovation is not properly taken into account when measuring productivity growth, and this is particularly true in sectors that experience a high degree of firm turnover and where innovations are made by newcomers to the market. In the long run I am optimistic for at least two reasons. First, the ICT revolution has improved the technology for producing new ideas. Second, with the advent of globalization, the returns to innovation have greatly increased.
Are ICTs the main driver for innovation allowing for a productivity surge in the future?
I think that with 3D printing and the cloud, the ICT sector still has glorious days ahead. But I also anticipate breakthroughs in other sectors, for example in renewable energy and in the health/biotech sector.
In your view, is innovation a factor in increasing inequality?
My recent work shows that innovation contributes to increasing the fraction of income earned by the top richest 1% or 0.1%. But this inequality is temporary, as innovation rents are eroded by imitation and disappear when current innovations are eventually replaced by newer innovations (the Schumpeterian process of “creative destruction”). Moreover, my co-authors and I show that innovation does not increase overall inequality and that it enhances social mobility (again as a result of creative destruction).
Philippe AGHION is a Professor at the College de France and at the London School of Economics, and a fellow of the Econometric Society and of the American Academy of Arts and Sciences. His research focuses on the economics of growth. With Peter HOWITT, he pioneered the so-called Schumpeterian Growth paradigm which was subsequently used to analyze the design of growth policies and the role of the state in the growth process. Much of this work is summarized in their joint books Endogenous Growth Theory (MIT Press, 1998) and The Economics of Growth (MIT Press, 2009), in his book with Rachel GRIFFITH on Competition and Growth (MIT Press, 2006), and in his survey "What Do We Learn from Schumpeterian Growth Theory" (joint with U. AKCIGIT & P. HOWITT). In 2001, Philippe Aghion received the Yrjö Jahnsson Award for the best European economist under age 45, and in 2009 he received the John von Neumann Award.
Published in DWEJ No. 100
Interview with Joel MOKYR
Professor of Arts and Sciences and Professor of Economics and History, Northwestern University, USA
Sackler Professor (by spec. appt.), Tel Aviv University, Israel
Conducted by Gilbert CETTE & Yves GASSOT
C&S: As a well-known economic historian, you have done extensive work and research on industrial revolutions and the conditions of emergence of British leadership in the 19th century. This could have led you, like your colleague and friend from Northwestern University - Robert Gordon - to downplay digital innovation, with the fear that in the absence of breakthrough inventions, the world is returning to a long period of stagnation. But this isn't the case. And while some people recognize the power of the digital transformation yet tend to focus on the damage and suffering it can cause, in your own case, while you don't deny the short-term consequences, you see the typical characteristics of the creative destruction so dear to Schumpeter.
How do you justify your optimism with regard to the digital revolution at a time when productivity has been slowing down in all developed countries since the early 2000s, and the pace of productivity growth is very low? To what extent can this slowdown be accounted for by the deficiencies of our statistical system (the limits of what is taken into account by GDP)? By the delay in spreading digital innovation throughout the various sectors? By the delay in adapting and training the workforce? Or by the fact that digital innovation potential (AI, 3D printing, ...) will essentially be realized in the future?
Joel MOKYR: To start off, I don't see the future of technological progress as merely defined by the "digital revolution." AI, robots, 3D printing and such will be an important part of our technological future, but I see progress on a much broader front. Technology will continue to develop at an ever faster rate. But much of that will be necessary to repair the damage that previous innovation has caused. Climate change is only the best known of a whole array of phenomena in which past advances have had unknown and hidden costs that now have to be paid. These costs will be lower if we get better technology, but then that technology will have unintended and unpredicted consequences. And so on. There is progress, of course, but it is not linear, it is not even monotonic. If we knew precisely in advance what every innovation implied, it would not be much of an innovation.
You have on occasion emphasized the interactions between the progress of instruments, breakthrough innovation in technology and scientific invention. How would you apply the analyses you developed for the 18th and 19th century to the components of the digital revolution today?
Compared to the tools we have today for scientific research, Galileo's and Pasteur's look like stone age tools. Yes, we build far better microscopes and telescopes and barometers today, but digitalization has penetrated every aspect of science. It has led to the re-invention of invention. It is not just "IT" or "communications." Huge searchable databanks, quantum chemistry simulation, and highly complex statistical analysis are only some of the tools that the digital age places at science's disposal. Vastly more sophisticated tools – just think of the Betzig-Hell nanoscopes for which the inventors earned a Nobel Prize last year – will allow us to work at smaller and smaller levels of both materials and living things.
Materials are the core of our production. The terms bronze and iron ages signify their importance; the great era of technological progress between 1870 and 1914 was wholly dependent on cheap and ever-better steel. But what is happening to materials now is nothing short of a sea change, with new resins, ceramics, and entirely new solids designed in silico, developed at the nano-technological level. These promise the development of materials nature never dreamed of and that deliver custom-ordered properties in terms of hardness, resilience, elasticity, and so on. New resins, advanced ceramics, carbon nanotubes and other new solids have all come on line. Graphene, the new super-thin wonder material is another substance that promises to revolutionize production in many lines. The new research tools in material science have revolutionized research.
Of perhaps even more revolutionary importance is the powerful technology developed by Stanley Cohen and Herbert Boyer in the early 1970s, in which they succeeded in creating transgenic organisms through the use of micro-organisms. Genetic selection is an old technology: nature never intended to create poodles. But genetic engineering is to artificial selection what a laser driven fine-tuned surgical instrument is to a meat-axe. The potential economic significance of genetic engineering is simply staggering, as it completely changes the relationship between humans and all other species on the planet. Ever since the emergence of agriculture and husbandry, people have "played God" and changed their biological and topographical environment, creating new phenotypes in plants and animals. Genetic engineering means we are just far better at it.
Do you think that in the long-term future, productivity gains will be mainly driven by breakthrough innovations like the creation of new microprocessors with enhanced performance or the implementation of existing innovations in several areas? And in the latter case, isn't there a risk that the induced productivity gains will gradually dwindle?
I don't believe they will ever dwindle. But I think that productivity growth as traditionally measured will become largely irrelevant in describing what is really going on. Such techniques were designed to measure process innovations, which allowed firms to produce wheat and steel with fewer inputs. It is much harder to use them to measure quality improvements, many of them subtle and often hard to quantify (e.g. the introduction of airbags into cars or more sophisticated diagnostic machinery). It is even harder for traditional NIPA to deal with entirely new products such as anesthesia or microwave ovens or online encyclopedias.
For some, the collaborative economy is one of the most fruitful products of the internet. Should we see this primarily as an illustration of the capacity of digital to reduce transaction costs or as the sign of a possible surpassing of the market economy?
Technology will change the market economy. The "share economy" (now already known to some as the "uber-economy") has transformed urban transportation, and airbnb is transforming tourism. But these will be dwarfed by the impact of digital technology on the labor market, as already illustrated by taskrabbit handymen, upcounsel on-demand attorneys, urbansitter for babysitting and healthtap for on-line doctors. But this is just scratching the surface. Digital technology will change the labor market as much as the factory did during the Industrial Revolution. The factory eventually replaced the home as the main location where production took place. That pendulum may swing back, especially if mass customization through home manufacturing (somewhat misleadingly called 3D printing) starts spreading. If both Robert Reich and Jeremy Rifkin are panicking about this, it cannot be all bad.
Your work has been partly guided by the question as to why the industrial revolution primarily took place in the UK rather than in Germany or France? Can you draw a parallel with the North American domination that we are seeing today in microprocessors, software and the internet? What conditions have favored this supremacy? What factors could threaten it? What priority changes could enable Europe to acquire the necessary conditions to compete with the digital domination of the US?
I am not sure that I am still all that overawed by the question of "why Britain first". The parallel is the putative "domination" of Americans today in high-tech. Rather than seeing the leader as the locomotive that pulls the entire train forward, I think of this as an electric train, in which the motive power is external, and the lead car is there more or less by accident. Technology today is the result of a multinational effort in which boundaries mean less and less. Finland led in cellphones, Israel in flash storage, France in nuclear power – so what? Does that mean they alone can use it? Let's face it, in today's world, if an invention is made somewhere, it is made everywhere. Silicon Valley is in the US, but half of the people working there are foreign-born. They could be anywhere (as long as they are together). Of course, if a country has really terrible institutions, such as Putin's Russia or Khamenei's Iran, it is not only unlikely to generate new technology, but may even find it hard to absorb it. But nations such as Norway or Switzerland will always be at the frontier even if they are contributing relatively little to pushing it out.
Many observers agree that the 21st century will be marked by the emergence of China in the forefront of the global economy. Do you think this country has the necessary conditions or is developing the conditions to establish its supremacy with new leadership in digital technology sectors?
No. Their institutions are not quite as bad as those of Russia or Nigeria, which are corrupt to the core and where a small kleptocracy extinguishes entrepreneurship. But to have technological progress, and not just a thriving and well-functioning market economy, more is needed. What you need is not only the rule of law, respect for property and human rights, and the enforcement of contracts. What you need is pluralism, tolerance, and freedom of expression and association. You need political competition and decentralization, in which the ruling elite is held accountable and in which the government is constrained in what it can do to its citizens. We need to keep in mind that innovators were and are deviants, people who in some way are different and abnormal, eccentric perhaps, and in conformist societies such people are in some way suppressed. Europe's advances started in earnest when those who thought "outside the box" no longer lived in fear of being accused of "black magic" or heresy. Chinese history is a fascinating story of how incredible creativity and sophistication were essentially wasted after the Song dynasty, and China fell behind the West. Mutatis mutandis, the same is true for the Soviet Union. The potential of Soviet Russia was huge, but bad institutions channeled its creativity into Sputniks, MiGs and Katyushas and little else.
Joel MOKYR is the Robert H. Strotz Professor of Arts and Sciences and Professor of Economics and History at Northwestern University and Sackler Professor (by special appointment) at the Eitan Berglas School of Economics at the University of Tel Aviv. He specializes in economic history and the economics of technological change and population change. He is the author of Why Ireland Starved: An Analytical and Quantitative Study of the Irish Economy, The Lever of Riches: Technological Creativity and Economic Progress, The British Industrial Revolution: An Economic Perspective, The Gifts of Athena: Historical Origins of the Knowledge Economy, and The Enlightened Economy. His most recent book is A Culture of Growth, to be published by Princeton University Press in 2016. He serves as editor in chief of a book series, the Princeton University Press Economic History of the Western World. He serves as chair of the advisory committee of the Institutions, Organizations, and Growth program of the Canadian Institute of Advanced Research. Prof. Mokyr has an undergraduate degree from the Hebrew University of Jerusalem and a Ph.D. from Yale University. He has taught at Northwestern since 1974, and has been a visiting Professor at Harvard, the University of Chicago, Stanford, the Hebrew University of Jerusalem, the University of Tel Aviv, University College of Dublin, and the University of Manchester. He is a fellow of the American Academy of Arts and Sciences, a foreign fellow of the Royal Dutch Academy of Sciences, the Accademia Nazionale dei Lincei and a Fellow of the Econometric Society and the Cliometric Society. His books have won a number of important prizes, and in 2006 he was awarded the biennial Heineken Prize by the Royal Dutch Academy of Sciences for a lifetime achievement in historical science. In 2015 he was awarded the Balzan Prize for Economic History.
French Minister of the Economy, Industry and Digital Affairs
In the DigiWorld Economic Journal No. 100
ICTs do not constitute a sector of our economy: they are its defining new element. We have indeed rarely seen technological breakthroughs that simultaneously alter the three pillars of an economy: its production, its consumption, its labor relations. Whatever their outcome, they already amount to a new "Great Transformation" of our societies.
First, and most classically, ICTs were the main source of productivity gains in the recent period. From the 1990s on, their production with ever-increasing efficiency (in the so-called "ICT producing sector") but also their diffusion and use in the broader economy were a major element in an otherwise moderate output growth environment. Between 2001 and 2007, their contribution to annual GDP growth in eight major EU economies (1) was estimated by CORRADO & JÄGER (2014) (2) to be as high as 1 percentage point.
Second, ICTs offer new goods to consume and, more interestingly, even change what "consuming" means, legally, statistically and culturally. Let me provide some examples. "Big data" makes tailor-made products ever more available, but raises difficult property rights questions at the intersection of privacy, innovation and growth: we can neither waive all personal controls, nor destroy all incentives for the first-collecting firms, nor prevent the rest of the economy from exploiting such data to their full value. A new compromise must be forged, with the relevant tradeoffs between privacy and innovation being discussed openly. The "platform model", with its natural tendency towards network effects and economies of scale, must be integrated within our competition policies. The "sharing economy" has met with well-deserved enthusiasm, especially in France, but a big part of it is still not included in GDP figures. The "Internet of Things" is an impressive promise, but cannot fit the traditional boundaries between sectors, and will probably run into traditional management culture's resistance.
Third, ICTs create new demand for non-traditional forms of work. By reducing and transforming the need for intermediaries, and by improving matching efficiency between customers and providers, they make work more flexible and more independent. In France, the secular movement towards payroll employment came to a halt in the early 2000s. Since 2006, the share of independents in the total workforce, excluding agriculture, has even risen by 26%! The status called "autoentrepreneurs", for instance, has been a real success, with one million people now registered, precisely because it offers the required simplicity and flexibility.
Our infrastructure is already first rate. Broadband access is higher than the OECD average. Though we lag behind in terms of fiber development (which accounts only for a little less than 4% of high speed subscriptions against 17% for the OECD average), we are rapidly catching up (fiber subscriptions grew by more than 60% in 2013-2014). More generally, in recent years, increased competition has generated lower prices, simpler offers and more innovation.
But our social and political institutions, inherited from a period of Taylorism, mass consumption and catching-up development are ill-suited to meet these new challenges. Their inertia has long been seen as a source of protection, but may now be stifling economic dynamism to a greater extent than we thought, while not even serving well their primary goal of social protection and individual empowerment.
To rejuvenate their spirit, we must ensure that they still support innovation, diffusion and inclusiveness. These are the three targets of my nationwide economic agenda: delivering "Nouvelles Opportunités Economiques" (New Economic Opportunities).
Innovation is a complex phenomenon. It requires a subtle mix of flexibility, investment, cooperation and competition: firms must have the means to innovate, the opportunity to learn and the incentive to develop. We have already made a historic effort to support corporate profitability, and indeed profit margins, which had been falling since 2007, have been rising since mid-2014. We also boosted the development of good practices through the "Industries du Futur" initiative. But we need to go further in removing barriers to entry in overregulated sectors and opening up data to competitors. We should also support the development of venture capital, which has proved a key element in the transformation of our numerous startups (where Paris ranks 2nd in Europe) into "unicorns" (where France ranks only 5th in Europe). Banking intermediation is indeed inadequate when risks are high, close screening is required and immaterial collateral is not easily pledgeable.
Diffusion is a related, though distinct, issue. The productivity slowdown is much less salient at the technological frontier than in the rest of the economy: in OECD countries, output per worker increased annually by 3.5% between 2001 and 2007 for the 100 most productive firms in each manufacturing sector, compared to 0.5% for the others. In services, these figures are respectively 5.5% and 0.3%! This gap is not only very large, it has widened. There is something broken in the diffusion machine. It is also worth remembering that productivity growth does not come from all firms increasing their productivity. Around half of the aggregate productivity gains in industrialized countries are generated by the faster growth of the most productive firms, which attract more workers and more investors. We must encourage this reallocation of factors (between firms and between sectors), be it labor – through increased flexibility – or capital – through lower bankruptcy costs.
Inclusiveness is key. The polarization phenomenon, whereby technology destroys "routine" jobs in the middle of the skill distribution and creates opportunities for both skilled and unskilled work, is well known and well documented. France is no exception to the hollowing out of routine jobs (bank clerks and secretaries, for example). However, it exhibits a relatively high rate of unemployment among high school dropouts (16.1%) and more generally among low-skill workers. This is an apparent paradox, since ICTs either improve their productivity – for instance by improving matching in personal services – or at least cannot act as a substitute – in all activities where social interactions are needed. We are dismantling outdated regulations and lowering labor costs to bring the outsiders of the "old" industrial society on board with innovation.
Technology is inherently disruptive. But politics is about inclusiveness and trust. Forging a new social pact is not an additional burden on the road to a new economic model: it is a necessary step, for it conditions its long term sustainability. We must allow the necessary flexibility by making social protection better adapted to independent work, multiple activities and diverse careers. We must also provide the necessary skills (through training as well as initial education) to answer both the present and future demands.
At what speed will ICTs develop and what level of growth will they help us achieve? Robert GORDON has brilliantly set out the "supply side" hypothesis of the "secular stagnation" debate. But at the other end, we also hear the arguments of those telling us we are on the verge of massive breakthroughs. Should we turn to statistics? Yes, they seem to show a slowdown in ICT productivity, but at the same time venture capital investments in the US, which are at their highest level in fifteen years, promise renewed dynamism.
Which employment structure will they foster? The studies on polarization now describe well what happened in the last decades. But in the coming years we may see a new surge in jobs with intermediate skills, for instance in the medical sector where the productivity of nurses could soon be multiplied. For example, by collecting data from a number of wearable devices or sensors, the "internet of me" in the health care sector will mean much more personalized demand from nurses who will become much more effective at responding to this demand. Again this requires investment in training.
All in all, these innovations are paved with uncertainties, as "industrial revolutions" always were. If you had asked an Englishman about the industrial revolution in 1780, he would have asked what you meant. In 1820, he would have expressed his longing for a vanishing agricultural society. In 1860, he would have claimed that it lifted millions out of poverty and opened the way to a supposedly everlasting progress.
I do not assume that present innovations will follow a similar course. But I believe that we cannot foresee, even less enclose, what is yet to be. We must take the best from our past (the ambition of our social protection, the talents of our industries, the quality of our infrastructures), seize the maximum from our present (the renewed demand for work, the widening of opportunities, the creation of new services and new markets) and be ready for the future.
(1) Austria, Finland, France, Germany, Italy, Netherlands, Spain and the United Kingdom.
(2) CORRADO, C. and K. JÄGER (2014): "Communication Networks, ICT and Productivity Growth in Europe", The Conference Board, New York, December.
CEO, IDATE DigiWorld
The common perception is that digital innovation is everywhere, and that the pace of innovation is accelerating as it applies to every sector, every business and every organisation.
Economists, however, are wary. Productivity gains have clearly been slowing since the mid-2000s, even before the economy collapsed in 2008. And this is not a phenomenon confined to Europe, which could explain why it lags behind market leaders; it applies to the US as well. We are reminded of the words of Nobel Prize-winning economist Robert Solow, back in the 1980s: “You can see the computer age everywhere but in the productivity statistics”. Although we are by no means enjoying gains comparable to those of the 1920s or the great post-war boom, the effects of the Internet revolution can still be seen in statistics for 1995 to 2005. In other words, before the iPhone, before the smartphone and mobile Internet explosion, before 4G, the cloud and the onset of Big Data…
So the experts are divided into two camps: the techno-pessimists aligning themselves with Robert J. Gordon are convinced that the potential for digital innovation is dwindling, sinking very quickly into useless innovations, the latest gadget for the latest smartphone. They do not see any disruptive innovation that will impact productivity and growth in a way that is comparable to the steam engine or the electric motor. After all, they point out, history does not end here: up until the latest industrial revolutions, people in Western societies lived with very moderate productivity gains and GDP growth.
Meanwhile, the techno-optimists aligning themselves with Brynjolfsson and McAfee remain confident, pointing to new waves of innovation with artificial intelligence, new generation robots, the Internet of Things and 3D printing. Even Moore’s Law – the Law named after the co-founder of Intel who, fifty years ago, predicted that the number of transistors in an integrated circuit would double every two years, and which, somewhat unfortunately, appears to have caught on as the measuring stick for the digital revolution’s maturity – is expected to continue to hold true for at least another ten years. From a more general perspective, there are some such as Joel Mokyr who express their optimism by saying we underestimate the effect that the Internet has on change and improving human welfare, on accelerating access to knowledge in every scientific and technical field.
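The doubling rule stated above can be put, purely as a back-of-the-envelope illustration, as:

```latex
% Transistor count N after t years, doubling every two years:
N(t) = N_{0} \cdot 2^{\,t/2}
% Over the law's fifty years:
\frac{N(50)}{N_{0}} = 2^{25} \approx 3.4 \times 10^{7}
% i.e. roughly a 33-million-fold increase in transistor counts.
```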
Behind this very black and white division, there are those who are interested in the shortcomings of the statistical apparatus, and in the price effects (deflation) that can distort the measurement of the different sectors' ICT spending. Ultimately, however, their attention is focused on the conditions that would help reduce lag time, understood as the time it takes for digital technologies' productivity potential to kick in. Here, authors such as Gilbert Cette and Philippe Aghion stress the importance of ambitious and efficient public policies on education and training, seeing them as the cornerstone of a successful innovation policy and an answer to the phenomenon of qualified job opportunities being concentrated in a few major cities. They also stress the importance of reforms if we want to see the Schumpeterian cycle of innovation play out in a fluid and positive way, reduce the divide between a small fraction of highly productive businesses and an economic fabric turning in mediocre performances, and build up the majority's trust in the digital transformation. We will add that it is useful, as Larry Summers does on a regular basis, to stress the importance in these circumstances of investments in infrastructure (think fibre and superfast mobile), and that nothing prevents us, as Daniel Cohen suggests in his latest work, from calling for an examination of the wisdom and quality of innovation policies, by underscoring the ways in which digital technologies can contribute to turning the tide on climate change.
Digital innovation vs. secular stagnation?
N° 100 - DigiWorld Economic Journal
The DigiWorld Economic Journal is celebrating its 25th anniversary with this issue No. 100. For this jubilee issue, editors Gilbert Cette and Yves Gassot have collected contributions from leading economists who examine the links between digital innovation and the associated developments, direct or indirect, in terms of productivity, growth and job creation. The guest authors do not all adopt the same angle of analysis, nor do they all share the same theses... But, in reading this issue, you will discover a different way of thinking about the big questions raised by these topics.
Published in Communications & Strategies n°99
Interview conducted by Marc BOURREAU, IDEI, Toulouse School of Economics
C&S: The concept of platform is sometimes used in a very broad way in policy debates. How would you define platform/multi-sided markets? What is the difference between a one-sided and a multi-sided market?
Bruno JULLIEN: It is difficult to provide a formal definition of a platform in economics, and there is no consensus on one. As a start, I would say that a platform is a bundle of services used by several economic agents in order to interact. In such situations, a side represents a particular type of user (say, sellers on a B2C marketplace, or merchants dealing with a credit card). Each side's benefits depend on what the other sides are doing on the platform. Moreover, the platform may treat the various sides in a differentiated manner: for instance, some may get free services while others pay for the right to access the platform.
From a theoretical perspective, a platform is not necessarily multi-sided. To be so requires two conditions. First, the organization of the platform's services involves network externalities, i.e. the participation and other actions of a user affect other users of the platform. Second, the platform discriminates between different types of users. One criterion sometimes used to determine whether an activity is multi-sided is whether the value of the service for each user depends on the whole structure of prices.
In a multi-sided platform the customers need to consider interactions with other economic agents to evaluate the value of the good or service and determine their behavior. The final value of the service for the customer is not fully controlled by the platform but results from agents' interactions. By contrast, in a one-sided market, firms choose the product or service characteristics and customers' value depends only on that choice.
The difficulty with the concept is two-fold. First, it potentially covers a wide range of goods and services, so the multi-sided externalities must be significant enough to be relevant. Second, not all platforms are necessarily multi-sided, as this may depend on the platform's business model. Consider retailing, for instance: a chain store is typically not a multi-sided platform, but the Amazon marketplace is. The chain store decides which products to carry at which prices, and consumers then interact only with the store and do not care about suppliers. By contrast, online marketplaces let buyers and sellers jointly determine the products and prices.
The literature on multi-sided markets emerged in the early 2000s (and you were one of the first authors on the topic), but it is still vibrant. What do we learn from the recent research on platforms?
The early literature was mostly focused on price theory, explaining the differences between pricing in multi-sided and one-sided markets by emphasizing the need to coordinate users and bring all sides on board. A main contribution has been the development of the concept of opportunity cost, where the cost of providing the service to a user is adjusted to account for the benefits (or costs) accruing to other users. This, however, needs to be put to work in practice, which is part of what the literature is aiming at. The recent literature has developed along several lines. The first is the application of the concept to specific industries, as has been done for instance for the Internet, search engines, ad-financed media or credit cards. In the case of media, for instance, the recent literature helps us understand the evolution of business models or the implications of mergers. Along the same dimension, research is trying to develop new operational tools for competition policy where traditional results do not apply; there has been work, for instance, on bundling, or on econometric models for empirical work and policy evaluation.
At the theory level, what I mostly retain from recent work is the importance of users' participation patterns (exclusivity, multiple vs. single affiliation, switching) in shaping competition between platforms.
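The opportunity-cost adjustment described above can be illustrated with a minimal numerical sketch (all figures are hypothetical, chosen purely for illustration, not drawn from the interview):

```python
# Illustrative sketch (all numbers hypothetical) of the opportunity-cost
# adjustment in two-sided pricing: the effective cost of serving a user
# on side A is the physical cost minus the benefit that user generates
# for the platform on side B.

c_a = 2.0  # physical cost of serving one side-A user (hypothetical)
e_b = 3.0  # revenue accruing on side B per extra side-A user (hypothetical)

# Adjust the physical cost by the benefit accruing to the other side:
opportunity_cost_a = c_a - e_b
print(opportunity_cost_a)  # -1.0

# A negative opportunity cost rationalizes a zero or even subsidized
# price on side A: charging below physical cost c_a can be efficient,
# not predatory, once side-B benefits are counted.
```

With these (invented) numbers, the effective cost of a side-A user is negative, which is why free access for one side is such a common business model.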
On the other side of the coin, what do we still not know? What are the key questions where more research is still necessary?
While we have made significant progress in price theory and applications, there is a lot we don't know and a large scope for future research. For the theory I think that the main issue that we need to address is that our theories are mostly static. We need to better understand the dynamics of competition between platforms. What determines the emergence of a successful platform? What is the extent of barriers to entry? What are the respective roles of history and actual merit?
I also expect research to move away from price theory into design and organization, where most competition takes place. We need to understand when and how platforms decide to interfere in transactions. A recent concrete example is the issue of MFN (Most Favored Nation) clauses for online booking systems, which prevent registered hotels from offering lower prices on competing websites or through direct sales.
For this we need more empirical work to guide research and applications. Currently, much of the available data originates from a single platform, so we may expect many studies of agents' behavior on a given platform. But we will also need empirical work on platform competition.
For competition/regulation policy, we need more work to propose operational decision tools to competition authorities and regulators. Basic questions such as market definition or tests for predation are still not resolved for platforms. We have difficulties evaluating the optimal market structure, as more competition may not raise welfare and efficiency. This will require developing research at the frontier between law and economics.
There is a hot policy debate today in Europe on the regulation of platforms. What is your opinion on this question? What are the potential market failures in platform markets, which would justify a regulatory intervention?
The issue is not to identify market failures: these occur whenever there are externalities between users, network effects and market power, as is usually the case with platforms. The main question is whether there is scope for efficient ex ante regulatory intervention. In some cases, ex ante rules or principles are desirable, for instance on privacy issues. But in general I would be cautious and favor ex post intervention, for several reasons. Platforms are very heterogeneous: they may propose very different activities, the same activities may be proposed by very different platforms, and platforms may be more or less vertically integrated. This means that it is extremely complex to define ex ante the perimeter of a regulation. Moreover, the same regulation may affect different platforms in different ways: a pay platform and a free platform, for instance, are not affected in the same manner by restrictions on data usage. Finally, the markets where platforms operate are dynamic and innovative. Market power has to be evaluated from a dynamic competition perspective, and regulation should not impede this dynamic process.
Notice that it is in a platform's broad interest to optimize the quality of interactions between its members and to correct externalities, because this raises their value. The literature has put some limits on this view, but intervention should occur only for clearly identified failures. I would point out two factors that may matter here.
A key distinction should be drawn between situations involving bottlenecks and those where all users can easily switch or use several platforms. A bottleneck arises when a platform enjoys exclusive rights over the conduct of transactions with some of its users. This confers some monopoly power on these transactions, and we know that competition between platforms will not eliminate it. We may then want to reduce this market power. This is similar to the one-way access problem familiar to telecommunications regulators.
Second, platforms providing free services to some sides rely on a limited set of instruments to coordinate users, which may not be enough to address externalities. Indeed, a good coordination of the sides would require as many prices (or subsidies) as there are sides. Free platforms by nature cannot pass on the true opportunity cost to consumers, which may induce excessive usage or distort the prices charged to other sides. This may create inefficiencies and calls for special scrutiny.
Do you think that regulators and competition authorities today sufficiently take into account the specificities of multi-sided markets (provided you think they should)?
Regulators and competition authorities are now aware of the concept and of its importance in some industries. However, they lack the tools and knowledge to incorporate this dimension into their analysis. I think this is one reason why we do not see as many applications to cases as we would like, and why authorities prefer to rely on more conventional analysis. Some cases are more obviously two-sided than others, the credit card cases for instance. But even when the concept is not explicitly mentioned in decisions, it is often present in the reasoning (an example is the FCC's approval of the merger of the satellite digital radio services Sirius and XM in 2008).
In platform markets, we observe some big multi-platform players, such as Apple, Google, Amazon, or Facebook, with distinct core businesses and overlapping activities. Do you think this multi-dimensional feature of the competition affects the ways these firms compete with each other?
I am not a specialist of strategy but I think this is the case. These platforms started with very different objectives and business models. This affects their priorities and strategies in terms of pricing, choice and organization of activities. Clearly Google Shopping is organized in a very different manner than Amazon marketplace, reflecting their different competencies and services. I always thought that part of the initial difference of strategies on e-books between Amazon and Apple was due to the expertise of Amazon in the domain of cultural goods.
Bruno JULLIEN is Senior Researcher at CNRS and the Toulouse School of Economics (TSE), and a senior member at Institut d'Economie Industrielle (IDEI). He is currently Scientific Director of TSE. His interests cover industrial organization, in particular in the domain of network economics, ICT and competition policy, as well as regulation, insurance and contract theory. He is recognized as a world-leading academic researcher on the economics of two-sided markets, a field he helped develop. Bruno Jullien has published numerous articles in renowned scientific journals such as Econometrica, Journal of Political Economy, Review of Economic Studies and RAND Journal of Economics. He is currently co-editor of the Journal of Economics and Management Strategy and associate editor of the Geneva Risk and Insurance Review. He is a Fellow of the Econometric Society, a member of the Steering Committee of the Association of Competition Economics and of the Economic Advisory Group on Competition Policy of the European Commission. He is a fellow of CEPR, CESifo and CMPO. Bruno Jullien has also been advising firms and decision makers on regulatory and competition policy issues for more than 20 years. He graduated from Ecole Polytechnique, ENSAE and EHESS, and holds a Ph.D. in economics from Harvard University. He started his career as a researcher in Paris at CEPREMAP and CREST. He was also a Professor at Ecole Polytechnique. He joined the University of Toulouse in 1996. He was Director of the research centre GREMAQ (1997-2004) and Deputy Director of Toulouse School of Economics (2010-2011). He received the Bronze Medal of CNRS, the "Palmes Académiques", the ACE best article award and the JIE best article award.
The Communications & Strategies No. 99 "The Economics of Platform Markets - Competition or Regulation?" is available!
DigiWorld Summit 2015
IDATE will contribute to the debate at the upcoming DigiWorld Summit on 17, 18 and 19 November, in Montpellier, with:
- Fatima BARROS, Chair of BEREC
- Carlo d'ASSARO BIONDO, President, EMEA Strategic Relationships, Google
- Bruno LASSERRE, Chairman of the Autorité de la concurrence
- Eduardo MARTINEZ RIVERO, Head of Unit "Antitrust Telecom", DG Competition, European Commission
- Sébastien SORIANO, Chairman of ARCEP
Published in Communications & Strategies n°99
Interview conducted by Marc BOURREAU
C&S: There is a hot policy debate today in Europe on whether we should regulate platforms. Some argue in favor of a "laissez-faire" approach, because due to strong innovation dynamics, they say, the dominant platforms of today will soon be replaced by new players, in a Schumpeterian fashion. Others propose to strongly regulate platforms, in terms of neutrality, portability of data, access, etc. Where do you think the right level of regulation for platforms lies?
Sébastien SORIANO: Whether or not an economic activity should be subject to specific regulation is a matter of two cumulative factors: an economic factor (are there market failures?) and a political one (does this activity have a structural impact on our society and economy?).
There is no single answer for all platforms, because the term "platform" covers a great variety of actors and models: e-commerce platforms, social networks, search engines, application stores… The fact that the European Commission is currently investigating whether Uber is a transport service or a digital platform is actually a striking example of the lack of a consensus definition of what a platform is.
In my opinion, it is obvious that some digital platforms have today acquired such a significant influence over multiple segments of our economy that some kind of regulation is needed. But defining specific economic rules for every type of platform would be inappropriate: it would risk stifling the innovation process without bringing any added value, not to mention the potentially high cost of such a regulation.
In the end, the question is whether we should regulate only a handful of major platforms. I believe that such a regulation would help promote confidence in the digital economy and thus fast-track the development of those markets in Europe.
If platforms, or some platforms, should be regulated, what kind of regulation should be put in place? In other words, what kinds of market failure call for a regulatory intervention? Going further, which form of intervention do you think is preferable: ex ante regulation or ex post competition policy?
General rules already exist in consumer, commercial, competition and privacy law. The Booking.com case, handled in France by the Autorité de la concurrence, illustrates that the current legal tools are often sufficient. The real debate today is whether we need ex ante regulation, that is to say a specific regulatory framework adapted to a certain category of platforms.
To build such a framework, three essential values will be needed in my opinion:
• First, regulation must be able to react quickly: the general law provides some answers, but its response times are often totally ill-adapted. Disputes between a platform and a startup or an SME should be settled in no more than a couple of months.
• Second, the framework must be an agile one: strict and detailed rules would soon become outdated, or simply be bypassed by some actors. Regulation should be articulated around a few general principles, with a regulatory institution in charge of ensuring their application.
• Finally, regulation must form an alliance with the multitude: the digital economy is a complex and shifting sector, and regulation must take shape with the help of research communities, programmers, makers... We need to invent the concept of "crowd-regulation".
The economics literature on platforms and two-sided markets shows that applying insights from the analysis of one-sided markets to two-sided markets might be misleading. For example, we know that it may be profitable (and socially optimal) for a platform to charge a very low price on one side to generate strong network effects for the other side. With "one-sided" glasses, such a price may look predatory, whereas with "two-sided" glasses, it could be viewed as just efficient. How can regulators account for these specificities of two-sided markets?
Infrastructure regulation has existed in France for close to 20 years, and has been applied to a great variety of sectors: railroads, energy, communication... The fundamental issue has always been to deal with network effects, a phenomenon that allows the largest network to constantly reinforce its dominant position. Regulation allows our society to benefit from the positive consequences of these network effects, while minimizing the drawbacks.
The notion of two-sided markets, with cross network effects, is only a refinement of those concepts. Of course, some of our regulation tools will need to be adjusted to the stakes and the specificity of those markets. But the fundamentals are the same, and the issue at stake is to regulate our digital economy's main foundations.
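The "two-sided glasses" point raised in the question above can be made concrete with a toy calculation (all numbers invented): when each user on one side also generates revenue on the other side, the platform's profit-maximizing price on the first side can fall below marginal cost, so a one-sided analysis would wrongly flag it as predatory.

```python
# Toy model (all figures invented): side-A users pay price p and cost
# c = 1.0 each to serve; every side-A user also brings r = 2.0 of ad
# revenue on side B.
c, r = 1.0, 2.0

def demand(p):
    """Illustrative linear side-A demand."""
    return max(0.0, 100 - 40 * p)

def total_profit(p):
    """Platform profit across both sides at side-A price p."""
    q = demand(p)
    return (p - c) * q + r * q  # side-A margin plus side-B revenue

# Grid search over candidate side-A prices from 0.00 to 2.50.
best_p = max((round(i * 0.01, 2) for i in range(0, 251)), key=total_profit)
print(best_p)  # 0.75: the optimal price lies below marginal cost c = 1.0
```

Through one-sided glasses, a price of 0.75 against a unit cost of 1.0 looks like below-cost pricing; once the per-user revenue r on the other side is counted, it is simply the efficient way to bring side A on board.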
There is at least one area of friction between telecoms and platform markets, which is the competition and/or complementarity between telcos and over-the-top (OTT) players. Can telecommunications regulation have a role in securing a level-playing-field between telcos and OTTs?
Whether as a client, a supplier or a competitor, every company subject to some form of regulation fears having to deal with Internet players who do not play by the same rules. This is especially true for the telecom and media industries, because of the sector-specific rules that apply to them. Part of this fear is entirely justified: real issues are at stake, especially when telcos and Internet players are in direct competition.
However, we won't solve anything with downward alignment or total deregulation: a new balance must be established, and, in my opinion, part of the solution is precisely to be found by building a framework for platform regulation.
A related topic is net neutrality. What is the current status of net neutrality regulation in Europe and in France?
The Internet has become a crucial collaborative space, tremendously important for our whole society and economy, and I believe it must now be considered as a common good. The risk today is that some companies manage to distort this essential tool for their own profit and against the interest of other users. This is not science fiction or paranoid delusion: some essential privately-controlled bottlenecks have indeed emerged, and without appropriate regulation, there is a real threat of seeing some kind of privatization of the Internet.
Net neutrality rules aim precisely at preventing a specific category of actors, the telecom operators, from doing so. An ambitious set of net neutrality rules is in the process of being adopted in Europe. The European framework will be very protective and will rely on guidelines to be issued by BEREC. ARCEP will contribute actively to this work and will be in charge of applying the framework in France.
But if we really want an open Internet, we also need to prevent a situation where a few Internet giants could take advantage of their current position to dictate their own rules to the World Wide Web. This should be a necessary addition to the net neutrality framework, and without it, the job would only be half done, or maybe even less. Ask yourselves: what actors are the most worrying for the future of the Internet?
Platforms are global players, whereas telcos are usually attached to a local market. Is it possible to regulate platforms at a national level, or should such regulation be supra-national?
The correct level to construct tomorrow's regulation is obviously the European one, and this work is currently underway via the Digital Single Market initiative. But each member state has the responsibility to contribute to this reflection, and I believe it would be appropriate to act first on a national level in order to better observe, understand, compare and assess actors' behavior in platform markets.
I would however advise against going too far at the national level. Only with a European solution can we avoid a discrepancy of treatment between member states. Moreover, a European solution would be clearer for actors, and we need this clarity if we want actors to invest in innovation in Europe.
Digital platforms, and the digital economy in general, raise new regulatory challenges. Yet, the nature of those challenges, and the potential harm for our society remains poorly understood. France mustn't underestimate the complexity of the issues, and we should give ourselves the means to accumulate the necessary experience and expertise to participate in the debate.
One possible concern in platform markets is that due to the strong dominance of one firm or a few firms, competition might not emerge. What can be done to protect the innovation process and potential entry by new (European?) players?
This ultimately comes back to the issue of dealing with the network effects that help lock in dominant positions on some markets. One of the challenges for any regulation is to bypass those effects in order to keep the competitive game open. There is no single right answer, but the solution typically lies with regulatory tools such as portability, interoperability, open formats...
Another crucial aspect is the matter of vertical integration: in the last few years, some Internet giants have been developing new activities related to their core business and have constructed entirely closed ecosystems. This is not a problem in itself, but it is imperative that it be done in a fair manner, without the dominant actor leveraging its position to stifle competition on other markets.
Similar problems have been addressed with very strong remedies in the past: structural separation was imposed on railway and electricity companies, and some companies were even broken up. This is not to say we should go that far in platform markets. Most likely, platform regulation can bring subtler remedies, adapted to platforms' specificities.
Sébastien SORIANO was appointed Chairman of ARCEP (Autorité de régulation des communications électroniques et des postes) on 15 January 2015, for a six-year term. Born in 1975, Sébastien Soriano is a chief engineer from the École des Mines (the French national school of mining engineers) and a graduate of the École Polytechnique. He has spent most of his career in competition and telecoms regulation. In 2012, he was chief of staff to Fleur Pellerin, the then French Minister for SMEs, innovation and the digital economy. Prior to his appointment at ARCEP, he was Special Advisor to the French Minister for Culture and Communication.
Published in Communications & Strategies n°99
Fabien CURTO MILLET
Director of Economics, Google
Interview conducted by Yves GASSOT, CEO, IDATE DigiWorld
C&S: Is the SMP regulatory framework fit for purpose given the competition among telecom providers and between telecom operators and online service providers?
Fabien CURTO MILLET: Actually, platforms are not an Internet phenomenon. A platform is simply an environment where two or more groups of economic agents come together to transact in some manner, so the concept is extremely generic: an example of a platform commonly used in the economics literature is that of singles bars! There are many economically important platforms outside tech. You can think of a free-to-air television channel as a platform, bringing together viewers and advertisers; the same goes for newspapers. And within tech, there are many platforms that historically had nothing to do with the web. An operating system can be analyzed as a platform, bringing together application developers and users. So the concept has wide applicability.
It is true, however, that the latest crop of web-era platforms has attracted a great deal of public attention. I attribute that in large part to the simplicity of use and degree of innovation of many of these businesses, which revolutionize everyday tasks and disrupt existing approaches. Obvious examples include apps like Uber, BlaBlaCar and Lyft in transportation, or AirBnB for accommodation.
Google operates several platforms, starting with its search engine and Google market. Are there any others you can think of?
Many of Google’s activities involve the creation and/or operation of various platforms. In the ads space, we have for many years run AdSense, an ad network bringing together users and advertisers on third party websites, while allowing publishers to monetize their content. Similarly, YouTube brings together content creators, viewers and advertisers.
Academic work on platform economics invariably comes down either to work on multi-sided markets, which emphasizes the intermediary role that platforms play between multiple parties, or to analyses of platforms as strategic necessities for capturing innovations created by others. Do you think that is a fair assessment?
Much of the literature is indeed concerned with analyzing the role of platforms as a matchmaking device between their various types of participants. This is not surprising, as the art of a platform operator is precisely to figure out how best to balance the interests of parties on various sides. In the context of web search for example, this often involves search being provided to users for free, but with advertisers on the other side being charged (usually when their ads are clicked on by users, under the so-called Cost Per Click pricing model). This is the case of search services like Google or Bing (which have clearly demarcated spaces for ads) for example; the point also applies to more specialized players like Booking.com or Tripadvisor.
But the literature is vast and touches on many interesting topics. An example is the technical question of how to carry out market definition in a platform context. One issue there is that the standard market definition test normally looks at whether customers switch away in response to a given percentage price rise. But in the context of platforms, the price charged to one side is often zero. In this case, how should the test be adjusted in practice?
These are only examples, and while the literature is already vast it is also evolving, so I think we can look forward to additional insights in this area.
How do you explain the fact that the GAFA quartet (i.e. Google, Apple, Facebook and Amazon) is much less powerful in certain markets – notably Russia, China and even Japan and South Korea?
These four companies have obviously achieved great success in many areas, and are engaged in formidable competition across multiple products and services. Spaces where some or all of these firms compete include search, cloud computing, social networking, operating systems, advertising, mobile phones and tablets. If you take cloud computing, for example, there is currently a great battle between Amazon, Google, Microsoft and other firms like SAP and Rackspace, with many massive rounds of price cuts and quality improvements having characterized the space in recent years. So it is very difficult to give you an overall answer covering such a broad scope of activities!
Since you mention specific countries, it is interesting to note that they have also developed a number of strong competitors in a range of tech areas. To take search, for example, we have Russia's Yandex, South Korea's Naver and China's Baidu. But it would be unfair to label these as local players, since they are also engaged in aggressive plans to expand internationally -- Baidu is developing in Brazil, while Yandex is already present in several countries and has recently expanded by serving searches in Turkey. As for the success of the "quartet" in the countries you highlight, it really depends on what you are looking at. Just take the most recent earnings release from Apple -- they reported revenue growth of 112% in "Greater China" (mainland China, Hong Kong, and Taiwan) and iPhone unit growth of 87% in that area.
Some see the eruption of new players in vertical industries – prime examples being Uber in transportation or Airbnb in the tourism business – as the emergence of new platforms and new sources of competition for the Internet's leading horizontal platforms. Do you share that point of view?
The digital economy is rife with entry and innovation. The two examples you mention are a case in point. Another notable story is that of Snapchat, a mobile-only video and photo sharing service that came from nowhere, and into an already quite busy space. But it became wildly popular at breakneck speed. Snapchat users today share over 700 million photos worldwide per day, which is reportedly larger than the combined volume of Facebook and Instagram -- truly remarkable for a service that did not exist five years ago and that is only available on mobile! So I absolutely agree that these new entrants have further turned up the competitive heat on existing firms, including Google. If you’re looking for a rental property for your next holiday in Provence, you might perhaps go directly to the AirBnB website or app, instead of running a search on Google or Tripadvisor.
This broad phenomenon in itself is not particularly new for the digital economy -- for many years, companies with a more specialized focus have been competing with firms having broader business models, like Google. Google aims to answer any question that a user might have, whereas players like Tripadvisor focus more narrowly on particular content categories (especially the more commercial queries). Another case in point is Amazon, which is of course a very major competitor in shopping queries. Already in 2012, a Forrester study found that some 30% of online shoppers in the US started researching their latest purchase on Amazon, versus 13% on search engines.
Many fundamental factors drive these competitive developments. First, barriers to entry into many digital activities are generally low and dropping fast. One reason for this is the development of cloud computing: it used to be the case that firms needed to invest in their own server infrastructure in order to procure computing power, therefore incurring fixed costs. Cloud computing does away with that, by turning this fixed cost into a variable cost – and a low one at that, given the competition I mentioned earlier in this area. This is precisely one of the ingredients behind Snapchat’s success, as they run entirely on the Google cloud. Second, switching costs are pretty low – it is generally trivially easy and inexpensive for users to try out a new app or website. We often say at Google that “competition is just one click away” – although we should perhaps modify that line for the mobile era and say that it is “one tap away”: according to comScore, almost 90% of mobile Internet time in the US is spent on apps rather than in the browser – truly a revolution. Such ease of access to competing services means that we observe extremely high levels of “multi-homing”, i.e. the presence of a user on multiple competing platforms at the same time (e.g. Twitter and Facebook). I think these fundamental forces are here to stay, so we should have the opportunity to observe many more examples of disruptive entry in the future.
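The fixed-to-variable cost shift described above can be sketched with invented numbers: an entrant on the cloud pays only for the traffic it actually serves, instead of sinking a large fixed amount into its own servers before knowing whether the service will take off.

```python
# Toy comparison (all figures invented): owning servers is a fixed cost
# paid regardless of usage; cloud computing is a per-request variable cost.
FIXED_SERVER_COST = 50_000.0  # hypothetical upfront own-infrastructure cost
CLOUD_COST_PER_1000 = 2.0     # hypothetical pay-as-you-go price per 1,000 requests

def entry_cost(requests, use_cloud):
    """Computing cost for a new entrant serving `requests` requests."""
    if use_cloud:
        return CLOUD_COST_PER_1000 * (requests / 1000)
    return FIXED_SERVER_COST

# A small entrant serving one million requests:
print(entry_cost(1_000_000, True))   # 2000.0 on the cloud
print(entry_cost(1_000_000, False))  # 50000.0 with its own servers
```

With these invented figures nothing is sunk before demand materializes, which is the sense in which cloud computing lowers barriers to entry.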
Net neutrality debates have resulted in regulations that limit the risks of ISPs discriminating against certain kinds of content. How do you respond to those who want to see these neutrality obligations extended to platforms? For instance in the choice of applications that app stores host, or the neutrality of algorithms?
Things like the choice of applications hosted or the operation of algorithms go to the very heart of what a platform does. “Neutrality” is a nice-sounding word, but it’s essentially in the eye of the beholder. The purpose of an algorithm is precisely to rank things from more to less relevant. Who is to say that one choice is better than another? And on what criteria? Is it neutral to rank restaurants by reference to distance to the user, or should we use review counts instead? Or maybe both? And how should one compare restaurant results and web page results? You very quickly get into rather abstract and arcane debates as to whether a particular approach is really treating like-with-like and so on.
Fortunately, I believe these are questions which do not need resolving. Most economists would agree that regulatory intervention is only appropriate in circumstances where competition fails as a disciplining force. And there is frankly very little indication of problems across the digital economy. In addition to the rapid entry I discussed in my previous answer, I think any objective observer would agree that the speed of innovation in the digital economy is extremely high. This is for me a fundamental indicator of the competitive health of a sector -- it ought to act a bit like a thermometer to determine whether a patient is sick and to guide enforcement. After all, as the famous English economist and Nobel laureate John Hicks once observed: "The best of all monopoly profits is a quiet life". There is precious little that seems quiet about the digital economy today.
What differences do you see in the exchange of ideas taking place in Europe and the United States over platforms and the inherent risks of dominant positions?
I think that the exchange is a lot more nuanced in both places than it is often portrayed. From a Google perspective, we have faced antitrust scrutiny on both sides of the Atlantic -- the Federal Trade Commission in the US investigated many parts of our business in great depth (notably touching on search, patents and ad campaign portability), leading to voluntary commitments in some areas in January 2013. In Europe, we are of course currently working with the European Commission in the context of its ongoing antitrust investigation.
And while many commentators would like to cast current events in terms of various arm wrestling matches between European regulators and American tech companies, this unduly simplifies reality. For example, Germany’s Monopolkommission (Monopolies Commission) recently concluded a wide-ranging investigation into competition in digital markets. In the context of search platforms, this independent agency noted that “search engines’ low degree of user lock-in in comparison with other platform services (e.g. social networks), and the low degree of advertiser lock-in caused by network effects means that the search platform’s attractiveness from a user perspective is of key competitive importance, and this explains why even search engines with high market shares have an interest to further develop their offering with their users in mind, in order to secure their market position going forward”. Moreover, they expressed a clear view with regard to intervention: “The Monopolies Commission takes the view that a purely preventive regulation – irrespective of potential abuses – is not currently warranted. This holds true in particular for a regulation of search algorithms or regulatory unbundling instruments”.
Finally, I would take issue with the idea that there is an "inherent" risk to the emergence of dominant positions. I am sure that companies like MySpace or the now-defunct Friendster have views on the question, given how at one point they both towered over the social networking space. And I am always greatly amused by old press cuttings declaring winners in one area or another -- for example, Fortune declared in a 1998 article that "This much is clear: Yahoo! has won the search-engine wars and is poised for much bigger things". 1998 was of course also the year when Google was founded... If there is anything certain in the digital economy, it's that competition often comes from where you least expect it, and failure to innovate faster than your competitors is the real "inherent risk."
Fabien CURTO MILLET is Director of Economics at Google, where he has worked since 2011. He reports to and works closely with Chief Economist Hal Varian on the development of data-driven insights and on research to evaluate the economic value of Google and the Internet. He also leads economic analysis in all competition and regulatory processes involving Google at a global level. Fabien was previously a Senior Consultant in the European Competition Policy Practice of NERA Economic Consulting, where he worked from 2004. During that time, he advised in major European merger control processes such as ABF/GBI, Thomson/Reuters and Universal/BMG. His experience spans a wide variety of business sectors, including: airports, financial services, mining, music publishing, pay TV, print media, retailing, and satellite communications. Fabien was educated at Oxford University, where he obtained a BA in Economics and Management, an MPhil in Economics, and a Doctorate in Economics. For two years he was a Lecturer in Economics at Balliol College, Oxford. He further obtained a Postgraduate Diploma in EC Competition Law from King's College, London.
The Communications & Strategies No. 99 "The Economics of Platform Markets - Competition or Regulation?" will soon be available!
More information about IDATE's expertise and events: