Published in DigiWorld Economic Journal DWEJ No. 102
"Mobile dynamics: the path to 5G"
Interview with Wassim CHOURBAJI
Vice-President, Public Policy and Government Affairs, Europe, Middle East and North Africa, Qualcomm
Conducted by Denis LESCOP, Télécom Ecole de Management, Evry, France
DW Economic Journal: "What do you really mean by 5G from a technology perspective?"
Wassim CHOURBAJI: As we did with 3G and 4G, Qualcomm is leading the development of technologies for 5G. We are designing a unified, more capable 5G platform to meet expanded and radically diverse requirements. 5G will be much more than just a new generation with faster peak rates. We are building a 5G platform to connect new industries, enable new services and empower new user experiences in the next decade and beyond. The foundation of this platform is a new OFDM-based 5G Unified Air Interface that is scalable across all services and spectrum. 5G will usher in the next era of enhanced mobile broadband experience with more uniform high data rates everywhere, lower latency and lower cost per bit. It will connect massive numbers of things through the ability to scale down in data rates, power and mobility. It will enable new mission-critical services with ultra-reliable, low-latency links. It will provide edgeless connectivity with new ways for devices and things to connect and interact. 5G will also be a platform for all spectrum bands and types, designed for licensed spectrum from below 1 GHz for coverage to mmWave for extreme bandwidth, as well as for unlicensed and shared spectrum.
How will 5G impact the everyday life of people?
Wireless connectivity transformed human communication. With 5G, we're extending its reach and adding intelligence to transform everything else. 3G and 4G have enabled people to experience broadband on their smartphones and tablets, wherever they are, indelibly changing the way we communicate with one another. We take this for granted now, but it was actually science fiction less than two decades ago. The next step, which is essential to the long-term realisation of 5G, is the massive social and economic impact of the tens of billions of devices and things that will get connected to each other, to the cloud and to people, unlocking greater efficiencies, personalised services and new user experiences. This will profoundly change our lives.
Where devices such as smartphones and tablets are now still the endpoint of communication, countless methods of connectivity and interaction will emerge in homes, cars, cities, healthcare and more. Where data services are now limited to certain providers and insights, there will be near unlimited insight available thanks to a broad expansion of all kinds of discovery services. It will not just be devices that will be "smart", it will be the connectivity itself. Intelligence will be found at the place where interactions are happening and will no longer be buried in the data centre or confined to a walled garden – it will make those interactions more intuitive, immersive and secure for people.
How will 5G impact the everyday life of enterprises? Can we say that 5G will open tremendous business opportunities?
The transition that businesses will experience towards 5G will be as sweeping as that experienced by consumers, and arguably even more so because the stakes in terms of competitiveness, economic growth and job creation are extremely high. I think it is fair to say that there are tremendous opportunities for businesses big and small, but the value created by spurring technological innovation with 5G will strongly depend on the policies under which industry at large will digitize and evolve.
Businesses have so far had to adapt to a changing environment where the internet has expanded to cover most, although not all, processes related to selling and distributing goods and content. To name but two obvious examples: e-commerce has metamorphosed retail and wholesale distribution operations; and the web has completely revolutionised publishing and journalism. These changes were basically driven by the fact that people could suddenly buy things and access content online. It's a process that started in the early days of landline internet connectivity, but which has really been boosted by mobile thanks to anywhere, anytime connectivity.
But as I said, with 5G the change will not simply be about connecting people to the Internet – more people, in more places and at faster speeds – but crucially about bringing intelligent connectivity to everything. So it is not just the sale and distribution of goods that will come into play – it is the very products you are developing as a business that will be affected. You used to be a company that was top-notch at designing and manufacturing this great product, but now you need to think in terms of your connected product – what you want to do in this new environment is deliver greater efficiencies, personalised services and new user experiences. You need to stay relevant to the user or people will be drawn elsewhere. You need to be skilful in doing that because there are many other companies out there which will take any opportunity they have to disrupt your market.
How are regulators – and especially the European Union – supporting (or not) the emergence of 5G?
I think the European Commission has really embraced the vision of 5G as a cornerstone of Europe's competitiveness. In April, the Commission earmarked 5G as a technology standards priority. The fact that Europe has leadership positions in so many key industrial sectors and that European industry needs to take advantage of the business opportunities that will potentially be enabled by 5G connectivity is not lost on Vice President ANSIP, Commissioner OETTINGER and the Commission as a whole.
I see the Digital Single Market as essentially a statement that Europe cannot afford to waste this opportunity. I like the fact that it goes back to the concept of the Single Market, one of the greatest achievements not just for Europe but arguably for humanity – there is no other place that equals Europe's level of social and economic unity, imperfect though it may be, between peoples, countries and interests that used to be so disparate. Implicitly, what it says is that the key to making the Single Market stronger for Europe and the world in the 5G digital era is to stay true to its core values of integrating differences.
When you transpose integration from different countries to different industrial players, the process is actually not that different. And when you translate integration into digital terms, you are talking about interoperability. That is why I think we see a strong emphasis on facilitating more cross-sector partnerships in the European Commission's recent Communication on ICT Standardisation Priorities for the Digital Single Market. We need more collaboration and strategic vision to bring together "traditional" non-ICT industries, the telecoms industry and the rest of the value chain to deliver on the promise of interoperable 5G connectivity. Europe can turn its apparent complexity into an asset.
There are a lot of initiatives that the Commission is facilitating with a view to 5G, such as the Alliance for Internet of Things Innovation (AIOTI) and the 5G Action Plan. Where I think Europe needs to act more quickly is on spectrum and the review of the regulatory framework. Notably, I think Europe should decide quickly, by 2017, on a list of "pioneering" 5G bands in the low, mid and high ranges, as well as a roadmap for the harmonisation and coordinated release of these bands across Europe. This will help industry players to invest and develop interoperable 5G standards globally and pave the way for commercial deployment in 2020. In Europe, there is a lot of potential in bands such as 700 MHz and 3.4-3.8 GHz, which are suited for IoT and "Industry 4.0"-type deployments, as well as in the 24 GHz and 31 GHz bands, which can deliver extreme mobile broadband bandwidth.
How should the framework be modified to better support 5G initiatives?
In terms of policy direction, I think we need to be aware of the paradigm shift between the old Digital Agenda for Europe and the new Digital Single Market strategy, which should very much reflect the shift from connecting people to connecting everything that I've talked about earlier.
We are used to having Digital Agenda targets linked exclusively to "fast internet access for all" – so-called broadband objectives, such as 30 Mbps for 99% of the population. That is good if you are trying to connect more people, in more places and at faster speeds, but if our aim is to bring, with 5G, intelligent, reliable and secure connectivity to new industries with different kinds of requirements, then these targets are no longer sufficient and we need new ones. The new targets should also address the vicious circle of the three "lows" the mobile industry is facing in Europe, which I and others have talked about extensively: low revenues, low usage and thus low investment. The current targets address only the supply side, with network coverage and speed obligations; I think new targets should also address the demand side. This is key for take-up and revenues, bringing the mobile industry and the verticals together.
So I think the Digital Single Market targets should be specified for example as 1-Gigabit connectivity by 2030; 70% penetration of connected vehicles by 2025 and 100% by 2030; 100% road coverage by 2025; 60% penetration of remote monitoring for chronic patients in 2025 and 100% of low latency, very high data rate cloud access by the same date. I think these targets are far more meaningful from both a societal and economic perspective. I believe there are ways to incorporate these new elements in the upcoming review of the EU telecoms regulatory framework to make it futureproof and 5G-ready.
Does 5G raise issues pertaining to standardisation?
Yes, the main issue being that we'll need standardisation like we've never needed it before. As you expand the need for connectivity beyond people to literally everything, you can easily imagine that there is going to be a need to invest billions and billions of euros to create and evolve interoperable solutions that can cater to the many different requirements coming from the different sectors. The 5G platform is expected to be introduced with 3GPP release 15, forecast to be complete in 2018 for 5G commercial launches in the 2020 timeframe.
We will need high-performance standards, incorporating intensive levels of interoperability. If we only end up with extremely basic functionality incorporated in standards, we'll see much less interoperability, follow-on innovation and competition along the value chain. The bulk of the technology that consumers will be interested in may end up being developed by one or a very limited number of players that will control it in full. That is going to be bad for consumers and the rest of the market.
What this means is that standardisation needs to remain a priority for Europe. As I said, given that 5G will be about complexity, thanks to its leadership in standards Europe has a real chance of turning what many perceive as a weakness – the need to intermediate between contrasting interests, be it Member States or industrial sectors in our case – into an asset. So I welcome the Commission's intention of facilitating cross-sector partnerships for standardisation – I think this initiative can unlock situations where market players aren't naturally inclined to sit together at the table, which results in them losing commercial opportunities and the entire market not moving forward.
At the same time, one cannot forget that the investment needed to develop and evolve highly interoperable standards will come first and foremost from industry players. If standardisation is not an appealing option for them, they will not participate and we won't have the standards we need. And as one can easily imagine, fair return on investment is a top priority for businesses, including when the decision has to be made as to whether or not they want to contribute their inventions to standards and thus allow access to those inventions. There is always the option of going proprietary if participating in standardisation is not generating fair value for you. And, as I said, this would represent a risk for society in that it would lead to less interoperability, less follow-on innovation and less competition.
How are actors positioning themselves around the question of standardisation and intellectual property?
Balanced and effective intellectual property rules are essential, on the one hand, to incentivise companies to contribute their technology to standards and, on the other, to enable access to standardised technology. It is a balance that we absolutely need to get right as there is too much at stake.
I think the dynamics of the IP and standards debate haven't fundamentally changed in the last few years. Concerns are repeatedly raised about Standard-Essential Patents (SEPs) and Fair, Reasonable and Non-Discriminatory (FRAND) licensing. These concerns, which took centre stage during the infamous "smartphone wars", have proven unfounded when it comes to smartphones and tablets. As mobile communications standards have improved and incorporated more and more patented technology in the various iterations of 3G and 4G, average device prices have fallen dramatically and we have witnessed a proliferation of new products with new features. Irrespective of any theoretical debate about "patent thickets" and "royalty stacking", it is quite clear we simply haven't seen thickets or stacking in the actual market, which on the contrary has been incredibly successful in delivering innovation, competition and consumer choice.
That being said, there are now what I think are valid discussions about standards and intellectual property in the new context of the IoT. As the number of players who will need to implement standards in their different industrial products grows, including SMEs, there is a need to simplify access to standards for them. In this context, the Commission has announced plans in its Communication on ICT Standardisation Priorities to facilitate fast, predictable and efficient access that can keep in place the right incentives for companies to contribute technology to standards. We welcome this approach, which I think is shared among the major standards contributors, and we look forward to working with the Commission and other stakeholders to this end.
Key to a balanced environment for investment in and access to IoT and 5G standards is flexibility. The IoT and 5G are going to be new markets, and the different parts of the value chain are still in the process of figuring out how best to structure new business models and how to create and reward value. The proverbial "one size fits all" will really not work here. However, some vested interests are promoting inflexible interpretations of FRAND that would force companies to license their technology to lower parts of the value chain or at the level of the smallest-saleable unit. This would for sure devalue standardisation – what it amounts to is guaranteed destruction of value for technology contributors to standards. And as I said earlier, companies will not contribute their technology to standards if standardisation is not generating fair value for them. If we in Europe care about interoperability, we really shouldn't go down that route.
Wassim CHOURBAJI is Vice President and head of Government Affairs for Europe, the EU and MENA. He is Managing Director of the EU Brussels office and oversees Qualcomm's public policy, regulatory affairs and senior government relations. Wassim is a member of Qualcomm's Europe leadership team. He leads an EMENA-wide senior team responsible for innovation, technology, intellectual property, telecoms & digital economy, spectrum, standardisation, security, data protection and antitrust policy. Wassim is chairman of the Communication Policy Council of TechUK, the policy arm of the UK digital industry. He was previously chairman of the spectrum group at DigitalEurope, the Brussels-based EU industry association, and chairman of the European spectrum group at the GSMA. Prior to joining Qualcomm in 2006, Wassim was head of spectrum for the France Telecom Group, overseeing the group's fixed, mobile and satellite spectrum strategy across its operating companies. He was also designated by European administrations as lead coordinator on 4G spectrum for Europe at the ITU World Radio Conference. Previously, he served as regulatory manager for SkyBridge, Alcatel Space's global Internet satellite project. He started his career as a spectrum engineer at French mobile operator Bouygues Telecom. Wassim holds a master's degree in wireless communications and is a graduate engineer from Supelec in France.
Published in DigiWorld Economic Journal DWEJ No. 101 "Towards a single digital audiovisual market?"
Interview with Adam MINNS
Executive Director, COBA, London
Conducted by Sally BROUGHTON MICOVA
The Commercial Broadcasters Association (COBA) is an industry association whose members include digital, cable and satellite broadcasters, both linear and on-demand. The association is active on policy and regulatory issues primarily in the UK, and also in Europe.
DW Economic Journal: When the Audiovisual Media Services directive was drafted it was designed to be platform neutral, maintaining a distinction only between linear and on-demand services with the intention of future-proofing it for potential changes in technology and markets. To what extent has that held up?
Adam MINNS: The European broadcasting sector is a success story, worth more than 74.6 billion euros annually, according to the European Audiovisual Observatory. Audiences have more choice than ever before, with the number of linear channels growing across the EU and the gradual emergence of on-demand services (a recent study by the European Audiovisual Observatory put the number of on-demand audiovisual services established in Europe at 2,563).
We therefore see no need to tamper with the fundamental principles of the directive, i.e. a technology-neutral approach that applies varying levels of regulation according to consumer expectations and the nature of different services. Indeed, radical change creates a risk of damaging the successful growth of the European audiovisual sector. That said, there is a case for a moderate level of reform regarding certain specific aspects of the rules on commercial communications for linear services. In some areas, these are overly prescriptive, and it is difficult to see what consumer purpose they serve in a world of rapidly changing behaviour, where content can be accessed from a multitude of different devices and services.
Does it still make sense to regulate linear and on-demand differently?
Yes. The directive's two-tier approach to regulation has helped underpin this growth and innovation. In comparison with linear channels, non-linear services, while growing, generate a relatively small amount of revenue for COBA members, and the regulatory burden must reflect this if it is not to dampen investment. Many "Catch-Up" VoD services are loss leaders, for example, and are provided to viewers at no additional cost.
In addition, one of the directive's guiding principles, that consumers exercise more control in regard to non-linear services and therefore a lower level of regulation is appropriate, holds true today.
There have been calls to revisit the "country of origin" principle that is at the core of how audiovisual media services in Europe are regulated. How important is that principle to the business of commercial broadcasters?
Few pieces of regulation are more important for our members' businesses than the Country of Origin principle set out in the Audiovisual Media Services directive – but the key point I would like to make is the benefit to EU audiences.
For the avoidance of doubt, I am referring throughout this piece to the principle set out in the AVMS directive, not to any other directive. The AVMSD's Country of Origin rule enables a broadcast or on-demand service, licensed in one EU Member State, to be made available in another country without having to obtain a separate licence at the service's destination. Where costs and content can be shared among channels tailored to multiple Member States because they comply with a single set of rules, for example, a channel becomes viable for a more niche audience in each market. This creates more choice for audiences, and supports media pluralism and freedom of expression.
For example, the British Sign Language and Broadcasting Trust (BSLBT) is an organisation in the UK that, supported by broadcasters, provides sign-presented content to the deaf community. It makes a range of signed content available on its on-demand service to viewers in Member States across Europe. Deaf communities in Germany, France, Estonia, Spain and many more countries are watching this content, which is made available under a UK-based notification under the Country of Origin principle in the AVMSD.
The example of the BSLBT is from an independent report COBA recently commissioned on the AVMSD Country of Origin principle from Olsberg SPI. Olsberg are still finalising the report, but their clear conclusions are that the AVMSD's Country of Origin principle has supported the growth of the European broadcasting sector and is critical for unlocking the potential of European non-linear services. Testifying to this, some 41% of linear channels established in Europe are available under the Country of Origin rule, and 34% of on-demand services (this excludes services licensed from outside the EU).
So-called Catch-Up VoD services are particularly dependent on the Country of Origin rule. These are provided by broadcasters to give their audiences on-demand access to their programming for a given period after the original transmission. These are some of the most popular VoD services in Europe (accounting for 29% of all VoD services), but are in general provided to viewers at no additional charge, so there is a real need to keep the costs of providing them down. As you would expect, they are nearly always licensed (or notified) in the same Member State as their parent channel so they can re-use content complied for the linear channel.
Around a third of these (nearly 300 services) are made available under the Country of Origin (mirroring their parent linear services). In a situation where non-linear services were not able to benefit from the Country of Origin rule, these services would clearly be at risk.
As you might also expect, smaller Member States in particular stand to be harmed by the loss of the Country of Origin principle. According to Olsberg's analysis, 41% of linear channels across the EU operate under non-domestic licences supported by the AVMSD's Country of Origin principle. In the ten smallest markets (by population), however, that rises to 75%, reflecting the greater need for economies of scale in markets that might not be able to support a stand-alone channel. To give you an idea of the kind of range and choice these channels offer, in some smaller markets the only children's channels available are provided under non-domestic licences.
COBA's view is that the AVMSD's Country of Origin principle has underpinned economic growth, consumer choice and media plurality in the European audiovisual sector to date, and for the same reasons is set to be pivotal in the on-demand era.
What do you think are the prospects for creating a single market for audiovisual media service in Europe? Is it even desirable?
I would say that it depends on how you define single market. The AVMSD has successfully enshrined an important set of European values, providing for a minimum level of standards and protection for consumers and, through the Country of Origin principle, safeguarding freedom of speech and media plurality, and supporting innovation and the growth of Europe's creative industries, as I have outlined above. At the same time, Member States rightly have the flexibility to prioritise according to national sensibilities. The current balance seems right.
In some of your recent policy papers and consultation responses you have reported impressive growth in the investments of your members in UK original content. As some of your members are large transnational players that operate in multiple European Countries, to what extent is that trend mirrored in the rest of Europe?
These are hugely exciting times for European television content. It's almost a cliché now but television has become the new film, with a range of players all investing in ambitious, high quality original content. Funding has become more fragmented than ever before, flowing from broadcasters, on-demand services, and the production companies, not to mention public support, but that is the new reality.
The most important factor to remember is that it is a mixed ecology. Many COBA members are multi-national, but others are focused on the UK, and some are relatively small. All are investing in different ways, and that mixed approach builds strength into the overall ecosystem, which is less reliant on any one funding stream. QVC, for example, is a shopping channel that creates 17 hours a day of live television. That high volume of production provides an exceptional training ground for crews and technical staff who go on to work across the industry. It is all part of a mixed ecology, continually building critical mass.
Our analysis of content investment has been focused on the UK, so I don't have detailed figures for other Member States. But you can see that investment growing across other markets. Take the recent European Film Market at the Berlin Film Festival, which held a television drama event to promote investment in European production. At that one event we saw announcements from HBO Europe, Sony Pictures Entertainment and Sky Deutschland involving production in Scandinavia, Germany, Italy and the UK. There is a lot more.
What can be done to boost investment by transnational commercial broadcasters in original content in Europe?
Again, I am not just referring to transnational broadcasters, but to commercial sector broadcasters generally. For COBA, there are two key factors in encouraging investment, and both take time. Firstly, encourage a mixed ecology, where a genuine range of players can grow. That increases creative competition, plurality in commissioning and strengthens the sector as a whole by diversifying funding streams. Frankly, in the world today, where so many different players are investing in content, and production more than ever relies on a patchwork of funding sources, fostering such a mixed ecology seems like common sense.
The second point I would make is to allow the industry to make content that audiences want to watch. That sounds obvious, but it doesn't always happen when companies are forced into quotas or other relatively blunt regulatory instruments. In the UK we have recently experienced something of a transformation, with non-domestic European drama now appearing on our screens in prime time slots, backed by significant marketing. Most importantly, they are achieving record audiences – most recently, German drama Deutschland '83 went out in prime time on Sunday evening and was watched by more than 2 million people.
This didn't happen to fulfil a quota; it is the result of a steady stream of high quality European dramas like Gomorrah on Sky, The Killing on the BBC, and The Returned on Channel 4 – broadcast on a range of channels, both commercial and public interest – breaking down UK audiences' preconceptions about foreign-language content.
Of course, it takes time – a lot of time – to develop an industry capable of making shows that resonate with audiences on any consistent basis. I don't mean the funding, which is perhaps more available than ever now, but the creative skills. I found it fascinating, for example, that Denmark has consciously reproduced the American model of the "writers' room" and the primacy of the writer/creator, with of course its own vision. As much as anything, that creative process has established Denmark as one of Europe's key creators of high quality drama, and in the process done far more to promote Danish and European culture abroad than a quota would ever achieve.
And of course underlying these points, the principle of territoriality is still an absolute cornerstone in how production is financed, so needs to be maintained. Undermining the ability of rights owners to tailor how they licence their rights from market to market would harm their ability to generate a return, and so reduce the incentive to invest in creating that content in the first place.
Adam MINNS is Executive Director of the Commercial Broadcasters Association (COBA), the trade association for UK multichannel broadcasters and on-demand services. He leads COBA's work on a range of UK and European legislative and regulatory matters, reporting to COBA's board. He joined from Pact, the trade association for UK independent production companies, where he was Director of Policy and played a key role in Pact's work on the Terms of Trade and a range of other UK and European issues. Prior to Pact, Adam was UK film editor of Screen International, the film business publication, covering the British and European film industries. He has written for the Financial Times and the Independent on Sunday.
The Development of the European Market for On-Demand Audiovisual Services, European Audiovisual Observatory, March 2015.
Published in DigiWorld Economic Journal DWEJ No. 101
Interview with Nicolas CURIEN & Nathalie SONNAC
Commissioners, Conseil supérieur de l'audiovisuel (CSA) (*)
Conducted by Alexandre JOLIN
(*) This interview only reflects the views of the contributors, not the CSA's official positions.
C&S: Since the late 1970s, the European Commission has aimed to harmonise the regulatory landscape for audio-visual media in Europe. The TVWF and then the AVMS directives created a legal framework allowing the circulation of linear TV and on-demand audio-visual media services in Europe. As part of the European Commission's Digital Single Market strategy, a review of the AVMSD began in 2015 and legislative proposals are due to be set out in 2016. As the regulatory body for France, a Member State, how is the CSA involved in those consultations? In your view, which issues most need to be resolved?
Nicolas CURIEN & Nathalie SONNAC: To bring a regulator's viewpoint and its expertise in the practice of regulation, the CSA contributed to the European Commission's consultation on the review of the AVMS directive, entitled "A framework for audiovisual media in the 21st century". The CSA also participated in the cross-ministerial preparation of the French authorities' positions and contributed to the French answer to the AVMS consultation. Above all, the CSA plays a very active role in the European Regulators Group for Audiovisual Media Services (ERGA), which was chaired during its first two years of existence (2014-2015) by Olivier Schrameck, the chairman of the CSA. Created in February 2014 by the European Commission as an advisory body examining issues related to media services, the ERGA now stands as a key institutional innovation, pushing European audio-visual policy matters forward. For us, as national regulators, working together within this structure represents a strong opportunity to carry out an in-depth, forward-looking analysis of the audio-visual sector and to stimulate the emergence of common initiatives. The ERGA is charged with assisting the Commission in the revision of its legislative instruments, as is now the case for the AVMS directive.
Audiovisual services have changed drastically since the adoption of the previous directive in 2007. The present situation, resulting from the dynamics of "convergence", was not anticipated, and it calls for several substantial adjustments in order to take into account the development of on-demand non-linear services and of interactivity, as well as the usage of associated data. Moreover, the arrival in the French and European audio-visual markets of large worldwide OTT players, such as Netflix or Google, raises a new kind of issue, which must be addressed at the European scale. As regards the revision of the AVMS directive specifically, the ERGA produced three reports, published in January 2016, respectively on the independence of national audio-visual regulatory authorities, on the possible extension of the directive to new online players, and on the protection of minors. These reports include recommendations which were unanimously approved by the 28 regulators of the European Union's Member States. The ERGA thus invites the Commission to incorporate its proposals in the revised directive. An additional report on the territorial competency of regulators will be issued in the course of spring 2016.
One of the proposals on the table is to extend the obligations placed on TV broadcasters and on-demand TV-like services to online video-sharing platforms as well. Is this a realistic way to complete the existing film and audio-visual financing system?
This issue goes well beyond the particular case of video-sharing platforms, as it also concerns all digital intermediaries commonly designated by the generic term "platforms": content distributors, content aggregators, providers of applications, sharing platforms or suppliers of devices; that is, all players which hold a position between content and usage, making them gatekeepers of access to content. All actors who develop a strategy around content and/or are involved in the exposure and promotion of content, especially through algorithms, are concerned. Since these new operators orient consumers and deliver prescriptions to them, they doubtless play an editorial role which is similar, to a certain extent, to that played by traditional audio-visual editors. It therefore seems both sensible and in line with the driving principles of audio-visual regulatory policy to set up an adapted regime of obligations for these new players. However, such a regime should of course not ignore the necessity of sustaining the pace of innovation: when contemplating a new deal and a new toolkit for audio-visual regulation, one must not at the same time slow down the growth of innovative services which contribute greatly to widening the exposure of works and constitute a major source of creativity in the audio-visual sector.
One size does not fit all, and not all platforms should be subject to the same degree of regulation: a small platform should not be treated as YouTube is. Proportionality should thus be set as a guideline, and the regulator should focus first on platforms which have a significant impact on the market. Moreover, as it would clearly prove inefficient to impose local obligations on global players, a common harmonized framework has to be defined within the European Union. Achieving proportionality within a renewed regulatory scheme designed for digital intermediaries also requires that the rules existing for traditional editors be adapted so as to reach a satisfactory match between obligations and the specific characteristics of the new actors. More generally, traditional regulation should not be transposed unchanged onto the digital world, a world in which the speed of evolution is very high, in which some players are active at an international scale and in which business models greatly differ from classical ones. Accordingly, an effective regulation should be based on a triptych associating public policy, users and operators, and could rely mainly upon co-regulation and self-regulation. Such a perspective is precisely consistent with ERGA's present undertakings, which consist in identifying audio-visual-centric platforms, rather than all platforms, with the objective of aligning their behaviour with the traditional goals of audio-visual public policy, although under a proportionate regulatory approach. Indeed, the public policy goals which underlie the existing obligations set for traditional actors, such as the protection of minors, copyright enforcement, investment in creation, or fair competition, still prevail for digital platforms. In the Digiworld, goals remain the same; modalities may differ!
With the rise of international OTT services and the ongoing consolidation of the European content industry, how can policymakers best safeguard and promote cultural diversity across Europe?
Reaching a critical size through consolidation is a necessary step to preserve a model of diversified content in Europe. This does not amount to geographic confinement, but rather calls for a more extensive and international approach, strongly based upon European cultural specificities. This global strategy should concern production, traditional edition and new digital platforms alike. Europe holds a solid position in terms of local content production and must derive benefit from it. However, the momentum has to be generated through a coordinated policy, as it cannot result from the separate actions of isolated national players. In this regard, regulators too have a part to play, and they must rapidly move towards a more inter-institutional approach.
In their efforts to promote the diversity of content, European editors should use linear TV, which is still by far the dominant mode in consumers' practice, as a kind of "factory" to produce pieces of original content destined to become brands of their own and to move towards non-linear usage on electronic platforms, after a first lifetime spent inside the grids of linear TV in order to gain recognition. As access through networks is a necessary condition for access to content, synergies between media companies and telcos should also be considered in order to extend the scope of content distribution and to reduce its cost. Moreover, promoting diversity heavily depends on the ability of creators to finance their content and make it available to consumers. In this respect, fair access to all distribution channels, especially online platforms, stands as a key enabling factor: hence the strong attention of regulators to net neutrality and content visibility issues.
Today, the OTT video industry is mainly driven by non-European players such as Netflix, Apple or Liberty Global, which, despite being established in Britain, is controlled by a US holding company. In your view, what could be done to ensure the development of strong European OTT players while ensuring the sustainability of the traditional broadcasting market?
This question relates in part to the issue of the territoriality of rights. The right balance has to be found between two conflicting objectives: maximizing the monetization of rights, on the one hand, and extending the exposure of content, on the other, in a fast-moving context where the growth of digital platforms makes territorial enclosure unsustainable against bypass or piracy. Since reaching such a balance likely means substantial change in the present contractual arrangements, a concerted sector-wide process is needed, gathering together rights holders, editors and distributors.
At the very least, large national players should form partnerships and jointly launch pan-European digital services with a strong identity. As already mentioned above, these developments cannot take place at a national scale while the main international competitors, such as Netflix, operate worldwide, offer worldwide content, and are less and less subject to territorial constraints; this is especially the case for TV series available on SVOD services, such as House of Cards, exploited under a "free" regime. In this revolutionary context, where the historical category of the TV channel might sooner or later be replaced by the upcoming category of brand-content, the sustainability of traditional players is clearly conditional on their ability and willingness to co-design adaptive and cooperative ways of deriving as much value as possible from their content.
On-demand video services are currently regulated in their "country of origin". Some players denounce this as a distortion of competition, because legal obligations can differ greatly from one Member State to another. As was already done for VAT last year, would it be advisable or possible to apply a "user-centric" approach, setting the focal point on the end-user instead of the service publisher?
The country-of-origin principle certainly helped to create a common audio-visual market, as it facilitated the cross-border circulation of services, guaranteeing legal certainty to broadcasters. In practice, however, this principle proves insufficient to set the conditions of fair competition across service providers, since the AVMS directive is a framework for coordination, not harmonization, and some Member States chose to adopt stricter rules than those prescribed in the directive. This may lead to a particularly critical situation whenever a service is explicitly directed towards a given State within the Union although it is established in another one: as they are worded in the directive today, the present procedures do not actually allow a Member State to apply its possibly stricter rules to a foreign service aiming to reach its citizens. As a consequence, a severe imbalance is potentially created across operators competing in the same local market, some being subject to stronger obligations than others. In order to avoid damaging "regulatory shopping" strategies, fair and effective competition across all European operators must therefore be guaranteed. In this regard, it is proposed that the European regulation be modified by introducing an exception to the country-of-origin principle, which would allow a given destination country to apply its own rules to those services which specifically address its population. This proposal does not intend to abolish the country-of-origin setting, which would remain the general rule, but just to amend it at the margin, to deal with circumstances where its application would obviously result in a harmful distortion in the marketplace.
The European Commission has also made a legislative proposal to change the copyright framework to allow cross-border portability of online video services, ensuring that consumers can access content they have bought when they travel to other EU countries. Could content portability be a structural threat to national TV industries? What would be the right balance between protecting right holders' revenues and guaranteeing access for consumers?
The European ruling on portability, issued last December, is a most appropriate initiative and brings very good news to all European citizens, who will have access to their national offers of digital content when they travel abroad within the Union. Granting such a significant benefit to travelling and nomadic citizens should nevertheless not threaten the principle of the territoriality of rights, which remains a very important piece of the framework for preserving a fair remuneration of authors. The application of rights' portability should also not hinder the commercial development of European players. The precise conditions of portability therefore now have to be carefully designed, through a clear specification of the criteria characterizing temporary versus permanent residence. Finally, a realistic time frame should be set, one that is not too short, in order to ease operational implementation by operators.
In recent years, growth in linear TV revenues has tended to stagnate in Western Europe, while on-demand services, mainly SVOD, have been generating increasing traffic with low monetization rates. On the other hand, traditional broadcasters currently face stricter rules than on-demand video services in some areas, such as promoting European cultural works. In your view, what would be the right balance between promoting European OTT players and protecting the traditional broadcasting market?
Seeking a "right" balance here is maybe not fully appropriate, since consumers do not share a single profile of usage. Consumption practices indeed vary greatly, especially according to age and social class, which leads to a wide scope of expectations in terms of kind of content, modality of usage and type of viewing device: television, tablet or smartphone. Linear TV and OTT services are more likely complements than substitutes, since they do not address the same audience and are operated under different business models. Therefore, the relevant issue is less that of balancing efforts between online and traditional supply than that of designing tailored offers, well fitted to contrasting individual needs, and identifying efficient synergies as regards, for instance, the circulation and cross-promotion of works. In this direction, a major difficulty must be overcome: the market prices of online services are established at a low level, those of SVOD lying around €10 per month, such that they do not enable a single player to make the substantial investment required to produce attractive, competitive and self-sustaining content. Hence, a consolidation of means at the European scale appears to be a necessity. Finally, demand must be stimulated as well as supply, and in this respect education in media and in European culture is a key factor of success.
Is there any need for concentration in both the service publishing and distribution sectors in order to allow European champions to emerge? Should this solution be supported by national regulators?
A process of concentration across players located at different links of the audio-visual value chain, or even between actors present within that chain and outsiders, may already be observed in France, just as in other European countries. Major recent examples in France are the merger of Numericable and SFR, the agreement between Altice and NextRadioTV, the acquisition of Newen by TF1, and the integration of Canal+ within Vivendi. Public policy should of course encourage all industrial strategies which favour a cultural rebalancing, enhance the exposure of French and European cultural heritage and increase their value. Regulators should nevertheless be most attentive in ensuring that major transformations in the audio-visual industry do not threaten fundamental ethical principles, such as freedom of expression, editorial freedom and the independence of information.
Nicolas CURIEN, a member of the Corps des Mines, has sat on the board of the French Regulatory Body for Radio and Television (CSA) since 2015. He is also Emeritus Professor at the Conservatoire National des Arts et Métiers, where he held the chair "Telecommunications Economics and Policy" from 1992 to 2011, before being a Commissioner in the French Regulatory Body for Telecommunications and Post from 2005 to 2011. An expert in digital economics, he taught at École Polytechnique from 1985 to 2007 and is a founding member of the French National Academy of Engineering.
Nathalie SONNAC (Doctor of Economics) chaired the Information and Communication Department of Paris 2 from 2009 to 2015 and was in charge of the professional Master 2 "Media & Public". An expert in the economics of media, culture and digital technology, she is also the author of numerous scientific books and articles in this field. More specifically, she analyses the issues of competition and regulation in the digital age, market interaction, new business models, and the monetization of digital content. She was appointed Commissioner at the Conseil supérieur de l'audiovisuel by the President of the French National Assembly on January 5, 2015 for a six-year mandate.
More information on DigiWorld Economic Journal No. 101 "Towards a single digital audiovisual market" on our website
Published in DigiWorld Economic Journal DWEJ No. 101
Interview with Lorena Boix Alonso, EC-DG Connect, Brussels
Conducted by Sally BROUGHTON MICOVA
DW Economic Journal: You recently completed a comprehensive consultation on audiovisual media services with a view to possible revision of the EU's regulatory framework in this area. How much of a call for change is there from stakeholders?
Lorena BOIX ALONSO: The Audiovisual Media Services Directive (AVMSD) was adopted in 2007 and replaced the Television Without Borders Directive of 1989.
Since 2007 – let alone since the 1980s – the audiovisual media landscape has changed significantly, in particular due to the phenomenon of media convergence. In light of these changes, we are currently reviewing the Directive and assessing its regulatory fitness, with a view to presenting a new legislative proposal later this year.
The public consultation we organised last year is an important part of this exercise and informs our future actions.
Currently, the AVMSD regulates television broadcasts and on-demand services. It applies to programmes that are "TV-like" and for which providers have editorial responsibility. The preliminary trends of the consultation show some convergence of stakeholders' views on the need to revise the scope of application of the rules. However, respondents are not always clear as to how to do this, what new services should be covered and what type of rules they should be subject to. The main concern seems to be the protection of viewers, including minors.
A crucial pillar of the Directive is the so-called country of origin principle. Thanks to this principle, service providers need only abide by the rules of a single Member State rather than those of multiple countries – making things simpler for businesses, especially those wishing to develop cross-border. Quite unsurprisingly, most of the respondents to the consultation want to maintain the country of origin principle.
De facto, the country of origin principle has facilitated the growth and proliferation of those services. As of the end of 2013, 5,141 TV channels were established in the EU. Almost 2,000 of them targeted foreign markets. This share increased from 28% in 2009 – the year the Directive was implemented – to 38% in 2013 (and from 45% to 68% for the UK). As far as VoD services are concerned, in 2015, on average across Member States, 31% of the VoD services available were established in another EU country.
Another subject on which we observed a clear trend in the responses to the public consultation is the importance of ensuring the independence of national audiovisual regulators.
We have however observed less clear trends regarding other areas covered by the Directive, for example on the way forward for the rules on protection of minors, commercial communications and promotion of EU works.
The independence of regulatory authorities has historically been a touchy subject for some Member States, and thus it was not really dealt with in either the current Directive or the one before it. However, things seem to be different this time around, particularly with the regulators themselves taking a stand on the issue. Why have things changed, and what exactly is on the table?
The independence of audiovisual regulatory bodies is key for the impartial implementation of legislation (i.e., free from influence by political players or industry). When regulatory bodies lack independence, this has a direct impact on the effective transposition and application of EU legislation. This is why many EU regulatory frameworks in other domains (e.g. telecoms, gas, electricity, postal services, personal data protection) require regulatory independence from Member States. In the field of media, regulatory independence is also important for the preservation of a free and pluralistic media system.
However, the Audiovisual Media Services Directive does not impose an explicit obligation on the Member States to create an independent regulatory body.
The ongoing review of the AVMSD is assessing whether the Directive should be reinforced by explicitly requiring Member States to ensure the independence of audiovisual regulatory bodies. As I said, the preliminary results of the public consultation indicate that the majority of respondents would support this position.
The Commission has established the European Regulators Group for Audiovisual Media Services (ERGA), which is – among other tasks – looking precisely into the issue of independence. And yes – in particular following the newly approved amendments to the Polish media law – the Group has recently pointed to the importance of independence.
ERGA called "upon all Member States of the European Union to act to uphold the principle of independence of the media across all European Member States." The Group also called on the Commission "to continue to actively monitor developments and to take all necessary steps to support a free and independent media, including the taking of firm action against the weakening of the necessary institutional arrangements".
How does what your team is working on in relation to audiovisual media services interact with other elements of the Digital Single Market plans such as copyright reform and addressing online intermediaries?
The Digital Single Market (DSM) strategy for Europe calls for a modernisation of the AVMSD to reflect market, consumption and technological changes. It requires the Commission to focus on the scope of the AVMSD and on the nature of the rules applicable to all market players, in particular those for the promotion of European works, the protection of minors, and advertising.
The overall vision of the DSM strategy is to create an internal market for digital content and services and ensure that Europe is a leader in the global digital economy. To meet this objective, the DSM puts forward a range of initiatives beyond the AVMSD review.
The AVMSD review is being coordinated with these other DSM initiatives, such as the assessment of the role of online platforms and intermediaries, as well as the evaluation of the telecoms framework. In addition, the Commission continues to work on the modernisation of the copyright framework and on the implementation of a set of support measures accompanying these legislative changes, in order to facilitate cross-border access to European content within the Digital Single Market.
What can we do about "the Netflix problem"? Have any good ideas come to light in your consultations in relation to OTT audiovisual services?
We are well aware of the concerns, raised by some in the public consultation, related to the lack of a level playing field resulting from the different levels of requirements introduced by Member States. This relates particularly to the promotion of European works.
New players are starting to invest in new content. This is already a trend in the US, and US players active on the EU market, e.g. Netflix and Amazon, are also starting to invest in European productions. European VoD players, too, are increasingly financing European content, often in the form of co-financing.
However, it is true that these players do not contribute to the financing of new European content to the same extent as traditional players (television and cinema) do.
All these aspects are being considered in the context of the AVMSD review. With that in mind, and even though all options are open at this stage, during our assessment we are looking in particular into the best ways to ensure the promotion of European works in on-demand services.
How do you think we are going to be able to encourage European content production and distribution in the future?
The promotion of European works is a key value of the Directive. The current provisions of the Directive have contributed to cultural diversity in Europe through the production and distribution of valuable European content. For instance, the 66th Berlinale film festival that took place in February was a very good example of the creative power and diversity of cinema, with a new attendance record. I believe we can celebrate the fruits of the work of the European audiovisual and film industry, of which we can all be very proud.
However, it is undeniable that the market and viewing habits have changed since the last review of the Directive, in particular with the rapid development of video on demand. Young people increasingly consume audiovisual content online. People want access to audiovisual content whenever and wherever they are, on the device of their choice. Technology has made this possible.
I believe this can be a great opportunity to increase the production and circulation of European films. The Commission is very much keeping this objective in mind in the revision of the AVMSD rules on promoting European works, as well as in the context of the implementation of the Creative Europe MEDIA programme. In addition, the Commission is launching other coordinated initiatives to exploit all available synergies to increase the attractiveness of European films. This requires measures in various areas on which the Commission is working together with all interested parties, including the audiovisual sector (film producers, authors, distributors, sales agents, VoD services, broadcasters, etc.) as well as public authorities and film funds, in the framework of the European Film Forum.
In December 2015, the Commission adopted the Copyright Communication "Towards a modern, more European copyright framework", which sets out an agenda of non-legislative measures meant to accompany the legislative agenda in order to ensure wider access to audiovisual content across borders. The rationale for these measures is that audiovisual works and films require investment in order to really benefit from the DSM and to be widely accessible. Audiovisual works and films need to be available in formats and catalogues ready for use and to be understood (the issue of language versions).
Finally, the Commission is also deeply engaged in the Creative Europe MEDIA programme, which this year celebrates its 25th anniversary. Through this programme the EU invests roughly €100 million per year in European films and audiovisual industries and supports projects aimed at enhancing the prominence of European films on VoD platforms.
Lorena BOIX ALONSO has been the Head of the Converging Media and Content Unit in the Directorate-General for Communications Networks, Content and Technology since July 2012. Formerly, she was Deputy Head of Cabinet of Vice-President Neelie Kroes, European Commissioner for the Digital Agenda. During Ms Kroes' mandate as Commissioner for Competition, Lorena Boix Alonso joined her Cabinet in October 2004 and became Deputy Head of Cabinet in May 2008. She holds a Master of Laws, with a focus on antitrust law and intellectual property, from Harvard Law School. She graduated in Law from the University of Valencia (Spain) and then obtained a Licence Spéciale en Droit Européen from the Université Libre de Bruxelles. She joined the European Commission's Directorate-General for Competition in 2003. Prior to that, she worked for Judge Rafael García Valdecasas at the European Court of Justice, as Deputy Director and Legal Coordinator of the IPR-Helpdesk Project, and in private practice in Brussels.
Published in DigiWorld Economic Journal DWEJ No. 100
Interview with Mark T Bohr
Intel Senior Fellow, Technology and Manufacturing Group Director, Process Architecture and Integration Conducted by Gilbert CETTE & Yves GASSOT
C&S: Moore's Law is turning 50. Can you comment on and characterise the progress so far? How important is this in the amazing digital development that we're witnessing?
Mark T. BOHR:
Moore's Law is a driving force of technological, economic and social change and is a foundational force in modern life. While most people have never seen a microprocessor, we use countless devices every day that are made possible by microprocessors and Moore's Law. Microprocessors and related technologies have become so integrated into daily life that they've become indispensable, yet nearly invisible.
Despite regular predictions of its demise, Moore's Law endures and remains essential to today's generation, which has come to expect and enjoy the experiences and opportunities defined by the observation.
Moore's Law will enable us to continuously shrink technology and make it more power efficient, allowing Intel and the industry to rethink where – and in what situations – computing is possible and desirable. Computing can disappear into the objects and spaces that we interact with – even the fabric of our clothes or shoes. New devices can be created with powerful, inexpensive technology, and by combining this with the ability to pool and share more information, new experiences become possible.
Moore, in a recent interview, said he thought that in the coming 5 to 10 years his "law" would still be validated… Other observers think it saw a period of acceleration in the decade after 1990, followed by a sharp slowdown in the 2000s. Do you share this view? How do you account for the different analyses? Do you think Moore's Law has slowed down because of the physical limitations to increasing the number of transistors per chip? Because of the 'diversion' of some R&D spending by chip producers towards the fight against heat generation? Because of the exponential and hence unsustainable increase in the R&D spending it would take to extend Moore's Law? Or for other reasons?
The demise of Moore's Law has been predicted many times. Continuing Moore's Law is getting tougher, but we believe we have a lead versus our competitors. We remain confident in our ability to deliver Moore's Law and expect to continue true cost reduction through leading-edge process technology and generating real product improvements that apply across our product portfolio.
What other constraints might call the validity of Moore's Law into question?
We can't speak for others in the industry. Intel recognizes that the continuation of Moore's Law provides us with a competitive differentiator and the ability to bring higher-performance and lower-cost technologies to market quicker than our competition. Over the last several decades, we've said that we can see Moore's Law continuing for the next 10 years, and that is still the case.
Faced with these difficulties, what are the various alternative options (3-tier architecture, superconductivity technologies, biochips...) that researchers are working on? Which ones do you find the most promising?
In addition to making the features on a chip smaller, Intel is exploring numerous technologies, including:
1) Heterogeneous integration, in which elements such as radios and sensors are integrated onto one piece of silicon or into one package;
2) Three-dimensional manufacturing with multiple layers of transistors;
3) Approaches beyond traditional CMOS including high mobility materials and new transistor structures with improved electrostatics;
4) New ways of computing including neuromorphic, or brain-inspired, computing and in-memory computing.
In 1966, the cost of constructing a plant for a new chip was $14 million. In 1995, it took $1.5 billion. Today we talk in terms of $10 billion… What is the justification for this cost explosion? Will this trend continue? What impact will it have on the price of components?
Pursuing Moore's Law is getting more expensive in part because the job is getting more difficult. For Intel, the fundamental rationale of Moore's Law continues – even though it's more expensive overall, the price-per-transistor for Intel continues to decrease with each new generation. Intel will continue investing as long as we see a positive return and a competitive advantage.
Intel and some other U.S. firms dominate the microprocessor industry… How do you explain the continued U.S. leadership in this area?
The semiconductor industry started in the U.S. but it certainly isn't a U.S.-only industry today. Intel's chip-making plants can be found in the U.S., Europe, Israel and China, and large manufacturers – Samsung and TSMC – are headquartered in Asia. It's a competitive industry, and we're proud that Intel is the world's largest chip company by revenue and is recognized as the leader in the pursuit of Moore's Law.
Mark T. BOHR is an Intel Senior Fellow and director of Process Architecture and Integration at Intel Corporation. He is a member of Intel's Logic Technology Development group located in Hillsboro, Oregon, where he is responsible for directing process development activities for Intel's advanced logic technologies. He joined Intel in 1978 and has been responsible for process integration and device design on a variety of process technologies for memory and microprocessor products. He is currently directing development activities for Intel's 7 nm logic technology. BOHR is a Fellow of the Institute of Electrical and Electronics Engineers and was the recipient of the 2012 IEEE Jun-ichi Nishizawa Medal and 2003 IEEE Andrew S. Grove award. In 2005 he was elected to the National Academy of Engineering. He holds 73 patents in the area of integrated circuit processing and has authored or co-authored 49 published papers.
More information on DigiWorld Economic Journal No. 100 "Digital innovation vs. secular stagnation?" on our website:
Published in DigiWorld Economic Journal DWEJ No. 100
Interview with Philippe AGHION
College de France, London School of Economics
Conducted by Gilbert CETTE & Yves GASSOT
C&S: Is more competition always favourable to innovation? Many representatives of the telecom industry argue that innovation and investment in this sector are badly impacted by the intensity of competition. Do you share this analysis?
Philippe AGHION: My work with Richard Blundell and co-authors shows that competition boosts innovation for firms that are close to the technological frontier (this is the escape competition effect) whereas it may discourage innovation in firms far below the technological frontier (this is the discouragement effect). Overall, the effect of competition on innovation is an inverted-U: innovation increases with competition at low levels of competition and it decreases with competition at high initial levels of competition.
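The inverted-U described above can be sketched with a toy functional form (purely illustrative; the quadratic shape and the [0, 1] competition scale are assumptions, not the formal model in Aghion's work):

```python
# Purely illustrative sketch of the inverted-U relationship between
# competition intensity c (on a [0, 1] scale) and innovation. The
# quadratic form is an assumption, not Aghion et al.'s formal model.
def innovation(c):
    """Toy innovation rate: the rising factor stands in for the
    'escape competition' effect, the falling factor for the
    'discouragement' effect; their product is an inverted U."""
    return 4 * c * (1 - c)

grid = [i / 100 for i in range(101)]
peak = max(grid, key=innovation)

# Innovation rises with competition at low initial levels...
assert innovation(0.3) > innovation(0.1)
# ...and falls with competition at high initial levels.
assert innovation(0.7) > innovation(0.9)
print(f"interior peak at c = {peak}")
```

The interior maximum is the key qualitative feature: innovation is highest at an intermediate level of competition, not under monopoly or under the most intense competition.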
Productivity has slowed down in the U.S. and in the main developed countries since the mid-2000s. How do you explain this slowdown, given the dramatic momentum of the digital economy? Are you optimistic about a new productivity surge in the near future?
Part of the slowdown in the U.S. may be due to the fact that the ICT wave has partly run out of steam. But I also believe that innovation is not properly taken into account when measuring productivity growth, and this is particularly true in sectors that experience a high degree of firm turnover and where innovations are made by newcomers in the market. In the long run I am optimistic for at least two reasons. First, the ICT revolution has improved the technology for producing new ideas. Second, with the advent of globalization, the returns to innovation have greatly increased.
Are ICTs the main driver for innovation allowing for a productivity surge in the future?
I think that with 3D printing and the cloud, the ICT sector still has glorious days ahead. But I also anticipate breakthroughs in other sectors, for example in renewable energy and in the health/biotech sector.
In your view, does innovation increase inequality?
My recent work shows that innovation contributes to increasing the fraction of income earned by the top richest 1% or 0.1%. But this inequality is temporary, as innovation rents are eroded by imitation and disappear when current innovations are eventually replaced by newer ones (the Schumpeterian process of "creative destruction"). Moreover, my co-authors and I show that innovation does not increase overall inequality and that it enhances social mobility (again as a result of creative destruction).
Philippe AGHION is a Professor at the College de France and at the London School of Economics, and a fellow of the Econometric Society and of the American Academy of Arts and Sciences. His research focuses on the economics of growth. With Peter HOWITT, he pioneered the so-called Schumpeterian Growth paradigm which was subsequently used to analyze the design of growth policies and the role of the state in the growth process. Much of this work is summarized in their joint books Endogenous Growth Theory (MIT Press, 1998) and The Economics of Growth (MIT Press, 2009), in his book with Rachel GRIFFITH on Competition and Growth (MIT Press, 2006), and in his survey "What Do We Learn from Schumpeterian Growth Theory" (joint with U. AKCIGIT & P. HOWITT). In 2001, Philippe Aghion received the Yrjö Jahnsson Award for the best European economist under age 45, and in 2009 he received the John von Neumann Award.
Published in DWEJ No. 100
Interview with Joel MOKYR
Professor of Arts and Sciences and Professor of Economics and History, Northwestern University, USA
Sackler Professor (by spec. appt.), Tel Aviv University, Israel
Conducted by Gilbert CETTE & Yves GASSOT
C&S: As a well-known economic historian, you have done extensive work and research on industrial revolutions and the conditions behind the emergence of British leadership in the 19th century. This could have led you, like your colleague and friend at Northwestern University, Robert Gordon, to downplay digital innovation, fearing that in the absence of breakthrough inventions the world is returning to a long period of stagnation. But this isn't the case. And while some people recognize the power of the digital transformation yet tend to focus on the damage and suffering it can cause, you, while not denying the short-term consequences, see in it the typical characteristics of the creative destruction so dear to Schumpeter.
How do you justify your optimism in regard to the digital revolution at a time when productivity has been slowing down in all developed countries since the early 2000s, and the pace of productivity growth is very low? To what extent can this slowdown be accounted for by the deficit of our statistical system (the limits of what is taken into account by GDP)? By the delay in spreading the digital innovation throughout the various sectors? By the delay in adapting and training the workforce? Or the fact that digital innovation potential (AI, 3D printing, ...) will essentially be realized in the future?
Joel MOKYR: To start off, I don't see the future of technological progress as merely defined by the "digital revolution." AI, robots, 3D printing and such will be an important part of our technological future, but I see progress on a much broader front. Technology will continue to develop at an ever faster rate. But much of that will be necessary to repair the damage that previous innovation has caused. Climate change is only the best known of a whole array of phenomena in which past advances have had unknown and hidden costs that now have to be paid. These costs will be lower if we get better technology, but then that technology will have unintended and unpredicted consequences. And so on. There is progress, of course, but it is not linear, it is not even monotonic. If we knew precisely in advance what every innovation implied, it would not be much of an innovation.
You have on occasion emphasized the interactions between the progress of instruments, breakthrough innovation in technology and scientific invention. How would you apply the analyses you developed for the 18th and 19th century to the components of the digital revolution today?
Compared to the tools we have today for scientific research, Galileo's and Pasteur's look like stone age tools. Yes, we build far better microscopes and telescopes and barometers today, but digitalization has penetrated every aspect of science. It has led to the re-invention of invention. It is not just "IT" or "communications." Huge searchable databanks, quantum chemistry simulation, and highly complex statistical analysis are only some of the tools that the digital age places at science's disposal. Vastly more sophisticated tools – just think of the Betzig-Hell nanoscopes for which the inventors earned a Nobel Prize last year – will allow us to work at smaller and smaller levels of both materials and living things.
Materials are the core of our production. The terms bronze and iron ages signify their importance; the great era of technological progress between 1870 and 1914 was wholly dependent on cheap and ever-better steel. But what is happening to materials now is nothing short of a sea change, with new resins, ceramics, and entirely new solids designed in silico and developed at the nano-technological level. These promise materials nature never dreamed of, delivering custom-ordered properties in terms of hardness, resilience, elasticity, and so on. New resins, advanced ceramics, carbon nanotubes and other new solids have all come online. Graphene, the new super-thin wonder material, is another substance that promises to revolutionize production in many lines. The new research tools in materials science have revolutionized research.
Of perhaps even more revolutionary importance is the powerful technology developed by Stanley Cohen and Herbert Boyer in the early 1970s, in which they succeeded in creating transgenic organisms through the use of micro-organisms. Genetic selection is an old technology: nature never intended to create poodles. But genetic engineering is to artificial selection what a laser driven fine-tuned surgical instrument is to a meat-axe. The potential economic significance of genetic engineering is simply staggering, as it completely changes the relationship between humans and all other species on the planet. Ever since the emergence of agriculture and husbandry, people have "played God" and changed their biological and topographical environment, creating new phenotypes in plants and animals. Genetic engineering means we are just far better at it.
Do you think that in the long-term future, productivity gains will be mainly driven by breakthrough innovations like the creation of new microprocessors with enhanced performance or the implementation of existing innovations in several areas? And in the latter case, isn't there a risk that the induced productivity gains will gradually dwindle?
I don't believe they will ever dwindle. But I think that productivity growth as traditionally measured will become largely irrelevant in describing what is really going on. Such techniques were designed to measure process innovations that allowed firms to produce wheat and steel with fewer inputs. It is much harder to use them to measure quality improvements, many of them subtle and often hard to quantify (e.g. the introduction of airbags into cars or more sophisticated diagnostic machinery). It is even harder for traditional NIPA to deal with entirely new products such as anesthesia or microwave ovens or online encyclopedias.
For some, the collaborative economy is one of the most fruitful products of the internet. Should we see this primarily as an illustration of the capacity of digital to reduce transaction costs or as the sign of a possible surpassing of the market economy?
Technology will change the market economy. The "share economy" (now already known to some as the "uber-economy") has transformed urban transportation, and Airbnb is transforming tourism. But these will be dwarfed by the impact of digital technology on the labor market, as already illustrated by TaskRabbit handymen, UpCounsel on-demand attorneys, UrbanSitter for babysitting and HealthTap for online doctors. But this is just scratching the surface. Digital technology will change the labor market as much as the factory did during the Industrial Revolution. The factory eventually replaced the home as the main location where production took place. That pendulum may swing back, especially if mass customization through home manufacturing (somewhat misleadingly called 3D printing) starts spreading. If both Robert Reich and Jeremy Rifkin are panicking about this, it cannot be all bad.
Your work has been partly guided by the question as to why the industrial revolution primarily took place in the UK rather than in Germany or France? Can you draw a parallel with the North American domination that we are seeing today in microprocessors, software and the internet? What conditions have favored this supremacy? What factors could threaten it? What priority changes could enable Europe to acquire the necessary conditions to compete with the digital domination of the US?
I am not sure that I am still all that overawed by the question of "why Britain first". The parallel is the putative "domination" of Americans today in high-tech. Rather than seeing the leader as the locomotive that pulls the entire train forward, I think of this as an electric train, in which the motive power is external, and the lead car is there more or less by accident. Technology today is the result of a multinational effort in which boundaries mean less and less. Finland led in cellphones, Israel in flash storage, France in nuclear power – so what? Does that mean they alone can use it? Let's face it, in today's world, if an invention is made somewhere, it is made everywhere. Silicon Valley is in the US, but half of the people working there are foreign-born. They could be anywhere (as long as they are together). Of course, if a country has really terrible institutions, such as Putin's Russia or Khamenei's Iran, they are not only not likely to generate new technology, but may even find it hard to absorb it. But nations such as Norway or Switzerland will always be at the frontier even if they are contributing relatively little to pushing it out.
Many observers agree that the 21st century will be marked by the emergence of China in the forefront of the global economy. Do you think this country has the necessary conditions or is developing the conditions to establish its supremacy with new leadership in digital technology sectors?
No. Their institutions are not quite as bad as those of Russia or Nigeria, which are corrupt to the core and where a small kleptocracy extinguishes entrepreneurship. But technological progress requires more than a thriving and well-functioning market economy. What you need is not only the rule of law, respect for property and human rights, and the enforcement of contracts. What you need is pluralism, tolerance, and freedom of expression and association. You need political competition and decentralization, in which the ruling elite is held accountable and the government is constrained in what it can do to its citizens. We need to keep in mind that innovators were and are deviants, people who in some way are different and abnormal, eccentric perhaps, and in conformist societies such people are in some way suppressed. Europe's advances started in earnest when those who thought "outside the box" no longer feared being accused of "black magic" or heresy. Chinese history is a fascinating story of how incredible creativity and sophistication were essentially wasted after the Song dynasty and China fell behind the West. Mutatis mutandis, the same is true for the Soviet Union. The potential of Soviet Russia was huge, but bad institutions channeled its creativity into Sputniks, MiGs and Katyushas and little else.
Joel MOKYR is the Robert H. Strotz Professor of Arts and Sciences and Professor of Economics and History at Northwestern University and Sackler Professor (by special appointment) at the Eitan Berglas School of Economics at the University of Tel Aviv. He specializes in economic history and the economics of technological change and population change. He is the author of Why Ireland Starved: An Analytical and Quantitative Study of the Irish Economy, The Lever of Riches: Technological Creativity and Economic Progress, The British Industrial Revolution: An Economic Perspective, The Gifts of Athena: Historical Origins of the Knowledge Economy, and The Enlightened Economy. His most recent book is A Culture of Growth, to be published by Princeton University Press in 2016. He serves as editor in chief of a book series, the Princeton University Press Economic History of the Western World. He serves as chair of the advisory committee of the Institutions, Organizations, and Growth program of the Canadian Institute of Advanced Research. Prof. Mokyr has an undergraduate degree from the Hebrew University of Jerusalem and a Ph.D. from Yale University. He has taught at Northwestern since 1974, and has been a visiting Professor at Harvard, the University of Chicago, Stanford, the Hebrew University of Jerusalem, the University of Tel Aviv, University College of Dublin, and the University of Manchester. He is a fellow of the American Academy of Arts and Sciences, a foreign fellow of the Royal Dutch Academy of Sciences, the Accademia Nazionale dei Lincei and a Fellow of the Econometric Society and the Cliometric Society. His books have won a number of important prizes, and in 2006 he was awarded the biennial Heineken Prize by the Royal Dutch Academy of Sciences for a lifetime achievement in historical science. In 2015 he was awarded the Balzan Prize for Economic History.
French Minister of the Economy, Industry and Digital Affairs
In the DigiWorld Economic Journal No. 100
ICTs do not constitute a sector of our economy: they are its defining new element. We have indeed rarely seen technological breakthroughs that simultaneously alter the three pillars of an economy: its production, its consumption, its labor relations. Whatever their outcome, they already amount to a new "Great Transformation" of our societies.
First, and most classically, ICTs were the main source of productivity gains in the recent period. From the 1990s on, their ever more efficient production (in the so-called "ICT-producing sector") but also their diffusion and use in the broader economy were a major element in an otherwise moderate output-growth environment. Between 2001 and 2007, their contribution to annual GDP growth in eight major EU economies was estimated by CORRADO & JÄGER (2014) to be as high as 1 percentage point.
Second, ICTs offer new goods to consume and, more interestingly, even change what "consuming" means, legally, statistically and culturally. Let me provide some examples. "Big data" makes tailor-made products ever more available, but raises difficult property rights questions at the intersection of privacy, innovation and growth: we can neither waive all personal controls, nor destroy all incentives for the first-collecting firms, nor prevent the rest of the economy from exploiting the data to their full value. A new compromise must be forged, with the relevant tradeoffs between privacy and innovation discussed openly. The "platform model", with its natural tendency towards network effects and economies of scale, must be integrated within our competition policies. The "sharing economy" has met with well-deserved enthusiasm, especially in France, but a big part of it is still not included in GDP figures. The "Internet of Things" is an impressive promise, but it does not fit the traditional boundaries between sectors and will probably run into resistance from traditional management culture.
Third, ICTs create a new demand for non-traditional forms of work. By reducing and transforming the need for intermediaries, and by improving matching efficiency between customers and providers, they make work more flexible and more independent. In France, the secular movement towards payroll employment stopped in the early 2000s. Since 2006, the share of independents in the total workforce, excluding agriculture, has even risen by 26%! The "auto-entrepreneur" status, for instance, has been a real success, with one million people now registered, precisely because it provides the required simplicity and flexibility.
Our infrastructure is already first rate. Broadband penetration is higher than the OECD average. Though we lag behind in terms of fiber development (which accounts for only a little less than 4% of high-speed subscriptions, against 17% for the OECD average), we are rapidly catching up (fiber subscriptions grew by more than 60% in 2013-2014). More generally, in recent years, increased competition has generated lower prices, simpler offers and more innovation.
But our social and political institutions, inherited from a period of Taylorism, mass consumption and catching-up development, are ill-suited to meet these new challenges. Their inertia has long been seen as a source of protection, but may now be stifling economic dynamism to a greater extent than we thought, while not even serving well their primary goal of social protection and individual empowerment.
To rejuvenate their spirit, we must ensure that they still support innovation, diffusion and inclusiveness. These are the three targets of my nationwide economic agenda: delivering "Nouvelles Opportunités Economiques" (New Economic Opportunities).
Innovation is a complex phenomenon. It requires a subtle mix of flexibility, investment, cooperation and competition: firms must have the means to innovate, the opportunity to learn and the incentive to develop. We have already made a historic effort to support corporate profitability, and indeed profit margins, which had been falling since 2007, have been rising since mid-2014. We also boosted the development of good practices through the "Industries du Futur" initiative. But we need to go further in removing barriers to entry in overregulated sectors and opening up data to competitors. We should also support the development of venture capital, which has proved a key element in the transformation of our numerous startups (where Paris ranks 2nd in Europe) into "unicorns" (where France ranks only 5th in Europe). Banking intermediation is indeed inadequate when risks are high, close screening is required and immaterial collateral is not easily pledgeable.
Diffusion is a related, though distinct, issue. The productivity slowdown is much less salient at the technological frontier than in the rest of the economy: in OECD countries, output per worker increased annually by 3.5% between 2001 and 2007 for the 100 most productive firms in each manufacturing sector, compared to 0.5% for the others. In services, these figures are respectively 5.5% and 0.3%! This gap is not only very large, it has widened. There is something broken in the diffusion machine. It is also worth remembering that productivity growth does not come from all firms increasing productivity. Around half of the aggregate productivity gains in industrialized countries are generated by the faster growth of the most productive firms, which attract more workers and more investors. We must encourage this reallocation of factors (between firms and between sectors), whether labor – through increased flexibility – or capital – through lower bankruptcy costs.
Inclusiveness is key. The polarization phenomenon, whereby technology destroys "routine" jobs in the middle of the skill distribution and creates opportunities for both skilled and unskilled work, is well known and well documented. France is no exception for the hollowing out of routine jobs (bank clerks and secretaries, for example). However, it exhibits a relatively high rate of unemployment among high school dropouts (16.1%) and more generally among low-skill workers. It is an apparent paradox, since ICTs either improve their productivity – for instance by improving matching in personal services – or at least cannot act as a substitute – for all activities where social interactions are needed. We are dismantling outdated regulations and lowering labor costs to bring the outsiders of the "old" industrial society on board the innovation economy.
Technology is inherently disruptive. But politics is about inclusiveness and trust. Forging a new social pact is not an additional burden on the road to a new economic model: it is a necessary step, for it conditions its long term sustainability. We must allow the necessary flexibility by making social protection better adapted to independent work, multiple activities and diverse careers. We must also provide the necessary skills (through training as well as initial education) to answer both the present and future demands.
At which speed will ICTs develop and what level of growth rate will they help us achieve? Robert GORDON has brilliantly exposed the "supply side" hypothesis of the "secular stagnation" debate. But at the other end, we also hear the arguments of those telling us we are on the verge of massive breakthroughs. Should we turn to statistics? Yes, they seem to show a slowdown in ICT productivity, but at the same time venture capital investments in the US, at their highest level in fifteen years, promise renewed dynamism.
Which employment structure will they foster? The studies on polarization now describe well what happened in recent decades. But in the coming years we may see a new surge in jobs with intermediate skills, for instance in the medical sector, where the productivity of nurses could soon be multiplied. For example, by collecting data from a number of wearable devices or sensors, the "internet of me" in the health care sector will mean much more personalized demand on nurses, who will become much more effective at responding to it. Again, this requires investment in training.
All in all, these innovations are paved with uncertainties, as "industrial revolutions" always were. If you had asked an Englishman about the industrial revolution in 1780, he would have asked what you meant. In 1820, he would have expressed his longing for a vanishing agricultural society. In 1860, he would have claimed that it lifted millions out of poverty and opened the way to a supposedly everlasting progress.
I do not assume that present innovations will follow a similar course. But I believe that we cannot foresee, even less enclose, what is yet to be. We must take the best from our past (the ambition of our social protection, the talents of our industries, the quality of our infrastructures), seize the maximum from our present (the renewed demand for work, the widening of opportunities, the creation of new services and new markets) and be ready for the future.
 Austria, Finland, France, Germany, Italy, Netherlands, Spain and the United Kingdom.
 CORRADO, C. and K. JÄGER (2014): "Communication Networks, ICT and Productivity Growth in Europe", The Conference Board, New York, December.
CEO, IDATE DigiWorld
The common perception is that digital innovation is everywhere, and that the pace of innovation is accelerating as it applies to every sector, every business and every organisation.
And yet, economists are wary. Productivity gains have clearly been slowing since the mid-2000s, even before the economy collapsed in 2008. Nor is this a phenomenon confined to Europe, which could explain why it lags behind market leaders; it applies to the US as well. We are reminded of the words of Nobel Prize-winning economist Robert Solow, back in the 1980s: "You can see the computer age everywhere but in the productivity statistics". Although we are by no means enjoying gains comparable to those of the 1920s or the great post-war boom, the effects of the Internet revolution can still be seen in the statistics for 1995 to 2005. In other words, before the iPhone, before the smartphone and mobile Internet explosion, before 4G, the cloud and the onset of Big Data…
So the experts are divided into two camps: the techno-pessimists aligning themselves with Robert J. Gordon are convinced that the potential for digital innovation is dwindling, sinking very quickly into useless innovations, the latest gadget for the latest smartphone. They do not see any disruptive innovation that will impact productivity and growth in a way that is comparable to the steam engine or the electric motor. After all, they point out, history does not end here: up until the latest industrial revolutions, people in Western societies lived with very moderate productivity gains and GDP growth.
Meanwhile, the techno-optimists aligning themselves with Brynjolfsson and McAfee remain confident, pointing to new waves of innovation with artificial intelligence, new generation robots, the Internet of Things and 3D printing. Even Moore’s Law – the Law named after the co-founder of Intel who, fifty years ago, predicted that the number of transistors in an integrated circuit would double every two years, and which, somewhat unfortunately, appears to have caught on as the measuring stick for the digital revolution’s maturity – is expected to continue to hold true for at least another ten years. From a more general perspective, there are some such as Joel Mokyr who express their optimism by saying we underestimate the effect that the Internet has on change and improving human welfare, on accelerating access to knowledge in every scientific and technical field.
Behind this very black and white division, there are those who are interested in the failings of the statistical apparatus and in the price effects (deflation) that can distort the measurement of the different sectors' ICT spending. Ultimately, however, their attention is focused on the conditions that would help reduce the lag time, i.e. the time it takes for digital technologies' productivity potential to kick in. Here, authors such as Gilbert Cette and Philippe Aghion stress the importance of ambitious and efficient public policies on education and training, seeing them as the cornerstone of a successful innovation policy and an answer to the phenomenon of qualified job opportunities being concentrated in a few major cities. They also stress the importance of reforms if we want to see the Schumpeterian cycle of innovation play out in a fluid and positive way, reduce the divide between a small fraction of highly productive businesses and an economic fabric turning in mediocre performances, while building up the majority's trust in the digital transformation. We will add that it is useful, as Larry Summers does on a regular basis, to stress the importance under these circumstances of investments in infrastructure (think fibre and superfast mobile), and that nothing prevents us, as Daniel Cohen suggests in his latest work, from examining the wisdom and quality of innovation policies, by underscoring the ways in which digital technologies can contribute to turning the tide on climate change.
Digital innovation vs. secular stagnation?
N° 100 - DigiWorld Economic Journal
The DigiWorld Economic Journal is celebrating its 25th anniversary with this issue No. 100. For this jubilee issue, editors Gilbert Cette and Yves Gassot have collected contributions from leading economists who examine the links between digital innovation and the associated developments, direct or indirect, in terms of productivity, growth and job creation. The guest authors do not all adopt the same angle of analysis, nor do they all share the same theses... But in reading this issue, you will discover a different way of thinking about the big questions these topics raise.
Buy the DigiWorld Economic Journal now!
Published in Communications & Strategies No. 99
Interview with Bruno JULLIEN
IDEI, Toulouse School of Economics
Conducted by Marc BOURREAU
C&S: The concept of platform is sometimes used in a very broad way in the policy debates. How would you define platform/multi-sided markets? What is the difference between a one-sided and a multi-sided market?
Bruno JULLIEN: It is difficult to provide a formal definition of a platform in economics, and there is no consensus on one. As a start, I would say that a platform is a bundle of services used by several economic agents in order to interact. In such situations, a side represents a particular type of user (say, sellers on a B2C marketplace, or merchants dealing with a credit card). Each side's benefits depend on what the other sides are doing on the platform. Moreover, the platform may treat the various sides in a differentiated manner: for instance, some may get free services while others pay for the right to access the platform.
From a theoretical perspective, a platform is not necessarily multi-sided. To be so requires two conditions. First, the organization of the platform's services involves network externalities, i.e. the participation and other actions of a user affect other users of the platform. Second, the platform discriminates between different types of users. One criterion sometimes used to determine whether an activity is multi-sided is whether the value of the service for each user depends on the whole structure of prices.
In a multi-sided platform, customers need to consider interactions with other economic agents to evaluate the value of the good or service and to determine their behavior. The final value of the service for the customer is not fully controlled by the platform but results from the agents' interactions. By contrast, in a one-sided market, firms choose the product or service characteristics, and customers' valuations depend only on that choice.
The difficulty with the concept is two-fold. First, it potentially covers a wide range of goods and services, so the multi-sided externalities must be significant enough to be relevant. Second, not all platforms are necessarily multi-sided, as this may depend on the platform's business model. Consider retailing, for instance: a chain store is typically not a multi-sided platform, but the Amazon marketplace is one. The chain store decides which products to carry at which prices, and consumers then interact only with the store and do not care about its suppliers. By contrast, online marketplaces let buyers and sellers jointly determine the products and prices.
The literature on multi-sided markets emerged in the early 2000s (and you were one of the first authors on the topic), but it is still vibrant. What do we learn from recent research on platforms?
The early literature was mostly focused on price theory, explaining the differences between pricing in multi-sided and one-sided markets by emphasizing the need to coordinate users and bring all sides on board. A main contribution has been the development of the concept of opportunity cost, where the cost of providing the service to a user is adjusted to account for the benefits (or costs) accruing to other users. This, however, needs to be put to work in practice, which is part of what the literature is aiming at. The recent literature has developed along several lines. The first is the application of the concept to specific industries, as has been done, for instance, for the Internet, search engines, ad-financed media and credit cards. In the case of media, the recent literature helps us understand the evolution of business models and the implications of mergers. Along the same dimension, research is trying to develop new operational tools for competition policy where traditional results do not apply; there has been work, for instance, on bundling and on econometric models for empirical work and policy evaluation.
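The opportunity-cost logic Jullien describes can be sketched in a stylized textbook-style formula, in the spirit of the early two-sided markets literature (Rochet & Tirole 2003, Armstrong 2006); the notation below is an illustration, not taken from the interview:

```latex
% Serving one more user on side $i$ costs $c_i$, but each such user
% generates an external benefit $\alpha_j$ on the other side $j$.
% The platform's effective (opportunity) cost on side $i$ is therefore
\[
  \tilde{c}_i \;=\; c_i - \alpha_j ,
\]
% and the profit-maximizing price follows a standard markup rule applied
% to $\tilde{c}_i$ rather than to $c_i$:
\[
  p_i \;=\; \tilde{c}_i + m_i \;=\; c_i - \alpha_j + m_i ,
\]
% where $m_i$ is the markup on side $i$. When $\alpha_j$ is large,
% $\tilde{c}_i$ and hence $p_i$ can be negative: that side is served
% for free or even subsidized, as with cardholders or media audiences.
```

This is why observing a price below cost on one side of a platform, taken in isolation, says little about predation: the adjusted cost, not the physical cost, is the relevant benchmark.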
At the theory level, what I retain mostly from recent work is the importance of participation patterns of the users (exclusivity, multiple vs single affiliation, switching) in shaping the competition between platforms.
On the other side of the coin, what do we still not know? What are the key questions where more research is still necessary?
While we have made significant progress in price theory and its applications, there is a lot we don't know and a large scope for future research. For the theory, I think the main issue we need to address is that our theories are mostly static. We need to better understand the dynamics of competition between platforms. What determines the emergence of a successful platform? What is the extent of barriers to entry? What are the respective roles of history and actual merit?
I also expect research to move away from price theory toward design and organization, where most competition takes place. We need to understand when and how a platform decides to interfere in transactions. A recent concrete example is the issue of MFN (Most Favoured Nation) clauses for online booking systems, which prevent registered hotels from offering lower prices on competing websites or through direct sales.
For this we need more empirical work to guide research and applications. Currently, most of the data we see originates from a single platform, so we may expect many studies of agents' behavior on a given platform. But we will also need empirical work on competition between platforms.
For competition and regulation policy, we need more work to propose operational decision tools to competition authorities and regulators. Basic questions such as market definition or tests for predation are still not resolved for platforms. We also have difficulties evaluating the optimal market structure, as more competition may not raise welfare and efficiency. This will require developing research at the frontier between law and economics.
There is a hot policy debate today in Europe on the regulation of platforms. What is your opinion on this question? What are the potential market failures in platform markets, which would justify a regulatory intervention?
The issue is not to identify market failures: these occur whenever there are externalities between users, network effects and market power, as is usually the case with platforms. The main question is whether there is scope for efficient ex ante regulatory intervention. In some cases, ex ante rules or principles are desirable, for instance for privacy issues. But in general I would be cautious and favor ex post intervention, for several reasons. Platforms are very heterogeneous: they may propose very different activities, the same activities may be proposed by very different platforms, and platforms may be more or less vertically integrated. This means it is extremely complex to define ex ante the perimeter of a regulation. Moreover, the same regulation may affect different platforms in different ways; for instance, a pay platform and a free platform are not affected in the same manner by a restriction on data usage. Finally, the markets where platforms operate are dynamic and innovative. Market power has to be evaluated from a dynamic competition perspective, and regulation should not impede this dynamic process.
Notice that it is in the broad interest of a platform to optimize the quality of interactions between its members and to correct externalities, because this raises the platform's value to them. The literature has put some limits on this view, but intervention should occur only for clearly identified failures. I would point out two factors that may matter here.
A key distinction should be made between situations involving bottlenecks and those where all users can easily switch or use several platforms. A bottleneck arises when a platform enjoys exclusive rights over the conduct of transactions with some of its users. This confers some monopoly power over these transactions, and we know that competition between platforms will not eliminate it. We may then want to reduce this market power. This is similar to the one-way access problem familiar to telecommunications regulators.
Second, platforms providing free services to some sides rely on a limited set of instruments to coordinate users, which may not be enough to address issues of externalities. Indeed, good coordination of the sides would require as many prices (or subsidies) as there are sides. Free platforms by nature cannot pass on to consumers the true opportunity cost, which may induce excessive usage or distort the prices charged to the other sides. This may induce inefficiencies and calls for special scrutiny.
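Jullien's point about free platforms can be illustrated with a stylized sketch (an assumption-laden illustration, not from the interview): let $c_1$ be the cost of serving a side-1 user and $\alpha_2$ the external benefit that user generates on side 2, so that the true opportunity cost of side 1 is $c_1 - \alpha_2$.

```latex
% If the platform has committed to a free service, the side-1 price is
% fixed at zero regardless of the true opportunity cost:
\[
  p_1 \;=\; 0 \;\neq\; \tilde{c}_1 \;=\; c_1 - \alpha_2 .
\]
% Two distortions can follow.
% (i) If $\tilde{c}_1 > 0$, then $p_1 = 0 < \tilde{c}_1$: side-1 usage
%     exceeds the efficient level, and the shortfall is recovered through
%     a higher price $p_2$ charged to the other side.
% (ii) If $\tilde{c}_1 < 0$, the efficient subsidy to side 1 cannot be
%     paid, so participation on side 1 (and hence the value delivered
%     to side 2) falls short of the efficient level.
```

In short, with one price instrument missing, the platform cannot align each side's price with its opportunity cost, which is the inefficiency that, in Jullien's words, "calls for special scrutiny".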
Do you think that regulators and competition authorities today sufficiently take into account the specificities of multi-sided markets (provided you think they should)?
Regulators and competition authorities are now aware of the concept and of its importance in some industries. However, they lack the tools and knowledge to incorporate this dimension into their analysis. I think this is one reason why we do not see as many applications to cases as we would like, and why they prefer to rely on more conventional analysis. Some cases are more obviously two-sided than others, the credit card cases for instance. But even when the concept is not explicitly mentioned in decisions, it is often present in the reasoning (an example is the FCC's 2008 approval of the merger of the satellite digital radio services Sirius and XM).
In platform markets, we observe some big multi-platform players, such as Apple, Google, Amazon and Facebook, with distinct core businesses and overlapping activities. Do you think this multi-dimensional feature of competition affects the way these firms compete with each other?
I am not a specialist in strategy, but I think it does. These platforms started with very different objectives and business models, which affects their priorities and strategies in terms of pricing and of the choice and organization of activities. Clearly, Google Shopping is organized in a very different manner from the Amazon marketplace, reflecting their different competencies and services. I have always thought that part of the initial difference in e-book strategies between Amazon and Apple was due to Amazon's expertise in the domain of cultural goods.
Bruno JULLIEN is a Senior Researcher at CNRS and the Toulouse School of Economics (TSE), and a senior member of the Institut d'Economie Industrielle (IDEI). He is currently Scientific Director of TSE. His interests cover industrial organization, in particular network economics, ICT and competition policy, as well as regulation, insurance and contract theory. He is recognized as a world-leading academic researcher on the economics of two-sided markets, a field he helped develop. Bruno Jullien has published numerous articles in renowned journals such as Econometrica, the Journal of Political Economy, the Review of Economic Studies and the RAND Journal of Economics. He is currently a co-editor of the Journal of Economics and Management Strategy and an associate editor of the Geneva Risk and Insurance Review. He is a Fellow of the Econometric Society, a member of the Steering Committee of the Association of Competition Economics and of the Economic Advisory Group on Competition Policy of the European Commission, and a fellow of CEPR, CESifo and CMPO. He has also been advising firms and decision makers on regulatory and competition policy issues for more than 20 years. He graduated from Ecole Polytechnique, ENSAE and EHESS, and holds a Ph.D. in economics from Harvard University. He started his career as a researcher in Paris at CEPREMAP and CREST, and was also a Professor at Ecole Polytechnique. He joined the University of Toulouse in 1996, where he was Director of the research centre GREMAQ (1997-2004) and Deputy Director of the Toulouse School of Economics (2010-2011). He has received the Bronze Medal of CNRS, the "Palmes Académiques", the ACE best article award and the JIE best article award.
The Communications & Strategies No. 99 "The Economics of Platform Markets - Competition or Regulation?" is available!
DigiWorld Summit 2015
IDATE will contribute to the debate at the upcoming DigiWorld Summit, on 17, 18 and 19 November in Montpellier, with:
- Fatima BARROS, Chair of BEREC
- Carlo d'ASSARO BIONDO, President, EMEA Strategic Relationships, Google
- Bruno LASSERRE, President of the Autorité de la Concurrence
- Eduardo MARTINEZ RIVERO, Head of Unit "Antitrust: Telecoms", DG Competition, European Commission
- Sébastien SORIANO, President of ARCEP
Information & Registration: