The economics of platforms in the digital transformation: IDEI views


Published in Communications & Strategies n°99



IDEI, Toulouse School of Economics

Conducted by Marc BOURREAU,
Telecom ParisTech


C&S:  The concept of platform is sometimes used in a very broad way in the policy debates. How would you define platform/multi-sided markets? What is the difference between a one-sided and a multi-sided market?

Bruno JULLIEN: It is difficult to provide a formal definition of a platform in economics and there is no consensus on such a definition. As a start, I would say that a platform is a bundle of services that are used by several economic agents in order to interact. In such situations, a side represents a particular type of users (say sellers on a B2C marketplace, or merchants dealing with a credit card). Each side's benefits depend on what other sides are doing on the platform. Moreover, the platform may treat the various sides in a differentiated manner: for instance, some may get free services while others pay for the right to access the platform.

From a theoretical perspective, a platform is not necessarily multi-sided. To be so requires two conditions. First, the organization of the platform's services involves network externalities, i.e. the participation and other actions of a user affect other users of the platform. Second, the platform discriminates between different types of users. One criterion sometimes used to determine whether an activity is multi-sided is whether the value of the service for each user depends on the whole structure of prices.

In a multi-sided platform the customers need to consider interactions with other economic agents to evaluate the value of the good or service and determine their behavior. The final value of the service for the customer is not fully controlled by the platform but results from agents' interactions. By contrast, in a one-sided market, firms choose the product or service characteristics and customers' value depends only on that choice.

The difficulty with the concept is two-fold. First, it covers potentially a wide range of goods and services, so the multi-sided externalities must be significant enough to be relevant. Second, not all platforms are necessarily multi-sided, as this may depend on the platform's business model. Consider for instance retailing: a chain store is typically not a multi-sided platform, but Amazon marketplace is one. The chain store decides which products to carry at which prices, and then consumers interact only with the store and don't care about suppliers. By contrast, online marketplaces let buyers and sellers jointly determine the products and prices.

The literature on multi-sided markets emerged in the early 2000s (and you were one of the first authors on the topic), but it is still vibrant. What do we learn from the recent research on platforms?

The early literature was mostly focused on price theory, explaining the differences between pricing in multi-sided and one-sided markets by emphasizing the need to coordinate users and bring all sides on board. A main contribution has been the development of the concept of opportunity cost, whereby the cost of providing the service to a user is adjusted to account for the benefits (or costs) accruing to other users. This, however, needs to be put to work in practice, which is part of what the literature is aiming at. The recent literature has developed along several lines. The first is the application of the concept to specific industries, as has been done for instance for the Internet, search engines, ad-financed media or credit cards. For instance, in the case of media, the recent literature helps us understand the evolution of business models or the implications of mergers. Along the same dimension, research is trying to develop new operational tools for competition policy where traditional results don't apply; there has been, for instance, work on bundling and on econometric models for empirical work and policy evaluation.
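This opportunity-cost logic can be written compactly. In one stylized version (loosely in the spirit of Armstrong's 2006 model of two-sided competition; the notation here is simplified and illustrative, not a quote from any specific paper), the price charged to side $i$ satisfies:

```latex
p_i = c_i + m_i - \alpha_j
```

where $c_i$ is the per-member cost of serving side $i$, $m_i$ a standard one-sided markup, and $\alpha_j$ the external benefit that an additional side-$i$ member confers on side $j$. Because serving side $i$ creates value on side $j$, the effective cost of serving side $i$ is reduced, which can rationalize below-cost or even zero prices on one side.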

At the theory level, what I retain mostly from recent work is the importance of participation patterns of the users (exclusivity, multiple vs single affiliation, switching) in shaping the competition between platforms.

On the other side of the coin, what do we still not know? What are the key questions where more research is still necessary?

While we have made significant progress in price theory and applications, there is a lot we don't know and a large scope for future research. For the theory I think that the main issue that we need to address is that our theories are mostly static. We need to better understand the dynamics of competition between platforms. What determines the emergence of a successful platform? What is the extent of barriers to entry? What are the respective roles of history and actual merit?

I also expect research to move away from price theory into design and organization, where most competition takes place. We need to understand when and how a platform decides to interfere in transactions. A recent concrete example is the issue of MFN clauses for online booking systems (Most Favored Nation clauses, which prevent registered hotels from offering lower prices on competing websites or through direct sales).

For this we need more empirical work to guide research and applications. Currently, most data originates from a single platform, so we may expect many studies of agents' behavior on a given platform. But we will also need empirical work on platform competition.

For competition/regulation policy, we need more work to propose operational decision tools to competition authorities and regulators. Basic questions such as market definition or tests for predation are still not resolved for platforms. We have difficulties evaluating the optimal market structure, as more competition may not raise welfare and efficiency. This will require developing research at the frontier between law and economics.

There is a hot policy debate today in Europe on the regulation of platforms. What is your opinion on this question? What are the potential market failures in platform markets, which would justify a regulatory intervention?

The issue is not to identify market failures, which occur when there are externalities between users, network effects and market power, as is usually the case with platforms. The main question is whether there is scope for efficient ex ante regulatory intervention. In some cases, ex ante rules or principles are desirable, for instance for privacy issues. But in general I would be cautious and favor ex post intervention, for several reasons. Platforms are very heterogeneous: they may propose very different activities, the same activities may be proposed by very different platforms, and platforms may be more or less vertically integrated. This means that it is extremely complex to define ex ante the perimeter of a regulation. Moreover, the same regulation may affect different platforms in different ways; for instance, a pay platform and a free platform are not affected in the same manner by a restriction on data usage. Finally, the markets where platforms operate are dynamic and innovative. Market power has to be evaluated from a dynamic competition perspective and regulation should not impede this dynamic process.

Notice that it is in the broad interest of a platform to optimize the quality of interactions between its members and correct externalities, because this raises their value. The literature has put some limits on this view, but intervention should occur only for clearly identified failures. I would point out two factors that may matter here.

A key distinction should be between situations involving bottlenecks and those where all users can easily switch or use several platforms. A bottleneck arises when each platform enjoys exclusive rights over the conduct of transactions with some of its users. This confers some monopoly power over these transactions, and we know that competition between platforms will not dissipate it. We may then want to reduce this market power. This is similar to a one-way access problem familiar to telecommunications regulators.

Second, platforms providing free services to some sides rely on a limited set of instruments to coordinate users, which may not be enough to address issues of externalities. Indeed, good coordination of the sides would require as many prices (or subsidies) as there are sides. Free platforms by nature cannot pass on to consumers the true opportunity cost, which may induce excessive usage or distort the prices charged to other sides. This may induce inefficiencies and calls for special scrutiny.

Do you think that today regulators and competition authorities take sufficiently into account the specificities of multi-sided markets (provided you think they should)?

Regulators and competition authorities are now aware of the concept and its importance in some industries. However, they lack the tools and knowledge to incorporate this dimension into their analysis. I think this is one reason why we don't see as many applications to cases as we would like, and why they prefer to rely on more conventional analysis. Some cases are more obviously two-sided than others, the credit card cases for instance. But even if the concept is not explicitly mentioned in decisions, it is often present in the reasoning (an example is the approval of the merger of the satellite digital radio services Sirius and XM by the FCC in 2008).

In platform markets, we observe some big multi-platform players, such as Apple, Google, Amazon, or Facebook, with distinct core businesses and overlapping activities. Do you think this multi-dimensional feature of the competition affects the ways these firms compete with each other?

I am not a specialist in strategy, but I think this is the case. These platforms started with very different objectives and business models. This affects their priorities and strategies in terms of pricing, choice and organization of activities. Clearly, Google Shopping is organized in a very different manner than Amazon marketplace, reflecting their different competencies and services. I always thought that part of the initial difference in strategies on e-books between Amazon and Apple was due to Amazon's expertise in the domain of cultural goods.

Bruno JULLIEN is Senior Researcher at CNRS and the Toulouse School of Economics (TSE), and a senior member at Institut d'Economie Industrielle (IDEI). He is currently Scientific Director of TSE. His interests cover industrial organization, in particular in the domain of network economics, ICT and competition policy, as well as regulation, insurance and contract theory. He is recognized as a world-leading academic researcher on the economics of two-sided markets, which he helped to develop. Bruno Jullien has published numerous articles in renowned journals such as Econometrica, Journal of Political Economy, Review of Economic Studies and the RAND Journal of Economics. He is currently co-editor of the Journal of Economics and Management Strategy and associate editor of the Geneva Risk and Insurance Review. He is a Fellow of the Econometric Society, a member of the Steering Committee of the Association of Competition Economics and of the Economic Advisory Group on Competition Policy of the European Commission. He is a fellow of CEPR, CESifo and CMPO. Bruno Jullien has also been advising firms and decision makers on regulatory and competition policy issues for more than 20 years. He graduated from Ecole Polytechnique, ENSAE and EHESS, and holds a Ph.D. in economics from Harvard University. He started his career as a researcher in Paris at CEPREMAP and CREST. He was also a Professor at Ecole Polytechnique. He joined the University of Toulouse in 1996. He has been Director of the research centre GREMAQ (1997-2004) and Deputy Director of Toulouse School of Economics (2010-2011). He received the Bronze Medal of CNRS, the "Palmes Académiques", the ACE best article award and the JIE best article award.

The Communications & Strategies No. 99 "The Economics of Platform Markets - Competition or Regulation?" is available!


DigiWorld Summit 2015

IDATE will contribute to the debate at the upcoming DigiWorld Summit on 17, 18 and 19 November, in Montpellier, with:

  • Fatima BARROS, Chair, BEREC
  • Carlo d'ASSARO BIONDO, President, EMEA Strategic Relationships, Google
  • Bruno LASSERRE, Chairman of the Autorité de la concurrence
  • Eduardo MARTINEZ RIVERO, Head of Unit "Antitrust Telecom", DG Competition, European Commission
  • Sébastien SORIANO, Chairman of ARCEP




The economics of platforms in the digital transformation: ARCEP views


Published in Communications & Strategies n°99

Sébastien SORIANO

Chairman ARCEP

Conducted by Marc BOURREAU,
Telecom ParisTech


C&S: There is a hot policy debate today in Europe on whether we should regulate platforms. Some argue in favor of a "laissez-faire" approach because, due to strong innovation dynamics, they say, the dominant platforms of today will soon be replaced by new players, in a Schumpeterian fashion. Others propose to strongly regulate platforms, in terms of neutrality, portability of data, access, etc. Where do you think the right level of regulation for platforms lies?

Sébastien SORIANO: Whether or not an economic activity should have specific regulation is a matter of two cumulative factors: an economic factor (are there market failures?) and a political one (does this activity have a structural impact on our society and economy?).

There is no single answer for all platforms, because the term "platform" covers a great variety of actors and models: e-commerce platforms, social networks, search engines, application stores… The fact that the European Commission is currently investigating whether Uber is a transport service or a digital platform is a striking example of the lack of a consensus definition of what a platform is.

In my opinion, it is obvious that some digital platforms have today acquired such a significant influence over multiple segments of our economy that some kind of regulation is needed. But defining specific economic rules for every type of platform would be inappropriate: it would risk stifling the innovation process without bringing any added value, not to mention the potentially high cost of such regulation.

In the end, the question is whether we should regulate only a handful of major platforms. I believe that such regulation would help promote confidence in the digital economy and thus fast-track the development of those markets in Europe.

If platforms, or some platforms, should be regulated, what kind of regulation should be put in place? In other words, what kinds of market failures call for a regulatory intervention? Going further, which form of intervention do you think is preferable: ex ante regulation or ex post competition policy?

General rules already exist in consumer, commercial, competition or privacy laws. The Booking.com case, dealt with in France by the Autorité de la concurrence, illustrates that the current legal tools are often sufficient. The real debate today is whether we need ex ante regulation, that is to say, a specific regulatory framework adapted to a certain category of platforms.

To build such a framework, three essential values will be needed in my opinion:

First, regulation must be able to react quickly: the general law provides some answers, but its response times are often totally ill-adapted. Disputes between a platform and a startup or an SME should be settled within a couple of months.

Second, the framework must be an agile one: strict and detailed rules would indeed soon become outdated, or simply be bypassed by some actors. Regulation should be articulated around a few general principles, with a regulating institution in charge of ensuring their application.

Finally, regulation must form an alliance with the multitude: the digital economy is a complex and shifting sector, and regulation must take shape with the help of research communities, programmers, makers... We need to invent the concept of "crowd-regulation".

 The economics literature on platforms and two-sided markets shows that applying insights from the analysis of one-sided markets to two-sided markets might be misleading. For example, we know that it may be profitable (and socially optimal) for a platform to charge a very low price on one side to generate strong network effects for the other side. With "one-sided" glasses, such a price may look predatory, whereas with "two-sided" glasses, it could be viewed as just efficient. How can regulators account for these specificities of two-sided markets?

Infrastructure regulation has existed in France for close to 20 years, and has been applied to a great variety of sectors: railroads, energy, communication... The fundamental issue has always been to deal with network effects, a phenomenon that allows the largest network to constantly reinforce its dominant position. Regulation allows our society to benefit from the positive consequences of these network effects, while minimizing the drawbacks.

The notion of two-sided markets, with cross network effects, is only a refinement of those concepts. Of course, some of our regulation tools will need to be adjusted to the stakes and the specificity of those markets. But the fundamentals are the same, and the issue at stake is to regulate our digital economy's main foundations.

 There is at least one area of friction between telecoms and platform markets, which is the competition and/or complementarity between telcos and over-the-top (OTT) players. Can telecommunications regulation have a role in securing a level playing field between telcos and OTTs?

Whether as a client, a supplier or a competitor, every company subject to some form of regulation fears having to deal with Internet players who don't play by the same rules. This is especially true for the telecom and media industries, because there are specific rules in their sectors. Part of this fear is entirely justified: real issues are at stake, especially when telcos and Internet players are in direct competition.

However, we won't solve anything with downward alignment or total deregulation: a new balance must be established, and, in my opinion, part of the solution is precisely to be found by building a framework for platform regulation.

 A related topic is net neutrality. What is the current status of net neutrality regulation in Europe and in France?

The Internet has become a crucial collaborative space, tremendously important for all our society and economy, and I believe it must now be considered as a common good. The risk today is that some companies manage to distort this essential tool for their own profit and against the interest of other users. This is not science fiction or paranoid delusion: some essential privately-controlled bottlenecks have indeed emerged, and without appropriate regulation, there is a real threat to see some kind of privatization of the Internet.

Net neutrality rules aim precisely at preventing a specific category of actors, the telecom operators, from doing so. An ambitious set of rules on net neutrality is in the process of being adopted in Europe. The European framework will be very protective and will rely on guidelines to be issued by BEREC. ARCEP will contribute actively to this work and will be in charge of its application in France.

But if we really want an open Internet, we also need to prevent a situation where a few Internet giants could take advantage of their current position to dictate their own rules to the World Wide Web. This should be a necessary addition to the net neutrality framework, and without it, the job would only be half done, or maybe even less. Ask yourselves: what actors are the most worrying for the future of the Internet?

 Platforms are global players, whereas telcos are usually attached to a local market. Is it possible to regulate platforms at a national level, or should such regulation be supra-national?

The correct level at which to construct tomorrow's regulation is obviously the European one, and this work is currently underway via the Digital Single Market initiative. But each member state has the responsibility to contribute to this reflection, and I believe it would be appropriate to act first at a national level in order to better observe, understand, compare and assess actors' behavior in platform markets.

I would, however, advise against going too far at the national level. Only a European solution can avoid discrepancies of treatment between member states. Moreover, a European solution would be clearer for market players, and we need this clarity if we want them to invest in innovation in Europe.

Digital platforms, and the digital economy in general, raise new regulatory challenges. Yet, the nature of those challenges, and the potential harm for our society remains poorly understood. France mustn't underestimate the complexity of the issues, and we should give ourselves the means to accumulate the necessary experience and expertise to participate in the debate.

 One possible concern in platform markets is that due to the strong dominance of one firm or a few firms, competition might not emerge. What can be done to protect the innovation process and potential entry by new (European?) players?

This ultimately comes back to the issue of dealing with network effects that help lock in dominant positions in some markets. One of the challenges for any regulation is to bypass those effects in order to keep the competitive game open. There is no single right answer, but the solution typically lies with regulatory tools such as portability, interoperability and open formats...

Another crucial aspect is the matter of vertical integration: in the last few years, some Internet giants have been developing new activities related to their core business and have constructed entirely closed ecosystems. This is not a problem in itself, but it is imperative that this be done in a fair manner, without the dominant actor leveraging its position to stifle competition in other markets.

Similar problems have been addressed with very strong remedies in the past: structural separations were imposed on railway and electricity companies, and some companies were even broken up. This is not to say we should go that far in platform markets. Most likely, platform regulation can bring more subtle remedies, adapted to platform specificities.

Sébastien SORIANO was appointed Chairman of ARCEP (Autorité de régulation des communications électroniques et des postes) on 15th January 2015, for a six-year term. Born in 1975, Sébastien Soriano is a chief engineer from École des Mines (the French national school of mining engineers) and graduated from École Polytechnique. He then spent most of his career in competition and telecoms regulation. In 2012, he was Head of Fleur Pellerin's cabinet, the then French Minister for SMEs, innovation and digital economy. Prior to his appointment at ARCEP, he was Special Advisor to the French Minister for Culture and Communication.



The economics of platforms in the digital transformation: What does Google think?


Published in Communications & Strategies n°99


Fabien CURTO MILLET

Director of Economics, Google

Interview conducted by Yves GASSOT

C&S:  Is the SMP regulatory framework fit for purpose given the competition among telecom providers and between telecom operators and online service providers?
Fabien CURTO MILLET: Actually, platforms are not an Internet phenomenon.  A platform is simply an environment where two or more groups of economic agents come together to transact in some manner, so the concept is extremely generic: an example of a platform commonly used in the economics literature is that of singles bars!  There are many economically important platforms outside tech.  You can think of a free-to-air television channel as a platform, bringing together viewers and advertisers; the same goes for newspapers.  And within tech, there are many platforms that historically had nothing to do with the web.  An operating system can be analyzed as a platform, bringing together application developers and users.  So the concept has wide applicability.

It is true, however, that the latest crop of web-era platforms has attracted a great deal of public attention.  I attribute that in large part to the simplicity of use and degree of innovation of many of these businesses, which revolutionize everyday tasks and disrupt existing approaches.  Obvious examples include apps like Uber, BlaBlaCar and Lyft in transportation, or AirBnB for accommodation.

Google operates several platforms, starting with its search engine and Google market. Are there any others you can think of?
Many of Google’s activities involve the creation and/or operation of various platforms.  In the ads space, we have for many years run AdSense, an ad network bringing together users and advertisers on third party websites, while allowing publishers to monetize their content.  Similarly, YouTube brings together content creators, viewers and advertisers.

Academic work on platform economics tends to fall into two strands: work on multi-sided markets, which emphasizes the intermediary role that platforms play between multiple parties, and analyses of platforms as strategic necessities for capturing innovations created by others. Do you think that is a fair assessment?
Much of the literature is indeed concerned with analyzing the role of platforms as a matchmaking device between their various types of participants.  This is not surprising, as the art of a platform operator is precisely to figure out how best to balance the interests of parties on the various sides.  In the context of web search, for example, this often involves search being provided to users for free, but with advertisers on the other side being charged (usually when their ads are clicked on by users, under the so-called Cost Per Click pricing model).  This is the case for search services like Google or Bing (which have clearly demarcated spaces for ads), for example; the point also applies to more specialized players like Booking.com or Tripadvisor.

But the literature is vast and touches on many interesting topics.  An example is the technical question of how to carry out market definition in a platform context.  One issue there is that the standard market definition test normally looks at whether customers switch away in response to a given percentage price rise.  But in the context of platforms, the price charged to one side is often zero.  In this case, how should the test be adjusted in practice?
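As a purely illustrative sketch of both difficulties (all functional forms and parameter values below are hypothetical, not drawn from any actual case or paper), a toy linear model of an ad-financed platform shows that a percentage-based SSNIP on the free side is vacuous, and that evaluating a price rise while holding the other side fixed misstates its profitability:

```python
# Toy two-sided platform (hypothetical linear specification, for illustration only).
# Users pay p_u and dislike ads; advertisers pay p_a and value users:
#   n_u = 1000 - 300*p_u - 2*n_a      (user participation)
#   n_a = 0.1*n_u  - 20*p_a           (advertiser participation)

def participation(p_u, p_a):
    """Solve the two participation equations jointly (closed form here)."""
    n_u = (1000 - 300 * p_u + 40 * p_a) / 1.2  # substitute n_a into n_u
    n_a = 0.1 * n_u - 20 * p_a
    return n_u, n_a

def profit(p_u, p_a):
    n_u, n_a = participation(p_u, p_a)
    return p_u * n_u + p_a * n_a

p_a = 2.0

# 1) A percentage SSNIP on the free side is vacuous: 5% of zero is zero.
assert profit(0.0 * 1.05, p_a) == profit(0.0, p_a)

# 2) A small absolute rise (0 -> 0.10) looks different through "one-sided
#    glasses" (advertiser participation held fixed) than it really is.
n_u0, n_a0 = participation(0.0, p_a)
n_u_naive = 1000 - 300 * 0.10 - 2 * n_a0      # ignores advertiser exit
profit_naive = 0.10 * n_u_naive + p_a * n_a0  # 187.0
profit_full = profit(0.10, p_a)               # 182.5: ad revenue falls too
assert profit_naive > profit_full
```

In this particular parameterization the one-sided calculation overstates the gain from the price rise, because it ignores the advertisers who leave when users do; the sign and size of the bias depend on the cross-side effects, which is precisely why the standard test needs adjusting in a platform context.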

These are only examples, and while the literature is already vast it is also evolving, so I think we can look forward to additional insights in this area.

How do you explain the fact that the GAFA quartet (i.e. Google, Apple, Facebook and Amazon) is much less powerful in certain markets – notably Russia, China and even Japan and South Korea?
These four companies have obviously achieved great success in many areas, and are engaged in formidable competition across multiple products and services.  Spaces where some or all of these firms compete include search, cloud computing, social networking, operating systems, advertising, mobile phones and tablets.  If you take cloud computing, for example, there is currently a great battle between Amazon, Google, Microsoft and other firms like SAP and Rackspace, with many massive rounds of price cuts and quality improvements having characterized the space in recent years.  So it is very difficult to give you an overall answer covering such a broad scope of activities!

Since you mention specific countries, it is interesting to note that they have also developed a number of strong competitors in a range of tech areas.  To take search, for example, we have Russia’s Yandex, South Korea’s Naver and China’s Baidu.  But it would be unfair to label these as local players, since they are also engaged in aggressive plans to expand internationally -- Baidu is developing in Brazil, while Yandex is already present in several countries and has recently expanded by serving searches in Turkey.  As for the success of the “quartet” in the countries you highlight, it really depends on what you are looking at.  Just take the most recent earnings release from Apple -- they reported revenue growth of 112% in “Greater China” (mainland China, Hong Kong, and Taiwan) and iPhone unit growth of 87% in that area.

Some see the eruption of new players in vertical industries – prime examples being Uber in transportation or Airbnb in the tourism business – as the emergence of new platforms and new sources of competition for the Internet’s leading horizontal platforms. Do you share that point of view?
The digital economy is rife with entry and innovation.  The two examples you mention are a case in point.  Another notable story is that of Snapchat, a mobile-only video and photo sharing service that came from nowhere, and into an already quite busy space.  But it became wildly popular at breakneck speed.  Snapchat users today share over 700 million photos worldwide per day, which is reportedly larger than the combined volume of Facebook and Instagram -- truly remarkable for a service that did not exist five years ago and that is only available on mobile!  So I absolutely agree that these new entrants have further turned up the competitive heat on existing firms, including Google.  If you’re looking for a rental property for your next holiday in Provence, you might perhaps go directly to the AirBnB website or app, instead of running a search on Google or Tripadvisor.

This broad phenomenon in itself is not particularly new for the digital economy -- for many years, companies with a more specialized focus have been competing with firms having broader business models, like Google.  Google aims to answer any question that a user might have, whereas players like Tripadvisor focus more narrowly on particular content categories (especially the more commercial queries).  Another case in point is Amazon, which is of course a very major competitor in shopping queries.  Already in 2012, a Forrester study found that some 30% of online shoppers in the US started researching their latest purchase on Amazon, versus 13% on search engines.

Many fundamental factors drive these competitive developments.  First, barriers to entry into many digital activities are generally low and dropping fast.  One reason for this is the development of cloud computing: it used to be the case that firms needed to invest in their own server infrastructure in order to procure computing power, therefore incurring fixed costs. Cloud computing does away with that, by turning this fixed cost into a variable cost – and a low one at that, given the competition I mentioned earlier in this area.  This is precisely one of the ingredients behind Snapchat’s success, as they run entirely on the Google cloud.  Second, switching costs are pretty low – it is generally trivially easy and inexpensive for users to try out a new app or website.  We often say at Google that “competition is just one click away” – although we should perhaps modify that line for the mobile era and say that it is “one tap away”: according to comScore, almost 90% of mobile Internet time in the US is spent on apps rather than in the browser – truly a revolution.  Such ease of access to competing services means that we observe extremely high levels of “multi-homing”, i.e. the presence of a user on multiple competing platforms at the same time (e.g. Twitter and Facebook).  I think these fundamental forces are here to stay, so we should have the opportunity to observe many more examples of disruptive entry in the future.

Net neutrality debates have resulted in regulations that limit the risks of ISPs discriminating against certain kinds of content. How do you respond to those who want to see these neutrality obligations extended to platforms? For instance in the choice of applications that app stores host, or the neutrality of algorithms?
Things like the choice of applications hosted or the operation of algorithms go to the very heart of what a platform does.  “Neutrality” is a nice-sounding word, but it’s essentially in the eye of the beholder.  The purpose of an algorithm is precisely to rank things from more to less relevant.  Who is to say that one choice is better than another?  And on what criteria?  Is it neutral to rank restaurants by reference to distance to the user, or should we use review counts instead?  Or maybe both?  And how should one compare restaurant results and web page results?  You very quickly get into rather abstract and arcane debates as to whether a particular approach is really treating like-with-like and so on.

Fortunately I believe these are questions which do not need resolving.  Most economists would agree that regulatory intervention is only appropriate in circumstances where competition fails as a disciplining force.  And there is frankly very little indication of problems across the digital economy.  In addition to the rapid entry I discussed in my previous answer, I think any objective observer would agree that the speed of innovation in the digital economy is extremely high.  This is for me a fundamental indicator of the competitive health of a sector – it ought to act a bit like a thermometer to determine whether a patient is sick and guide enforcement.  After all, as the famous English economist and Nobel laureate John Hicks once observed: “The best of all monopoly profits is a quiet life”.  There is precious little that seems quiet about the digital economy today.

What differences do you see in the exchange of ideas taking place in Europe and the United States over platforms and the inherent risks of dominant positions?
I think that the exchange is a lot more nuanced in both places than it is often portrayed.  From a Google perspective, we have faced antitrust scrutiny on both sides of the Atlantic -- the Federal Trade Commission in the US thoroughly investigated many parts of our business in great depth (notably touching on search, patents and ad campaign portability), leading to voluntary commitments in some areas in January 2013.  In Europe, we are currently working with the European Commission in the context of its ongoing antitrust investigation.

And while many commentators would like to cast current events in terms of various arm wrestling matches between European regulators and American tech companies, this unduly simplifies reality.  For example, Germany’s Monopolkommission (Monopolies Commission) recently concluded a wide-ranging investigation into competition in digital markets.  In the context of search platforms, this independent agency noted that “search engines’ low degree of user lock-in in comparison with other platform services (e.g. social networks), and the low degree of advertiser lock-in caused by network effects means that the search platform’s attractiveness from a user perspective is of key competitive importance, and this explains why even search engines with high market shares have an interest to further develop their offering with their users in mind, in order to secure their market position going forward”.  Moreover, they expressed a clear view with regard to intervention: “The Monopolies Commission takes the view that a purely preventive regulation – irrespective of potential abuses – is not currently warranted. This holds true in particular for a regulation of search algorithms or regulatory unbundling instruments”.

Finally, I would take issue with the idea that there is an “inherent” risk to the emergence of dominant positions.  I am sure that companies like MySpace or the now-defunct Friendster have views on the question, given how at one point they both towered over the social networking space.  And I am always greatly amused by old press cuttings calling winners in one area or the other -- for example, Fortune declared in a 1998 article that “This much is clear: Yahoo! has won the search-engine wars and is poised for much bigger things”.  1998 was of course also the year when Google was founded...  If there is anything certain in the digital economy, it’s that competition often comes from where you least expect it and failure to innovate faster than your competitors is the real “inherent risk.”


Fabien CURTO MILLET is Director of Economics at Google, where he has worked since 2011. He reports to and works closely with Chief Economist Hal Varian on the development of data-driven insights and on research to evaluate the economic value of Google and the Internet. He also leads economic analysis in all competition and regulatory processes involving Google at a global level. Fabien was previously a Senior Consultant in the European Competition Policy Practice of NERA Economic Consulting, where he worked from 2004. During that time, he advised in major European merger control processes such as ABF/GBI, Thomson/Reuters and Universal/BMG. His experience spans a wide variety of business sectors, including: airports, financial services, mining, music publishing, pay TV, print media, retailing, and satellite communications. Fabien was educated at Oxford University, where he obtained a BA in Economics and Management, an MPhil in Economics, and a Doctorate in Economics. For two years he was a Lecturer in Economics at Balliol College, Oxford. He further obtained a Postgraduate Diploma in EC Competition Law from King’s College, London.

The Communications & Strategies No. 99 "The Economics of Platform Markets - Competition or Regulation?" will soon be available!

Order n°99      Discover IDATE's publications & studies

More information about IDATE's expertise and events:

www.comstrat.org    www.digiworldsummit.com    www.digiworldweek.com


Interview with Steve UNGER, Group Director and Board Member at Ofcom





Group Director and Board Member at Ofcom;
UK Regulator, London


C&S:  Is the SMP regulatory framework fit for purpose given the competition among telecom providers and between telecom operators and online service providers?
Dr UNGER: The SMP framework has served us well over the years. It is a good starting point for the Framework review that is about to start. However, there are areas where we need to build on it.
For example, we need to ensure that when analysing the market power of traditional network operators, we take into account the presence of new communications providers, delivering services such as voice and messaging over the top of the internet. This is something we can do within the current Framework, but it is also part of a broader debate about the need for a level playing field between network operators and internet-based providers. It may be that expanding the scope of the SMP analysis in this manner results in a finding of reduced market power, and it’s clearly important that we consider this possibility. There is a separate question about whether or not there are new bottlenecks created by internet-based providers and whether we have the right tools to deal with this.
A more difficult issue is that whilst the SMP framework is an effective means of addressing concerns arising from single firm dominance, it does not deal effectively with oligopolies. This is a problem because a key market trend for both fixed and mobile is towards a limited number of end-to-end competitors – more than one, but not many. In some circumstances this is fine, in that a limited number of competitors is sufficient to deliver a good consumer outcome, and there is no need to intervene. In other circumstances however the outcome might be poor. We need the right tools within the framework to distinguish these cases, and intervene where appropriate. At present the only tool available is the concept of joint dominance, and I don’t think this is sufficient.

Are there already some ideas for developing appropriate tools for dealing with oligopolies?
BEREC has recently published a report on this matter, and I think this provides a good starting point for the debate we need to have. The report distinguishes two questions: whether there is joint dominance, associated with tacit collusion, and what is the threshold to prove it; and whether we have situations where there is no tacit collusion, but uncoordinated behavior within a tight oligopoly still results in a poor outcome. The report then focusses on the second of these questions, which is the one that is not currently addressed by the European Framework.
It is important to emphasise that tight oligopolies may still result in a good outcome. For example, there may be only a small number of networks, but we may still observe effective competition, including for example the provision of wholesale access on a commercial basis. However, it is not difficult to imagine circumstances where there is more limited retail competition, either between a single incumbent telco and a cable operator, or between a small number of vertically integrated MNOs.
The BEREC report sets out a number of criteria which one might use to distinguish between ‘good’ and ‘bad’ tight oligopolies. It also draws an interesting parallel between these criteria and the SIEC test applied in mergers, which serves a similar purpose. What we now need to do is consider in more detail how this thinking might be applied in practice, and what evidence would be required to do so.

Could symmetrical regulation replace or complement asymmetrical regulation in these matters?
I’m afraid I don’t even like the term ‘symmetric regulation’. It sounds benign, since it implies consistency of approach, but that is often not what it means in practice. What it means in practice is that regulation is applied to all service providers in the market, regardless of whether a particular provider has market power.
Such a blanket approach to regulation might be appropriate in circumstances where there is a market-wide market failure. For example, high barriers to switching might necessitate a market-wide intervention to improve switching processes. But where there is not a market-wide market failure, I believe very strongly in the principle that any regulatory intervention should be proportionate, and targeted at the problem you’re trying to solve.
I therefore find it odd that, within the current framework, we are able to address concerns arising from single firm dominance, or a rather narrowly defined joint dominance, but that where these fail our backstop position is to regulate everyone in the market. We need to find a more sensible middle ground.

Some observers have argued that symmetric regulation has already proven its value for dealing with access problems (e.g. interconnection). Can we apply this comparison to issues dealing with access networks?
I’m not sure the comparison is valid. Remedies which mandate interconnection (or other forms of interoperability) are I think usually imposed because there is a risk that network effects will result in the market tipping to a single provider. In those circumstances it may well be appropriate to impose a market-wide remedy, since the problem you’re trying to address is one that arises from the way that the market as a whole operates.
To put it another way, I think we need to distinguish between those forms of network access which are designed to address market failures associated with network effects, and those forms of network access designed to address market power. The former may have to be market-wide, the latter should be targeted at the source of market power.

Steve UNGER is Ofcom's Chief Technology Officer, and is also the Group Director responsible for Ofcom's strategic approach to communications regulation. His group is responsible for critically evaluating external market and regulatory developments, and leading the process of setting Ofcom's strategic priorities. He is also responsible for several specific policy areas, including Ofcom's work on Communications Infrastructure. Steve previously worked in industry – for two technology startups, both of which designed and operated their own communications networks, and as a consultant advising a variety of other companies on the commercial application of new wireless technologies. He has a Physics MA and a Ph.D. in Astrophysics.

The Communications & Strategies No. 98 "A review of SMP regulation: Options for the future" is now available!



Interview with Hal VARIAN, Chief Economist at Google




Chief Economist at Google;
Emeritus professor at the University
of California, Berkeley

C&S:  What are the biggest challenges for governance/regulation created by growth of the big data market? Are there big differences between the US/Chinese and European approaches to big data opportunities?
Hal VARIAN:  There are policy issues relating to data access and control that arise constantly.  This generates a lively debate, to say the least.  As an economist, I would like to see serious benefit-cost analysis guide regulatory policy.

What are the most important skills sets for those who need to make sense of results of big data analytics?
Statistics and machine learning are most obvious.  But in order to put analysis to work, communication skills are critically important.  To be effective, a data analyst needs to turn data into information, information into knowledge, and knowledge into action.  You can't do this without communication.

What are the biggest opportunities for business and are businesses able to make effective use of big data to improve their margins?
As in every business, it is imperative to understand your customer.  When you can draw on computer mediated transactional data, it is possible to gain a deeper understanding of the customers' needs than was previously the case.

What has big data analytics to learn from mainstream econometrics and what can big data analytics contribute to mainstream econometrics?
Econometrics can draw on some of the powerful techniques of predictive analytics that have been developed by the machine learning community.   These tools are particularly helpful when dealing with data involving nonlinearities, interactions, and thresholds.
Econometrics, on the other hand, has focused on causal inference from its very early days.  Techniques such as instrumental variables, regression discontinuity, and difference-in-differences have been widely used in econometrics but, to date, have not been used in the machine learning community.
Finally, the statistical field of experimental design will be valuable to both communities, as computer mediated transactions enable true randomized treatment-control experiments, which are the gold standard for causal inference.

What should be added to standard US Ph.D. programs in economics to make the students big data literate?
There are now very good textbooks, online tutorials, and tools that make it relatively easy to put together a course on machine learning.   In addition virtually all computer science departments and many statistics departments offer such courses.

Hal R. VARIAN is the Chief Economist at Google. He started in May 2002 as a consultant and has been involved in many aspects of the company, including auction design, econometrics, finance, corporate strategy and public policy. He is also an emeritus professor at the University of California, Berkeley in three departments: business, economics, and information management. He received his S.B. degree from MIT in 1969 and his MA and Ph.D. from UC Berkeley in 1973. Professor Varian has published numerous papers in economic theory, econometrics, industrial organization, public finance, and the economics of information technology.

The Communications & Strategies No. 97 "Big Data: Economic, business & policy challenges" is now available!



Interview with Jean-Louis MISSIKA, Deputy Mayor of Paris in charge of urban planning

Conducted by Yves Gassot, CEO, IDATE-DigiWorld Institute

C&S: The Smart City concept is often criticized for seeking new markets for digital technology rather than tackling the phenomena that make the management of our cities increasingly complex. What is your view?
Jean-Louis MISSIKA:  I do not think it is a fair criticism. Digital technologies have undeniably created the conditions for important changes in our ways of living, inhabiting and consuming. They are now part of our everyday lives and, surely, their impact will increasingly spread throughout the multiple ways we, as humans, interact.
Beyond what they create as opportunities for individuals, digital technologies are fundamental for cities – and among them the city of Paris. Urban systems are confronted with major challenges on the economic, social and environmental fronts. Energy transition, and more generally the management of scarce resources, climate change and the biodiversity challenges drive us to analyze all the solutions available now and in the future to build a more sustainable city - the city of tomorrow. Digital technologies and, in particular, their potential in terms of coordination and rational use of scarce resources, are high on the policy agenda. This is not simply to create a market for them; this is about using all the possibilities offered by technology.
I definitely think it can be a win-win development for both the city and the companies, if these firms work with those involved in the challenges of the city, like urban planners and system operators.
Additionally, we are witnessing a boom of young, innovative companies and startups, but also the citizens themselves – both from Paris and outside – who develop digital solutions for the city. This is clear evidence of what is at stake here: it is for local authorities to allow the digital revolution to spread in the society so that innovation does not only occur through large companies but also thanks to citizens' initiatives.

C&S: How would you rate the strategy of Paris, using a broad comparison between the very holistic, top-down approach of projects emerging in the context of new towns and in Asia, and the more bottom-up approach that seems to be primarily based on using multiple data repositories ('open data') associated with urban systems?
J.-L. M: We are definitely leaning towards the "bottom-up" approach to building Paris as a smart city.
Collective intelligence is an effective way to source the best ideas. And it does work well in Paris in part because we provide people with the appropriate means to implement projects: workspaces, coaching, financing, public spaces to experiment… and data.
This is one of the pillars of a smart and sustainable city: a place where the technology is used for people, by people, to include them in the life of the city and in the process of public decisions.
Let me refer to a recent project. We have worked over the six months since the election to achieve greater transparency and citizen involvement in the City's operations, by creating a platform for the development, discussion and adoption of community projects. These are chosen by the Parisians and are financed through a participatory budget: 5% of the total investment program, which represents 426 million euros, has been earmarked for programs chosen directly, through a vote, by the Parisians.
Within the next months, Parisians will even be able to share the benefit of their expertise and creativity by suggesting investment ideas directly.
Another way to involve people is crowdsourcing. We have developed the "DansMaRue" mobile application, which Parisians use to signal local problems and even identify spots for "urban greening" (buildings, walls, squares, abandoned urban places). It is this type of exchange with Parisians that we want to implement to make our City better.
This is a genuine urban revolution in the making: the role of local governments of world-cities is to understand, support and leverage the benefits of this revolution. European cities, I believe, have a major role to play in leading this transformation. Their governance is well geared towards citizen involvement and this should alleviate the risks of the "systemic city" or the "cybernetic city".

C&S: Do you have any models or at least references to guide your project for Paris?
J.-L. M: Many interesting models exist throughout the world and we are discussing extensively with many cities facing the same challenges.
That being said, from our discussions we retain one key conclusion: each of these cities has developed its own good practices within its own cultural frame. I think there is no single model of the smart city, and it would be ineffective to copy-and-paste models or ready-to-use solutions from elsewhere in a fast-changing environment.
We have our own model based on an iterative approach that uses successful experiments in Paris. We have been working for several years to make Paris a strong city in the digital sector and a breeding ground for innovation. I would say that over the last 10 years or so we have created the conditions for the emergence and development of a strong ecosystem. Thanks to all these efforts, Paris has experienced a lot in recent years and is now a world leader in innovation and most certainly the top European city.
There are well-known examples of successes such as Velib', Autolib' and Paris Wifi, among other experiments such as heating a residential building with the energy produced by data centers, data visualizations of the Paris transport system, smart street furniture… Many of those locally-grown success stories are helping to build our own smart city project and to deploy these experiments on a larger scale as standards for the city of tomorrow.
Paris is actually creating international benchmarks for the smart city, though it is not as recognized as it should be. Through calls for innovative projects led by the Paris Region Lab at the initiative of the City, we facilitate the emergence of intelligent solutions on subjects as diverse as intelligent street furniture, energy efficiency or home support for seniors. Paris provides entrepreneurs and businesses of all sizes with a single experimentation territory and open trials. It also runs a network – an open innovation club – that organizes meetings between the largest companies and startups. We are even deploying this initiative in other French cities, at their own request.

C&S: What priority initiatives have been selected for the Smart City project in Paris?
J.-L. M: One billion euros will be invested by 2020 in order to make Paris the international benchmark in innovation related to land use, participatory democracy, sustainable development, the digital economy and energy transition.
Our smart city approach is threefold: open city (open data), digital city (potential of digital technologies and their application to improve the quality of life of Parisians) and the inventive city (which is built by transversal networks and innovation).
Each of these pillars will contribute to our four main targets.
One of the most important is the food supply, because no city in the world is capable of ensuring its food self-sufficiency in the present state of our know-how, and our food is responsible for almost 40% of our ecological footprint. We have recently launched a call for projects titled "Innovative Urban Greening", which consists, among other objectives, in experimenting with the urban agriculture of the future.
Another challenge is the city's energy. 90% of the energy of the Paris metropolis is provided by fossil fuels or nuclear power. From a territorial point of view, it is imported energy. In addition to the ongoing effort on renewable energies (with a certain success for geothermal energy), the focus is increasingly on energy recovery. We must go ahead and draw on the city's hidden resources. These resources are at the core of the circular economy: waste produced by someone is a resource for someone else.
An example in Paris is the Qarnot Computing start-up, which has invented a radiator-computer: by dissipating all the energy consumed by data processors in the form of heat, its Q.rads make it possible to heat any type of building (housing, professional premises, collective buildings) free of charge and ecologically, according to the needs of their users. A low-rent housing building has been fitted out with these Q.rads radiators: the inhabitants no longer have to pay for their heating and their ecological footprint is zero.
The third challenge is urban mobility. This can no longer be dealt with through the option of car versus collective transport. New systems of mobility are emerging: they concern the technology of vehicles (electric cars, rubber-tired trams), but above all the technology of services (rental among individuals, sharing, car-pooling, multimodal applications, etc.), and they often open the way for the emergence of new value chains and new players.
In Paris, the massive adoption of Autolib' and Velib' shows the power of attraction of sharing and self-service.
The last challenge is planning for the future of urban spaces and architecture. In order to take into account new ways of working, living and trading, we need to be able to test multifunction buildings that combine housing, offices, community spaces, showrooms and services to people. This mixed use on the scale of a building implies more flexible Local Urban Plans and an adaptation of safety rules. The new way of working implies home offices, mobile offices, coworking and remote-working centers. The new way of living requires community spaces in the building, greater use of roofs, community gardens, shared utility rooms, personal services, sorting and recycling. New trading methods integrate ephemeral shops, shared showrooms and fablabs.

C&S: Paris as a city, and you in particular, have worked hard to ensure that digital is also an opportunity to redevelop business in Paris, which is threatened to become a purely residential city. What connection do you see between support for start-ups, incubators and nurseries, and a policy of the Smart City type?
J.-L. M: The City of Paris is an innovative city at the forefront of digital technology, as evidenced by the ranking of PricewaterhouseCoopers. The emergence of Silicon Sentier in the heart of Paris in recent years, or important events such as Futur en Seine and the Open World Forum illustrate the growing dynamism of our city in terms of digital innovation.
Notably, in our incubators, many innovations are related to digital technologies. They create value in all areas of the city and aim to serve people in a better way.
As an example, the Moov'in city competition, launched in June 2013 by the City of Paris in partnership with the RATP, SNCF, JC Decaux and Autolib', aimed at bringing out new web-based and mobile services focused on mobility in Paris and the Ile de France region. One hundred ideas were generated through this process; seven of them were awarded a prize. Among them, the Paris Moov' solution is a route-calculation application that integrates all public transport modes available in the Ile de France region, along with suggestions of activities once you arrive at your destination.
Some incubators and clusters that we support are directed specifically to the city and urban services (energy, transport, water, logistics, etc.).
This is for example the case of the Paris Innovation Massena incubator, where we work with large corporations like SNCF or Renault. We help them, and they accompany us in building our Smart City project.
In addition, the creation of incubators and Fab Labs continues with determination and ambition, particularly with the converted MacDonald warehouse or the Halle Freyssinet, the future world's largest incubator (1,000 start-up companies). New spaces at the forefront of innovation, combining incubators and coworking spaces, will continue to be created, and this ecosystem of innovation will be internationalized. This is the only way for Paris to rank among the most attractive and competitive cities in the world.

C&S: How do you pilot a 'Smart City' project? (Is it through a task force outside the main city services? Or through a cross-functional structure involving all the services?) How did you structure management of the Paris project?
J.-L. M: The smart city is a cross-cutting subject, which means we have no other way to do it than to maintain good interaction among the administrative units.
All large cities are confronted with the issue of finding the appropriate scale of governance and new governance tools. The model of organization of local administrations is outdated. The large vertically-organised departments (urban planning, roadways, housing, architecture, green spaces) are facing the challenges of intelligent networks, project management, citizen participation that require a much more cross-cutting and horizontal coordination.
Paris has historically been organized in large vertical services to deal, for example, with roads, architecture, urban planning and so on. For this reason, we have chosen to address the question of the Smart City within the City of Paris through a steering committee composed of elected officials and a cross-cutting taskforce driven by the General Secretariat - the body that oversees all departments.
This "smart city" mission is a project accelerator. Its aim is to raise awareness on this subject within and throughout the services but also to manage the relationship with our key partners of major urban infrastructure. It supports the deputy mayors on each of their missions and brings global thinking to structure a coherent overall strategy in the multiplicity of initiatives and concrete actions led by all the services.

C&S: On a more mundane level, the deployment of digital applications in the city is also organized on the basis of a telecommunications infrastructure (fiber access, 4G, WiFi, ...). Are you satisfied with the existing equipment and deployments underway at the initiative of private operators? How do you cooperate with them particularly in light of concerns over radio transmitters?
J.-L. M: While the City of Paris has no formal jurisdiction over this subject, we consider it our role to ensure that all Parisians can access clear and transparent information on the deployment of base stations, and to take their concerns into account while ensuring the development of new technologies. This led us to sign a mobile telephony charter with the telecom operators in 2003. Its latest version, signed in 2012, has set maximum exposure levels to radiofrequency fields and clear procedures for consultation with residents.

Jean-Louis MISSIKA is deputy mayor of Paris in charge of urbanism, architecture, Greater Paris projects, economic development and attractiveness. From 2008 to 2014, he was deputy mayor of Paris in charge of innovation, research and universities. Prior to his local mandates, his professional career included various managerial positions in the public and private sectors.




The future of patents in communication technologies: interview with Ruud PETERS, Philips


Ruud Peters


Conducted by Yann MÉNIÈRE
Professor of economics at MINES ParisTech,
head of the Mines-Telecom Chair on "IP and Markets for Technology", France



C&S:  Could you please introduce yourself and the organisation you are working for/have been working for?
Ruud PETERS: I first joined the Philips Intellectual Property & Standards (IP&S) organisation in 1977, with a background in physics. After taking various positions in the technology and consumer electronics sectors, I was appointed CEO of Philips IP&S in 1999. There I was responsible for managing Philips' worldwide IP portfolio creation and value-capturing activities, as well as for technical and formal standardization activities in the fields of consumer lifestyle, healthcare, lighting and technology, until my retirement at the end of 2013.
I remain affiliated with Philips as a Strategy & IP adviser reporting to the board member responsible for Strategy and Innovation. I also represent Philips on the boards of various companies that I created or took a stake in on behalf of Philips in the past. Besides my Philips affiliation, I devote about half of my time to other governing and consultancy roles as a board member of a number of international companies and organisations related to IP.

C&S:  What is your/your organisation's approach to IP and patents from a business perspective?
R.P.: Philips has an integrated approach to IP asset management. This includes trademarks, domain names and designs, which are often treated separately in other companies. Philips also has a proactive view of the role of IP as a creator of value. In this view, building an IP portfolio should not be a goal per se, but a lever to support growth and profitability. Accordingly, Philips IP&S is closely involved in the business decisions being made around IP rights. It is responsible for the creation and management of these rights, but also for anti-counterfeiting strategy, the financial aspects of licensing agreements and formal standards-setting issues.

C&S:  What is your opinion about the role of the patent system in the economy, and the benefits it can bring to the society?
R.P.: Today more than ever, the economy needs people who are prepared to take the financial risk to invest in new ideas and innovative activities that contribute to welfare. Those people need a reward for the risk they take, and it is the role of the patent system to provide such incentives.
This incentive function of patents should be understood broadly. Patents are highly flexible instruments that open up a broad set of strategic choices. Recouping investments by securing exclusive use of inventions is certainly one of these options, but patents can also be used more proactively. They can be opened up for use by others through licensing programmes or the creation of joint ventures, creating valuable economic activity in the process. In other words, they are the necessary currency for the exchange of ideas and for collaboration.

C&S:  Recent years have seen frequent patent battles and controversy in the digital area. Is there something specific to this technology field with respect to patents and IP?
R.P.: Yes and no. On one hand, the digital area does have some specific features with respect to patents and IP. It is first subject to a continuous trend towards higher IP density, with many devices each embodying a growing number of patented technologies. It is moreover organized around a limited number of platform products – such as operating systems – that enable devices to interoperate. These platforms are subject to strong network effects: they become more attractive the more users they have and the more compatible products (such as apps in the case of smartphones) are available. They can also generate strong economies of scale in manufacturing. As a result, competition between platforms is “tippy”: only the few companies that manage to capture enough market share quickly can eventually establish a profitable business. Against this background it is not surprising that companies compete fiercely to promote their platforms, including, inter alia, a heavy use of patents in this first phase. One can nevertheless expect patent battles to recede once market positions have stabilized.
On the other hand, similar evolutions may take place in other sectors – such as the automotive, healthcare or pharmaceutical industries – where digital technologies are becoming pervasive. In the future, I expect products in these sectors to reach substantially higher levels of patent density, in some sectors, like automotive, similar to those in the IT industry. Patents may then become a battleground of the competitive process in these areas too. Patent battles are indeed an inevitable consequence of translating innovative merit into a competitive advantage or, conversely, a disadvantage for the company that pays royalties for borrowing a competitor's technology. They are part of the market forces that eventually shape industries.

C&S:  What are the key challenges or trends that the patent system is currently facing?
R.P.: The key challenge for the patent system is to raise the bar for patent quality. The last decades have seen a sharp increase in patent filings around the world, inducing backlogs in patent offices and a drop in patent quality. Based on the results of recent court decisions and inter partes reviews in the USA, some experts estimate that about 50% of all patents may be invalid. As a result, one can no longer assume that a granted patent is a valid right.
This legal uncertainty fuels lawsuits, but also criticism of the patent system. I think that both can be avoided with enhanced patent quality. To raise the bar, better searches for prior art should be a priority. While various other regulations are currently being discussed, this is the most obvious and effective way to improve the patent system.
Innovative, market-based means can help patent offices fight the abuse of low-quality patents. I am thinking, for example, of crowdsourced searches for prior art to help defendants against the assertion of low-quality patents. Article One Partners is a good example of a company providing exactly this service.

C&S:  Where are the main differences in the patents/IPR thinking and practice between both sides of the Atlantic, and between the Western world and Asia?
R.P.: The basics of the system – that is, patent law – are the same everywhere. Hence there are no significant differences in the way companies obtain IP rights. However, important differences remain at the level of the judicial system, in the way national systems are operated.
The U.S. patent system is more litigation-driven. It has a very complex judicial system, and the costs of using patents are high. By contrast, the European system is more balanced: it is less costly for its users despite the persistence of national patent systems. I am confident that this system will further improve in future years with the creation of the unitary patent and patent court.
Asian countries are modernising their patent systems, although not all of them are at the same stage. This is a very important evolution, especially as regards China. As of today, legal uses of IP remain less developed in this country than in the Western world. Local companies and IP institutions are less experienced, but they are catching up rapidly. I expect China to be at the same level as Europe in about five to ten years.

C&S:  What will be the most important developments regarding patents for the coming 5-10 years?
R.P.: The evolution of accounting rules towards a better financial valuation of IP should be a major development in the coming years. Currently, these rules tend to focus on the cash benefits of licensing income, while there are many other ways in which IP assets create value in the knowledge economy. IP makes it possible to protect products and markets from competition, enter new markets, facilitate deal-making or create freedom to operate, thus enabling higher profits or lower costs. Because such uses of IP rights do not appear explicitly in the P&L account, and the value of the IP portfolio is not on the balance sheet, companies ignore the real value of their intangibles. In practice, this means that IP assets are dealt with at the IP department only, while they should be considered strategic assets at board level.
Financial valuation is necessary to convince corporate executives of the real value of intellectual assets, just as for other important assets on a company's balance sheet. This requires new international accounting frameworks that better reflect the true economic importance of intangibles. This is a challenging task for the next ten to fifteen years. Eventually, better accounting rules will facilitate the recognition of IP within companies, but also in society. The way IP works in the knowledge economy is still not well understood: we still apply the rules of the traditional hardware-based economy to the knowledge economy. As an example, courts still calculate royalties as a percentage of the cost price of products, whereas they should consider the value that IP brings to the product. A new framework will be needed for financial, legal, tax and competition rules in the global knowledge economy.
I also expect the maturation of markets for IP to be an important development in the coming years. The current system of bilateral negotiation of licensing deals is quite primitive. It is especially opaque and inefficient when the same patent needs to be licensed to multiple companies, with the costs of due diligence, negotiation and monitoring replicated for each deal. A transition towards a more transparent and efficient organization of IP markets is possible, just as happened for stock markets in the past. The creation of the international IP exchange IPXI in Chicago, with market-based pricing of unit licence rights based on centralised due diligence, is an important step in this direction.

  • Ruud PETERS was appointed Chief Intellectual Property Officer (CIPO) of Royal Philips in 1999, in which position he was responsible for managing the worldwide IP portfolio, and the technical and formal standardisation activities of Philips. In this role, he turned the company's IP department from a cost centre into a successful revenue-generating operation, while at the same time integrating all the different IP activities within various parts of the company into one IP centralised organisation. He further developed and introduced a new concept for intellectual asset management, in which all the different forms of IP are handled together in an integrated manner, and advanced methods and systems used for determining the total return on IP investment by measuring direct and indirect profits. Ruud joined Philips in 1977. He retired from his role as CIPO at the end of 2013, but continues to work for the company as a part-time adviser on strategy and IP matters. He is also a board member of a number of technology /IP licensing /trading companies. Ruud has a background in physics (Technical University Delft, The Netherlands). He was inducted into the IP Hall of Fame in 2010 and in 2014 he received an Outstanding Achievement Award for his lifetime contributions to the field of IP from MIP magazine. He frequently speaks at major international IP conferences and also writes articles regularly for leading IP and business magazines.


  • Yann MÉNIÈRE is professor of economics at MINES ParisTech (France) and head of the Mines-Telecom Chair on "IP and Markets for Technology". His research and expertise relate to the economics of innovation, competition and intellectual property. In recent years, he has been focusing more specifically on IP and standards, markets for technology and IP issues in climate negotiations. Besides his academic publications, he produced various policy reports for the European Commission, French government, and other private and public organisations. Outside MINES ParisTech, he teaches the economics of ICT Standards at the Imperial College Business School. He is associated as an economic expert with Microeconomix and Ecorys, two consulting firms specialised respectively in economics applied to law, and public policies.



Kerstin JORNA, Director of the Intellectual Property Directorate, European Commission, DG MARKT


Interview with Kerstin JORNA, Director of the Intellectual Property Directorate,
European Commission, DG MARKT

Conducted by Theon van DIJK, Chief Economist, European Patent Office, Munich, Germany



C&S:  Could you please introduce yourself and the Intellectual Property Directorate at DG Market?
Kerstin JORNA: I became Director of the Intellectual Property Directorate at DG MARKT in 2012. An exciting and challenging job! I work with a team of roughly 50 very dedicated colleagues.
Our job is to make sure that inventors and creators in the Single Market are successful on the "inventor trail". Inventors and creators turn ideas into innovation: be it a new "green" technology, a lifesaving drug, or a new film. This requires ideas in the first place. But it also requires good laws, efficient registration procedures, the capacity to leverage capital to develop an idea, a framework for branding, a clear and predictable legal environment for distributing and licensing innovations to consumers and customers, efficient jurisdictions for ensuring respect for rights and investments, and trade agreements with third countries that offer a stable and predictable environment for exporting innovation. Intellectual property is not an end in itself. It is a tool to stimulate innovation and the dissemination of knowledge. My team's role is to calibrate the policy and the single market tools in such a way that these objectives are achieved. There is clear evidence that a well-calibrated intellectual property system creates qualified jobs and growth in Europe.
The European legislator is currently discussing our proposals for the reform of the European trademark system and European rules for trade secret protection. An action plan on a more joined-up approach to ensuring respect for intellectual property rights was presented a month ago. A Green Paper on European rules for non-agricultural indications of origin is under public consultation. And we are also working on the review of our copyright rules: some 9,500 respondents replied to our online consultation, and the results were published recently. And of course, there is the implementation of the unitary patent!

C&S: What is your opinion about the role of the patent system in the economy and the benefits it can bring to the society? How do you see the European Commission's role here?
K.J.: Knowledge is the currency of the future. A recent study carried out by OHIM and the EPO provides compelling evidence of the economic importance of intellectual property rights (IPR) in Europe. Patent-intensive industries in Europe account for 10% of EU direct employment (22 million jobs) and 14% of EU GDP (€1.7 trillion).
Today the patent system in Europe is complex, fragmented and costly, and this is also true for litigation. While big companies may be able to afford to validate and defend their patents across multiple jurisdictions, small innovative companies cannot. The Commission and Vice-President Michel Barnier have made the unitary patent package a top priority for the single market. After the landmark political agreement in 2012, we are working with Member States and the European Patent Office to ensure that the unitary patent and the Unified Patent Court become an attractive option for innovative companies.

C&S: What do you consider to be the main achievements by the European Commission in the area of patents over the past five years?
K.J.: Without a doubt: the agreement of the European Parliament and the Member States on the unitary patent. Unitary patent protection will permit significant cost savings and simplify administrative procedures. In addition, as the single jurisdiction competent for all European patents, the Unified Patent Court will ensure the consistency of judgments, thereby increasing legal certainty. It will also considerably reduce the complexity and cost of patent litigation.
These political decisions now need to be translated into reality. We are working with Member States and the European Patent Office to set up a unitary title that is sufficiently attractive in terms of price and legal certainty, as well as a Court that has the trust of users. The first unitary patent grant in 2015 is possible, provided all the actors involved continue to deliver collectively on the list of things to do and national parliaments complete the remaining ratification procedures.

C&S: Recent years have seen smart phone patent battles and competition policy scrutiny in the area of electronic communications. How do you see the interplay between IP and the patent system in particular on the one hand, and competition law and enforcement on the other hand?
K.J.: The globalisation of markets and increasing complexity of products with overlapping technology has modified the business environment in certain sectors. Companies have acquired important patent portfolios to safeguard their product lines in a given market segment. This has also led to increasing costs for IPR enforcement and litigation.
As the competition watchdog of the Single Market, the Commission must make sure that companies have a clear understanding of where the dividing line runs between the legitimate exercise of intellectual property rights and anti-competitive behaviour.
Standard essential patents are a hot topic. We need clarity for companies that hold standard essential patents, but also for companies that need to use standard essential patents to innovate. The Samsung and Motorola cases are two recent examples. FRAND licence terms should be guaranteed to all market participants. The Commission's intervention has also clarified that a licensee can challenge the validity of the patent that is the object of the licence agreement at any moment. This is in the public interest.
While competition law enforcement is an effective ex post means of stopping anticompetitive behaviour, the Commission is also exploring possible ex ante means of preventing abuses. We are participating in discussions on guidelines in standard-setting organisations such as ETSI and ITU.

C&S: What are the key challenges that the patent system is currently facing?
K.J.: We already spoke about the implementation of the unitary patent package. This is not only a challenge for public authorities. It is also a challenge for companies who will have to review their patent strategies and their portfolio policies. Some might be tempted to stay with the old, fragmented and costly system, "because we are used to it and we know it". Still, I hope that the new features of the unitary patent and the unified patent court with a unique combination of international, specialised and multidisciplinary expertise will convince companies to make use of these opportunities in their future innovation strategies. Of course there is no one size fits all and different sectors may see different types of opportunities. However, looking back at the success story of the European Patent Convention, I am confident about the success of the unitary patent "innovation".
Another issue that I see coming to the fore is the question of how intellectual property titles, and patents in particular, can leverage capital for the further development of innovation; a recent study called these "bankable IPs".
And then there are issues around the implications of patent law in biotechnology. We recently created a multidisciplinary expert group to look into this, in the light of the development of recent jurisprudence.
Finally, I also see a need to further explore some basic common features for efficient patent systems globally. Today, challenges such as climate change, food security and the aging population are global. We need innovation to address these challenges on a global scale. In addition, the supply chains for delivering innovation are increasingly global. WIPO and trade discussions can be instrumental here.

C&S: Where are the main differences in the IP thinking and practice between both sides of the Atlantic, and between the Western world and Asia?
K.J.: Both the US and the EU increasingly focus on economic evidence as a basis for calibrating their patent systems and their outcomes.
Leaving aside the fact that the EU is still a fragmented market of 28 patent jurisdictions, there are a number of differences in approach. This is true, for example, for the grace period concept, the notion of protected subject matter and the publication of patent applications. Patent quality is a key issue for Europe. We believe in quality patents because they create the right conditions upstream for bringing innovation to the market. Less is more! Too many low-quality patents prompt litigation downstream and stifle innovation, because good patents first need to "weed out" the bad ones.

C&S: What will be the most important developments regarding patents and new technologies in Europe for the coming five to ten years?
K.J.: New technologies have a profound effect on the current economic landscape, shaping the way we live and challenging our traditional frameworks. The internet of things (e.g. connected cars), big data, 3D printing, synthetic biology and robotics offer us unrivalled opportunities for progress. But they also pose challenges, and intellectual property is one of them.
With respect to patents more specifically, I see a debate about the time it takes to obtain patent protection for inventions in fast-developing technologies.
The increasing complexity of new innovative products has prompted the activities of non-practicing entities, sometimes referred to as patent trolls. In certain cases such single-component "hold-up" can delay the bringing of innovative products to market, in addition to raising litigation costs. Europe, with a different policy on patent examination, is less affected than the US, and I am confident that the Unified Patent Court will help contain excesses, should they occur.
Another issue is also linked to complexity: how can we promote "match-making" between different but linked technologies and patents? Our proposal for European trade secret protection is part of the answer, because it gives a solid legal framework for exchanging information at an early stage of the innovation process. But there is more to consider.


Kerstin JORNA is a German national. She joined the Commission in 1990 as a civil servant. Over the last 20 years Kerstin has held various positions in the internal market directorate, among others as assistant to the Director General, as well as in the Secretariat General as a member of the negotiating team for the Nice treaty. After a stint as Commission spokeswoman for regional policy and institutional affairs, Kerstin joined successively the cabinets of Michel Barnier, Günther Verheugen and Jacques Barrot. Kerstin studied law in Bonn, Hamburg and Bruges.

Theon van DIJK is Chief Economist of the European Patent Office, where he is responsible for carrying out economic research in the area of patents and providing general economic advice to support the various EPO activities. Prior to joining the EPO in August 2013, Theon was an economic consultant specialised in competition and regulation matters. He has held senior positions in leading international economic consultancies in London and Brussels, and founded his own consultancy in 2005. Theon has extensive experience in providing expert economic advice to private companies, competition authorities and government organisations. Theon holds an MA and Ph.D. in Economics from Maastricht University in the Netherlands, where he carried out academic research on the economics of patent protection at the UNU-MERIT institute. Theon was a Postdoctoral Fellow at the Institut D'Économie Industrielle in Toulouse, France. He has published extensively in academic and applied journals in the area of intellectual property, competition policy and regulation.


[1] Proposal for a DIRECTIVE OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure (COM/2013/0813 final. http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52013PC0813&from=EN
[2] COMMUNICATION FROM THE COMMISSION TO THE EUROPEAN PARLIAMENT, THE COUNCIL AND THE EUROPEAN ECONOMIC AND SOCIAL COMMITTEE Towards a renewed consensus on the enforcement of Intellectual Property Rights: An EU Action Plan (COM/2014/0392 final). http://ec.europa.eu/internal_market/iprenforcement/action-plan/index_en.htm
[3] http://ec.europa.eu/internal_market/indprop/geo-indications/index_en.htm#maincontentSec1. Proposals for a Regulation of the European Parliament and of the Council amending Council Regulation (EC) no 207/2009 of 26 February 2009 on the Community Trade Mark and for a Directive of the European Parliament and of the Council to approximate the laws of the Member States relating to trade marks (recast) – references COM(2013)161 and COM/2013/162
[4] http://ec.europa.eu/internal_market/consultations/2013/copyright-rules/index_en.htm



Alain Le Diberder, Arte: "The video game industry is facing a deep shift of its main business model"



Alain Le Diberder - ARTE,
Managing director 
and head of programs, Strasbourg, France

Conducted by Laurent MICHAUD, IDATE, Montpellier, France



C&S: How do you see the video game industry today?
Alain LE DIBERDER: The industry is facing a deep shift of its main business model. It’s not a problem of overall market size. Even if the macroeconomic environment is rather dull, and even if the new mobile market works with very low price levels, most firms are able to adapt to the new revenue framework. Instead, the main issue is the change needed in corporate organizations. Yesterday, AAA products were king and the sales department ruled the AAA market. Today the whole process, starting from an idea and ending with an actual consumer, needs to be reshaped. Unfortunately marketing, technology and pricing policy are far more flexible than human behavior.

C&S: Video games are reaching their full potential online with multiplayer or massively multiplayer, social, viral, flash and ubiquitous components. What do these developments inspire for you?
A.L.D: All these technologies are impressive and improving very quickly. But I’m not sure that they actually drive the industry into a new era. Videogamers were “social” from the start, and videogames were “viral” long before Facebook or Twitter. Even in the eighties, schools and universities were an effective “social medium” in which gamers and game reputations were debated, built and destroyed. And they still are. The new phenomenon is that the “social” dimension is now included in the code. And there are opportunities to make money with it. But the videogame industry, for the moment, is not able to capture the main part of this market, which is dominated by “transversal” companies like Facebook, Twitter, Apple and so on.

C&S: What are the effects of globalization on the creation of content, on the creators?
A.L.D: The videogame industry is probably the first entertainment business to be born global. The movie industry became partly global relatively soon (say, around 1910), but even today national and regional components remain important. The same is true for the music industry. The reason the videogame industry is different is quite simple: during the first ten years of its history, there was barely any text inside the games. Remember Pong, Pac-Man, Space Invaders, or even the first Mario. The only text to be translated was the cartridge sleeve or the instruction sheet on the arcade cabinet. The need for text and localization only came with PC games in the eighties, but it was too late for local cultures: the industry’s DNA was definitively global. Even the word “localization” tells the truth. The industry can’t be more global than it was at its birth, and the only thing that could happen in the future is less globalization, not more. But it probably won’t.

C&S: Hardware vs. software: are home consoles set to disappear in favor of streamed games? More generally, won't hardware be reduced to a "stupid" screen?
A.L.D: I don’t believe the console industry has a strong future. Consoles are expensive, non-durable and challenged pieces of hardware. Competition from set-top boxes and mobile devices is getting stronger and stronger. But hardware won’t be reduced to a stupid screen. There is a bright future for hardware if you compare the specifications (and price) of a present-day smartphone with those of a home computer or a home console from ten years ago. Hardware is getting smarter, not more stupid. In fact the new hardware standards and the telecommunication networks live together in an ecosystem in which smart networks need smart home equipment, not dumb devices.

C&S: Oculus Rift, Google Glass, holographic technology... what do you think the next disruptive gaming experience will be?
A.L.D: I can’t see such a thing as a disruptive technology in videogame history. Technology is an additive process in the videogame industry, not a substitutive one. For instance, when home consoles appeared in 1974, the arcade market didn’t disappear. Computer games started slowly at the end of the seventies and added their sales to the console market. When the PC market matured, at the beginning of the nineties, the console market exploded too, in the 16-bit era. Online games became popular before the web. I remember having wasted many hours playing MicroProse Grand Prix online in 1992 with a 14.4 kbit/s modem. Today the online market is strong, but more than 20 years later, between 60 and 80% of the overall market, depending on what you count as a videogame “sale”, is still offline. We could also think of 3D games, virtual reality helmets, streamed games and so on. But the truth is that over 40 years many technologies have been introduced; many have failed (especially 3D, beware Oculus!) and many have contributed to the Harlequin suit in which the videogame industry is dressed.

C&S: What are the issues in which the French industry still needs to progress?
A.L.D: There are French developers, French magazines, excellent French videogame schools and some French companies, but a “French industry” doesn’t exist. Of course there is Ubi Soft. But Ubi began developing games in Asia and Morocco 25 years ago and, in terms of workforce, is more a Canadian company than a French one. Vivendi invested in big US companies, but they remained American companies reporting to French shareholders, and Vivendi sold most of its shares to Activision. Infogrames bought many British and American companies and the glorious Atari brand, but it failed. From the beginning, the golden era of Ere Informatique or Loriciels, the (little) French companies have always sold more than 80% of their products on the world market. Almost all the titles, such as “Another World” or “Alone in the Dark”, were in English, even on the French market. Many French people have succeeded in the videogame industry, but as it is a global industry, they were and still are part of a non-national world. A national videogame industry is nonsense, except maybe in Asia.

C&S: What remains for us to (re)invent in terms of gaming experience?
A.L.D: Maybe the next frontier could be the physical experience. The Wiimote and the Kinect were a first step, and now “connected objects” are blooming. It will probably take time, but I feel that the gamification of personal care is a strong trend.

Alain LE DIBERDER holds a Ph.D. in Economics. After advising French Minister of Culture Jack Lang (1989-1991), he moved to France Télévision under CEO Hervé Bourges (1991-1994) and then on to Canal + as Head of New Programmes (1994-2000), while he contributed to establishing several landmark cultural portals. He created Allociné TV, a channel devoted entirely to cinema, in 2010, and joined Arte as Head of Programmes on 1 January 2013. He has published several books and papers on digital technology and the media.


Interview with Daniel KAPLAN, Business Developer, Mojang, Stockholm, Sweden

Published in COMMUNICATIONS & STRATEGIES No. 94, 2nd Quarter 2014

Video game business models and monetization



Daniel KAPLAN, Business Developer at Mojang

Conducted by Peter ZACKARIASSON, University of Gothenburg, Sweden



C&S:  Minecraft is, by any standard, a very successful game. How much of this success do you ascribe to your business model?
Daniel KAPLAN: I think it played quite a big role, since the game was discounted for quite a long time. It was discounted from day one, since it was “released” very early in development. The whole idea was to release it early to see if there was any interest and whether the project could bear fruit. A lot of the people who bought it initially felt, I think, that they had invested in the project, and the ones who were on board from the beginning got quite a good deal.

C&S: Did Minecraft build on any specific previous business model, or did it pave its own way with a unique model to generate profit?
D.K.: There are other games that were the inspiration for this model, Mount and Blade from TaleWorlds for instance. They also released their game before it was finished for a discounted price and continued the development with the community.

C&S: Today Minecraft has become a phenomenon that is not only tied to the game itself, but there are many physical product spin-offs. How important is this brand extension for Mojang?
D.K.: We are still a game company, but it definitely helps. I think there is a fine line in how far you can stretch a brand before it feels overextended. We try to create merchandise and products that we would like to have ourselves, rather than trying to fill gaps with various branded products. It is certainly a fine line, and I think a brand can be overexposed and become too stretched.

C&S: Is it possible to become too successful? That is, having produced Minecraft – is it possible to repeat that success? What about the next game of Mojang?
D.K.: I think the problem with becoming too successful is that you will always be compared with your success, regardless of what you produce after that. It is important to not lose focus and continue to deliver things regardless of what they are so you don’t stagnate.

I think it is almost impossible to create a success like Minecraft again. A lot of the “cred” Mojang got came from being an up-and-coming company/person during the initial development of Minecraft, and the whole story around Notch (the founder of Mojang) was a classic David and Goliath story, which we can’t reproduce anymore. We have a completely different starting point now compared to where we started.

The next game we are working on, Scrolls, is already profitable and was released in a similar manner to Minecraft. We are super happy about the game being profitable even though it is not close to the success of Minecraft. It is a bit silly to try to compete/compare our projects with Minecraft to be honest.

C&S: What directions do you see the video game industry taking when it comes to generating sustainable business models? Last year Minecraft was one of only two pay-to-play games in the US top 20 mobile games. Is not adopting a free-to-play business model a conviction, or the best way to stand out in a highly competitive environment?
D.K.: I don’t know what will happen in the future. You see different trends all the time, and you also see companies that don’t follow the trends and are successful. I think the mobile business will continue to grow and will continue to have different business models for various types of games and apps. It is hard to say that everything will be x or y. The widespread presence of mobile devices also allows for more niche products, which lets you create products that don’t follow the trends and can still be successful.

Daniel KAPLAN has been Mojang's business developer since October 2010. He was born and raised in Skövde, Sweden. He founded ludiosity.com.
