13 Jan 2016

DigiWorld Economic Journal (DWEJ) No. 100 – Interview with Joel MOKYR


Published in DWEJ No. 100

 

Interview with Joel MOKYR

Robert H. Strotz Professor of Arts and Sciences and Professor of Economics and History, Northwestern University, USA

Sackler Professor (by special appointment), Tel Aviv University, Israel

Conducted by Gilbert CETTE & Yves GASSOT

 

C&S: As a well-known economic historian, you have done extensive work on industrial revolutions and on the conditions that gave rise to British leadership in the 19th century. This could have led you, like your Northwestern University colleague and friend Robert Gordon, to downplay digital innovation, fearing that in the absence of breakthrough inventions the world is returning to a long period of stagnation. But this is not the case. And while some people recognize the power of the digital transformation yet tend to focus on the damage and suffering it can cause, you, without denying the short-term consequences, see in it the typical characteristics of the creative destruction so dear to Schumpeter.

How do you justify your optimism about the digital revolution at a time when productivity growth has been slowing in all developed countries since the early 2000s and is now very low? To what extent can this slowdown be accounted for by the shortcomings of our statistical system (the limits of what GDP captures)? By the delay in diffusing digital innovation throughout the various sectors? By the delay in adapting and training the workforce? Or by the fact that the potential of digital innovation (AI, 3D printing, ...) will essentially be realized in the future?

Joel MOKYR: To start off, I don't see the future of technological progress as merely defined by the "digital revolution." AI, robots, 3D printing and such will be an important part of our technological future, but I see progress on a much broader front. Technology will continue to develop at an ever faster rate. But much of that will be necessary to repair the damage that previous innovation has caused. Climate change is only the best known of a whole array of phenomena in which past advances have had unknown and hidden costs that now have to be paid. These costs will be lower if we get better technology, but then that technology will have unintended and unpredicted consequences. And so on. There is progress, of course, but it is not linear, it is not even monotonic. If we knew precisely in advance what every innovation implied, it would not be much of an innovation.

You have on occasion emphasized the interactions between the progress of instruments, breakthrough innovation in technology and scientific invention. How would you apply the analyses you developed for the 18th and 19th centuries to the components of the digital revolution today?

Compared to the tools we have today for scientific research, Galileo's and Pasteur's look like stone age tools. Yes, we build far better microscopes and telescopes and barometers today, but digitalization has penetrated every aspect of science. It has led to the re-invention of invention. It is not just "IT" or "communications." Huge searchable databanks, quantum chemistry simulation, and highly complex statistical analysis are only some of the tools that the digital age places at science's disposal. Vastly more sophisticated tools – just think of the Betzig-Hell nanoscopes for which the inventors earned a Nobel Prize last year – will allow us to work at smaller and smaller levels of both materials and living things.

Materials are at the core of our production. The terms Bronze and Iron Ages signify their importance; the great era of technological progress between 1870 and 1914 was wholly dependent on cheap and ever-better steel. But what is happening to materials now is nothing short of a sea change: new resins, advanced ceramics, carbon nanotubes and entirely new solids designed in silico and developed at the nano-technological level have all come online. These promise materials nature never dreamed of, delivering custom-ordered properties in terms of hardness, resilience, elasticity, and so on. Graphene, the super-thin wonder material, is another substance that promises to revolutionize production in many lines. The new research tools in materials science have revolutionized research.

Of perhaps even more revolutionary importance is the powerful technology developed by Stanley Cohen and Herbert Boyer in the early 1970s, in which they succeeded in creating transgenic organisms through the use of micro-organisms. Genetic selection is an old technology: nature never intended to create poodles. But genetic engineering is to artificial selection what a laser-driven, fine-tuned surgical instrument is to a meat-axe. The potential economic significance of genetic engineering is simply staggering, as it completely changes the relationship between humans and all other species on the planet. Ever since the emergence of agriculture and husbandry, people have "played God" and changed their biological and topographical environment, creating new phenotypes in plants and animals. Genetic engineering simply means we are far better at it.

Do you think that in the long-term future, productivity gains will be mainly driven by breakthrough innovations like the creation of new microprocessors with enhanced performance or the implementation of existing innovations in several areas? And in the latter case, isn't there a risk that the induced productivity gains will gradually dwindle?

I don't believe they will ever dwindle. But I think that productivity growth as traditionally measured will become largely irrelevant in describing what is really going on. Such techniques were designed to measure process innovation, which allowed firms to produce wheat and steel with fewer inputs. It is much harder to use them to measure quality improvements, many of them subtle and hard to quantify (e.g. the introduction of airbags into cars, or more sophisticated diagnostic machinery). It is even harder for the traditional national income and product accounts (NIPA) to deal with entirely new products such as anesthesia, microwave ovens or online encyclopedias.

For some, the collaborative economy is one of the most fruitful products of the internet. Should we see this primarily as an illustration of digital technology's capacity to reduce transaction costs, or as a sign that the market economy may be superseded?

Technology will change the market economy. The "share economy" (now already known to some as the "uber-economy") has transformed urban transportation, and Airbnb is transforming tourism. But these will be dwarfed by the impact of digital technology on the labor market, as already illustrated by TaskRabbit handymen, UpCounsel on-demand attorneys, UrbanSitter for babysitting and HealthTap for online doctors. And this is just scratching the surface. Digital technology will change the labor market as much as the factory did during the Industrial Revolution. The factory eventually replaced the home as the main location where production took place. That pendulum may swing back, especially if mass customization through home manufacturing (the misleadingly named "3D printing") starts spreading. If both Robert Reich and Jeremy Rifkin are panicking about this, it cannot be all bad.

Your work has been partly guided by the question of why the industrial revolution took place primarily in the UK rather than in Germany or France. Can you draw a parallel with the North American domination we are seeing today in microprocessors, software and the internet? What conditions have favored this supremacy? What factors could threaten it? What changes in priorities could give Europe the conditions it needs to compete with the digital domination of the US?

I am not sure that I am still all that overawed by the question of "why Britain first". The parallel is the putative "domination" of Americans today in high-tech. Rather than seeing the leader as the locomotive that pulls the entire train forward, I think of this as an electric train, in which the motive power is external, and the lead car is there more or less by accident. Technology today is the result of a multinational effort in which boundaries mean less and less. Finland led in cellphones, Israel in flash storage, France in nuclear power – so what? Does that mean they alone can use it? Let's face it, in today's world, if an invention is made somewhere, it is made everywhere. Silicon Valley is in the US, but half of the people working there are foreign-born. They could be anywhere (as long as they are together). Of course, if a country has really terrible institutions, such as Putin's Russia or Khamenei's Iran, it is not only unlikely to generate new technology, but may even find it hard to absorb it. But nations such as Norway or Switzerland will always be at the frontier even if they contribute relatively little to pushing it out.

Many observers agree that the 21st century will be marked by the emergence of China at the forefront of the global economy. Do you think this country has, or is developing, the conditions necessary to establish its supremacy through new leadership in digital technology sectors?

No. Their institutions are not quite as bad as those of Russia or Nigeria, which are corrupt to the core and where a small kleptocracy extinguishes entrepreneurship. But technological progress requires more than a thriving, well-functioning market economy. What you need is not only the rule of law, respect for property and human rights, and the enforcement of contracts. You need pluralism, tolerance, and freedom of expression and association. You need political competition and decentralization, in which the ruling elite is held accountable and the government is constrained in what it can do to its citizens. We need to keep in mind that innovators were and are deviants, people who are in some way different and abnormal, perhaps eccentric, and in conformist societies such people are suppressed. Europe's advances started in earnest when those who thought "outside the box" no longer feared being accused of "black magic" or heresy. Chinese history is a fascinating story of how incredible creativity and sophistication were essentially wasted after the Song dynasty, as China fell behind the West. Mutatis mutandis, the same is true for the Soviet Union. Its potential was huge, but bad institutions channeled its creativity into Sputniks, MiGs and Katyushas and little else.

 

Joel MOKYR is the Robert H. Strotz Professor of Arts and Sciences and Professor of Economics and History at Northwestern University and Sackler Professor (by special appointment) at the Eitan Berglas School of Economics at the University of Tel Aviv. He specializes in economic history and the economics of technological change and population change. He is the author of Why Ireland Starved: An Analytical and Quantitative Study of the Irish Economy, The Lever of Riches: Technological Creativity and Economic Progress, The British Industrial Revolution: An Economic Perspective, The Gifts of Athena: Historical Origins of the Knowledge Economy, and The Enlightened Economy. His most recent book is A Culture of Growth, to be published by Princeton University Press in 2016. He serves as editor in chief of a book series, the Princeton University Press Economic History of the Western World. He serves as chair of the advisory committee of the Institutions, Organizations, and Growth program of the Canadian Institute of Advanced Research. Prof. Mokyr has an undergraduate degree from the Hebrew University of Jerusalem and a Ph.D. from Yale University. He has taught at Northwestern since 1974, and has been a visiting Professor at Harvard, the University of Chicago, Stanford, the Hebrew University of Jerusalem, the University of Tel Aviv, University College of Dublin, and the University of Manchester. He is a fellow of the American Academy of Arts and Sciences, a foreign fellow of the Royal Dutch Academy of Sciences, the Accademia Nazionale dei Lincei and a Fellow of the Econometric Society and the Cliometric Society. His books have won a number of important prizes, and in 2006 he was awarded the biennial Heineken Prize by the Royal Dutch Academy of Sciences for a lifetime achievement in historical science. In 2015 he was awarded the Balzan Prize for Economic History.

More information on DigiWorld Economic Journal No. 100, "Digital innovation vs. secular stagnation?", is available on our website.


3 Dec 2015

“Digital Innovation vs. Secular stagnation?”: views of Emmanuel MACRON


Emmanuel MACRON
French Minister of the Economy, Industry and Digital Affairs

Published in the DigiWorld Economic Journal No. 100

 

 

ICTs do not constitute a sector of our economy: they are its defining new element. We have indeed rarely seen technological breakthroughs that simultaneously alter the three pillars of an economy: its production, its consumption, its labor relations. Whatever their outcome, they already amount to a new "Great Transformation" of our societies.

First, and most classically, ICTs were the main source of productivity gains in the recent period. From the 1990s onwards, their increasingly efficient production (in the so-called "ICT-producing sector"), as well as their diffusion and use in the broader economy, were a major element in an otherwise moderate output growth environment. Between 2001 and 2007, their contribution to annual GDP growth in eight major EU economies [1] was estimated by CORRADO & JÄGER (2014) [2] to be as high as 1 percentage point.

Second, ICTs offer new goods to consume and, more interestingly, even change what "consuming" means, legally, statistically and culturally. Let me provide some examples. "Big data" makes tailor-made products ever more available, but raises difficult property rights questions at the intersection of privacy, innovation and growth: we can neither waive all personal controls, nor destroy all incentives for the firms that first collect the data, nor prevent the rest of the economy from exploiting it to its full value. A new compromise must be forged, with the relevant tradeoffs between privacy and innovation discussed openly. The "platform model", with its natural tendency towards network effects and economies of scale, must be integrated within our competition policies. The "sharing economy" has met with well-deserved enthusiasm, especially in France, but a large part of it is still not included in GDP figures. The "Internet of Things" is an impressive promise, but it does not fit the traditional boundaries between sectors, and will probably run into resistance from traditional management culture.

Third, ICTs create new demand for non-traditional forms of work. By reducing and transforming the need for intermediaries, and by improving matching efficiency between customers and providers, they make work more flexible and more independent. In France, the secular movement towards payroll employment stopped in the early 2000s. Since 2006, the share of independent workers in the non-agricultural workforce has even risen by 26%! The "autoentrepreneur" status, for instance, has been a real success, with one million people now registered, precisely because it offers the required simplicity and flexibility.

Our infrastructure is already first rate. Broadband access is higher than the OECD average. Though we lag behind in fiber deployment (which accounts for a little less than 4% of high-speed subscriptions, against 17% for the OECD average), we are rapidly catching up (fiber subscriptions grew by more than 60% in 2013-2014). More generally, in recent years, increased competition has generated lower prices, simpler offers and more innovation.

But our social and political institutions, inherited from a period of Taylorism, mass consumption and catch-up development, are ill-suited to these new challenges. Their inertia has long been seen as a source of protection, but it may now be stifling economic dynamism to a greater extent than we thought, while no longer even serving well their primary goals of social protection and individual empowerment.

To rejuvenate their spirit, we must ensure that they still support innovation, diffusion and inclusiveness. These are the three targets of my nationwide economic agenda: delivering "Nouvelles Opportunités Économiques" (New Economic Opportunities).

Innovation is a complex phenomenon. It requires a subtle mix of flexibility, investment, cooperation and competition: firms must have the means to innovate, the opportunity to learn and the incentive to develop. We have already made a historic effort to support corporate profitability, and indeed profit margins, which had been falling since 2007, have been rising since mid-2014. We have also promoted good practices through the "Industries du Futur" initiative. But we need to go further in removing barriers to entry in overregulated sectors and in opening up data to competitors. We should also support the development of venture capital, which has proved a key element in transforming our numerous startups (where Paris ranks 2nd in Europe) into "unicorns" (where France ranks only 5th in Europe). Banking intermediation is indeed inadequate when risks are high, close screening is required and intangible collateral is not easily pledgeable.

Diffusion is a related, though distinct, issue. The productivity slowdown is much less salient at the technological frontier than in the rest of the economy: in OECD countries, output per worker increased annually by 3.5% between 2001 and 2007 for the 100 most productive firms in each manufacturing sector, compared to 0.5% for the others. In services, these figures are respectively 5.5% and 0.3%! This gap is not only very large, it has widened. There is something broken in the diffusion machine. It is also worth remembering that productivity growth does not come from all firms increasing their productivity: around half of aggregate productivity gains in industrialized countries are generated by the faster growth of the most productive firms, which attract more workers and more investors. We must encourage this reallocation of factors (between firms and between sectors), whether of labor – through increased flexibility – or of capital – through lower bankruptcy costs.
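The between-firm component of aggregate productivity growth can be made precise with a standard decomposition (an illustration, not from the text itself; the notation follows the Olley-Pakes decomposition commonly used in this literature):

```latex
% Aggregate (share-weighted) productivity of N firms,
% where s_i is firm i's input share and p_i its productivity:
P \;=\; \sum_{i=1}^{N} s_i \, p_i
  \;=\; \bar{p} \;+\; \sum_{i=1}^{N} \bigl(s_i - \bar{s}\bigr)\bigl(p_i - \bar{p}\bigr)
```

Here p-bar is the unweighted mean productivity and the covariance term captures reallocation: aggregate productivity rises when more productive firms command larger shares of labor and capital, even if no individual firm improves. This is the mechanism behind the "around half of aggregate productivity gains" figure cited above.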

Inclusiveness is key. The polarization phenomenon, whereby technology destroys "routine" jobs in the middle of the skill distribution and creates opportunities for both skilled and unskilled work, is well known and well documented. France is no exception to the hollowing out of routine jobs (bank clerks and secretaries, for example). However, it exhibits a relatively high rate of unemployment among high school dropouts (16.1%) and, more generally, among low-skill workers. This is an apparent paradox, since ICTs either improve their productivity – for instance by improving matching in personal services – or at least cannot act as a substitute – in all activities where social interactions are needed. We are dismantling outdated regulations and lowering labor costs to bring the outsiders of the "old" industrial society on board the innovation economy.

Technology is inherently disruptive. But politics is about inclusiveness and trust. Forging a new social pact is not an additional burden on the road to a new economic model: it is a necessary step, for it conditions the model's long-term sustainability. We must allow the necessary flexibility by making social protection better adapted to independent work, multiple activities and diverse careers. We must also provide the necessary skills (through training as well as initial education) to meet both present and future demands.

At what speed will ICTs develop, and what level of growth will they help us achieve? Robert GORDON has brilliantly expounded the "supply side" hypothesis of the "secular stagnation" debate. But at the other end, we also hear the arguments of those telling us we are on the verge of massive breakthroughs. Should we turn to statistics? Yes, they seem to show a slowdown in ICT productivity, but at the same time venture capital investment in the US, now at a fifteen-year high, promises renewed dynamism.

Which employment structure will they foster? The studies on polarization now describe well what happened in recent decades. But in the coming years we may see a new surge in jobs with intermediate skills, for instance in the medical sector, where the productivity of nurses could soon rise sharply. For example, by collecting data from wearable devices and sensors, the "internet of me" in health care will mean much more personalized demand on nurses, who will become much more effective at responding to it. Again, this requires investment in training.

All in all, these innovations are paved with uncertainties, as "industrial revolutions" always were. If you had asked an Englishman about the industrial revolution in 1780, he would have asked what you meant. In 1820, he would have expressed his longing for a vanishing agricultural society. In 1860, he would have claimed that it lifted millions out of poverty and opened the way to a supposedly everlasting progress.

I do not assume that present innovations will follow a similar course. But I believe that we cannot foresee, let alone contain, what is yet to be. We must take the best from our past (the ambition of our social protection, the talents of our industries, the quality of our infrastructure), seize the most from our present (the renewed demand for work, the widening of opportunities, the creation of new services and new markets) and be ready for the future.


[1] Austria, Finland, France, Germany, Italy, Netherlands, Spain and the United Kingdom.

[2] CORRADO, C. and K. JÄGER (2014): "Communication Networks, ICT and Productivity Growth in Europe", The Conference Board, New York, December.


 

2 Dec 2015

Digital first?


Yves Gassot
CEO, IDATE DigiWorld

The common perception is that digital innovation is everywhere, and that the pace of innovation is accelerating as it applies to every sector, every business and every organisation.

 

Despite this, economists remain wary. Productivity gains have clearly been slowing since the mid-2000s, even before the economy collapsed in 2008. Nor is this a phenomenon confined to Europe, whose lag behind the technological leaders might explain it: it applies to the US as well. We are reminded of the words of Nobel Prize-winning economist Robert Solow, back in 1987: "You can see the computer age everywhere but in the productivity statistics". Although we are by no means enjoying gains comparable to those of the 1920s or the great post-war boom, the effects of the Internet revolution can still be seen in the statistics for 1995 to 2005. In other words, before the iPhone, before the smartphone and mobile Internet explosion, before 4G, the cloud and the onset of Big Data…

So the experts are divided into two camps. The techno-pessimists, aligned with Robert J. Gordon, are convinced that the potential for digital innovation is dwindling, quickly descending into trivial innovations – the latest gadget for the latest smartphone. They do not see any disruptive innovation that will impact productivity and growth in a way comparable to the steam engine or the electric motor. After all, they point out, history offers no guarantee of perpetual growth: until the latest industrial revolutions, people in Western societies lived with very moderate productivity gains and GDP growth.

Meanwhile, the techno-optimists, aligned with Brynjolfsson and McAfee, remain confident, pointing to new waves of innovation in artificial intelligence, new-generation robots, the Internet of Things and 3D printing. Even Moore's Law – named after the co-founder of Intel who, fifty years ago, predicted that the number of transistors in an integrated circuit would double every two years, and which, somewhat unfortunately, appears to have caught on as the measuring stick for the digital revolution's maturity – is expected to continue to hold for at least another ten years. More generally, some, such as Joel Mokyr, express their optimism by arguing that we underestimate the Internet's effect on change and on improving human welfare, and on accelerating access to knowledge in every scientific and technical field.
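To gauge the compounding at work in that prediction (a back-of-the-envelope illustration, not a figure from the editorial): doubling every two years over fifty years means twenty-five doublings,

```latex
2^{50/2} \;=\; 2^{25} \;\approx\; 3.4 \times 10^{7}
```

i.e. a transistor count tens of millions of times higher than fifty years ago, which is why even "only" ten more years of Moore's Law would still represent a roughly thirty-fold gain.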

Behind this black-and-white division, some observers focus on the failings of the statistical apparatus, and on the price effects (deflation) that can distort the measurement of the different sectors' ICT spending. Ultimately, however, their attention is focused on the conditions that would help reduce the lag time, understood as the time it takes for the productivity potential of digital technologies to kick in. Here, authors such as Gilbert Cette and Philippe Aghion stress the importance of ambitious and efficient public policies on education and training, seeing them as the cornerstone of a successful innovation policy and an answer to the concentration of qualified job opportunities in a few major cities. They also stress the importance of reforms if we want the Schumpeterian cycle of innovation to play out in a fluid and positive way, to reduce the divide between a small fraction of highly productive businesses and an economic fabric turning in mediocre performances, and to build up the majority's trust in the digital transformation. We would add that it is useful, as Larry Summers regularly does, to stress the importance in these circumstances of investment in infrastructure (think fibre and superfast mobile), and that nothing prevents us, as Daniel Cohen suggests in his latest book, from examining the wisdom and quality of innovation policies and underscoring the ways in which digital technologies can contribute to turning the tide on climate change.

Digital innovation vs. secular stagnation?
N° 100 - DigiWorld Economic Journal
The DigiWorld Economic Journal is celebrating its 25th anniversary with this issue No. 100. For this jubilee issue, the editors, Gilbert Cette and Yves Gassot, have collected contributions from leading economists who examine the links between digital innovation and the associated developments, direct or indirect, in terms of productivity, growth and job creation. The guest authors do not all adopt the same angle of analysis, nor do they all share the same theses... But, in reading this issue, you will discover a different way of thinking about the big questions these topics raise.

 

4 Nov 2015

The economics of platforms in the digital transformation: IDEI views


Published in Communications & Strategies n°99

 

Bruno JULLIEN

IDEI, Toulouse School of Economics

Interview conducted by Marc BOURREAU,
Telecom ParisTech

 

C&S: The concept of platform is sometimes used in a very broad way in policy debates. How would you define platforms and multi-sided markets? What is the difference between a one-sided and a multi-sided market?

Bruno JULLIEN: It is difficult to provide a formal definition of a platform in economics, and there is no consensus on one. As a start, I would say that a platform is a bundle of services used by several economic agents in order to interact. In such situations, a side represents a particular type of user (say, sellers on a B2C marketplace, or merchants accepting a credit card). Each side's benefits depend on what the other sides are doing on the platform. Moreover, the platform may treat the various sides in a differentiated manner: for instance, some may get free services while others pay for the right to access the platform.

From a theoretical perspective, a platform is not necessarily multi-sided. To be so requires two conditions. First, the organization of the platform's services involves network externalities, i.e. the participation and other actions of a user affect other users of the platform. Second, the platform discriminates between different types of users. One criterion sometimes used to determine whether an activity is multi-sided is whether the value of the service for each user depends on the whole structure of prices.

In a multi-sided platform, customers need to consider interactions with other economic agents to evaluate the value of the good or service and determine their behavior. The final value of the service for the customer is not fully controlled by the platform but results from agents' interactions. By contrast, in a one-sided market, firms choose the product or service characteristics, and customers' valuations depend only on that choice.

The difficulty with the concept is two-fold. First, it potentially covers a wide range of goods and services, so the multi-sided externalities must be significant enough to be relevant. Second, not all platforms are multi-sided, as this may depend on the platform's business model. Consider retailing, for instance: a chain store is typically not a multi-sided platform, but the Amazon marketplace is. The chain store decides which products to carry at which prices, so consumers interact only with the store and do not care about suppliers. By contrast, online marketplaces let buyers and sellers jointly determine the products and prices.

The literature on multi-sided markets emerged in the early 2000s (and you were one of the first authors on the topic), but it is still vibrant. What do we learn from the recent research on platforms?

The early literature was mostly focused on price theory, explaining the differences between pricing in multi-sided and one-sided markets by emphasizing the need to coordinate users and bring all sides on board. A main contribution has been the development of the concept of opportunity cost, where the cost of providing the service to a user is adjusted to account for the benefits (or costs) accruing to other users. This, however, needs to be put to work in practice, which is part of what the literature is aiming at. The recent literature has developed along several lines. The first is the application of the concept to specific industries, as has been done for instance for the Internet, search engines, ad-financed media and credit cards. In the case of media, for instance, the recent literature helps us understand the evolution of business models or the implications of mergers. Along the same dimension, research is trying to develop new operational tools for competition policy where traditional results do not apply; there has been, for instance, work on bundling and on econometric models for empirical work and policy evaluation.
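The opportunity-cost logic described here can be sketched with a stylized pricing condition for a two-sided platform (an illustration in the spirit of the Rochet-Tirole and Armstrong models; the notation is ours, not taken from the interview):

```latex
% Effective cost of serving one extra side-1 user, when each such user
% confers a benefit \alpha_2 on each of the N_2 side-2 users:
\text{opportunity cost} \;=\; c_1 - \alpha_2 N_2 ,
\qquad
p_1 \;=\; \underbrace{c_1 - \alpha_2 N_2}_{\text{adjusted cost}} \;+\; \text{markup}
```

When the external benefit term is large, the adjusted cost can be negative, which is why one side (readers of ad-financed media, credit-card holders) is often served below cost or even for free, with the other side paying.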

At the theoretical level, what I retain most from recent work is the importance of users' participation patterns (exclusivity, multiple versus single affiliation, switching) in shaping competition between platforms.

On the other side of the coin, what do we still not know? What are the key questions where more research is still necessary?

While we have made significant progress in price theory and its applications, there is a lot we don't know and a large scope for future research. On the theory side, I think the main issue we need to address is that our theories are mostly static. We need to better understand the dynamics of competition between platforms. What determines the emergence of a successful platform? What is the extent of barriers to entry? What are the respective roles of history and actual merit?

I also expect research to move away from price theory towards design and organization, where most competition takes place. We need to understand when and how a platform decides to interfere in transactions. A recent concrete example is the issue of MFN (Most Favored Nation) clauses in online booking systems, which prevent registered hotels from offering lower prices on competing websites or through direct sales.

For this we need more empirical work to guide research and applications. Currently, most available data originates from a single platform, so we may expect many studies of agents' behavior on a given platform. But we will also need empirical work on competition between platforms.

For competition/regulation policy, we need more work to propose operational decision tools to competition authorities and regulators. Basic questions such as market definition or tests for predation are still not resolved for platforms. We have difficulties evaluating the optimal market structure, as more competition may not raise welfare and efficiency. This will require developing research at the frontier between law and economics.

There is a hot policy debate today in Europe on the regulation of platforms. What is your opinion on this question? What are the potential market failures in platform markets, which would justify a regulatory intervention?

The issue is not identifying market failures: externalities between users, network effects and market power are usually present with platforms. The main question is whether there is scope for efficient ex ante regulatory intervention. In some cases, ex ante rules or principles are desirable, for instance on privacy issues. But in general I would be cautious and favor ex post intervention, for several reasons. Platforms are very heterogeneous: they may propose very different activities, the same activities may be proposed by very different platforms, and platforms may be more or less vertically integrated. This makes it extremely complex to define ex ante the perimeter of a regulation. Moreover, the same regulation may affect different platforms in different ways: a pay platform and a free platform, for instance, are not affected in the same manner by restrictions on data usage. Finally, the markets where platforms operate are dynamic and innovative. Market power has to be evaluated from a dynamic competition perspective, and regulation should not impede this dynamic process.

Notice that it is in the broad interest of a platform to optimize the quality of interactions between its members and to correct externalities, because this raises their value. The literature has identified limits to this view, but intervention should occur only for clearly identified failures. I would point out two factors that may matter here.

A key distinction should be drawn between situations involving bottlenecks and those where all users can easily switch or use several platforms. A bottleneck arises when a platform enjoys exclusive rights over the conduct of transactions with some of its users. This confers some monopoly power over these transactions, and we know that competition between platforms will not eliminate it. We may then want to reduce this market power. This is similar to the one-way access problem familiar to telecommunications regulators.

Second, platforms providing free services to some sides rely on a limited set of instruments to coordinate users, which may not be enough to address externalities. Indeed, good coordination of the sides would require as many prices (or subsidies) as there are sides. Free platforms by nature cannot pass on to consumers the true opportunity cost, which may induce excessive usage or distort the prices charged to other sides. This may create inefficiencies and calls for special scrutiny.

Do you think that today regulators and competition authorities take sufficiently into account the specificities of multi-sided markets (provided you think they should)?

Regulators and competition authorities are now aware of the concept and of its importance in some industries. However, they lack the tools and knowledge to incorporate this dimension into their analysis. I think this is one reason why we don't see as many applications to cases as we would like, and why they prefer to rely on more conventional analysis. Some cases are more obviously two-sided than others, the credit card cases for instance. But even when the concept is not explicitly mentioned in decisions, it is often present in the reasoning (an example is the FCC's approval of the merger of the satellite digital radio services Sirius and XM in 2008).

In platform markets, we observe some big multi-platform players, such as Apple, Google, Amazon, or Facebook, with distinct core businesses and overlapping activities. Do you think this multi-dimensional feature of the competition affects the ways these firms compete with each other?

I am not a specialist in strategy, but I think this is the case. These platforms started with very different objectives and business models, which affects their priorities and strategies in terms of pricing and the choice and organization of activities. Clearly, Google Shopping is organized very differently from the Amazon marketplace, reflecting their different competencies and services. I have always thought that part of the initial difference in e-book strategies between Amazon and Apple was due to Amazon's expertise in the domain of cultural goods.

Bruno JULLIEN is Senior Researcher at CNRS and the Toulouse School of Economics (TSE), and a senior member at Institut d'Economie Industrielle (IDEI). He is currently Scientific Director of TSE. His interests cover industrial organization, in particular in the domain of network economics, ICT and competition policy, as well as regulation, insurance and contract theory. He is recognized as a world-leading academic researcher on the economics of two-sided markets, a field he helped to develop. Bruno Jullien has published numerous articles in renowned journals such as Econometrica, Journal of Political Economy, Review of Economic Studies, and RAND Journal of Economics. He is currently co-editor of the Journal of Economics and Management Strategy and associate editor of the Geneva Risk and Insurance Review. He is a Fellow of the Econometric Society, a member of the Steering Committee of the Association of Competition Economics and of the Economic Advisory Group on Competition Policy of the European Commission. He is a fellow of CEPR, CESifo and CMPO. Bruno Jullien has also been advising firms and decision makers on regulatory and competition policy issues for more than 20 years. He graduated from Ecole Polytechnique, ENSAE and EHESS, and holds a Ph.D. in economics from Harvard University. He started his career as a researcher in Paris at CEPREMAP and CREST. He was also a Professor at Ecole Polytechnique. He joined the University of Toulouse in 1996. He has been Director of the research centre GREMAQ (1997-2004) and Deputy Director of Toulouse School of Economics (2010-2011). He received the Bronze Medal of CNRS, the "Palmes Académiques", the ACE best article award and the JIE best article award.

The Communications & Strategies No. 99 "The Economics of Platform Markets - Competition or Regulation?" is available!

Order n°99      Discover IDATE's publications & studies

DigiWorld Summit 2015

IDATE will contribute to the debate at the upcoming DigiWorld Summit on 17, 18 and 19 November, in Montpellier, with:

  • Fatima BARROS, Chair of BEREC
  • Carlo d'ASSARO BIONDO, President, EMEA Strategic Relationships, Google
  • Bruno LASSERRE, President of the Autorité de la concurrence
  • Eduardo MARTINEZ RIVERO, Head of Unit "Antitrust Telecom", DG Competition, European Commission
  • Sébastien SORIANO, President of ARCEP

Information & Registration:

www.digiworldsummit.com

5 Oct 2015

The economics of platforms in the digital transformation: ARCEP views


Published in Communications & Strategies n°99

Sébastien SORIANO

Chairman ARCEP

Interview conducted by Marc BOURREAU,
Telecom ParisTech

 

C&S: There is a hot policy debate today in Europe on whether we should regulate platforms. Some argue in favor of a "laissez-faire" approach because, due to strong innovation dynamics, they say, the dominant platforms of today will soon be replaced by new players, in Schumpeterian fashion. Others propose strong regulation of platforms, in terms of neutrality, portability of data, access, etc. Where do you think the right level of regulation for platforms lies?

Sébastien SORIANO: Whether or not an economic activity should have specific regulation is a matter of two cumulative factors: an economic factor (are there market failures?) and a political one (does this activity have a structural impact on our society and economy?).

There is no single answer for all platforms, because the term "platform" covers a great variety of actors and models: e-commerce platforms, social networks, search engines, application stores… The fact that the European Commission is currently investigating whether Uber is a transport service or a digital platform is a striking example of the lack of a consensus definition of what a platform is.

In my opinion, it is obvious that some digital platforms have today acquired such a significant influence over multiple segments of our economy that some kind of regulation is needed. But defining specific economic rules for every type of platform would be inappropriate: it would risk stifling the innovation process without adding any value, not to mention the potentially high cost of such regulation.

In the end, the question is whether we should regulate only a handful of major platforms. I believe that such regulation would help promote confidence in the digital economy and thus fast-track the development of those markets in Europe.

If platforms, or some platforms, should be regulated, what kind of regulation should be put in place? In other words, what kind of market failures calls for a regulatory intervention? Going further, which form of intervention do you think is preferable: ex ante regulation or ex post competition policy?

General rules already exist in consumer, commercial, competition and privacy law. The Booking.com case, handled in France by the Autorité de la concurrence, illustrates that the current legal tools are often sufficient. The real debate today is whether we need ex ante regulation, that is to say a specific regulatory framework adapted to a certain category of platforms.

To build such a framework, three essential qualities will, in my opinion, be needed:

First, regulation must be able to react quickly: the general law provides some answers, but its response times are often totally ill-adapted. Disputes between a platform and a startup or an SME should be settled within a couple of months.

Second, the framework must be agile: strict and detailed rules would soon become outdated, or simply be bypassed by some actors. Regulation should be articulated around a few general principles, with a regulatory institution in charge of ensuring their application.

Finally, regulation must form an alliance with the multitude: the digital economy is a complex and shifting sector, and regulation must take shape with the help of research communities, programmers, makers... We need to invent the concept of "crowd-regulation".

 The economics literature on platforms and two-sided markets shows that applying insights from the analysis of one-sided markets to two-sided markets might be misleading. For example, we know that it may be profitable (and socially optimal) for a platform to charge a very low price on one side to generate strong network effects for the other side. With "one-sided" glasses, such a price may look predatory, whereas with "two-sided" glasses, it could be viewed as just efficient. How can regulators account for these specificities of two-sided markets?

Infrastructure regulation has existed in France for close to 20 years, and has been applied to a great variety of sectors: railroads, energy, communication... The fundamental issue has always been to deal with network effects, a phenomenon that allows the largest network to constantly reinforce its dominant position. Regulation allows our society to benefit from the positive consequences of these network effects, while minimizing the drawbacks.

The notion of two-sided markets, with cross network effects, is only a refinement of those concepts. Of course, some of our regulation tools will need to be adjusted to the stakes and the specificity of those markets. But the fundamentals are the same, and the issue at stake is to regulate our digital economy's main foundations.
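The "two-sided glasses" example raised in the question can be sketched with toy numbers. The linear model below is a hypothetical editorial illustration, not part of the interview: with cross-side network effects, pricing one side below cost can raise total profit, so a one-sided predation test would misfire.

```python
# Toy model: participation on each side falls with its own price;
# advertisers value the number of users. All parameters are made up.

def participation(price, base, sensitivity, other_side=0.0, attraction=0.0):
    """Linear participation: lower own price and a bigger other side attract more."""
    return max(0.0, base - sensitivity * price + attraction * other_side)

def platform_profit(p_users, p_ads, cost=1.0):
    users = participation(p_users, base=100, sensitivity=20)
    ads = participation(p_ads, base=50, sensitivity=5,
                        other_side=users, attraction=2.0)
    return (p_users - cost) * users + (p_ads - cost) * ads

at_cost = platform_profit(1.0, 4.0)     # marginal-cost pricing on both sides
subsidized = platform_profit(0.0, 4.0)  # users served below cost ("predatory"-looking)

# The below-cost user price is the more profitable strategy overall:
assert subsidized > at_cost
```

A competition authority looking only at the user side would see a price below cost; the two-sided view shows the subsidy is recouped on the advertiser side.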

 There is at least one area of friction between telecoms and platform markets, which is the competition and/or complementarity between telcos and over-the-top (OTT) players. Can telecommunications regulation have a role in securing a level-playing-field between telcos and OTTs?

Whether as a client, a supplier or a competitor, every company subject to some form of regulation fears having to deal with Internet players who don't play by the same rules. This is especially true for the telecom and media industries, which are governed by sector-specific rules. Part of this fear is entirely justified: real issues are at stake, especially when telcos and Internet players are in direct competition.

However, we won't solve anything with downward alignment or total deregulation: a new balance must be established, and, in my opinion, part of the solution is precisely to be found by building a framework for platform regulation.

 A related topic is net neutrality. What is the current status of net neutrality regulation in Europe and in France?

The Internet has become a crucial collaborative space, tremendously important for our whole society and economy, and I believe it must now be considered a common good. The risk today is that some companies manage to distort this essential tool for their own profit and against the interest of other users. This is not science fiction or paranoid delusion: some essential privately-controlled bottlenecks have indeed emerged, and without appropriate regulation there is a real threat of a kind of privatization of the Internet.

Net neutrality rules precisely aim at preventing a specific category of actors, the telecom operators, from doing so. An ambitious set of net neutrality rules is in the process of being adopted in Europe. The European framework will be very protective and will rely on guidelines to be issued by BEREC. ARCEP will contribute actively to this work and will be in charge of applying the framework in France.

But if we really want an open Internet, we also need to prevent a situation where a few Internet giants could take advantage of their current position to dictate their own rules to the World Wide Web. This should be a necessary addition to the net neutrality framework, and without it, the job would only be half done, or maybe even less. Ask yourselves: what actors are the most worrying for the future of the Internet?

 Platforms are global players, whereas telcos are usually attached to a local market. Is it possible to regulate platforms at a national level, or should such regulation be supra-national?

The correct level to construct tomorrow's regulation is obviously the European one, and this work is currently underway via the Digital Single Market initiative. But each member state has the responsibility to contribute to this reflection, and I believe it would be appropriate to act first at the national level in order to better observe, understand, compare and assess actors' behavior in platform markets.

I would, however, advise against going too far at the national level. Only a European solution can avoid discrepancies of treatment between member states. Moreover, a European solution would be clearer for market players, and we need this clarity if we want them to invest in innovation in Europe.

Digital platforms, and the digital economy in general, raise new regulatory challenges. Yet, the nature of those challenges, and the potential harm for our society remains poorly understood. France mustn't underestimate the complexity of the issues, and we should give ourselves the means to accumulate the necessary experience and expertise to participate in the debate.

 One possible concern in platform markets is that due to the strong dominance of one firm or a few firms, competition might not emerge. What can be done to protect the innovation process and potential entry by new (European?) players?

This ultimately comes back to the issue of dealing with the network effects that help lock in dominant positions in some markets. One of the challenges for any regulation is to bypass those effects in order to keep the competitive game open. There is no single right answer, but the solution typically lies with regulatory tools such as portability, interoperability, open formats...

Another crucial aspect is vertical integration: in the last few years, some Internet giants have developed new activities related to their core business and have constructed entirely closed ecosystems. This is not a problem in itself, but it is imperative that it be done fairly, without the dominant actor leveraging its position to stifle competition in other markets.

Similar problems have been addressed with very strong remedies in the past: structural separations were imposed on railway and electrical companies, and some companies were even dismantled. This is not to say we should go that far in platform markets. Most likely, platform regulation can bring more subtle remedies, adapted to platform specificities.

Sébastien SORIANO was appointed Chairman of ARCEP (Autorité de régulation des communications électroniques et des postes) on 15th January 2015, for a six-year term. Born in 1975, Sébastien Soriano is a chief engineer from École des Mines (the French national school of mining engineers) and graduated from École Polytechnique. He then spent most of his career in competition and telecoms regulation. In 2012, he was Head of Fleur Pellerin's cabinet, the then French Minister for SMEs, innovation and digital economy. Prior to his appointment at ARCEP, he was Special Advisor to the French Minister for Culture and Communication.

The Communications & Strategies No. 99 "The Economics of Platform Markets - Competition or Regulation?" will soon be available!

Order n°99      Discover IDATE's publications & studies

More information about IDATE's expertise and events:

www.comstrat.org    www.digiworldsummit.com    www.digiworldweek.com

28 Sep 2015

The economics of platforms in the digital transformation: What does Google think?


Published in Communications & Strategies n°99

Fabien CURTO MILLET

Director of Economics, Google

Interview conducted by Yves GASSOT,
CEO IDATE DigiWorld

C&S:  How would you define a platform? Is it a specifically Internet phenomenon?
Fabien CURTO MILLET: Actually, platforms are not an Internet phenomenon.  A platform is simply an environment where two or more groups of economic agents come together to transact in some manner, so the concept is extremely generic: an example of a platform commonly used in the economics literature is that of singles bars!  There are many economically important platforms outside tech.  You can think of a free-to-air television channel as a platform, bringing together viewers and advertisers; the same goes for newspapers.  And within tech, there are many platforms that historically had nothing to do with the web.  An operating system can be analyzed as a platform, bringing together application developers and users.  So the concept has wide applicability.

It is true, however, that the latest crop of web-era platforms has attracted a great deal of public attention.  I attribute that in large part to the simplicity of use and degree of innovation of many of these businesses, which revolutionize everyday tasks and disrupt existing approaches.  Obvious examples include apps like Uber, BlaBlaCar and Lyft in transportation, or AirBnB for accommodation.

Google operates several platforms, starting with its search engine and Google market. Are there any others you can think of?
Many of Google’s activities involve the creation and/or operation of various platforms.  In the ads space, we have for many years run AdSense, an ad network bringing together users and advertisers on third party websites, while allowing publishers to monetize their content.  Similarly, YouTube brings together content creators, viewers and advertisers.

Academic work on platform economics tends to fall into two strands: work on multi-sided markets, which emphasises the intermediary role platforms play between multiple parties, and analyses of platforms as strategic necessities for capturing innovations created by others. Do you think that is a fair assessment?
Much of the literature is indeed concerned with analyzing the role of platforms as a matchmaking device between their various types of participants.  This is not surprising, as the art of a platform operator is precisely to figure out how best to balance the interests of parties on various sides.  In the context of web search for example, this often involves search being provided to users for free, but with advertisers on the other side being charged (usually when their ads are clicked on by users, under the so-called Cost Per Click pricing model).  This is the case of search services like Google or Bing (which have clearly demarcated spaces for ads) for example; the point also applies to more specialized players like Booking.com or Tripadvisor.

But the literature is vast and touches on many interesting topics.  An example is the technical question of how to carry out market definition in a platform context.  One issue there is that the standard market definition test normally looks at whether customers switch away in response to a given percentage price rise.  But in the context of platforms, the price charged to one side is often zero.  In this case, how should the test be adjusted in practice?
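The degeneracy mentioned above can be shown in a couple of lines. The sketch below is an editorial illustration with hypothetical prices; applying the rise to the total price level across sides is one adjustment discussed in the two-sided-market literature, not a method endorsed in the interview:

```python
# The SSNIP test ("small but significant non-transitory increase in price")
# asks whether a hypothetical monopolist could profitably raise price by
# 5-10%. With a zero price on one side, the percentage increment is
# identically zero, so the standard test says nothing on that side.

def ssnip_increment(price, pct=0.05):
    """Absolute price rise implied by a percentage SSNIP."""
    return pct * price

user_price, advertiser_price = 0.0, 2.0  # hypothetical two-sided prices

assert ssnip_increment(user_price) == 0.0  # degenerate on the free side

# One adjustment: apply the rise to the total price level across sides,
# then ask how the increment would optimally be allocated between them.
total = ssnip_increment(user_price + advertiser_price)
assert total > 0.0
```

The point is structural: any purely percentage-based test collapses on a zero-price side, so the increment has to be defined at the level of the platform as a whole.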

These are only examples, and while the literature is already vast it is also evolving, so I think we can look forward to additional insights in this area.

How do you explain the fact that the GAFA quartet (i.e. Google, Apple, Facebook and Amazon) is much less powerful in certain markets – notably Russia, China and even Japan and South Korea?
These four companies have obviously achieved great success in many areas, and are engaged in formidable competition across multiple products and services.  Spaces where some or all of these firms compete include search, cloud computing, social networking, operating systems, advertising, mobile phones and tablets.  If you take cloud computing, for example, there is currently a great battle between Amazon, Google, Microsoft and other firms like SAP and Rackspace, with many massive rounds of price cuts and quality improvements having characterized the space in recent years.  So it is very difficult to give you an overall answer covering such a broad scope of activities!

Since you mention specific countries, it is interesting to note that they have also developed a number of strong competitors in a range of tech areas.  To take search, for example, we have Russia’s Yandex, South Korea’s Naver and China’s Baidu.  But it would be unfair to label these as local players, since they are also engaged in aggressive plans to expand internationally -- Baidu is developing in Brazil, while Yandex is already present in several countries and has recently expanded by serving searches in Turkey.  As for the success of the “quartet” in the countries you highlight, it really depends what you are looking at.  Just take the most recent earnings release from Apple -- they reported revenue growth of 112% in “Greater China” (mainland China, Hong Kong, and Taiwan) and iPhone unit growth of 87% in that area.

Some see the eruption of new players in vertical industries – prime examples being Uber in transportation or Airbnb in the tourism business – as the emergence of new platforms and new sources of competition for the Internet's leading horizontal platforms. Do you share that point of view?
The digital economy is rife with entry and innovation.  The two examples you mention are a case in point.  Another notable story is that of Snapchat, a mobile-only video and photo sharing service that came from nowhere, and into an already quite busy space.  But it became wildly popular at breakneck speed.  Snapchat users today share over 700 million photos worldwide per day, which is reportedly larger than the combined volume of Facebook and Instagram -- truly remarkable for a service that did not exist five years ago and that is only available on mobile!  So I absolutely agree that these new entrants have further turned up the competitive heat on existing firms, including Google.  If you’re looking for a rental property for your next holiday in Provence, you might perhaps go directly to the AirBnB website or app, instead of running a search on Google or Tripadvisor.

This broad phenomenon in itself is not particularly new for the digital economy -- for many years, companies with a more specialized focus have been competing with firms having broader business models, like Google.  Google aims to answer any question that a user might have, whereas players like Tripadvisor focus more narrowly on particular content categories (especially the more commercial queries).  Another case in point is Amazon, which is of course a very major competitor in shopping queries.  Already in 2012, a Forrester study found that some 30% of online shoppers in the US started researching their latest purchase on Amazon, versus 13% on search engines.

Many fundamental factors drive these competitive developments.  First, barriers to entry into many digital activities are generally low and dropping fast.  One reason for this is the development of cloud computing: it used to be the case that firms needed to invest in their own server infrastructure in order to procure computing power, therefore incurring fixed costs. Cloud computing does away with that, by turning this fixed cost into a variable cost – and a low one at that, given the competition I mentioned earlier in this area.  This is precisely one of the ingredients behind Snapchat’s success, as they run entirely on the Google cloud.  Second, switching costs are pretty low – it is generally trivially easy and inexpensive for users to try out a new app or website.  We often say at Google that “competition is just one click away” – although we should perhaps modify that line for the mobile era and say that it is “one tap away”: according to comScore, almost 90% of mobile Internet time in the US is spent on apps rather than in the browser – truly a revolution.  Such ease of access to competing services means that we observe extremely high levels of “multi-homing”, i.e. the presence of a user on multiple competing platforms at the same time (e.g. Twitter and Facebook).  I think these fundamental forces are here to stay, so we should have the opportunity to observe many more examples of disruptive entry in the future.
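The fixed-to-variable cost shift described above can be illustrated with toy figures (hypothetical numbers, not from the interview):

```python
# Owning servers: a large upfront fixed cost plus a tiny per-request cost.
# Cloud: no fixed cost, but a higher per-request cost. All figures made up.

def own_servers(requests, fixed=100_000, per_request=0.0001):
    """Total cost when the firm buys its own infrastructure."""
    return fixed + per_request * requests

def cloud(requests, per_request=0.0005):
    """Total cost when computing is rented as a pure variable cost."""
    return per_request * requests

# A small entrant (1 million requests) avoids almost all of the fixed cost:
assert cloud(1_000_000) < own_servers(1_000_000)

# Only at very large scale does owning infrastructure pay off:
assert own_servers(1_000_000_000) < cloud(1_000_000_000)
```

This is the entry-barrier argument in miniature: the cloud makes small-scale entry viable, and the cost disadvantage only appears at volumes an entrant does not yet have.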

Net neutrality debates have resulted in regulations that limit the risks of ISPs discriminating against certain kinds of content. How do you respond to those who want to see these neutrality obligations extended to platforms? For instance in the choice of applications that app stores host, or the neutrality of algorithms?
Things like the choice of applications hosted or the operation of algorithms go to the very heart of what a platform does.  “Neutrality” is a nice-sounding word, but it’s essentially in the eye of the beholder.  The purpose of an algorithm is precisely to rank things from more to less relevant.  Who is to say that one choice is better than another?  And on what criteria?  Is it neutral to rank restaurants by reference to distance to the user, or should we use review counts instead?  Or maybe both?  And how should one compare restaurant results and web page results?  You very quickly get into rather abstract and arcane debates as to whether a particular approach is really treating like-with-like and so on.

Fortunately I believe these are questions which do not need resolving.  Most economists would agree that regulatory intervention is only appropriate in circumstances where competition fails as a disciplining force.  And there is frankly very little indication of problems across the digital economy.  In addition to the rapid entry I discussed in my previous answer, I think any objective observer would agree that the speed of innovation in the digital economy is extremely high.  This is for me a fundamental indicator of the competitive health of a sector – it ought to act a bit like a thermometer to determine whether a patient is sick, and to guide enforcement.  After all, as the famous English economist and Nobel laureate John Hicks once observed: “The best of all monopoly profits is a quiet life”.  There is precious little that seems quiet about the digital economy today.

What differences do you see in the exchange of ideas taking place in Europe and the United States over platforms and the inherent risks of dominant positions?
I think that the exchange is a lot more nuanced in both places than it is often portrayed.  From a Google perspective, we have faced antitrust scrutiny on both sides of the Atlantic -- the Federal Trade Commission in the US thoroughly investigated many parts of our business in great depth (notably touching on search, patents and ad campaign portability), leading to voluntary commitments in some areas in January 2013.  In Europe, we are obviously working with the European Commission in the context of its ongoing antitrust investigation.

And while many commentators would like to cast current events in terms of various arm wrestling matches between European regulators and American tech companies, this unduly simplifies reality.  For example, Germany’s Monopolkommission (Monopolies Commission) recently concluded a wide-ranging investigation into competition in digital markets.  In the context of search platforms, this independent agency noted that “search engines’ low degree of user lock-in in comparison with other platform services (e.g. social networks), and the low degree of advertiser lock-in caused by network effects means that the search platform’s attractiveness from a user perspective is of key competitive importance, and this explains why even search engines with high market shares have an interest to further develop their offering with their users in mind, in order to secure their market position going forward”.  Moreover, they expressed a clear view with regard to intervention: “The Monopolies Commission takes the view that a purely preventive regulation – irrespective of potential abuses – is not currently warranted. This holds true in particular for a regulation of search algorithms or regulatory unbundling instruments”.

Finally, I would take issue with the idea that there is an “inherent” risk to the emergence of dominant positions.  I am sure that companies like MySpace or the now-defunct Friendster have views on the question, given how at one point they both towered over the social networking space.  And I am always greatly amused by old press cuttings calling winners in one area or the other -- for example, Fortune declared in a 1998 article that “This much is clear: Yahoo! has won the search-engine wars and is poised for much bigger things”.  1998 was of course also the year when Google was founded...  If there is anything certain in the digital economy, it’s that competition often comes from where you least expect it and failure to innovate faster than your competitors is the real “inherent risk.”

 

Fabien CURTO MILLET is Director of Economics at Google, where he has worked since 2011. He reports to and works closely with Chief Economist Hal Varian on the development of data-driven insights and on research to evaluate the economic value of Google and the Internet. He also leads economic analysis in all competition and regulatory processes involving Google at a global level. Fabien was previously a Senior Consultant in the European Competition Policy Practice of NERA Economic Consulting, where he worked from 2004. During that time, he advised in major European merger control processes such as ABF/GBI, Thomson/Reuters and Universal/BMG. His experience spans a wide variety of business sectors, including: airports, financial services, mining, music publishing, pay TV, print media, retailing, and satellite communications. Fabien was educated at Oxford University, where he obtained a BA in Economics and Management, an MPhil in Economics, and a Doctorate in Economics. For two years he was a Lecturer in Economics at Balliol College, Oxford. He further obtained a Postgraduate Diploma in EC Competition Law from King's College, London.

The Communications & Strategies No. 99 "The Economics of Platform Markets - Competition or Regulation?" will soon be available!

Order n°99      Discover IDATE's publications & studies

More information about IDATE's expertise and events:

www.comstrat.org    www.digiworldsummit.com    www.digiworldweek.com

6 Jul 2015

Interview with Steve UNGER, Group Director and Board Member at Ofcom


Published in COMMUNICATIONS & STRATEGIES No. 98

 

Steve UNGER

Group Director and Board Member at Ofcom;
UK Regulator, London
 

 

C&S:  Is the SMP regulatory framework fit for purpose given the competition among telecom providers and between telecom operators and online service providers?
Dr UNGER: The SMP framework has served us well over the years. It is a good starting point for the Framework review that is about to start. However, there are areas where we need to build on it.
For example, we need to ensure that when analysing the market power of traditional network operators, we take into account the presence of new communications providers, delivering services such as voice and messaging over the top of the internet. This is something we can do within the current Framework, but it is also part of a broader debate about the need for a level playing field between network operators and internet-based providers. It may be that expanding the scope of the SMP analysis in this manner results in a reduction of market power, and it’s clearly important that we consider this possibility. There is a separate question about whether or not there are new bottlenecks created by internet-based providers and whether we have the right tools to deal with this.
A more difficult issue is that whilst the SMP framework is an effective means of addressing concerns arising from single firm dominance, it does not deal effectively with oligopolies. This is a problem because a key market trend for both fixed and mobile is towards a limited number of end-to-end competitors – more than one, but not many. In some circumstances this is fine, in that a limited number of competitors is sufficient to deliver a good consumer outcome, and there is no need to intervene. In other circumstances however the outcome might be poor. We need the right tools within the framework to distinguish these cases, and intervene where appropriate. At present the only tool available is the concept of joint dominance, and I don’t think this is sufficient.

Are there already some ideas for developing appropriate tools for dealing with oligopolies?
BEREC has recently published a report on this matter, and I think this provides a good starting point for the debate we need to have. The report distinguishes two questions: whether there is joint dominance, associated with tacit collusion, and what is the threshold to prove it; and whether we have situations where there is no tacit collusion, but uncoordinated behavior within a tight oligopoly still results in a poor outcome. The report then focusses on the second of these questions, which is the one that is not currently addressed by the European Framework.
It is important to emphasise that tight oligopolies may still result in a good outcome. For example, you may have a small number of networks but still observe effective competition, including the provision of wholesale access on a commercial basis. However, it is not difficult to imagine circumstances where there is more limited retail competition, either between a single incumbent telco and a cable operator, or between a small number of vertically integrated MNOs.
The BEREC report sets out a number of criteria which one might use to distinguish between ‘good’ and ‘bad’ tight oligopolies. It also draws an interesting parallel between these criteria and the SIEC test applied in mergers, which serves a similar purpose. What we now need to do is consider in more detail how this thinking might be applied in practice, and what evidence would be required to do so.

Could symmetrical regulation replace or complement asymmetrical regulation in these matters?
I’m afraid I don’t even like the term ‘symmetric regulation’. It sounds benign, since it implies consistency of approach, but that is often not what it means in practice. What it means in practice is that regulation is applied to all service providers in the market, regardless of whether a particular provider has market power.
Such a blanket approach to regulation might be appropriate in circumstances where there is a market-wide market failure. For example, high barriers to switching might necessitate a market-wide intervention to improve switching processes. But where there is not a market-wide market failure, I believe very strongly in the principle that any regulatory intervention should be proportionate, and targeted at the problem you’re trying to solve.
I therefore find it odd that, within the current framework we are able to address concerns arising from single firm dominance, or a rather narrowly defined joint dominance, but that where these fail our backstop position is to regulate everyone in the market. We need to find a more sensible middle ground.

Some observers have argued that symmetric regulation has already proven its value for dealing with access problems (e.g. interconnection). Can we apply this comparison to issues dealing with access networks?
I’m not sure the comparison is valid. Remedies which mandate interconnection (or other forms of interoperability) are I think usually imposed because there is a risk that network effects will result in the market tipping to a single provider. In those circumstances it may well be appropriate to impose a market-wide remedy, since the problem you’re trying to address is one that arises from the way that the market as a whole operates.
To put it another way, I think we need to distinguish between those forms of network access which are designed to address market failures associated with network effects, and those forms of network access designed to address market power. The former may have to be market-wide, the latter should be targeted at the source of market power.

Steve UNGER is Ofcom's Chief Technology Officer, and is also the Group Director responsible for Ofcom's strategic approach to communications regulation. His group is responsible for critically evaluating external market and regulatory developments, and leading the process of setting Ofcom's strategic priorities. He is also responsible for several specific policy areas, including Ofcom's work on Communications Infrastructure. Steve previously worked in industry – for two technology startups, both of which designed and operated their own communications networks, and as a consultant advising a variety of other companies on the commercial application of new wireless technologies. He has a Physics MA and a Ph.D. in Astrophysics.

The Communications & Strategies No. 98 "A review of SMP regulation: Options for the future" is now available!


14 Apr 2015

Interview with Hal VARIAN, Chief Economist at Google


Published in COMMUNICATIONS & STRATEGIES No. 97


Hal VARIAN

Chief Economist at Google;
Emeritus professor at the University of California, Berkeley

C&S:  What are the biggest challenges for governance/regulation created by growth of the big data market? Are there big differences between the US/Chinese and European approaches to big data opportunities?
Hal VARIAN:  There are policy issues relating to data access and control that arise constantly.  This generates a lively debate, to say the least.  As an economist, I would like to see serious benefit-cost analysis guide regulatory policy.

What are the most important skills sets for those who need to make sense of results of big data analytics?
Statistics and machine learning are most obvious.  But in order to put analysis to work, communication skills are critically important.  To be effective, a data analyst needs to turn data into information, information into knowledge, and knowledge into action.  You can't do this without communication.

What are the biggest opportunities for business and are businesses able to make effective use of big data to improve their margins?
As in every business, it is imperative to understand your customer.  When you can draw on computer mediated transactional data, it is possible to gain a deeper understanding of the customers' needs than was previously the case.

What has big data analytics to learn from mainstream econometrics and what can big data analytics contribute to mainstream econometrics?
Econometrics can draw on some of the powerful techniques of predictive analytics that have been developed by the machine learning community.   These tools are particularly helpful when dealing with data involving nonlinearities, interactions, and thresholds.
Econometrics, on the other hand, has focused on causal inference from its very early days.  Techniques such as instrumental variables, regression discontinuity, and difference-in-differences have been widely used in econometrics but, to date, have not been used in the machine learning community.
Finally, the statistical field of experimental design will be valuable to both communities, as computer mediated transactions enable true randomized treatment-control experiments, which are the gold standard for causal inference.
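The logic behind one of the causal-inference techniques mentioned above can be made concrete with a small numerical sketch. The Python snippet below (toy numbers, purely illustrative and not drawn from any real study) shows the arithmetic of a difference-in-differences estimate: the control group's change over time proxies for what would have happened to the treated group absent treatment.

```python
# Minimal difference-in-differences sketch on illustrative group means.

# Mean outcomes for treated and control groups, before and after
# an intervention (all values are hypothetical).
treated_before, treated_after = 10.0, 16.0
control_before, control_after = 9.0, 12.0

# Each group's change over time.
treated_change = treated_after - treated_before   # 6.0
control_change = control_after - control_before   # 3.0

# The difference of the two differences is the estimated causal
# effect of the intervention, under the "parallel trends" assumption
# that both groups would otherwise have evolved similarly.
did_estimate = treated_change - control_change
print(did_estimate)  # prints 3.0
```

In practice this estimate is usually obtained by regressing the outcome on group, period, and their interaction, which also yields standard errors; the arithmetic above is the interaction coefficient in that regression.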

What should be added to standard US Ph.D. programs in economics to make the students big data literate?
There are now very good textbooks, online tutorials, and tools that make it relatively easy to put together a course on machine learning. In addition, virtually all computer science departments and many statistics departments offer such courses.

Hal R. VARIAN is the Chief Economist at Google. He started in May 2002 as a consultant and has been involved in many aspects of the company, including auction design, econometrics, finance, corporate strategy and public policy. He is also an emeritus professor at the University of California, Berkeley in three departments: business, economics, and information management. He received his S.B. degree from MIT in 1969 and his MA and Ph.D. from UC Berkeley in 1973. Professor Varian has published numerous papers in economic theory, econometrics, industrial organization, public finance, and the economics of information technology.

The Communications & Strategies No. 97 "Big Data: Economic, business & policy challenges" is now available!


5 Dec 2014

[ITW] Jean-Louis MISSIKA, Deputy Mayor of Paris in charge of urban planning

Published in COMMUNICATIONS & STRATEGIES No. 96


 

Interview with Jean-Louis MISSIKA, Deputy Mayor of Paris in charge of urban planning

Conducted by Yves Gassot, CEO, IDATE-DigiWorld Institute

C&S: The Smart City concept is often criticized for seeking new markets for digital technology rather than tackling the phenomena that make the management of our cities increasingly complex. What is your view?
Jean-Louis MISSIKA:  I do not think it is a fair criticism. Digital technologies have undeniably created the conditions for important changes in our ways of living, inhabiting and consuming. They are now part of our everyday lives and, surely, their impact will increasingly spread throughout the multiple ways we, as humans, interact.
Beyond what they create as opportunities for individuals, digital technologies are fundamental for cities – and among them the city of Paris. Urban systems are confronted with major challenges on the economic, social and environmental fronts. Energy transition, and more generally the management of scarce resources, climate change and the biodiversity challenges drive us to analyze all the solutions available now and in the future to build a more sustainable city - the city of tomorrow. Digital technologies and, in particular, their potential in terms of coordination and rational use of scarce resources, are high on the policy agenda. This is not simply to create a market for them; this is about using all the possibilities offered by technology.
I definitely think it can be a win-win development for both the city and the companies, if these firms work with those involved in the challenges of the city, like urban planners and system operators.
Additionally, we are witnessing a boom of young, innovative companies and startups, but also of citizens themselves – both from Paris and outside – who develop digital solutions for the city. This is clear evidence of what is at stake here: it is for local authorities to allow the digital revolution to spread through society, so that innovation does not only occur through large companies but also thanks to citizens' initiatives.

C&S: How would you rate the strategy of Paris, using a broad comparison between the very holistic, top-down approach of projects emerging in the context of new towns and in Asia, and the more bottom-up approach that seems to be primarily based on using multiple data repositories ('open data') associated with urban systems?
J.-L. M: We are definitely leaning towards the "bottom-up" approach to building Paris as a smart city.
Collective intelligence is an effective way to source the best ideas. And it does work well in Paris in part because we provide people with the appropriate means to implement projects: workspaces, coaching, financing, public spaces to experiment… and data.
This is one of the pillars of a smart and sustainable city: a place where the technology is used for people, by people, to include them in the life of the city and in the process of public decisions.
Let me refer to a recent project. We have worked over the six months since the election to achieve greater transparency and citizen involvement in City operations, by creating a platform for the development, discussion and adoption of community projects. These are chosen by Parisians and financed through a participatory budget: 5% of the total investment program, which represents 426 million euros, has been earmarked for projects chosen directly by Parisians, through a vote.
Within the next months, Parisians will even be able to share the benefit of their expertise and creativity by suggesting investment ideas directly.
Another way to involve people is crowdsourcing. We have developed the "DansMaRue" mobile application, which Parisians use to signal local problems and even identify spots for "urban greening" (buildings, walls, squares, abandoned urban places). It is this type of exchange with Parisians that we want to implement to make our City better.
This is a genuine urban revolution in the making: the role of local governments of world-cities is to understand, support and leverage the benefits of this revolution. European cities, I believe, have a major role to play in leading this transformation. Their governance is well geared towards citizen involvement and this should alleviate the risks of the "systemic city" or the "cybernetic city".

C&S: Do you have any models or at least references to guide your project for Paris?
J.-L. M: Many interesting models exist throughout the world and we are discussing extensively with many cities facing the same challenges.
That being said, from our discussions we retain one key conclusion: each of these cities has developed its own good practices within its own cultural frame. I think there is no single model of the smart city, and it would be ineffective to copy-and-paste alien models or ready-to-use solutions in a fast-changing environment.
We have our own model based on an iterative approach that builds on successful experiments in Paris. We have been working for several years to make Paris a strong city in the digital sector and a breeding ground for innovation. I would say that over the last 10 years or so we have created the conditions for the emergence and development of a strong ecosystem. Thanks to all these efforts, Paris has experimented a lot in recent years and is now a world leader in innovation, and most certainly the top European city.
There are well-known examples of successes such as Velib', Autolib' and Paris Wifi, among other experiments such as heating a residential building with the energy produced by data centers, data visualizations of the Paris transport system, smart street furniture… Many of these locally-grown success stories are helping to build our own smart city project and to deploy these experiments on a larger scale as standards for the city of tomorrow.
Paris is actually creating international benchmarks for the smart city, though it is not as recognized as it should be. Through calls for innovative projects led by the Paris Region Lab at the initiative of the City, we facilitate the emergence of intelligent solutions on subjects as diverse as intelligent street furniture, energy efficiency or home support for seniors. Paris provides entrepreneurs and businesses of all sizes with a single territory open to trials. It also runs a network – an open innovation club – that organizes meetings between the largest companies and startups. We are even deploying this initiative in other French cities, at their own request.

C&S: What priority initiatives have been selected for the Smart City project in Paris?
J.-L. M: One billion euros will be invested by 2020 in order to make Paris the international benchmark in innovation related to land use, participatory democracy, sustainable development, the digital economy and energy transition.
Our smart city approach is threefold: open city (open data), digital city (potential of digital technologies and their application to improve the quality of life of Parisians) and the inventive city (which is built by transversal networks and innovation).
Each of these pillars shall contribute to our four main targets.
One of the most important is food supply, because no city in the world is capable of ensuring its food self-sufficiency in the present state of our know-how, and food is responsible for almost 40% of our ecological footprint. We have recently launched a call for projects titled "Innovative Urban Greening", which consists, among other objectives, in experimenting with the urban agriculture of the future.
Another challenge is the energy of the city. 90% of the energy of the Paris metropolis is provided by fossil fuels or nuclear power. From a territorial point of view, it is imported energy. In addition to the ongoing effort on renewable energies (with a certain success for geothermal energy), the focus is increasingly on energy recovery. We must go further and draw on the city's hidden resources. These resources are at the core of the circular economy: waste produced by someone is a resource for someone else.
An example in Paris is the Qarnot Computing start-up, which has invented a radiator-computer: by dissipating all the energy consumed by data processors in the form of heat, the Q.rads make it possible to heat any type of building (housing, professional premises, collective buildings) free of charge and ecologically, according to the needs of its users. A low-rent housing building has been fitted out with these Q.rads radiators: the inhabitants no longer have to pay for their heating, and their ecological footprint is zero.
The third challenge is urban mobility. This can no longer be framed as a simple choice between the car and collective transport. New systems of mobility are emerging: they concern the technology of vehicles (electric cars, rubber-tired trams), but above all the technology of services (rental among individuals, sharing, car-pooling, multimodal applications, etc.), and they often open the way for the emergence of new value chains and new players.
In Paris, the massive adoption of Autolib' and Velib' shows the power of attraction of sharing and self-service.
The last challenge is planning for the future of urban spaces and architecture. In order to take into account new ways of working, living and trading, we need to be able to test multifunction buildings that combine housing, offices, community spaces, showrooms and services to people. This mixed use on the scale of a building implies more flexible Local Urban Plans and an adaptation of safety rules. The new way of working implies home offices, mobile offices, coworking and remote-working centers. The new way of living requires community spaces in the building, a greater use of roofs, community gardens, shared utility rooms, services to the person, sorting and recycling. New trading methods integrate ephemeral shops, shared showrooms and fab labs.

C&S: Paris as a city, and you in particular, have worked hard to ensure that digital is also an opportunity to redevelop business in Paris, which risks becoming a purely residential city. What connection do you see between support for start-ups, incubators and nurseries, and a policy of the Smart City type?
J.-L. M: The City of Paris is an innovative city at the forefront of digital technology, as evidenced by the ranking of PricewaterhouseCoopers. The emergence of Silicon Sentier in the heart of Paris in recent years, or important events such as Futur en Seine and the Open World Forum illustrate the growing dynamism of our city in terms of digital innovation.
Notably, in our incubators, many innovations are related to digital technologies. They create value in all areas of the city and aim to serve people in a better way.
As an example, the Moov'in city competition launched in June 2013 by the City of Paris, in partnership with the RATP, SNCF, JC Decaux and Autolib', aimed at bringing out new web-based and mobile services focused on mobility in Paris and the Ile de France region. One hundred ideas were generated through this process; seven of them were awarded a prize. Among them, the Paris Moov' solution is a route-calculation application that integrates all public transport modes available in the Ile de France region and suggests activities once users arrive at their destination.
Some incubators and clusters that we support are directed specifically to the city and urban services (energy, transport, water, logistics, etc.).
This is for example the case of the Paris Innovation Massena incubator, where we work with large corporations like SNCF or Renault. We help them, and they accompany us in building our Smart City project.
In addition, the creation of incubators and fab labs continues with determination and ambition, particularly with the converted MacDonald warehouse or the Halle Freyssinet, the future world's largest incubator (1,000 start-up companies). New places at the forefront of innovation, combining incubators and coworking spaces, will continue to be created, and this ecosystem of innovation will be internationalized. This is the only way for Paris to rank among the most attractive and competitive cities in the world.

C&S: How do you pilot a 'Smart City' project? (Is it through a task force outside the main city services? Or through a cross-functional structure involving all the services?) How did you structure management of the Paris project?
J.-L. M: The smart city is a cross-cutting subject, which means we have no other way to do it than to maintain good interaction among the administrative units.
All large cities are confronted with the issue of finding the appropriate scale of governance and new governance tools. The model of organization of local administrations is outdated. The large vertically-organised departments (urban planning, roadways, housing, architecture, green spaces) are facing the challenges of intelligent networks, project management and citizen participation, which require a much more cross-cutting and horizontal coordination.
Paris has historically been organized in large vertical services to deal, for example, with roads, architecture, urban planning and so on. For this reason, we have chosen to address the question of the Smart City within the City of Paris through a steering committee composed of elected officials and a cross-cutting taskforce driven by the General Secretariat – the body that oversees all departments.
This "smart city" mission is a project accelerator. Its aim is to raise awareness of this subject within and throughout the services, but also to manage the relationship with our key partners in major urban infrastructure. It supports the deputy mayors in each of their missions and brings global thinking to structure a coherent overall strategy out of the multiplicity of initiatives and concrete actions led by all the services.

C&S: On a more mundane level, the deployment of digital applications in the city is also organized on the basis of a telecommunications infrastructure (fiber access, 4G, WiFi, ...). Are you satisfied with the existing equipment and deployments underway at the initiative of private operators? How do you cooperate with them particularly in light of concerns over radio transmitters?
J.-L. M: While the City of Paris has no formal jurisdiction over this subject, we consider it our role to ensure that all Parisians can access clear and transparent information on the deployment of base stations, and to take their concerns into account while ensuring the development of new technologies. This led us to sign a mobile telephony charter with the telecom operators in 2003. Its latest version, in 2012, set maximum exposure levels to radiofrequency fields and clear procedures for consultation with residents.

Jean-Louis MISSIKA is deputy mayor of Paris in charge of urbanism, architecture, Greater Paris projects, economic development and attractiveness. From 2008 to 2014, he was deputy mayor of Paris in charge of innovation, research and universities. Prior to his local mandates, his professional career included various managerial positions in the public and private sectors.

 

 

28 Oct 2014

The future of patents in communication technologies: interview with Ruud PETERS, Philips

Published in COMMUNICATIONS & STRATEGIES No.95

Ruud Peters

 

Conducted by Yann MÉNIÈRE
Professor of economics at MINES ParisTech,
head of the Mines-Telecom Chair on "IP and Markets for Technology", France

 

 

C&S:  Could you please introduce yourself and the organisation you are working for/have been working for?
Ruud PETERS: I first joined the Philips Intellectual Property & Standards (IP&S) organisation in 1977, with a background in physics. After taking various positions in the technology and consumer electronics sectors, I was appointed CEO of Philips IP&S in 1999. There I have been responsible for managing Philips' worldwide IP portfolio creation and value capturing activities, and responsible for technical and formal standardization activities in the fields of consumer lifestyle, healthcare, lighting and technology until my retirement at the end of 2013.
I remain affiliated to Philips as a Strategy & IP adviser, reporting to the board member responsible for Strategy and Innovation. I also represent Philips on the boards of various companies which Philips created or in which it took a stake in the past. Besides my Philips affiliation, I devote about half of my time to other governing and consultancy roles as a board member of a number of international companies and organisations related to IP.

C&S:  What is your/your organisation's approach to IP and patents from a business perspective?
R.P.: Philips has an integrated approach to IP asset management. This includes trademarks, domain names and designs, which are often treated separately in other companies. Philips also has a proactive view of the role of IP as a creator of value. In this view, building an IP portfolio should not be a goal per se, but a lever to support growth and profitability. Accordingly, Philips IP&S is closely involved in the business decisions being made around IP rights. It is responsible for the creation and management of these rights, but also for the anti-counterfeiting strategy, the financial aspects of licensing agreements and formal standards-setting issues.

C&S:  What is your opinion about the role of the patent system in the economy, and the benefits it can bring to the society?
R.P.: Today more than ever, the economy needs people who are prepared to take the financial risk to invest in new ideas and innovative activities that contribute to welfare. Those people need a reward for the risk they take, and it is the role of the patent system to provide such incentives.
This incentive function of patents should be understood in a broad sense. Patents are highly flexible instruments that open up a broad set of strategic choices. Recouping investments by securing exclusive use of inventions is certainly one of these options, but patents can also be used more proactively. They can be opened up for use by others through licensing programmes or the creation of joint ventures, creating valuable economic activity in the process. In other words, they are the necessary currency for the exchange of ideas and for collaboration.

C&S:  Recent years have seen frequent patent battles and controversy in the digital area. Is there something specific to this technology field with respect to patents and IP?
R.P.: Yes and no. On one hand, the digital area does have some specific features with respect to patents and IP. It is first subject to a continuous trend towards higher IP density, with many devices each embodying a growing number of patented technologies. It is moreover organized around a limited number of platform products – such as operating systems – that enable devices to interoperate. These platforms are subject to strong network effects: they become more attractive the more users they have and the more compatible products are available (such as apps in the case of smartphones). They can also generate strong economies of scale in manufacturing. As a result, the competition between platforms is “tippy”: only the few companies that manage to capture enough market share quickly can eventually establish a profitable business. Against this background it is not surprising that companies compete fiercely to promote their platforms, including, inter alia, a heavy use of patents in the early stages. One can nevertheless expect patent battles to recede once market positions have stabilized.
On the other hand, similar evolutions may take place in other sectors – such as the automotive, healthcare or pharmaceutical industries – where digital technologies are becoming pervasive. In the future, I expect products in these sectors to reach substantially higher levels of patent density, in some sectors, like automotive, comparable to those in the IT industry. Patents may then become a battleground of the competitive process in these areas too. Patent battles are indeed an inevitable consequence of translating innovative merit into a competitive advantage or, conversely, into a disadvantage for the company that pays royalties to use a competitor's technology. They are one part of the market forces that eventually shape industries.

C&S:  What are the key challenges or trends that the patent system is currently facing?
R.P.: The key challenge for the patent system is to raise the bar for the quality of patents. The last decades have seen a sharp increase in patent filings around the world, inducing backlogs in patent offices and a drop in patent quality. Based on the results of recent court decisions and inter partes reviews in the USA, some experts estimate that about 50% of all patents may be invalid. As a result, one can no longer assume that a granted patent is a valid right.
This legal uncertainty fuels lawsuits, but also criticism of the patent system. I think that both can be avoided with enhanced patent quality. To raise the bar, better searches for prior art should be a priority. While various other regulations are currently being discussed, this is the most obvious and effective way to improve the patent system.
Innovative, market-based means can help patent offices fight the abuse of low-quality patents. I am thinking, for example, of crowdsourcing-based searches for prior art to help defendants against assertions of low-quality patents. Article One Partners is a good example of a company providing exactly this service.

C&S:  Where are the main differences in the patents/IPR thinking and practice between both sides of the Atlantic, and between the Western world and Asia?
R.P.: The basics of the system – that is, patent law – are the same everywhere. Hence there are no significant differences in the way companies obtain IP rights. However, important differences remain at the level of the judicial system, in the way national systems operate.
The U.S. patent system is more litigation-driven. It has a very complex judicial system, with high costs for using patents. By contrast, the European system is more balanced. It is less costly for its users despite the persistence of national patent systems. I am confident that this system will further improve in the coming years with the creation of the unitary patent and patent court.
Asian countries are modernising their patent systems, although not all of them are at the same stage. This is a very important evolution, especially as regards China. As of today, legal uses of IP remain less developed there than in the Western world. Local companies and IP institutions are less experienced, but they are catching up rapidly. I expect China to reach the same level as Europe in about five to ten years.

C&S:  What will be the most important developments regarding patents for the coming 5-10 years?
R.P.: The evolution of accounting rules towards a better financial valuation of IP should be a major development in the coming years. Currently, these rules tend to focus on the cash benefits of licensing income, while there are many other ways in which IP assets create value in the knowledge economy. IP makes it possible to protect products and markets from competition, enter new markets, facilitate deal-making or create freedom to operate, and thus enables higher profits or lower costs. Because such uses of IP rights do not appear explicitly on the P&L account and the value of the IP portfolio is not on the balance sheet, companies overlook the real value of their intangibles. In practice, this means that IP assets are handled by the IP department only, while they should be considered strategic assets at board level.
Financial valuation is necessary to convince corporate executives of the real value of intellectual assets, just as for other important assets on a company's balance sheet. This requires new international accounting frameworks that better reflect the true economic importance of intangibles. This is a challenging task for the next ten to fifteen years. Eventually, better accounting rules will facilitate the recognition of IP within companies, but also in society. The way IP works in the knowledge economy is still not well understood. We still apply the rules of the traditional hardware-based economy to the knowledge economy. As an example, courts still calculate royalties as a percentage of the cost price of products, while they should consider the value that IP brings to the product. A new framework will be needed for financial, legal, tax and competition rules in the global knowledge economy.
I also expect the maturation of markets for IP to be an important development in the coming years. The current system of bilateral negotiation of licensing deals is quite primitive. It is especially opaque and inefficient when the same patent needs to be licensed to multiple companies, with the costs of due diligence, negotiation and monitoring replicated for each deal. A transition towards a more transparent and efficient organization of IP markets is possible, just as happened for stock markets in the past. The creation of the international IP exchange IPXI in Chicago, which offers market-based pricing of unit licence rights based on centralised due diligence, is for instance an important step in this direction.

  • Ruud PETERS was appointed Chief Intellectual Property Officer (CIPO) of Royal Philips in 1999, in which position he was responsible for managing the worldwide IP portfolio and the technical and formal standardisation activities of Philips. In this role, he turned the company's IP department from a cost centre into a successful revenue-generating operation, while at the same time integrating all the different IP activities within various parts of the company into one centralised IP organisation. He further developed and introduced a new concept for intellectual asset management, in which all the different forms of IP are handled together in an integrated manner, and advanced methods and systems are used for determining the total return on IP investment by measuring direct and indirect profits. Ruud joined Philips in 1977. He retired from his role as CIPO at the end of 2013, but continues to work for the company as a part-time adviser on strategy and IP matters. He is also a board member of a number of technology/IP licensing/trading companies. Ruud has a background in physics (Technical University Delft, The Netherlands). He was inducted into the IP Hall of Fame in 2010, and in 2014 he received an Outstanding Achievement Award for his lifetime contributions to the field of IP from MIP magazine. He frequently speaks at major international IP conferences and also writes articles regularly for leading IP and business magazines.

 

  • Yann MÉNIÈRE is professor of economics at MINES ParisTech (France) and head of the Mines-Telecom Chair on "IP and Markets for Technology". His research and expertise relate to the economics of innovation, competition and intellectual property. In recent years, he has been focusing more specifically on IP and standards, markets for technology, and IP issues in climate negotiations. Besides his academic publications, he has produced various policy reports for the European Commission, the French government, and other private and public organisations. Outside MINES ParisTech, he teaches the economics of ICT standards at Imperial College Business School. He is associated as an economic expert with Microeconomix and Ecorys, two consulting firms specialised respectively in economics applied to law and in public policies.

If you want to buy Communications & Strategies No. 95, "The future of patents in communication technologies", please follow this link.