
[ITW] Roberto VIOLA, Deputy Director General, European Commission DG CNECT

Published in COMMUNICATIONS & STRATEGIES No. 93, 1st Quarter 2014


Re-thinking the EU telecom regulation

Summary of this issue: It is in a complex environment, combining economic crisis, a growing gap between the performance of European operators and that of the US leaders, questions about Europe's ability to meet its NGA objectives (the "2020 Digital Agenda targets"), and preliminary signs of the appetite of non-European operators to gain a foothold in EU markets, that the Commission announced the publication of its proposal for a "Connected Continent" Regulation to the Parliament and the Council. This was accompanied by a few key reports, which are well represented in this issue of Communications & Strategies. The selected papers provide deep insight into the issues of European telecom policy addressed in the current "Connected Continent" proposal of the European Commission and that will be at the core of the forthcoming review of the regulatory framework. They are supplemented by two exciting interviews with key personalities from Deutsche Telekom and the European Commission - DG CONNECT.

Exclusive:
Interview with Roberto VIOLA
Deputy Director General
European Commission, DG CNECT

Conducted by Giovanni AMENDOLA,
Head of Relations with International Authorities,
Telecom Italia
& Yves GASSOT
CEO, IDATE-DigiWorld Institute

 

C&S: Could you explain why Mrs. Kroes' draft regulation aims at accelerating the creation of a single market for telecoms?

Roberto VIOLA:
Europe desperately needs to tap into new sources of innovation and growth. Today ICT constitutes half of our productivity growth, and every economic sector increasingly depends on good connectivity to be competitive: the solution lies in applying the single market philosophy also to the telecoms networks that underpin those connections.
The fact is, European telecom companies cannot afford to remain trapped in 28 national markets; and Europe cannot afford it either. Europeans are enjoying single market freedoms, and telecoms are an increasingly important part of that, as businesses want to use new services like cloud computing, connected cars and mobile health.
If we allow those barriers to remain, we starve the digital economy of the raw materials it needs: connectivity and scale. When supported, the digital ecosystem can grow and create jobs fast; 794,000 were created in the app economy alone in just five years, even with a wider economy in recession. And across the economy, digital tools stimulate business, through higher productivity, efficiency and revenue.
Europe needs to recapture its global lead in ICT – a lead we once had, but lost. The missing link in this digital ecosystem is a telecoms single market. The European single market, with 500 million customers, will be one of the largest and wealthiest in the world.

What are the likely characteristics of a single telecoms market in Europe?

A genuine single market in telecommunications is one that keeps pace with the evolution of internet and data services: a market where consumers and businesses can obtain the best services from any EU operator, where operators can competitively offer services outside their home Member State and market them anywhere in the EU, and where there are no excessive charges for cross-border communications or for roaming. It is a market where telecoms companies can have the ambition to expand on a continental scale – and where every European benefits from choice and seamless data services. The Connected Continent is the underpinning infrastructure of the future European digital economy.

Do you think that the benefits of national competitive markets can be combined with the benefits stemming from the emergence of pan-European operators?

Our vision is of a dynamic, competitive market where pan-European providers compete alongside regional or local players offering more local, more tailored services. Such a market will promote competition and increase choice for consumers: with fewer barriers and greater economies of scale, companies will only succeed by offering the best deals at the best prices.
Until now in Europe we have had national markets. The result is significant fragmentation and a lack of dynamism: in particular if you compare Europe with its global competitors. As a result we are losing out on growth and jobs. This is not acceptable any longer. Operators should be able to easily provide their services in any country where they see a market opportunity, without facing unnecessary restrictions – that is what a single market means.

The draft regulation introduces a much more Eurocentric model of regulation based on stronger Commission powers. In addition, the Commission has also stressed that a genuine single market will ultimately require a single EU regulator. Can you explain the new institutional model of regulation proposed by the Commission?

The Commission proposal does not establish new bureaucracy or regulatory bodies at EU level: it builds on the existing bodies and keeps institutional change to the minimum necessary to enable the single market. On spectrum, we would like to increase the role of the Radio Spectrum Policy Group as an advisor to the Commission, and we want to make sure that Member States enhance their cooperation when assigning and licensing spectrum for broadband applications.
Another important element of our proposal is the enhanced role set out for BEREC in ensuring consistency of regulation.
As we explained in the Communication that accompanied the presentation of the legislation, the enhanced cooperation model we have proposed does not preclude a future review of the regulatory framework, considering all options and selecting the most cost-effective one in light of the market scenario as it then stands. In such a future scenario, it is possible that in a completed genuine single telecoms market we would need tighter links among National Regulators by means of an EU regulatory body responsible both for spectrum and telecom markets in charge of interpreting and implementing a harmonised legal framework. But this is hypothetical; and not covered by the proposal now under discussion.
At the same time, a single telecoms market with lower barriers to entry and more effective competition should over time normally lead to less regulation - shifting responsibility from regulatory to competition authorities as is the case in other economic sectors. Some regulatory tasks will always remain linked to the national or local level. It would therefore be important to assess the possible tasks of an EU regulator, as and when this option might be considered in future.

To what extent could the provisions set out in the draft regulation of the Commission be postponed and taken up in the next Telecoms Package Review?

A fully fledged telecom review is a complex task and could take years to complete. Apart from a few countries, we are witnessing very slow development of fast broadband and a lack of investment in Europe. We cannot wait for the situation to worsen. As our Commissioner, Vice-President Kroes, has put it: with the economy where it is, with technology where it is, with the rest of the world marching on quickly, we need to act, now, urgently. As we have said on many occasions, this cannot be a piecemeal approach: only the package taken as a whole can bring us towards the single market we need.
In parallel with the work on the Regulation – but not instead of it – we should of course prepare the ground for the next Commission and a future review of the telecoms framework. Indeed in September 2013, we set out how we are preparing such a review, looking at issues like enhancing consistency, a single regulator, the level playing field, and audiovisual convergence. It's right to start preparing for that. Such a review will take time. And we cannot wait that long before acting.

How do you interpret the critical reactions from the BEREC, operators and even a number of governments?

When the proposal was presented there were initially mixed signals, which is quite normal considering the issues at stake. This proposal is designed to have a real impact; it is not a lowest common denominator meant to make every interest group happy. Some operators favour some aspects but fight hard against others: short-termism leads to fighting the end of roaming, for example. But we continue to regard all aspects as integral to the objective, not separable from each other. Beyond the telecom providers, the reactions from all industrial and service sectors have been generally very positive, showing how much the single market counts for the future of Europe. The reaction of the investor community was also in general favourable.
The immune system of Regulators is programmed to guarantee stability and to be prudent about change. I remember how, back in 2007, many National Regulators were initially very vehemently opposed to the idea of establishing BEREC. An initial critical reaction from National Regulators was to be expected. However, on a number of points the opinions of the Commission and of BEREC are convergent.
However, many things have changed since the presentation of our proposal. Heads of State and Government in the European Council of October 2013 welcomed the single market proposal. The Parliament is working very constructively and at full speed towards adopting its opinion on the proposal.
Although differences of view remain in some areas, we are now engaging in a constructive dialogue with BEREC which I am sure will bear fruit.

What would you say to telecoms operators who consider that there is too much discrepancy between the very strict sectoral regulatory framework to which they have to submit (ex ante and ex post) and the far less formalised OTT environment?

It is a recurring question but it is probably not the right question. Telecom operators are infrastructure providers and "over-the-top" internet players ("OTTs") by definition are not. It is the same difference you can observe between passenger services and the transport industry. The telecom world is becoming data-centric, and telecom infrastructures have to evolve towards this new paradigm. For telecoms operators, OTTs are an important driver of connectivity demand and consumption. In other words, without OTTs there would be no future for the telecom industry, and vice versa. Recent trends have shown the increasing interrelation between telecoms operators and OTTs, and this trend will continue in the future. It is also clear that when analysing competitive pressure all service providers have to be taken into account. What counts is the nature of the service, not who is providing it. We also have to recall that there are different kinds of OTT services: not only those competing with transmission and communications services but also those which may fall under the Audiovisual Media Services Directive.
Now, it is clear that the close relationship between telecoms and OTT players presents opportunities as well as regulatory and competition issues that need to be examined carefully, possibly also in a future review of the telecoms framework. These challenges, however, are not confined to this framework, but include other issues that will need to be addressed at the European and international level.

How do you account for the difference in growth of the telecommunications service market on each side of the Atlantic in the last five years?

Investment, investment and investment! The differential in investment for example in 4G networks is rather remarkable and probably explains it all. The size of the market is also an important element. Fragmentation into small national markets means European telecoms operators have little incentive to expand and reach the scale of some of their American counterparts. We want to enable European telecoms operators to find business opportunities across Member States, and reap the benefits of a market of 500 million consumers.

Do you believe that the Digital Agenda objectives for 2015 and 2020 can be achieved by most of the European countries? Do you also envisage a need to reset the objectives by introducing more ambitious targets?

Having clear targets since the DAE was launched in May 2010 has enabled progress in Europe to be measured. Many national and regional authorities have adopted their own digital agendas with the same objectives. In the annual Digital Agenda Scoreboard, we make data openly available so everyone can assess and compare performance and progress country by country.
Basic broadband internet is now available everywhere in the EU, but the data show that more effort is needed to achieve the 2020 DAE targets. Fast broadband now reaches over half the population, as 54% of EU citizens have broadband available at speeds greater than 30 Mbps. Internet access is increasingly going mobile: 48% of EU citizens can access the internet via a mobile network from their smartphone, portable computer or other mobile device. However, only 2% of homes have ultrafast broadband subscriptions (above 100 Mbps), far from the EU's 2020 target of 50%. This is very alarming. Before setting new targets we have to use every policy instrument to make sure that we can meet the existing ones. If there is one remedy above all for such an alarming situation, it is the single market.

Biography

Roberto VIOLA holds a doctoral degree in electronic engineering (Dr. Eng.) and a master's in business administration (MBA). He is Deputy Director General at the European Commission - DG CNECT, with responsibility for the Electronic Communications Networks and Services Directorate; the Cooperation Directorate (international and inter-institutional relations, stakeholder cooperation); the Coordination Directorate (growth and jobs, innovation and knowledge base); and the Media and Data Directorate. From 2005 to 2012 he was Secretary General in charge of managing AGCOM (the Italian media and telecom regulator). He was Chairman of the European Radio Spectrum Policy Group (RSPG) for 2012-2013, having been Deputy Chairman in 2011 and Chairman in 2010. He served on the Board of BEREC (Body of European Regulators for Electronic Communications) and was Chairman of the European Regulators Group (ERG) in 2007. Within AGCOM (1999-2004) he was Director of the regulation department and Technical Director, in charge of, inter alia, regulation of terrestrial, cable and satellite television, frequency planning, access and interconnection of communication services, and cost accounting and tariffs in telecommunication and broadcasting services. From 1985 to 1999 he served in various positions as a staff member of the European Space Agency (ESA); in particular, he was head of telecommunication and broadcasting satellite services.

Published in COMMUNICATIONS & STRATEGIES No. 93, 1st Quarter 2014

Contact
COMMUNICATIONS & STRATEGIES
Sophie NIGON
Managing Editor
s.nigon@idate.org


Cutting the Cord: Common Trends Across the Atlantic

Published in COMMUNICATIONS & STRATEGIES No. 92, 4th Quarter 2013


Joint Interview between Gilles FONTAINE, IDATE and Eli NOAM, Columbia Business School

Summary of this issue: "Video cord-cutting" refers to the process of switching from traditional cable, IPTV, or a satellite video subscription to video services accessed through a broadband connection, so called over-the-top (OTT) video. The impact of cord cutting will probably differ among countries, depending on the level of roll-out of digital cable, fibre optic networks, and/or IPTV, on the tariffs of legacy video services, on the quality of broadband access and on national players’ strategies.
Regulation will play a key role in this new environment, as a strict enforcement of net neutrality could prevent network operators from leveraging their access to customer base to market their own video services.

Exclusive:
Joint interview with
Gilles FONTAINE, IDATE, Montpellier, France
& Eli NOAM, Columbia Business School, New York, USA

C&S: How would you define cord-cutting, from a US or European perspective?

Gilles FONTAINE: Cord-cutting, in Europe, is seen mainly as a US phenomenon, where consumers trade off their pay-TV subscription for over-the-top Internet services. Recent years in Europe have instead seen the rise of powerful cable and IPTV operators competing in the pay-TV market with the legacy satellite packagers.

Eli NOAM: Cord-cutting is the dropping, by consumers, of expensive cable TV subscriptions in favor of online access to TV programs and on-demand films. Drawbacks for consumers are less certain quality (bandwidth), less availability of live programming such as sports, and the absence of some channels. Advantages are cost savings, no need to pay for undesired channels, better search, less advertising, greater choice, and more control. In a broader sense, cord-cutting is a transition of TV from a broadcast/cable push model to an individualized pull model. So this is not just about switching to yet another delivery platform. That's the easy part. It is much more fundamental. Looking ahead, one change will be that by going online, TV will move from a slow-moving, highly standardized technology controlled by broadcasters and consumer electronics firms to a system where multiple technical approaches compete with each other and propel video delivery into an internet pace of change and innovation. And that's just the technology. Equally important changes will take place at the content level, in the structure of the media industry, in advertising and business models, and in policy.

Do you see any evidence that cord-cutting is really happening?

Gilles FONTAINE: Cord-cutting, in Europe, is not happening, or is not happening yet. Several reasons account for this: on the one hand, competition between networks is intense in Europe and is driving Internet access and television prices down, therefore limiting the incentive to "cut the cord". On the other hand, Internet services are far from having the same level of offer as in the US, even if catch-up television is increasingly available throughout Europe. Also, the video-on-demand market is very fragmented, with still-limited catalogues and interfaces that could be improved, and subscription video on demand is nascent and mostly pushed by US-based players, even if some European players have launched their first services. Finally, the penetration of connected TVs and connected set-top boxes is probably also lower in Europe than in the USA.

Eli NOAM: In the short run, there is less cord-cutting than media reports and hype suggest. For a variety of reasons, almost all participants in the media industry have an interest in dramatizing the issue. Broadcasters are making investments in 'second screen' distribution, partly to be prepared for change, and need to justify them. ISPs are expanding bandwidth to position themselves as providers of mass entertainment options. Telecom companies, similarly, need to upgrade their networks. New providers of bypass services to broadcast and cable, such as Aereo in the US, create buzz around their market-disruptive activities. Media cloud providers such as Amazon or Netflix present new options. And even the cable TV operators, who are the ones negatively affected, have an interest in presenting the problem as a crisis, at least to policy makers, in order to gain regulatory relief.

The reality is more modest, at least in the short term, but not insignificant. According to a credible analyst, Craig Moffett, the "pay TV sector" – cable, DBS and IPTV – lost 316,000 subscribers in the 12-month period from mid-2012 to mid-2013. Since IPTV gained subscribers, cable losses must have been larger. That is a loss of about 0.3%. Another estimate for 2012 puts the number at 1.08 million. In the four-year period 2008-2011, anywhere between 3.65 and 4.75 million subscribers were lost. But that was in the midst of the Great Recession, and thus not all of it can be attributed to cord-cutting.
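As a rough back-of-the-envelope check of these figures, the quoted loss of 316,000 subscribers at "about 0.3%" implies a US pay-TV base of roughly 100 million subscriptions; the short Python sketch below simply makes that implied base explicit (the derived base is an inference, not a figure from the interview).

```python
# Back-of-the-envelope check of the pay-TV figures quoted above.
# Inputs come from the interview; the implied subscriber base is derived, not sourced.

subs_lost = 316_000        # pay-TV subscribers lost, mid-2012 to mid-2013 (cable, DBS, IPTV)
quoted_loss_rate = 0.003   # "a loss of about 0.3%"

implied_base = subs_lost / quoted_loss_rate
print(f"Implied US pay-TV base: ~{implied_base / 1e6:.0f} million subscriptions")
# -> roughly 105 million, i.e. on the order of 100 million pay-TV households
```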

Do OTT services really challenge telcos' and cablecos' managed TV and video offers?

Gilles FONTAINE: Many studies seem to show that OTT services offer a better customer experience than the equivalents launched by the telcos or the cablecos. OTT providers are Internet-native, customer-friendly companies, with a rhythm of innovation that is difficult to compete with. Telcos and cablecos still concentrate on the "linear television model", even if they have developed their own on-demand offers, whereas OTT services specialize in on-demand services. But telcos and cablecos still benefit from privileged access to the TV set through their set-top box, a competitive advantage which is about to be undermined by low-cost solutions for connecting the TV set, such as Google's Chromecast.

Eli NOAM: Overall, the extent of video streaming has been quite large. In the evening hours, about two-thirds of internet traffic is video bits. Netflix alone added 630,000 streaming subscribers in the US in three months of 2013, to a total of 30 million. Thus, while the number of cord-cutters is not yet huge, as mentioned, a steady loss of subscriptions is to be expected, and this is backed up by surveys in which cable subscribers grumble about staying with expensive subscriptions which they do not fully utilize. This is particularly true for the younger generation: 34% of Millennials (cohorts born 1980-2000) say that they watch mainly online video and not broadcast TV. For Gen X and for Boomers the numbers drop to 20% and 10%.

With OTT available, the traditional business model of cable companies unravels. In the past, they were able to raise prices and to pass on the increases imposed by channel providers. This becomes more difficult. Similarly, it becomes more difficult to offer only bundled channels ("prix fixe"). At the same time, the ability of channel providers to offer content to viewers directly reduces the cable operators' bargaining strength considerably. If cable companies want to keep up, they also need to develop expertise in online technology, social networking, and mobile communications.

UK cableco Virgin Media and Swedish cableco Com Hem recently signed distribution agreements with Netflix. Do you foresee any revision of the cablecos' and telcos' triple-play model?

Gilles FONTAINE: Building an IPTV service is not straightforward for a telco: network costs can be high in order to ensure a homogeneous quality of service. Telcos also face high programming costs and the complexity of negotiating with the media world. On-demand services hardly prove profitable: the market power of the Hollywood studios, combined with the strong competition between telcos and cablecos, has for instance led to almost unrecoupable minimum fees for access to programmes. The situation can be similar for a cableco that does not have the resources to acquire exclusive, attractive content. The recent deals between Virgin Media or Com Hem and Netflix herald a change of strategy for the smaller telcos and cablecos, which may prefer to reinforce their Internet access business by offering the best OTT services rather than pushing their own television packages.

Eli NOAM: Overcoming all of these challenges is possible but requires an acceleration of internal processes, major investments, and a willingness to give up some control. There are signs of change in that direction. Comcast, which has just paid $39 billion for NBC Universal, thus gaining vertical control from the camera lens to the eyeball, has now announced a trial of a cord-cutting offer to subscribers: if they take a Comcast broadband service (of a quality that is today an upgrade for most customers) they get, at basically no additional charge, HBO Go (HBO's archive of self-produced shows plus current other shows, available anywhere in the US from most devices), plus the free broadcast channels. The regular price is $70 per month, compared to $135 for a full complement of 200 channels including HBO Go. So a viewer willing to skip regular cable channels saves a lot of money. The data cap for such a service is 300 gigabytes. This is about 120 hours of HD viewing per month, which is adequate for a single viewer but tight for a multi-device, multi-viewer household.

So this shows that cable companies are considering embracing cord-cutting as an inevitability. Another development in that direction is the US cable industry's consideration of integrating Netflix into its operations. Cable operators are holding talks with Netflix to make it an option on their set-top boxes. In such a scenario, Netflix would, in effect, become the cable companies' major VOD provider and revenues would be shared. This, together with the cable MSOs' own cord-cutting options, would in effect accelerate cord-cutting. However, cable companies would not be entirely bypassed. They would mitigate cord-cutting into channel-cutting. Ultimately, cable companies' main asset is their transmission network. Its exploitation will undergo transformation.
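As a sanity check of the data-cap arithmetic in the Comcast example above: assuming HD streaming consumes roughly 2.5 GB per hour (about 5-6 Mbps on average, a typical figure for the period and an assumption rather than a number from the interview), a 300 GB monthly cap does indeed work out to around 120 hours of viewing.

```python
# Rough check of the 300 GB cap vs. "about 120 hours of HD viewing per month".
# The per-hour consumption figure is an assumption (typical HD streaming rates of the period).

cap_gb_per_month = 300        # data cap quoted in the interview
assumed_gb_per_hd_hour = 2.5  # roughly 5.5 Mbps average bitrate (assumption)

hours_per_month = cap_gb_per_month / assumed_gb_per_hd_hour
avg_mbps = assumed_gb_per_hd_hour * 8 * 1000 / 3600  # GB/hour -> Mbit/s
print(f"~{hours_per_month:.0f} hours of HD viewing per month at ~{avg_mbps:.1f} Mbps")
# -> ~120 hours/month, matching the estimate in the interview
```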

TV channels also face another form of cord-cutting, as viewers may directly choose their on-demand programs. How do you see their future role, if any?

Gilles FONTAINE: TV channels, as aggregators, may lose their specific role if on-demand consumption develops significantly. However, they will evolve, offering more and more live events in order to continue gathering strong audiences. Moreover, there is still a need to organize the on-demand catalogues, pushing the right content to the right viewer at the right time and on the right device. TV channels should be able to leverage their linear programming to play their aggregator role in an on-demand market. But they will need to invest heavily in IT and review the trade-off between linear and on-demand distribution.

Eli NOAM: TV channels gain and lose. They gain in bargaining power over cable and other distributors. They can deal directly with users, though more likely they will go through new types of intermediaries such as Apple and Amazon.com. In a profusion of content offerings, strong brands are a valuable way for users to search for content. And if they can identify users or user characteristics they can fine-tune and individualize advertising. The danger for channel providers is that the loss of the cable MSOs' hold over viewers means that they can no longer share in the MSOs' pricing power. Furthermore, content providers can disintermediate them by going directly to viewers. Sports leagues, for example, could deliver their events directly and cut out the networks. Most of the channels do not have major operational IT expertise, and this provides an opening for an entire industry of new service providers and video clouds.

Gilles FONTAINE's Biography

Gilles FONTAINE is IDATE's Deputy CEO and is also in charge of the IDATE Business Unit dedicated to media and digital content. During his 20 years of experience in the media sector, Gilles Fontaine has become an expert in media economics and in the impact of the Internet on content. He has directed numerous studies for both public and private clients, including the EC, governments and local authorities, telcos and TV channels. Recent assignments have included participation in the ex-ante assessment of the future MEDIA programme, analysis of the economics of new internet video services, and a long-term forecasting project on the future of television. He has also monitored the impact of digitization and online distribution on other media: radio, press and music. Mr. Fontaine holds a degree from the highly reputed French business school HEC (Ecole des Hautes Etudes Commerciales, 1983) and from the Institut MultiMédias (1984).

g.fontaine@idate.org

Eli NOAM's Biography

Eli NOAM has been Professor of Economics and Finance at the Columbia Business School since 1976. In 1990, after having served for three years as Commissioner with the New York State Public Service Commission, he returned to Columbia. Noam is the Director of CITI. He also served on the President's IT Advisory Council at the White House. Besides the over 400 articles he has written in economics, legal, communications, and other journals on subjects such as communications, information, public choice, public finance, and general regulation, Professor Noam has also authored, edited, and co-edited 28 books. Noam has served on the editorial boards of Columbia University Press as well as of a dozen academic journals, and on corporate and non-profit boards. He was a regular columnist on the new economy for the Financial Times online. He is a member of the Council on Foreign Relations. He received AB, AM, Ph.D. (Economics) and JD degrees, all from Harvard. He was awarded honorary doctorates from the University of Munich (2006) and the University of Marseilles (2008).

Published in COMMUNICATIONS & STRATEGIES No. 92, 4th Quarter 2013

Contact
COMMUNICATIONS & STRATEGIES
Sophie NIGON
Managing Editor
s.nigon@idate.org


Interview with Craig MOFFETT, MoffettNathanson LLC, New York

Published in COMMUNICATIONS & STRATEGIES No. 92, 4th Quarter 2013


Video cord-cutting

Summary of this issue: "Video cord-cutting" refers to the process of switching from traditional cable, IPTV, or a satellite video subscription to video services accessed through a broadband connection, so called over-the-top (OTT) video. The impact of cord cutting will probably differ among countries, depending on the level of roll-out of digital cable, fibre optic networks, and/or IPTV, on the tariffs of legacy video services, on the quality of broadband access and on national players’ strategies.
Regulation will play a key role in this new environment, as a strict enforcement of net neutrality could prevent network operators from leveraging their access to customer base to market their own video services.

Exclusive:
Interview with Craig MOFFETT
MoffettNathanson LLC, New York

Conducted by Raul KATZ,
CITI (Columbia Institute for Tele Information),
New York

 

C&S: Is cord-cutting affecting cable TV operators and telcos equally in the US?

Craig MOFFETT:

There's a fundamental difference between the cord-cutting experienced by the cable operators, which is all about video, and that experienced by telcos, which is all about voice. Video is a high bandwidth service and voice is a low bandwidth one.

Low bandwidth services are the easier target, so up to now we've seen much more aggressive cord-cutting in voice than in video. The fact that the cable operators have a more robust physical plant than the phone companies has left the telcos losing share in broadband as well as in voice, making the losses all the more painful for the telcos.

Video is such a high bandwidth service that video cord-cutting is only just beginning. By our estimates, there are now as many as 2 million households that have cut the Pay TV cord in the U.S. That's only about 2% of the market, but it is a growing segment. In these early numbers you can see the beginnings of a bigger problem.

What are the different retention strategies deployed by each type of player to prevent an acceleration of cord-cutting trends?

The telcos seem to have concluded that they are fighting a losing battle to retain wireline voice customers. The residential voice market as a standalone business is vanishing before our very eyes. Unlike in Europe, bundling wireline and wireless therefore isn't really an option. In the U.S., the telcos have regional wireline footprints but also have national wireless ones. Naturally, they are reluctant to make a compelling integrated offering for fear that it will simply reduce the competitiveness of their wireless businesses outside their footprints.

Cable operators have an advantage in that they've got the best physical plant (at least where there is no fiber-to-the-home alternative). So they've been able to bundle video and broadband, and even voice, as a retention strategy. That has proven very sticky. And by tilting the pricing of their services – higher for broadband and lower for video, at least on the margin – they can make it less and less attractive to leave.

And the cable operators have another advantage. It is easier to defend high bandwidth services than it is to defend narrowband ones. The key is whether the cable operators will be able to begin charging for broadband usage. If they can, defending against high bandwidth video streaming becomes relatively easy. Or rather, it becomes a moot point, since a carrier charging the right price for usage is economically indifferent as to whether video is delivered via traditional Pay TV or via internet-based OTT (over-the-top) alternatives. The question here is entirely regulatory. Whether they will meet regulatory resistance to their early trials is unclear.

Would any changes in the content arena (e.g. sports content) accelerate the cord-cutting trend?

In many ways, sports programming holds the key to how the ecosystem will evolve in the U.S. Today, sports are exclusively available via the traditional model. Cutting the cord is therefore appealing to a relatively smaller segment of the population. If the most popular sports events were to be made available over the Internet you would suddenly begin to see a much more rapid migration to video over the Internet.

Conversely, if traditional cable and satellite operators are ever able to force the unbundling of sports networks by putting them on a separate tier, they would relieve what is otherwise a tremendous pressure point on the system. In theory, that would slow down cord-cutting. Today, cord-cutting is primarily about cost, not technology. And the biggest driver of cost inflation is sports programming. Taking it out of the basic programming tier would lower the cost to non-sports enthusiasts, reducing their incentive to cut the cord.

Would you see that cord-cutting would trigger additional changes in the content value chain (e.g. backward/forward integration, M&A)?

For distributors, the key question is whether the economic value of the video transport function can be preserved in an over the top model. If it can, the distributors will fare relatively well. Even satellite operators would benefit, since the economic benefit of cord-cutting would be mostly eliminated, which would naturally slow down the migration. Again, the real questions here are regulatory, not technological or economic.

For programmers, the key question is whether cord-cutting will necessitate unbundling. Most consumers think that content bundling is driven by the distributors. It is not. It is driven by the programmers. The programmers sell bundles of cable networks to the cable operators, and their contracts require that those bundles be kept intact.

Cord-cutting is typically assumed to entail a move to unbundling, or a la carte, programming, but that doesn't necessarily have to be the case. One can imagine a model where video is delivered over the Internet in the same unwieldy bundles that are today delivered by cable and satellite operators. If things evolve that way, the implications for the programmers will be relatively modest. On the other hand, if programming is ultimately unbundled as it moves to the Internet then the value chain as we know it will be upended. Value in that model would move further and further upstream, ultimately to the actors and artists, accelerating a migration we've been witnessing in slow motion for years. The value of the media conglomerates would radically decline as their revenues declined and as their costs of content acquisition and production rose. At this point, it is too early to say whether this will happen in video. It already has in music, and the results haven't been pretty.

Biography

Craig MOFFETT is the founder of MoffettNathanson LLC, an independent institutional research firm specializing in the telecommunications, cable and satellite sectors. Mr. Moffett spent more than ten years at Sanford Bernstein & Co., LLC as a senior research analyst. He was previously the President and founder of the e-commerce business at Sotheby's Holdings. Mr. Moffett spent more than eleven years at The Boston Consulting Group, where he was a Partner and Vice President specializing in telecommunications. He was the leader of BCG's global Telecommunications practice from 1996 to 1999. While at BCG, he led client initiatives in the U.S. local, long distance, and wireless sectors, in both consumer and commercial services, and advised companies outside the U.S. in Europe, Latin America, and Asia. He was the author of more than 20 articles about the telecommunications industry during the 1990s, and published analyses and forecasts.

Published in COMMUNICATIONS & STRATEGIES No. 92, 4th Quarter 2013

Contact
COMMUNICATIONS & STRATEGIES
Sophie NIGON
Managing Editor
s.nigon@idate.org


Interview with Terry DENSON, Verizon Communications, New York

Published in COMMUNICATIONS & STRATEGIES No. 92, 4th Quarter 2013


Video cord-cutting

Summary of this issue: "Video cord-cutting" refers to the process of switching from traditional cable, IPTV, or a satellite video subscription to video services accessed through a broadband connection, so called over-the-top (OTT) video. The impact of cord cutting will probably differ among countries, depending on the level of roll-out of digital cable, fibre optic networks, and/or IPTV, on the tariffs of legacy video services, on the quality of broadband access and on national players’ strategies.
Regulation will play a key role in this new environment, as a strict enforcement of net neutrality could prevent network operators from leveraging their access to customer base to market their own video services.

Exclusive:
Interview with Terry DENSON
Vice President, Content Strategy & Acquisition
Verizon Communications, New York

Conducted by Raul KATZ,
CITI (Columbia Institute for Tele Information),
New York

 

C&S: Is the telco voice cord-cutting experience at all applicable to video distribution?

Terry DENSON:

I would not necessarily agree that the voice cord-cutting experience is the salient point. I believe the applicable lesson from the transition of voice from wireline to wireless is that the wireline relationship was literally and figuratively connected to the household while the wireless relationship is personal (e.g., it is common for several, if not all, members of a household to have their own device, which is personal to them). I see a similar opportunity in video distribution: the long term winners will be those distributors who are able to develop and offer video relationships (subscription or otherwise) that are targeted toward individuals (and all of their devices) and not solely the household.

What do you believe are the telcos' key assets in facing cord-cutting (either voice or video)?

Telcos have two key assets that make them well-positioned to establish and maintain market leadership in video: 1) The best bundled broadband product; and 2) the best platform for offering consumers a wider and deeper choice of live, recorded and on-demand content across all devices on a personal basis than any OTT player.

How do you believe telcos fare relative to cable companies in facing current and future video cord-cutting trends?

I believe telcos are in a better position to prosper (especially those with a material wireless business) because they will be better able to: 1) monetize video traffic through high-speed wireless and wireline networks; and 2) deploy compelling video services based upon those networks that provide more value and choice to customers than an OTT-only player.

Do you expect any changes in value creation along the chain as a result of future cord-cutting trends?

I expect two long-term changes in value creation: 1) the enhanced value of owning the broadband pipe, based upon consumers' increased reliance on greater capacity; and 2) the expansion of the video pie, based upon the proliferation of video access points on devices and increased personal relationships and subscriptions for video access on those devices.

Biography

Terry DENSON is Vice President, Content Strategy & Acquisition for Verizon Communications. He is responsible for Verizon's content strategies and acquisition across all platforms including FiOS TV, Broadband, Verizon Wireless and Redbox Instant by Verizon (Verizon's joint venture with Redbox). He previously was vice president of Programming and Marketing, a position he was named to in August 2004 when he joined Verizon. In that position, Denson oversaw the creation and implementation of FiOS TV's content packaging, pricing and marketing strategies and video content acquisitions. Prior to joining Verizon, Denson served as vice president of programming for Insight Communications where he led the acquisition of programming, in addition to the development of analog, digital, video-on-demand, high definition TV, Broadband and interactive content strategies. Previously, as director of business development for the Affiliate Sales and Marketing department of MTV Networks, a division of Viacom International, he negotiated affiliation agreements. As general attorney for ABC, he managed numerous content rights and distribution matters. A graduate of Harvard University, Denson holds a J.D. degree from Georgetown University.

Published in COMMUNICATIONS & STRATEGIES No. 92, 4th Quarter 2013

Contact
COMMUNICATIONS & STRATEGIES
Sophie NIGON
Managing Editor
s.nigon@idate.org


Interview with Wilfried SAND-ZANTMAN, Toulouse School of Economics; IDEI

Interview Published in COMMUNICATIONS & STRATEGIES No. 91, 3rd Quarter 2013

Public-private interplay in the telecom industry

Wilfried SAND-ZANTMAN (Professor and Research Director at IDEI and the Toulouse School of Economics) provides an overview of public-private partnerships in the economic literature. He describes different ways of conceiving relationships between public and private actors and explains the rising interest in PPPs in various sectors. He also gives his view on core factors and policy requirements for successful PPP implementation, focusing on the telecommunications sector. Finally, he addresses the potential role of partnerships in reaching the digital agenda objectives.

Exclusive:
Interview with Wilfried SAND-ZANTMAN
Toulouse School of Economics; IDEI (Institut d'Economie Industrielle)

Conducted by Edmond BARANES, Montpellier University, France

 

C&S: How would you define a public-private partnership? Does the PPP represent a unique way of conceiving relationships between public and private actors, or is there a spectrum of possible relationships?

Wilfried SAND ZANTMAN: There are many formal definitions of a PPP that differ from country to country. Nevertheless, one can say that it is a form of arrangement by which public authorities make use of the private sector to provide public services. This arrangement can take many forms as it can concern the design, the building, the management and even the financing of those public services. One of the major features of PPP is the fact that the private sector is in general responsible for a bundle of these tasks. It can be the design and building, or the building and management of the infrastructure, and most of the time part of the financing.

Even if PPPs have become increasingly popular, they are not the only form of cooperative agreement between public and private actors; such agreements existed well before the idea of the PPP (or PFI) emerged. Historically, States have used private actors for the management of public services (tax collection, road management), leading to a form of franchising. In this first case, the private actors bear part of the risk but the service to be delivered is precise and well defined ex ante. In other cases, the private sector is asked to provide the good or the infrastructure following precise instructions given by the State. In contrast with such procurement contracts, a PPP is a global relationship involving many dimensions of a project. With a PPP, the firm very often has some leeway to organize production, and it bears risks not only on the demand side (as in a franchise contract) but also on the technical side.

What explains the increased interest and popularity of PPPs in various sectors?

I think many reasons explain the development of PPPs in recent decades but, at a general level, there has been a growing belief that the State cannot do everything by itself. This general statement has some practical implications.

Secondly, PPPs being more global than standard procurement methods, they allow better coordination between the various dimensions of the project. When a firm knows that it will operate the infrastructure in the future, it adjusts the effort undertaken today on the building of this infrastructure. Bundling tasks may therefore be beneficial in terms of efficiency. To give a standard example, spending some time on the design of a prison is costly but may avoid some costs in the future as fewer people may be needed to manage it. Delegating both building and management to one firm will lead that firm to think properly about the design ex ante, to its own later benefit.

Lastly, some projects are new and, even if the State has a general idea of what it wishes to obtain, it lacks the ability to define the good precisely. PPPs are flexible enough to allow competitive tendering on objects that are not fully defined. The firm can therefore meet the demand with a good it is creating on behalf of the State, and competition tends to force the firm to do so at the best price. In the case of network infrastructures, it is sometimes feasible to use the previous generation network or part of it, while in other cases it is better to start everything from scratch. To take a telecommunications example, one can think of the number of copper lines to be kept and the choice between FTTH, FTTB or FTTN. By setting clear objectives but leaving the firm some flexibility on the best way to reach them, PPPs are much more likely to be cost-efficient than the old command-and-control approach to public service provision.

Do you think that the economic theory of PPPs provides sufficiently robust and interesting insights for decision makers? What are success factors and policy requirements for successful PPP implementation?

The types of questions economists consider are studied at a quite general, and sometimes abstract, level. They look at the best way productive activities should be organized between the State and the market, how risk should be allocated between industrial partners, or whether different types of activities should be performed by the same firm or by different ones. The answers given to those questions depend on the details of the situation studied, which can sometimes be disappointing for decision-makers who tend to favour ready-to-use solutions. Nevertheless, economic theory is helpful for thinking through many real situations and provides useful insights.

Let us take for example the financial aspect and the debate on whether PPPs are more (or less) cost-effective than public management. Many large firms can borrow on better terms than governments of developing countries (or local governments of developed ones), but in general States can better diversify their risk (both across space and across time, i.e. between different generations). So, for very large projects, one should not expect significant gains from using private actors to finance public infrastructures.

Consider now the technical aspects of PPPs. Choosing this form of partnership, where all the tasks are bundled, can generate management overload but avoids the coordination and communication problems that arise when many different and independent actors must work on the project. It is then worth choosing a PPP, i.e. allocating all the tasks to a single unit (possibly through a Special Purpose Vehicle), when the different tasks display positive externalities. In short, if a clever design allows significant management costs to be saved, a PPP bundling design and operation is the right option.

More generally, one way to maximize the chance of success of a PPP is to increase the congruence of interests between the State and the firm. The State has neither the time nor the ability to monitor all the actions chosen by the firm. But this can be done indirectly by making the firm, as far as possible, the residual claimant of the project's success. This means that the firm must accept taking some risks, and the State must accept leaving some success-dependent rewards to the firm.

Are there some relevant specificities of PPPs in the telecommunications sector? What do you see as the key to a successful partnership in the telecommunications sector?

The telecommunications sector, and more particularly fibre roll-out, seems to be a well-suited case for this sort of cooperation. First, many contracts are signed by local communities who lack financial means, in contrast to the large firms operating in this sector. It is therefore a case where using private actors for financing and for better risk allocation makes real sense. Second, rolling out and managing the telecommunication infrastructure are naturally complementary activities. Bundling these two activities, as is generally the case in PPPs, is totally in line with basic economic principles. As discussed earlier, designing and building an efficient network is costly but allows savings on costs later on. The firm designing the network is all the more likely to choose the solution that is cost-efficient in the long term rather than the short term if it will be in charge of managing the network once it is in operation.

Lastly, the potential complexity of building telecommunication networks makes them difficult for the State to manage or control on a day-to-day basis. A system where some room is left for private initiative - together with some risk - fits the characteristics of this industry well.

Do you think partnership between public and private sectors is the best way to reach the digital agenda objectives?

In a world where local authorities are generally responsible for reaching the digital agenda objectives, I don't see how one could do without private actors. From a technical point of view, PPPs help in choosing a cost-efficient solution. From a financing point of view, at least when part of the area covered by the infrastructure has high enough density, the participation of private funds alleviates the constraints on the public budget. One big question is whether this cooperation should be organized at the national level (as in Australia), or at the local level and only for the areas where private initiative is very unlikely to emerge. When the partnership is organized at the national level, society may benefit from economies of scale and from the ability national governments have to obtain favourable contractual terms. Nevertheless, such a solution does not seem relevant for countries with heterogeneous density. Indeed, some regions of those countries already benefit from the service provided by purely private players. A national partnership would distort competition and deter further private investment. Moreover, local partnerships are more likely to attract bids from medium-size firms, whereas only big players can compete at the national level. Finally, the technical solutions, and the costs associated with these solutions, differ from one area to another. Adjusting the contractual conditions to local features is more likely to lead to cost-efficient outcomes.

In Europe, several PPP projects have been developed using different investment models. How would you explain these different ways of organizing cooperation between public and private actors?

When you look at different European projects (Cornwall, Auvergne, Milan or Asturias), it is fascinating to see the variability in the way cooperation between public and private actors is organized. One key dimension explaining these differences is the density of those areas, and therefore the private profitability of the projects. In the region of Milan (Metroweb), there is no need for public funding, but public participation is helpful both for using some public infrastructure and for having good support from the administration. In contrast, the Auvergne region - a mountainous area with very low population density - is not profitable for a private firm. Ownership is then totally public, even though part of the risk is borne by the operator, who pays part of the investment cost and receives an annual transfer from the regional body. But other dimensions must be considered, some of them probably related to national preferences for public or private control. For example, one can see that while Cornwall and Asturias could both be considered areas with poor private profitability, the private sector has been welcomed in the British case while the Asturias region chose to keep total control and ownership of the project. One can also consider that those two regions had different attitudes towards risk-taking, and that the different models of partnership simply reflect this heterogeneity of tastes.

But at a global level, all those projects were originally very clear on the objectives, the duties of all contracting parties and the intermediate targets to be reached by all the actors. This transparency and the long-term commitment of public and private actors are probably the most important elements to achieve socially efficient and economically profitable results.

Biography

Wilfried SAND-ZANTMAN is professor at the Toulouse School of Economics and research director at the IDEI. He received his Master from the ENSAE (Paris) and his Ph.D. from the University of Toulouse 1. Mr. Sand-Zantman's research focuses on industrial organization and regulation, with a special emphasis on the telecommunications sector.

NGN funding: Public/Private interplay - DigiWorld Summit 2013

The DigiWorld Summit 2013 Executive Seminar "NGN funding: public/private interplay", a half-day roundtable discussion on new broadband networks, will be an opportunity to evaluate the importance and modalities of public sector actions in coordination with operators in various world regions, to examine their effectiveness, and to anticipate their impact on the future of the electronic communications sector.

Interview Published in COMMUNICATIONS & STRATEGIES No. 91, 3rd Quarter 2013

Contact
COMMUNICATIONS & STRATEGIES
Sophie NIGON
Managing Editor
s.nigon@idate.org


Interview with Jussi HÄTÖNEN, Economist, European Investment Bank

Interview Published in COMMUNICATIONS & STRATEGIES No. 91, 3rd Quarter 2013

Public-private interplay in the telecom industry

Summary of this issue: Telecom liberalization progressively limited public intervention to the definition of a regulatory framework setting unbalanced market conditions to stimulate investment by new market players. However, a new impulse for public initiative has come from the availability and use of broadband networks, with a growing set of services at the disposal of public and private users. This new form of public intervention has gained ground because of the strict financial constraints faced by some telecom operators and local authorities' awareness of the positive externalities arising from the reduction of the digital divide (e.g. economic development and social inclusiveness). This special issue aims to set the stage for a broad discussion on public-private interplay.

Exclusive:
Interview with Jussi HÄTÖNEN
Economist, European Investment Bank

Conducted by Alberto NUCCIARELLI, Cass Business School, City University, London

 

C&S: How would you define a "public-private interplay" in telecom and, specifically, in the broadband sector?

Jussi HÄTÖNEN: In its broadest form, public-private interplay, or PPPs for that matter, occurs whenever input from both parties is required. While PPPs often refer to financial, or in-kind, cooperation to deploy broadband networks, PPI can include other forms of cooperation, such as, in its simplest form, the sharing of information. I think a good example of this is the market mapping exercise. The aim of this exercise is to identify the areas where private operators have existing infrastructure and the areas in which they have no interest in deploying NGA networks in the near future – and accordingly to identify the areas of market failure and thereby define the scope for further public sector involvement.

In the European broadband sector is public-private interplay a crucial pre-condition to generate investments?

We have to bear in mind that, after liberalization, telecommunications is effectively a private sector driven industry, albeit a regulated one, and public sector intervention is justified only if there is a market failure. That said, looking at recent studies and research, it seems that in developing NGA networks in Europe the market failure is evident, particularly in the more remote and rural parts of countries. Market operators will continue to pursue selective investments in areas where unit costs are low enough to generate returns for their shareholders. Wherever they cannot build a viable business case, they are not likely to invest. In these areas, PPIs are crucial for generating investment. In fact PPIs are very important, as relying only on market forces to deploy networks would potentially lead to an even greater digital divide than we have at the moment. This is because market operators are investing to deploy NGA networks in areas which are already well covered by copper, cable and even by existing FTTx networks.

How much risk should the public sector be able to bear to develop a long-term interplay with private stakeholders?

This depends fully on the context and the ambition. The more rural the networks we want to deploy, the more risk the public sector needs to bear. Typically, the private sector's risk threshold is a deployment cost of below EUR 1,000 per home passed, which translates into fibre network build-outs in urban and, to some extent, suburban areas. If we want to deploy networks in rural areas, the public sector needs to bear the risk that the private sector is not willing to take. For instance, in rural areas where the cost per home passed is EUR 2,000 or more, the majority of the financial burden, and thereby the risk, needs to fall on the public sector if we want to see these networks rolled out. On the other hand, the public sector has a better risk-bearing capacity. For instance, while private operators typically seek paybacks on their investments within 10 years, the public sector can take a longer-term investment horizon which is more in line with the economic life of these assets.
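
As a rough illustration of the funding-gap logic described above, the sketch below assumes the private side carries roughly the first EUR 1,000 of deployment cost per home passed and the public side the remainder; the threshold and the sample costs are taken loosely from the figures quoted in the answer and are not a formal model.

```python
# Back-of-the-envelope sketch of the public/private funding split per home
# passed. The EUR 1,000 private risk threshold and the sample costs are
# illustrative assumptions, not an investment appraisal method.

PRIVATE_THRESHOLD_EUR = 1_000  # assumed private risk appetite per home passed

def funding_split(cost_per_home):
    """Return (private_share, public_share) in EUR per home passed."""
    private = min(cost_per_home, PRIVATE_THRESHOLD_EUR)
    public = max(cost_per_home - PRIVATE_THRESHOLD_EUR, 0.0)
    return private, public

for cost in (800, 1_500, 2_000, 3_500):  # urban .. deep rural examples
    private, public = funding_split(cost)
    print(f"cost {cost:>5} EUR/home -> private {private:>5.0f}, public {public:>5.0f}")
```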

Should a fair social rate of return for public commitment be defined?

Yes, but what this is depends again on the context. Public resources are scarce and come with an opportunity cost. This means that the public sector needs to decide, for instance, whether to use the funds to deploy fibre or to build roads or hospitals instead. Of course, the less developed the basic infrastructure, the more you need to demonstrate the social return from investing in broadband as opposed to other opportunities.

If we consider public-private partnerships an example of interplay, can we affirm that they potentially distort or stimulate competition in the EU broadband sector?

As said before, public sector involvement and intervention is justified only if there is a market failure, i.e. the private sector is not willing to invest. Therefore if the market analysis (or market mapping in the case of broadband) is done correctly, focused public sector involvement through PPPs should not distort competition. Whether PPPs stimulate competition depends on the model, but the open and fair access principles are set to do this.

What is the way forward with the 2012 EU Guidelines on public-private partnerships?

I think the key is to understand that market forces alone will not deliver the DAE targets, but focused and well-designed public sector involvement, complementing private sector investments, is a key element in reaching these policy objectives especially in more remote and rural areas where, in fact, the social return is typically the highest. Nevertheless in my view, PPPs should be designed in a way which maximizes the private sector involvement and minimizes public sector intervention. Of course, given the complex structure of the industry, this is a difficult task.

Will public-private interplay be more relevant in the co-creation of contents and applications (e-health, e-education) in the next 5-10 years? Or will it continue to be limited to the definition of better conditions for infrastructure development?

I do not see why not. In fact, PPPs already play an important role in the research and innovation space in Europe. These PPPs are not like the ones used to deploy infrastructure, but they are PPPs nevertheless, combining private and public sector resources. For instance, under FP7 there are several ongoing projects in the eServices domain which combine public funding, private sector companies and public sector research institutes. This is on the R&D side, but I see no reason why PPPs should not also be suitable for the deployment of e-service solutions in the public sector. However, the prerequisite is to have the infrastructure (i.e. the networks) in place first to support these services.

Biography

Jussi HÄTÖNEN will take part in the DigiWorld Summit 2013 Executive Seminar "NGN funding: public/private interplay", a half-day roundtable discussion on new broadband networks that will be an opportunity to evaluate the importance and modalities of public sector actions in coordination with operators in various world regions, to examine their effectiveness, and to anticipate their impact on the future of the electronic communications sector.

Interview Published in COMMUNICATIONS & STRATEGIES No. 91, 3rd Quarter 2013

Contact
COMMUNICATIONS & STRATEGIES
Sophie NIGON
Managing Editor
s.nigon@idate.org

18 Jul 2013

Interview with Gilles BRÉGANT, CEO of ANFR

Published in COMMUNICATIONS & STRATEGIES No. 90, 2nd Quarter 2013

The radio spectrum: A shift in paradigms?

Summary of this issue: Demand for the use of the radio spectrum is constantly and rapidly growing, not only as a means of carrying Internet traffic, but also for new or expanding use by the military, public protection and disaster relief, at the same time that more traditional applications such as aeronautical, maritime, and radio astronomy remain. Is spectrum policy entering a trackless wilderness, or can a new direction and a new set of paradigms be expected to emerge? The contributions to this special issue of Communications & Strategies cover a great deal of ground. They serve to provide valuable signposts for spectrum policy going forward.

Gilles BRÉGANT CEO of ANFR

Exclusive:
Interview with Gilles BRÉGANT
CEO of ANFR.
(French national spectrum agency)

Conducted by Frédéric PUJOL,

Head of the radio technologies & spectrum practice, IDATE

 

C&S: What are ANFR's main priorities in the coming two years as far as Spectrum management is concerned?

Gilles BRÉGANT:

The Agence nationale des fréquences (ANFR) is the French public agency in charge of radio spectrum management. It is placed under the jurisdiction of the Minister responsible for Electronic Communications (Mr. Arnaud Montebourg and Ms. Fleur Pellerin since May 2012), but all the Ministries using spectrum are represented on ANFR's board. Moreover, ANFR's decisions regarding spectrum allocation are actually taken by the Prime Minister, since spectrum in France is a matter of State.
Spectrum management priorities over the coming years will be closely linked to governmental decisions, to the needs of the digital economy, and to the international and European agenda.

A. Create the conditions for mobile broadband (4G) success in France
4G allows very high data rates and significantly improved user comfort: lightning-fast downloads and more fluid navigation become possible on smartphones and tablets. This opens up opportunities for new mobile services, such as access to audiovisual content. A factor of innovation, growth and job creation, 4G is one of the Government's priorities. ANFR has been deeply involved in the development of European harmonized conditions for the usage of 4G and is currently mobilized to make the introduction of this new technology a success.

Since December 2012, the Agency has published a 4G roll-out observatory. This tool will be key to monitoring the deployment of 4G infrastructure, carrier by carrier.
However, the 4G challenge will be a tricky one in terms of spectrum management, since 4G in the 800 MHz band can interfere with DTT. ANFR devotes resources to the protection of TV reception so that 4G at 800 MHz and DTT coexist harmoniously.

ANFR intervenes at every stage of the deployment:
- it actively participates in communication towards local elected officials, professionals and the general public about these operations;
- during the deployment phases, it collects and handles viewers' complaints through its call center;
- it oversees the resolution of the problem by the operators if the interference comes from 4G at 800 MHz. A professional intervenes, most often to insert a filter into the DTT reception installation.
TV reception is therefore guaranteed for every viewer. The full cost of these interventions is borne by the mobile operators.

B. Prepare the next World Radiocommunication Conference (WRC)
In 2012, we drew the immediate consequences of WRC-12. In 2014, the delegations will work out the first arbitrations for WRC-15. In 2013, national positions must be established.

One of the challenges of this Conference will be the question of the future of the 700 MHz band. In France, it is currently assigned to audiovisual services. Since the debates on the first digital dividend five years ago, the terms of the problem have been well known: the use of the mobile Internet is expected to grow steadily in the coming years to meet expectations for very high-speed mobile broadband. But this demand is common to all sectors: the audiovisual sector wants to keep these frequencies to offer new services - generalization of high definition, introduction of ultra high definition or 4K, for example. And Government services, such as those of the Ministry of the Interior, also want access to services such as video for public safety.

In this debate, three ideas seem inevitable:
- there is not enough spectrum available under 1 GHz to satisfy fully each need;
- France is not an island, and it will have to act in harmony with its Western European neighbors;
- Europe will have to play an important role.

ANFR, as the manager of the entire spectrum and a guarantor of technical neutrality, is coordinating the preparatory work at the national and international levels. ANFR, which is already contributing to the preparation of the next WRC, takes part in the various CEPT and ITU bodies involved in this process and brings its technical expertise to the Government so that a decision can be taken under the best conditions.
ANFR is also an active member of the RSPG ad hoc group, which will provide recommendations to the European Commission on WRC issues and on the identification of 1200 MHz of spectrum for wireless broadband.

C. Facilitate the deployment of the 6 new DTT channels
Since December 12, 2012, 25% of the French population has been able to access 6 new HD channels with their DTT HD TV sets. Free-to-air TV is no longer limited to generalist channels. By 2015, every French citizen, and not only those with cable, satellite or IPTV subscriptions, will be able to watch specialized channels in areas such as sports, travel, diversity and so on.

The years to come will see more of the French population covered by the new HD DTT channels.
The Agency, together with the CSA, has the mission to assist viewers in solving their TV reception problems through its call center and its dedicated website, "www.recevoirlatnt.fr", in collaboration with local aerial installers. If necessary, it will grant funding provided by the State to viewers who have lost DTT reception.

What are the expected evolutions as far as new ways of sharing spectrum are concerned? What are their consequences on spectrum management?

First, it is important to recall that spectrum sharing is already a reality, with short range devices operating under a general authorization on a non-interference and non-protection basis. This is the case for Wi-Fi in the 2.45 GHz and 5 GHz bands. This is also the case for all applications using ultra-wideband devices, which share spectrum thanks to a very low power density. UWB technology is also used in sectors such as automotive and aeronautics.

What about Licensed Shared Access (LSA)?

The objective of an LSA approach is to facilitate the introduction of additional users operating with individual spectrum rights of use in specific bands and on a shared basis with an incumbent user, thus allowing predictable quality of service for all rights holders. These arrangements will need sufficient flexibility in order to account for national particularities, in relation to the administration of spectrum.

LSA could be introduced as a regulatory approach to release spectrum. In addition to conventional planning methods, cognitive radio technologies and their capabilities (geolocation databases, sensing, etc.) could be taken into account as enablers for sharing under the LSA approach.

ANFR engineers are actively participating in European work on this issue, for instance at the ECC level; the topic is still in its early stages.

700 MHz band: what are the stakes and constraints?

The World Radiocommunication Conference (WRC) 2012 decided that, in Region 1, the 694-790 MHz band will be allocated to the mobile service on a co-primary basis with broadcasting services, this allocation becoming effective after WRC-15.

The issues at stake in the preparatory work for WRC-15 are both technical and a matter for negotiation. The possible refinement of the lower band edge (694 MHz) is one issue up for debate. The second is the identification of a harmonized channelling arrangement, that is to say, the uplink and downlink bands. Finally, technical matters such as the sharing studies between mobile and DTT at 694 MHz, and their consequences for the necessary guard band, also have to be clarified through the preparatory work for WRC-15.

Regarding the choice between IMT and broadcasting, and its consequences, the WRC was the starting point. The next steps are European decisions and national arbitration.

2013 will be the year of public exposure to electromagnetic fields in France (ANSES report, Abeille Bill…): what is the role of ANFR as far as exposure control is concerned?

First, the Agency has no health prerogatives; its expertise and missions relate only to technical matters.
The Agency monitors radiocommunication network operators' compliance with the limits on public exposure to electromagnetic fields. The legal limits are those of a 1999 European Recommendation. In addition, by law, the Agency has to draw up an inventory of "atypical" points, that is, the points where exposure is significantly above the national average (while still below the limits). ANFR also draws up the protocol used to measure public exposure to electromagnetic fields. ANFR is also in charge of monitoring devices (phones, smartphones, tablets…). We ensure that SAR limits (2 W/kg) are respected. We also check that the necessary information is properly provided to consumers.

2013 will indeed be the year of public exposure to electromagnetic fields. It began with the Bill introduced by MP Ms. Abeille of the Environmentalist Party. This Bill was forwarded to the Parliament's Economic Affairs Committee for further analysis.
In 2013, we will publish our report on the technical experiments conducted in France to assess the possibility of reducing public exposure to the electromagnetic fields generated by mobile operators' antennae without decreasing coverage and quality of service. Such an experiment is, so far, a world first. 2013 will also be the year when ANSES, the French health and safety agency, publishes its new report on the health effects of such fields.

The Agency is a neutral, technical expert in this area. By participating in public meetings, advising elected officials and also informing the general public through its Cartoradio website, the Agency helps turn this potential concern into a calm public debate. Finally, in 2013 we will provide a mobile version of Cartoradio, with the location of all mobile base stations and the results of more than 26,000 field measurements.

ANFR is organizing an international Conference on June 26 and 27, 2013 entitled "Spectrum & Innovation": what is it about?

The Conference "Spectrum and Innovation" was instigated by Ms. Fleur Pellerin, Minister Delegate in charge of Small Businesses, Innovation and the Digital Economy. We want the Conference to be a major event in 2013 for the digital economy sector in general and for radio frequencies in particular. The objective is to show a large audience of professionals from the digital economy how spectrum is key to their sector and how this resource is crucial to economic growth in the coming years.

Different themes will be addressed: how mobility is shaping our society and stimulating innovation, how radio frequencies constitute a growth lever for industry and small businesses, and the spectrum needs for 2020. Only experts in their fields have been chosen to debate these subjects. The Conference will also be a chance to listen to influential and renowned speakers: Ministers, European and foreign institution officials, renowned academics and business leaders (BBC, Bouygues Telecom, Cisco, Eutelsat, France Télévisions, Free Mobile, IBM, M6, NRJ Group, Orange, Qualcomm, Renault, SFR, TDF, TF1…).

We expect these two days to show us what exciting new developments may be in store in the coming years. The Conference will demonstrate how spectrum can foster innovation, growth and job creation.

Biography

Gilles BRÉGANT was born in Chambéry in September 1963. He graduated from Ecole Polytechnique (1986) and from Telecom ParisTech (1988). Following an 8-year career at the France Telecom research center, Gilles Brégant was appointed technical adviser to the Minister in charge of Research (1996-1997), coordinating international projects and themes related to information technology. He then worked for the department of trade and industry as deputy director in charge of foresight. He was appointed secretary general of the ministerial task force "Digital Economy" (2001-2005) and then Technical Director of the Conseil supérieur de l'audiovisuel (the French media regulator) in 2005. Gilles Brégant has been the CEO of ANFR since 2011.

Published in COMMUNICATIONS & STRATEGIES No. 90, 2nd Quarter 2013

Contact
COMMUNICATIONS & STRATEGIES
Sophie NIGON
Managing Editor
s.nigon@idate.org

12 Jun 2013

Interview with Paul E. JACOBS, Qualcomm’s Chairman & CEO

Published in COMMUNICATIONS & STRATEGIES No. 90, 2nd Quarter 2013

The radio spectrum: A shift in paradigms?

Summary of this issue: Demand for the use of the radio spectrum is constantly and rapidly growing, not only as a means of carrying Internet traffic, but also for new or expanding use by the military, public protection and disaster relief, at the same time that more traditional applications such as aeronautical, maritime, and radio astronomy remain. Is spectrum policy entering a trackless wilderness, or can a new direction and a new set of paradigms be expected to emerge? The contributions to this special issue of Communications & Strategies cover a great deal of ground. They serve to provide valuable signposts for spectrum policy going forward.

Paul E. JACOBS, Chairman & CEO, Qualcomm

Exclusive:
Interview with Paul E. JACOBS
Chairman & CEO, Qualcomm

Conducted by Frédéric PUJOL,

Head of the radio technologies & spectrum practice, IDATE

 

C&S: Qualcomm anticipates a 1000x mobile data traffic increase in the coming years; can you explain how Qualcomm, as a key player and leader in the mobile ecosystem, will contribute to solving this challenge?

Paul E. JACOBS: Mobile data traffic has doubled every year over the past few years. If this growth rate continues for ten years, we will see a 1000x increase. I believe that the growth of mobile data is going to continue and this sentiment is shared throughout the industry. It is crucial for all stakeholders in the wireless industry to engage in research and development and investment and continually expand capacity to keep pace with this mobile data growth.
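
A quick check of the arithmetic behind the 1000x figure (a doubling every year, compounded over ten years):

```python
# Traffic that doubles every year for ten years grows by 2**10 = 1,024,
# i.e. roughly a thousandfold. The annual doubling rate is the premise
# stated in the interview, not a forecast added here.

growth = 2 ** 10
print(growth)          # 1024
print(growth >= 1000)  # True
```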

Now, how can we achieve the 1000x capacity? Qualcomm's vision is built upon three pillars, which are related to one another and must be pursued simultaneously: more spectrum, more small cells and greater efficiency across the system. We are working hard in all of these areas, and innovation in both technology and regulation is required. I strongly believe that industry and regulators must work cooperatively to achieve this goal.

The first thing we need is more licensed spectrum. Only licensed spectrum can deliver the quality of service consumers increasingly depend upon every day. This spectrum should be harmonized as much as possible and made available on a fast-track basis. It is important to do so in order to achieve scale and affordable access to mobile data. Freeing up enough spectrum quickly is a tremendous challenge. We need to clear and auction spectrum, but that alone is not enough. We also need to do more with Authorized Shared Access (ASA). ASA was designed to free up more licensed spectrum. As of now, there has been very good progress on ASA in Europe and the United States, and I believe more countries will embrace it. We also need more unlicensed spectrum in higher bands for high capacity, short range services – for example, additional bandwidth in the 5 GHz band for Wi-Fi and in the 60 GHz band for even shorter range applications, such as wireless docking.

The second thing we need to achieve our 1000x vision is a hyper densification of networks with small cells. We need to bring base stations closer to mobile devices and small cells do just that. The magnitude of the challenge of meeting demand for mobile data has led us to this new way of thinking. The idea is to provide much more capacity from low cost indoor small cells in complement to macro cells. Small cells will allow operators to re-use their licensed spectrum in existing bands as well as new higher bands, to capture the huge increase in indoor traffic, and dedicate macro capacity for outdoor users.

Finally, there is a need for more efficient networks, applications and devices – we need to make sure that all of these work intelligently with each other to offer the best performance and the greatest user experience. Continuing the 3G, 4G and Wi-Fi technology evolution, enabling devices to select the best wireless access available and optimizing interference management are some enhancements that make this possible.

Innovation in spectrum management: Qualcomm recently demonstrated SDL (Supplemental Downlink) capabilities in Toulouse (France) and supports ASA (Authorized Shared Access). Can you share your views on the potential of these new "tools"?

More licensed spectrum is an essential component in meeting the 1000x challenge. This spectrum should be in prime bands – meaning that the spectrum is harmonized regionally and, ideally, globally. In addition, the spectrum needs to be released quickly. We cannot afford to wait years for this spectrum to be assigned as delaying access to spectrum has a direct impact on the quality of service that can be provided to consumers. The spectrum crunch, if unsolved, will lead to higher prices, degraded quality and data caps, and missed opportunities. The genesis of SDL and ASA, two great innovations, lies here. We have been working with our industry partners and governments to bring these innovations to market as quickly as possible.

SDL is an innovative way to use unpaired spectrum, taking advantage of the newest spectrum aggregation techniques in 3G and 4G. Today, users are downloading considerably more data than they are uploading, with video consumption now the biggest contributor to traffic volumes. At the same time, operators' spectrum is most often paired, with the same amount of MHz allocated for uplink and downlink. With SDL, an operator can aggregate an unpaired band with the downlink of its paired spectrum to create a bigger downlink pipe, increase its network capacity and achieve faster data rates. Consumers will then see the difference when they download data, particularly videos, on a network using SDL. Qualcomm has shown this in pre-operational conditions with Orange and Ericsson in Toulouse, France, and I can tell you that the results were impressive. The demonstration provided industry and government representatives with an opportunity to observe the significant benefits that will result from the recent decision taken by CEPT in Europe to harmonize the L-Band spectrum for SDL. I applaud this decision as it will drive the harmonization of this band globally. We are thrilled with this prospect, and we have started planning for the support of L-Band SDL on our chipsets. In the United States, SDL will be launched by AT&T using unpaired spectrum in the 700 MHz band.
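
To make the aggregation arithmetic concrete, here is a minimal sketch with purely illustrative bandwidths, crudely assuming downlink capacity scales with aggregated bandwidth; it is not a description of any specific deployment.

```python
# Sketch of the SDL idea: carrier aggregation adds an unpaired downlink-only
# carrier on top of the downlink half of a paired FDD block. Bandwidths are
# illustrative assumptions, and capacity is crudely taken to scale linearly
# with aggregated bandwidth.

paired_downlink_mhz = 10   # e.g. one half of a hypothetical 2x10 MHz FDD licence
paired_uplink_mhz = 10
sdl_carrier_mhz = 15       # unpaired band aggregated as supplemental downlink

downlink_total = paired_downlink_mhz + sdl_carrier_mhz
print(f"uplink:   {paired_uplink_mhz} MHz")
print(f"downlink: {downlink_total} MHz "
      f"(~{downlink_total / paired_downlink_mhz:.1f}x the paired downlink alone)")
```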

When we started working on ASA a couple of years ago, we aimed to develop a new way to bring more licensed spectrum in harmonized bands to market quickly enough to cope with the explosive data growth. While I believe traditional licensing after spectrum clearance will remain the preferred approach, it is not always possible. We know that some spectrum users, such as government users, do not use their spectrum nationwide every hour of every day, but they are not in a position to vacate it because they still need it from time to time or in specific locations or situations.

In this scenario, ASA is an ideal approach to enable wireless broadband operators to access this spectrum in a way that is mutually beneficial with those incumbents. With ASA, a commercial operator can share the spectrum with the incumbent in time or location or both. This is done on an exclusive basis, which means that either the incumbent or the commercial user accesses the spectrum at a given location at a given time. This means that they never interfere with each other and they can still leverage the very best performance of their equipment. Everybody can win with ASA. The incumbent can monetize its underutilized spectrum, operators can access new spectrum for exclusive use and ensure reliability and quality of service, and regulators can pragmatically address the ever increasing demand for more mobile broadband spectrum. ASA unlocks hundreds of MHz of high-quality spectrum for 3G/4G.
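
The exclusivity rule can be pictured with a small sketch: at any given place and time, either the incumbent or the ASA licensee uses the band, never both. The zones and reservation schedule below are purely hypothetical.

```python
# Toy model of exclusive ASA access: the commercial licensee may transmit
# only where and when the incumbent has not reserved the band. Zones and
# times are hypothetical examples, not any actual sharing framework.

from datetime import datetime, time

# Hypothetical incumbent reservations: (zone, daily start, daily end)
incumbent_reservations = [
    ("zone_A", time(8, 0), time(18, 0)),   # daytime use in zone A
    ("zone_B", time(0, 0), time(23, 59)),  # zone B reserved around the clock
]

def asa_access_allowed(zone: str, when: datetime) -> bool:
    """Return True if the ASA licensee may use the band at this place and time."""
    for res_zone, start, end in incumbent_reservations:
        if zone == res_zone and start <= when.time() <= end:
            return False
    return True

print(asa_access_allowed("zone_A", datetime(2013, 6, 1, 22, 30)))  # True: night-time
print(asa_access_allowed("zone_B", datetime(2013, 6, 1, 12, 0)))   # False: reserved
```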

ASA applies to harmonized bands so that commercial devices will be readily available and operators can benefit from economies of scale. ASA will not require any new technology for devices – the devices will simply have to work on the selected spectrum. This aspect of ASA is important to note because it will enable operators to quickly start using the ASA spectrum in conjunction with their other existing spectrum assets. In addition, ASA is particularly suited for higher bands as interference propagates less.

So, if you think about ASA and small cells, you can see that there is a perfect match here. The lower transmit power of small cells allows them to be deployed much closer to an incumbent's operations without causing any interference. Small cells also offer a perfect capacity complement to an operator's existing deployment and, of course, ASA also allows the deployment of macro cells. I am pleased with the progress made in Europe and the United States. In Europe, CEPT has set up a project team to make the 2.3 GHz band available using ASA. In the United States, the FCC is considering ASA in the 3.5 GHz band for sharing with coastal radars.

You announced during the Mobile World Congress that you will ship RF chipsets with support for the 30 LTE frequency bands later this year. Is this the definitive solution to LTE global roaming?

We have been striving to harmonize bands for mobile broadband and contributing to this work across the world at national, regional and ITU levels. The reality is that once a band gets harmonized at the ITU level, it is released on a country-by-country basis at different times. This is mainly due to national legacy regulations and the fact that the spectrum is used by someone. In some countries, it can take years to vacate a band. This situation has led to the explosion of the number of commercial bands and today, there are already over 30 LTE frequency bands around the world – 40 cellular bands in total for 2G, 3G and 4G. Having so many different frequency bands creates difficulties for the development and design of LTE devices. In particular, the front end of the phone – the components that manufacturers place in between the antenna and the digital modem – requires discrete band-specific components for each and every operational frequency in a phone. In today's sleek smartphone designs, an OEM simply runs out of printed circuit board area before it can fit in all the front end components and may need up to 10 or more versions of each LTE handset design in order to sell a particular phone model around the world.
Qualcomm has announced a new front end solution called the Qualcomm RF360, which we believe is an important industry advancement to address this challenge. It features many RF innovations that utilize only half the board area of other front end solutions. For the first time, it enables a single phone model that supports at least one LTE frequency band in every country in the world where LTE has been deployed.

Having said that, the number of active bands which are simultaneously supported in a device is not unlimited, and that's why it is crucial to continue striving to harmonize bands and make them available in as many countries as possible.

How do you see the relationship between eMBMS supported by LTE networks and broadcast networks? Is convergence between broadcast and mobile networks possible in the coming years?

Most people look at this issue from a technology point of view and try to defend one technology against another. For me, it is really a question of service demand and business models. People want to watch video content when they want it on any device. They also want both linear and on-demand content on their flat screen, tablets and smartphones. And they want to interact on social media while watching this content. The demand is pretty clear, but it can create trouble both for broadcasters and mobile operators.

Terrestrial broadcast networks will not reach tablets and smartphones, for two reasons: they are designed for fixed reception and for the delivery of linear content, and tablets and smartphones do not support the reception of DVB-T signals. At the same time, mobile networks will have a hard time coping with the explosion in video demand due to the capacity constraints they face. LTE broadcast with eMBMS can help. eMBMS allows mobile operators to optimize the use of their network capacity by efficiently delivering content in broadcast mode. However, the eMBMS design as it stands today doesn't allow it to be the sole terrestrial broadcasting platform, including for roof-top reception.

It is fair to say that a dedicated broadcast platform is not likely to be the platform of the future, as the ratio between linear and interactive services is not fixed and varies over time as well as across geographies. So any solution would have to implement a kind of dynamic unicast/broadcast switching in order to allocate resources to the service requiring them. We showed this feature over eMBMS at the last Mobile World Congress.
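
One way to picture such dynamic unicast/broadcast switching is the toy rule below: once enough users in the same cell request the same content, a single broadcast stream replaces many unicast streams. The threshold and the implicit cost model are illustrative assumptions, not the eMBMS specification.

```python
# Toy decision rule for dynamic unicast/broadcast switching: serving popular
# content once in broadcast mode is assumed cheaper than many parallel
# unicast streams. The break-even threshold is a hypothetical figure.

BROADCAST_THRESHOLD = 5  # assumed break-even number of viewers per cell

def delivery_mode(viewers_requesting_same_content: int) -> str:
    if viewers_requesting_same_content >= BROADCAST_THRESHOLD:
        return "broadcast (one shared stream)"
    return "unicast (one stream per viewer)"

for viewers in (1, 3, 8, 40):
    print(f"{viewers:>3} viewers -> {delivery_mode(viewers)}")
```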

The only real solution to these challenges is convergence, so it is not a question of 'if'; it is just a question of 'when'. The timing will depend on a number of factors. Progress would require political decisions and different countries face different situations. But decisions cannot be made on a purely national basis because mobile devices won't be available without scale. Convergence will also require the emergence of new business and regulatory models. Today, the worlds of broadcasting and telecommunication are often opposed, especially when it comes to spectrum rights in the UHF band. So there is a lot of work to be done.

What is important is to understand that the first steps towards convergence are happening right now. Verizon has announced their intention to deploy eMBMS – their focus on an event such as the Super Bowl is a strong indication that there is solid business logic behind this convergence movement and that we are only at the beginning of this trend.

All around the world, focus is generally on sub-1 GHz frequency bands during auctions but higher frequency bands are also likely to play a major role in congested areas. How do you envisage the use of the 2.6 GHz band and other high frequency bands by mobile operators in the next 2-3 years?

Lower frequency bands will continue to play a pivotal role in providing ubiquitous and cost efficient coverage for wide area networks. We have seen this recently with LTE commercial deployments using spectrum in 700 MHz and 800 MHz. However, an operator also needs higher bands for wider contiguous bandwidths and higher data rates. I do not believe that a mobile operator can fully succeed today without an adequate strategy for the higher bands. And the relevance of those bands will surely increase in the future.

I do believe that small cells will change the current deployment paradigm. I think we will see small cell systems deployed very broadly – in homes, office buildings, enterprises, and shopping malls. The future that we envision entails the hyper densification of networks, and the key is to manage interference in order to dramatically increase the overall capacity of the network. Qualcomm has solutions and we are developing further enhancements that take the performance to the next level, the 1000x level. One example of such a solution is opportunistic small cells that dynamically switch on and off based on the demand for data. This solution helps reduce interference. Another example is extending carrier aggregation to combine traffic among small cells and across small and macro cells.
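
As a toy illustration of the opportunistic small cell idea, the sketch below keeps a small cell switched off, and thus non-interfering, until local demand crosses an assumed threshold; both the demand metric and the threshold are hypothetical.

```python
# Toy on/off rule for an opportunistic small cell: stay off (no interference)
# until local offered load justifies switching on. Metric and threshold are
# illustrative assumptions, not a vendor algorithm.

SMALL_CELL_ON_THRESHOLD_MBPS = 20.0  # assumed load worth serving locally

def small_cell_should_be_on(local_demand_mbps: float) -> bool:
    return local_demand_mbps >= SMALL_CELL_ON_THRESHOLD_MBPS

for demand in (2.0, 15.0, 35.0):
    state = "ON" if small_cell_should_be_on(demand) else "OFF"
    print(f"demand {demand:>5.1f} Mbps -> small cell {state}")
```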

Introducing new deployment models such as neighborhood small cells also provides very interesting prospects – a network of low-cost, plug-and-play, open, indoor small cells. They provide extremely high indoor capacity as well as good outdoor coverage and capacity within a neighborhood. This can be extremely cost effective for operators because consumers could also deploy the small cells and provide the backhaul – saving operators time and money, while consumers still benefit from the operators' outdoor coverage from macro cells.

In the coming years, small cells will take advantage of licensed spectrum in higher bands, and each operator will have to decide how to use its 2.6 GHz spectrum. This decision will depend on various parameters, including an operator's other spectrum holdings, its network structure and the competitive market situation. It is my belief that small cells will be an important part of the future of mobile communications.

Biography

Paul E. JACOBS is chairman of Qualcomm's board of directors and the Company's chief executive officer. A leader in the field of mobile communications for over two decades and a key architect of Qualcomm's strategic vision, Dr. Jacobs' responsibilities include leadership and oversight of all the Company's initiatives and operations. Following the completion of his Ph.D. in 1989 and a year as a post-doctoral researcher at a French government lab in Toulouse, Dr. Jacobs joined the Company full time in 1990 as a development engineer leading the mobile phone digital signal processor software team. Five years later, Dr. Jacobs became vice president and general manager of the combined handset and integrated circuit division, which was subsequently divided into Qualcomm Consumer Products (QCP) and Qualcomm CDMA Technologies. In 1996, Dr. Jacobs was named senior vice president of the Company and in 1997, president of QCP. In 2000, Dr. Jacobs was named executive vice president of Qualcomm and in 2001, group president of Qualcomm Wireless & Internet (QWI). Dr. Jacobs became CEO in July 2005 and was appointed Chairman in 2009. As an innovative leader of a broad range of technical teams within Qualcomm, Dr. Jacobs has been granted more than 40 patents for his inventions in the areas of wireless technology and devices. Dr. Jacobs serves on the Board of Directors of A123Systems; is chairman of the Advisory Board of the University of California, Berkeley College of Engineering; is a trustee of the Museum of Contemporary Art San Diego; and is a member of the US-India CEO Forum and the Young Presidents' Organization. Dr. Jacobs received his bachelor's (1984) and master's (1986) degrees as well as his doctorate (1989) in electrical engineering from the University of California, Berkeley, and subsequently endowed the Paul and Stacy Jacobs Distinguished Professor of Engineering chair at the school. He is a member of the Phi Beta Kappa, Eta Kappa Nu and Tau Beta Pi honor societies. Dr. Jacobs is a recipient of a number of industry, academic and corporate leadership awards.

Published in COMMUNICATIONS & STRATEGIES No. 90, 2nd Quarter 2013

Contact
COMMUNICATIONS & STRATEGIES
Sophie NIGON
Managing Editor
s.nigon@idate.org

22 Apr 2013

Interview with FRANK PILLER Aachen University, Germany / MIT Media Lab, USA

Published in COMMUNICATIONS & STRATEGIES No. 89, 1st Quarter 2013

Open Innovation 2.0
Co-creating with users

This issue of C&S analyses the theme of open innovation, with a focus on co-creation with end-users

Summary of this issue: Innovation has always been a central element of competition dynamics. During the last decades, globalization, deregulation, internet, new technologies, the digital revolution, and consumers' behavior have radically modified the innovation process and the competition structure. In many areas, the offer is rich and diversified: innovation is a unique opportunity to create competitive advantages necessary for growth. Among the general topic of open innovation, this special issue focuses on users' involvement in the innovation process. It offers a collection of papers providing interesting opinions, experiences, advances and evidence.

Frank Piller

Exclusive:
Interview with FRANK PILLER
Aachen University, Germany / MIT Media Lab, USA

Conducted by Anna Maria KOECK
ZBW – German National Library of Economics, Hamburg, Germany

 

C&S: Why are you interested in open innovation?

Frank PILLER: Today we know that the time of the lone Schumpeterian entrepreneur is over. While there are still examples of individuals making great innovations on their own, successful innovation today is a team game.

This is not new per se. But with the internet, a number of new tools and interaction possibilities have emerged to supplement the traditional forms of external partnership in the innovation process. When I refer to open innovation, I am not talking about contract research, supplier innovation, research alliances, or market research. Open innovation, in my understanding, builds on new, crowdsourcing-based methods that connect an innovating firm with "unobvious" others – people who are not on its regular list of partners or in its own industry.

The core idea of open innovation is to work with an organization or with someone you didn’t know previously. When you have a problem to solve, you make it known, circulate it - whether on a large scale or by going through specialized channels like Innocentive or NineSigma.

And thanks to large network effects, going through this type of channel is no longer expensive; we are talking about project fees of $20,000 or less.

Is open innovation more relevant than ever?

Definitely, I'd say yes. Technologies have evolved so much that companies need help if they want to keep up. You know, whether you are a very large or a small company, customers now ask for solutions, not just products. But when you sell solutions, you need more expertise and knowledge. The outsourcing mentality is well established in the manufacturing sector. In Germany, for example, open innovation is becoming popular at the moment, as we have a lot of trouble recruiting engineers.

What type of company can open innovation apply to?

Most of the present users are manufacturing companies, particularly multinationals – companies like Unilever and Procter&Gamble, which constantly have to bring out new products and have been pioneers in open innovation. The same is true in the pharmaceutical industry, where research is very expensive and highly complex. Car manufacturers have been reluctant for a long time, but I think this is slowly changing. The sector which, in my opinion, would benefit most from open innovation is small and medium-sized enterprises. SMEs have far fewer resources for innovation and often lack the time to tackle it. And I see a strongly growing interest from this sector today.

Is open innovation a business imperative yet? What would happen if companies continue to remain closed and locked into the traditional way of generating ideas and products without external collaboration at the society level?

Well, I would say truly closed innovation is not possible anyway. All innovation builds on existing knowledge and some form of networking. But the term open innovation suggests that a company builds dedicated practices to make the connection with the best external knowledge for a given innovation task better and more efficient. So for me, open innovation is not a business imperative, but a set of practices and organizational capabilities for connecting with a firm's periphery for innovation.

Having said this, however, our research finds that companies need a dedicated balance between openness and closeness (Look for more at http://ssrn.com/abstract=2164766). Being too open also comes at a cost, and firms need to build dedicated internal organizational practices to become more open.

Customers are often considered the most important source of external input for innovation. But is this really true? As proven in many idea contests, great ideas come from the "common man" or outsiders. How can a company engage with these users?

Here we have to make an important distinction. Research originating in the path-breaking work of Eric von Hippel at MIT has shown that many commercially important products or processes are initially thought of by innovative users rather than by manufacturers. Especially when markets are fast-paced or turbulent, so-called lead users face specific needs ahead of the general market participants. Lead users are characterized as users who (1) face needs that will become general in a marketplace well before the bulk of that marketplace encounters them; and (2) are positioned to benefit significantly by obtaining a solution to those needs.

But lead users are NOT average customers or users. There are only very few lead users. Average customers are in general neither innovative, nor do they want to engage in innovation. Hence, it is the task of a company to identify these lead users through specific search and screening methods. There is not enough space here to describe these methods, but they are very well documented (look at Eric von Hippel's MIT homepage for some examples).

And ideation contests are indeed a great way to engage with "unobvious" users and idea providers. A company broadcasts a task or challenge, calling for ideas, and users self-select to participate. In this way, it is not representative customers, as in market research or focus groups, who provide input, but people who really have a problem or already have a solution.

In a way co-creation can be defined as outsourcing idea generation to the society. What is your exact definition of this concept? And what is the main benefit for companies?

Customer co-creation has been defined by us as an active, creative and social process, based on collaboration between producers (retailers) and customers (users). Customers are actively involved and take part in the design of new products or services. Their co-creation activities are performed in an act of company-to-customer interaction which is facilitated by the company. The objective is to utilize the information and capabilities of customers and users for the innovation process.

The main benefit is to enlarge the base of information about needs, applications, and solution technologies that resides in the domain of the customers and users of a product or service. Examples for methods to achieve this objective include user idea contests, consumer opinion platforms, toolkits for user innovation, mass customization toolkits, and communities for customer co-creation.

The main benefit for companies is to enhance the "fit to market", but also to engage in a more interactive, engaged relationship with their customers and users – with great effects for relationship marketing!

Being open about problems is not yet the norm in the marketplace, where companies converse predominantly about what they know rather than about what they do not know. What are your comments?

Good question! This indeed is one of the largest challenges we see in the field today. Many companies know about the tools or methods to co-create that I named previously. But they are not ready to internally exploit the knowledge generated with their customers and users. Here I believe we still need plenty of change management to change this mind-set you mention!

This is a field where I believe we also need more research. Firms need more information and better guidance on how to assess whether their organization and industry are suited for customer co-creation. This information is crucial in order to build specific competences that aid firms in identifying opportunities and ultimately in using the right method. Managers need a clear picture of their own organizational settings and capabilities before being able to answer important questions during the implementation of their own customer integration initiative. This could include answers to questions like how innovation projects have to be reorganized, which kinds of projects are suited for customer integration and how the internal development processes have to be adjusted in order to allow optimal customer integration.

The internal readiness of companies – such as having a co-creation team/department, methodology, etc. – is often lacking in companies that spend huge sums on co-creation projects, which are mostly managed by corporate communication or marketing departments. Do you advocate the formation of a multi-disciplinary co-creation team that can do the job of creating and running co-creation projects? Is it not an exclusive, specialized professional/managerial skill – like branding, marketing, finance – in itself?

Yes, you have already provided the answer yourself. The problem, however, is that there are still very few companies that have such a co-creation team in place; many don't even have a single functional manager taking care of the initiative. But this will change, and I think that the first organizations are already building exactly the kind of interdisciplinary teams you are talking about.

What is the link between the success of a co-creation project and the performance of the base product or initiative?

To answer this interesting question: so far we only have anecdotal evidence that co-creation provides value. Large-scale quantitative research is lacking. However, I know that several researchers are in the process of conducting this research, so I hope that in a few years we will have a better answer on the performance effects of co-creation. A first study has recently been published by Martin Schreier from WU Vienna, who found, together with a team from Japan and using data from a large Japanese retailer, that user-generated products are indeed much more profitable than internally created products (more at http://tinyurl.com/ae2bu6a). And I personally have seen many companies profit from co-creation, provided it is executed correctly and the results are used internally in the right way.

Biography

Frank PILLER is a professor of management and the director of the Technology & Innovation Management Group at RWTH Aachen University, Germany. He is also a co-director of the MIT Smart Customization Group at the MIT Media Lab, USA. His research focuses on innovation interfaces: how organizations can increase innovation success by designing and managing better interfaces within their organization and with external actors. This stream of research includes topics like value co-creation between businesses and customers/users, strategies to increase the productivity of technical problem solving through open innovation, and models to cope with contingencies of the innovation process. Frank Piller's research is supported by grants from the European Commission, the DFG, the BMBF, and other institutions. He has consulted and delivered executive workshops for many DAX30 and Fortune 500 companies. As an investor, board member or scientific adviser of several technology companies, he transfers his research into practice.

Published in COMMUNICATIONS & STRATEGIES No. 89, 1st Quarter 2013

> For more information about our activities: www.comstrat.org

Contact
COMMUNICATIONS & STRATEGIES
Sophie NIGON
Managing Editor
s.nigon@idate.org

21 Mar 2013

Interview with Henri VERDIER Director of Etalab, Services of the French Prime Minister

Published in COMMUNICATIONS & STRATEGIES No. 89, 1st Quarter 2013


Open Innovation 2.0
Co-creating with users

This issue of C&S analyses the theme of open innovation, with a focus on co-creation with end-users

Summary of this issue: Innovation has always been a central element of competition dynamics. During the last decades, globalization, deregulation, internet, new technologies, the digital revolution, and consumers' behavior have radically modified the innovation process and the competition structure. In many areas, the offer is rich and diversified: innovation is a unique opportunity to create competitive advantages necessary for growth. Among the general topic of open innovation, this special issue focuses on users' involvement in the innovation process. It offers a collection of papers providing interesting opinions, experiences, advances and evidence.

Henri VERDIER

Exclusive:
Interview with Henri VERDIER
Director of Etalab, Services of the French Prime Minister

Interview conducted by Gilles FONTAINE (IDATE, Montpellier/Paris)

C&S: Henri Verdier, you were co-author of L'âge de la multitude ['The age of the multitude'], which explains how individuals, outside organisations, are now crucial to creation and growth. Do they play a particular role in the process of innovation of products and services?

Henri VERDIER: Certainly.

Their first role, which we often forget, is to choose, from among all the inventions, the ones that they will turn into true innovations - that is to say, the ones that will be transformed into progress, both because the audience has adopted them and because of the uses it will make of them. It is in this sense that we speak of "use-driven innovations": not because they are driven by the value of use, as marketing sometimes imagines, but because they are driven by "usage patterns and customs", by the manner in which society organises itself around these innovations.

But this isn't something that dates back only to the beginning of the digital age - it is the common law of human innovation. What has changed of late is the number of individuals who are educated, equipped and connected and who, by virtue of the sum of their creations, or even their small contributions, can support radical innovations, as we see on the Internet.

This is rather good news. But at the same time, we must be aware that this "free labour" of Internet users, whether active (voluntary contributions) or passive (through data or even usage history), can also be monopolised by major platforms. Most of the time, Internet users feel that the service rendered to them by these platforms is worth the contribution they make. But it is clear that this can raise a few questions in terms of the protection of privacy and international taxation. Thus Nicolas Colin, co-author of L'Age de la multitude, was tasked with reflecting on the tax implications of this new way of creating value.

Are the social networks the nexus of this open innovation, driven by users?

Yes, if you accept a broad definition of "social network". The big social networks are of course major players in the digital world. But the phenomenon goes far beyond what happens on Facebook or LinkedIn...

It is quite easy to see that most of the major digital applications have a social dimension, even if you wouldn't call them "social networks" per se. Such is the case of Flickr, of digital cameras that automatically connect to YouTube, of Google searches, etc. The famous online teaching service Coursera probably owes its success not to the quality of its courses (other prestigious universities had already launched similar services), but rather to the power of interaction it affords among students. As someone put it: "People had never seen an educational project that delegates part of the work to the students themselves."

More broadly, one could say that communities are the basic unit of the Internet. The fact that you have friends, belong to a community, share your interests, support a cause, etc. makes you a stakeholder in the Internet. There are therefore social networks beyond the realm of Facebook and Twitter. The great experiments in crowdfunding, crowdsourcing, viral communication, etc. do not necessarily go through the social networks. So we mustn't neglect any of the networks that emerge on the web: massively multiplayer games, virtual campuses, virtual currencies with their user communities, NGO activists – all of these have the potential to greatly empower the individual.

What is your take on the living labs, which hope to bring users together upstream in the innovation process?

It's an excellent approach when it doesn't get caught in the rut of being an overly utilitarian "test bench". Living labs, like all the third places found in the digital ecosystem (coworking spaces, Fablabs, etc.), are fertile when they are alive. They must leave room for the unexpected, for creative randomness ("serendipity"), develop subtle listening, propose new formats of interaction, find co-creation strategies, etc.

You also presided over the "Cap Digital" Centre for Competitiveness. How can companies rethink their innovation processes to take advantage of this new situation? In particular, how do you see the future of R&D in big companies?

Firstly, I think it is essential that the major technology companies pursue and intensify their R&D efforts. The basic materials of innovation come from research and development, and if there is one characteristic of our times, it is that the pace of innovation continues to accelerate.

One should not, however, confuse R&D with innovation. Innovation is not the natural continuation of R&D. There are big innovative companies that do not have R&D, particularly in the fields of services, content publishing and communications. And where innovation is concerned, every company should learn to better harness the strength of the multitude - for instance by involving their own employees as part of the multitude. The formats of open innovation, listening to and working with one's market, and incorporating design into the heart of the decision-making process are starting to become rather well documented methods.

Does this vision of "open innovation" imply a change in the way intellectual property is managed?

This is a complex question.

Since the Internet became popularised, it has been caught between the opposing forces of openness, open source and free access on the one hand, and closure, protection and privatisation on the other. This tension is structural. On the one hand, there wouldn't be any progress, perhaps not even companies, without information commons (what would science be if the results of research weren't accessible to other researchers?). At the same time, we are well aware that most economic sectors need clearly defined assets to prosper. It is likely that the best answer is to strike a happy medium.

But, personally, I think there is nowadays a tendency to broaden the scope of application of intellectual property too much. Copyright was originally intended for intellectual work that was a creative expression of the author's personality - that is to say, work from his very soul, as it were. I'm not so sure that people have put their soul into all the creations for which this type of copyright protection is being claimed.

You are now the director of Etalab, the agency responsible for promoting open data in France. Could one say that shared data is the prerequisite of open innovation?

Yes, that is what I believe.

This is not the only reason it is good to open up and share public data: citizens also have a right to demand the accountability of authorities, which is the hallmark of democracies. And there are innovation strategies for the administration itself, since creating large open repositories is often a guarantee of improving an organisation's efficacy.

But supporting innovation is clearly a key component of opening up public data. The services developed by citizens, individuals or companies using such data are really quite impressive. We see new ones at every edition of the Dataconnexion event launched by Etalab.

The opening of public data will increasingly become a springboard for industrial policy. It will become a strategy for attracting innovation to one's territory (since these creators work in the territories that have published data), even transforming public action into a platform and preventing these innovations from becoming monopolised by other players.

The opening up of data is often associated with public data. Should companies be encouraged to share their data more? How?

In this respect, the State began before businesses did, which is understandable. The right of citizens to access public information dates back a long time. It is enshrined in the Declaration of the Rights of Man and of the Citizen, and has been part of French legislation since the CADA Law of 1978.

The debate on the opening up of public data has therefore not been too concerned with data held by companies. But I think the question will arise one day.

It will be raised because large companies too will discover the potential to boost efficiency by placing large repositories online and increasing their transparency. It will also be raised because companies will one day likely have to identify the "information commons" that they own and which must be made accessible to all. This will probably happen when the big data collectors reach such monopolistic proportions that States are forced to require that they open up these new kinds of infrastructures to competition.

Biography

Since January 2013 Henri Verdier has been the director of Etalab (tasked with the opening up of public data), coordinated by the Secretariat-General for the Modernisation of Public Action, itself a part of the Office of the Prime Minister. An alumnus of École normale supérieure, Henri Verdier was the CEO of Odile Jacob Multimédia, where his work included developing a set of teaching materials for the educational programme La main à la pâte along with Georges Charpak. In 2007, he joined Lagardère Active as Director of Innovation. In 2009, he joined Institut Télécom as Director of Foresight, responsible for establishing the think tank "Digital Future". He is also co-founder of MFG-Labs. He is a founding member of the Cap Digital Centre for Competitiveness, where he served as Vice-Chairman from 2006 to 2008 and then as Chairman of the Board from 2008 to January 2013. He is a member of the Scientific Council of Institut Mines-Télécom, as well as a member of the ARCEP Foresight Committee and the CNIL Foresight Committee. Henri Verdier is the co-author of L'Age de la multitude (Armand Colin, 2012).

Published in COMMUNICATIONS & STRATEGIES No. 89, 1st Quarter 2013

> For more information about our activities: www.comstrat.org

Contact
COMMUNICATIONS & STRATEGIES
Sophie NIGON
Managing Editor
s.nigon@idate.org