Europe’s telecommunications sector is in a major state of flux these days, due to a combination of changes in Brussels and an acceleration in market consolidation deals.
• The formation of a new Commission in Brussels, and the introduction of Juncker’s investment package, which could include funding for telecoms infrastructure, although no figures or details on the allocation scheme have yet been released.
• The particular way this new Commission’s Digital Single Market project team is being structured around a vice-president and a commissioner. Many have commented on the lack of cohesion between the statements issued thus far by Messrs Andrus Ansip and Günther Oettinger.
• Questions over Ms Kroes and the outgoing Parliament’s legacy. The recent Council of Telecommunications Ministers demonstrated how hard it will be to stick to the agenda, which focused on three issues: a) net neutrality, b) roaming in Europe, c) coordinating spectrum management policies. Despite being substantial, the revisions to the initial text proposed by the Italian presidency failed to achieve a consensus, and were rejected by operators and most member states.
• The launch of a new review. The Commission will also need to establish a timetable to begin reviewing the directives as planned. The process will include a review of relevant market definitions, and will probably result in the proposal of an even shorter list of markets subject to ex ante analysis of SMP.
• By replacing “European single market for telecommunications” with the “Digital single market” in Europe, the Commission is also looking to highlight telecoms-adjacent legal provisions, such as those relating to privacy and intellectual property. In another vein, it also needs to back the OECD’s efforts to put an end to OTT companies’ abusive tax evasion practices, while its antitrust powers will need to rule on whether Google is abusing its dominant position. And coming up soon are the Transatlantic Trade and Investment Partnership (TTIP) negotiations over new digital economy dossiers…
At the same time, we are tempted to say that real life goes on, and may even be accelerating.
Europe’s telecom services markets are still in a slump, but many do seem to be getting back onto a more solid footing. The most striking trend is the rising M&A fever: after the finalisation of major deals in Germany (O2/E-Plus) and in France (Altice/Numericable-SFR), we learned that BT, which had been displaying growing ambitions for several quarters (a stepped-up fibre rollout plan, combined with the launch of BT Sport and the acquisition of 4G frequencies), was also looking to take over O2 UK or EE. Hutchison (3), the smallest of Britain’s four mobile operators, has said it would be willing to buy whichever operator BT does not acquire.
A veteran proponent of fixed-mobile convergence in the superfast era, Vodafone – which had already integrated Cable & Wireless in the UK, Kabel Deutschland in Germany and cableco Ono in Spain – has now set its sights on Liberty Global. Liberty Global is present in a dozen countries in Europe, and its takeover would give Vodafone majority control over Virgin Media in the UK, whose cable network covers close to 50% of British homes; full ownership of UPC-Ziggo, which covers 75% of households in the Netherlands; and, in its main market of Germany, control over the top two cable companies, covering close to 90% of the country’s households.
Of course, antitrust authorities will need to examine all of these deals, and may well impose certain “remedies”. We should also add that other (public) proposals are also underway, including: Orange’s bid to bolster its assets in Spain by taking control of Jazztel; Altice/Numericable’s acquisition of Portugal Telecom’s Portuguese assets; and the possibility that Brazil’s Oi (with which the Portuguese incumbent had formed a joint venture) could merge with Telecom Italia or its Brazilian subsidiary, TIM. Should these deals go through – or at least the major ones – they will tip the balance of power dramatically, which could well trigger another round of M&A deals in response.
Securing loans does not appear to be an issue, and the financial markets are apparently not put off by debt to EBITDA ratios of more than four or five to one. But analysts will be scrutinising the deals’ P/E multiples and the true outlook for synergies (or at least positive scale effects) being forecast for the future entities’ EBITDA.
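The two yardsticks analysts watch here are simple ratios. As a minimal sketch (using hypothetical figures, not drawn from any of the deals discussed above):

```python
# Illustrative only: the figures below are hypothetical, not taken from any actual deal.

def leverage_ratio(net_debt: float, ebitda: float) -> float:
    """Net debt / EBITDA - the multiple the financial markets scrutinise in telecom M&A."""
    return net_debt / ebitda

def pe_multiple(market_cap: float, net_income: float) -> float:
    """Price/earnings multiple for a target or combined entity."""
    return market_cap / net_income

# A hypothetical acquirer carrying 40 bn EUR of net debt against 8 bn EUR of EBITDA
# sits right at the five-to-one bound mentioned above:
print(leverage_ratio(40e9, 8e9))   # 5.0
print(pe_multiple(30e9, 2e9))      # 15.0
```

Synergies matter because they raise the EBITDA denominator of the first ratio, which is why forecasts for the combined entities’ EBITDA weigh so heavily on how these deals are judged.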
The "TV Everywhere" Executive Seminar at the 36th annual DigiWorld Summit helped shed some light on new viewer behaviours, the possibilities opened up by hybridisation and cloud technologies, and how they will impact video watching down the road, both inside and outside the home.
TV Everywhere is becoming a habit
On-demand viewing has become one of the most widely adopted new behaviours, whether via catch-up, TVoD or SVoD. IDATE nevertheless underscored that on-demand access to content can be limited by regulatory or contractual restrictions, especially when it comes to feature films.
Accessing video content from anywhere is not yet commonplace behaviour, particularly for users on the move. This can be explained in part by technological constraints in certain locations, and by the fact that TV service providers do not make all of their content, particularly live programmes, available for mobile viewing.
• Any device
Despite the proliferation of new screens and the rapid rise in viewing on smartphones and tablets, most users still watch their programmes on their home TV, even in the most mature markets. To achieve a convincing “any device” viewing experience, service providers also need to address the issue of continuity/handover between devices.
François ABBE (Founder & CEO, Mesclado) recalled that Disney had signed an agreement with Google and Apple, thanks to which a Disney video purchased from the iTunes store could be played on an Android device, and vice-versa, which is a good first step towards providing users with a seamless experience.
The television will be the source of different challenges, depending on the broadcaster and their specific distribution issues. Vincent FLEURY (CTO and Deputy CEO for New Media, France Medias Monde), whose core challenge is to deliver the same services to all users, sees the TV as “just one screen among others,” whereas Laurent FRISCH (Vice-President Digital, France Televisions) and Valery GERFAUD (General Manager, M6 Web) believe the television remains the screen of choice for watching live programmes, and for catch-up or time-shifted viewing.
• Any content
Because the ways for accessing video products are so fragmented, and given the glut of content available, consumers today appear more willing to pay for services that can meet their need for more structured and personalised solutions.
For IDATE, one of the biggest challenges for video services is to deliver personalised content that also takes the viewing device and situation into account. Here, the issues surrounding big data and recommendation engines are especially crucial.
Didier LEBRAT (CTO, Sky) also stressed the massive investments that Sky has made in content, and in improving quality to create a sense of value for its customers, and noted that Sky+ On Demand users now watch more pay-TV content than free-to-air content.
Hybridisation and new video distribution configurations
Vincent GRIVET (Group Head of Broadcast Development, TDF) reminded us that all-IP for video is not yet possible, but that on-demand viewing is on the rise. The two trends thus require hybrid solutions to be deployed. While HbbTV is expected to provide a response to this demand, it does not appear to be quite as hybrid as what was being promised three years ago. Vincent FLEURY believes the industry expected a lot more from connected TV and that, today, it is much easier to access the Web on a TV set via ISPs’ boxes and mobile solutions. His main focus is on bringing connectivity to locations that are today without it.
Darko RATKAJ (Senior Project Manager Technology & Innovation, EBU) focused on the needs of today’s users, and drew a parallel between classic TV viewing, which remains a shared experience, and on-demand viewing which is still a sort of top-up. He believes the issue is knowing whether a video platform truly meets users’ needs, in terms of both quality and coverage – the latter being not just geographical but demographic as well. Ratkaj says that hybrid solutions do exist, but not in the sense that a specific network or a technology can solve all the problems. What is important is delivering the right service under the right conditions, not whether it is delivered over a broadcast or broadband network.
Yves BOUDREAU (VP Mediacom Technology Strategy, Ericsson) has been watching the “prosumer” phenomenon develop: if vendors do not design solutions tailored to users’ behaviour, prosumers will create their own "Frankenstein video" solutions. Broadcasters, programmers and technology providers need to join forces to create products that satisfy consumers’ new demands, even if it means running the risk of it being the solutions provided by Internet giants like Google, Amazon or Apple that gain the upper hand.
We are seeing hybridisation develop around broadcast networks that have no native return path. But the momentum depends on the TV service’s business model:
• pay-TV providers such as Sky or DirecTV are capable of defining the user experience thanks to their DVRs, and so meet customers’ new demands efficiently;
• for free-to-air TV, the problem of standards currently appears to be a real obstacle to the development and adoption of truly viable hybrid solutions. Broadcasters need to make sizeable investments in developing their applications, which is a direct result of the huge technological fragmentation of application ecosystems. Laurent FRISCH pointed out that, among the multitude of technologies that exist today, not all are equal. TV networks are not necessarily taking a position, but rather waiting for a solution to take hold as the industry standard. Without a single or unified solution, Smart TV will not take off.
Jean-Hubert LENOTTE (Director of Strategy, Eutelsat) notes that network operators are also taking initiatives in the arena of hybridisation, spurred by the fact that, while consumers still watch a great deal of linear TV, the time spent doing so is not increasing, whereas the time spent watching on-demand and time-shifted programmes is on the rise. So market players need to be capable of providing live interactivity to boost the appeal of programming. While clients such as Sky and Canal+ want to keep control over their viewers and develop their boxes and products themselves, Eutelsat is developing a smart LNB solution for other clients – in other words, two-way LNBs that make it possible to integrate a return path directly in the user’s satellite dish, and so do away with the need to connect the STB to the Internet. He also reminded us that satellite makes it possible to deliver on-demand content in HD and even UHD to locations with no broadband coverage.
For TV channels, hybridisation also means the development of new business models and new partnerships:
• for Didier LEBRAT, marketing the Now TV OTT service allows his company to target consumers who want a lot of flexibility, and do not necessarily want to subscribe to BSkyB’s satellite TV plan;
• Vincent FLEURY believes that hybridisation does not apply only to technical networks but is also a way to access new consumers: he underscored the importance of syndication, and recommended using the means made available by new video platforms such as YouTube and Facebook;
• according to Laurent FRISCH, broadcasters and new entrants will need to create new “hybrid” TV channels that combine linear and non linear programmes, to reinvent their value proposition;
• Valery GERFAUD reported that the share of the M6 group’s ad revenue generated by catch-up TV is proportionate to the time spent watching the network’s catch-up TV (compared to its live programming), thanks to solid monetisation of catch-up TV.
Lastly, Marc LE DAIN (Associate Partner, IBM Consulting Services) stressed that hybridisation also applies to customers, whose behaviour differs depending on the type of programme being watched (his presentation is available on SlideShare).
Uncertainties over switching to a unicast-only model
Telcos’ and cablecos’ networks both have a return path that enables the development of advanced video products for their pay-TV customers. Plus, their point-to-point networks can use software-based security solutions that are cheaper than the broadcasting world’s conditional access systems.
Yves BOUDREAU reminded us that the Internet was not initially developed to distribute TV, and would not currently be capable of taking over from free-to-air and pay-TV if broadcast networks were shut down in the near future. There are still lingering questions over how much telcos would need to spend to satisfy consumer demand under a unicast-only model. For Jean-Hubert LENOTTE, the combination of broadcasting and broadband is still the most efficient solution today, especially from an economic standpoint.
Distributing TV services via LTE broadcast, thanks to eMBMS technology, is another possible new alternative for video distribution. Pierre-François DUBOIS (VP of Product Development, Orange Technocentre) pointed out that all LTE smartphones are already outfitted with an eMBMS chipset capable of receiving broadcast streams.
The technology has already been deployed commercially in South Korea, and is expected to develop in other countries soon, even though uncertainties remain over the right business model, especially on mobiles. For Yves BOUDREAU, the combination of broadcasting and LTE does make it possible to create a product that consumers could be willing to pay for.
Cloud technologies’ growing role in video distribution
The development of cloud-based television and video distribution solutions is upending how all of the TV industry’s veteran players operate. Cloud TV technologies make it possible to move steadily to a more flexible model that enables swift rollouts for new services, and which alters the investment structure to an on-demand model.
nPVR technologies, for instance, make it possible to move the intelligence into operators’ networks, which would mean that STBs would no longer need to be equipped with a hard drive. Valery GERFAUD nevertheless pointed out that, should this type of solution develop, it could very well undermine catch-up TV revenue.
Cloud technologies also make it possible to solve new editorial issues tied to Social TV, such as the show Rising Star, which is centred on interactivity with viewers. According to Valery GERFAUD, incorporating interactivity into the very heart of a TV programme may be very popular with viewers, but it also creates new technical issues that need to be managed. Only the cloud enables broadcasters to handle such huge surges in traffic, from a flexibility and cost perspective. Mr Gerfaud believes that quality of service remains a very real problem, as users will quickly turn off a poor-quality video, which means the provider loses money.
For Xavier POUYAT (Senior Program Manager, Azure Media Services), the cloud also allows content to have an existence that goes beyond the aired programme: e.g. for an interactive episode of the series "Bref", more than a million personalised videos were generated in three days, thanks to the cloud.
Source: Canal Plus
These technologies are also expected to be crucial in the coming years to enabling TV services to make the transition to ultra high definition, which represents both a technological and economic challenge, as Jérôme RENOUX (Regional Sales Director, Digital Media, Southern Europe, Akamai Technologies) reminded us. The introduction of new compression formats, such as HEVC, will no doubt also make a vital contribution to future developments, for both HD and UHD. Pierre-François DUBOIS hopes that, thanks to HEVC, 85% of Orange’s IPTV customers will have access to HD programming.
They are also likely to play a major role in merging and streamlining workflows for TV industry players all down the line, enabling them to tackle live and on-demand viewing on any device imaginable.
Not just a technical, but a legal issue as well
TV Everywhere and the cloud naturally create issues in the realm of user identification, and so of privacy and data protection.
While the trend around the world is towards monetising internet users’ personal data, Alain BENSOUSSAN (lawyer with the firm Alain Bensoussan) reminded us that the notion of data ownership has no legal status: Facebook has thus given its one billion users a right that, legally speaking, does not exist, as no sovereign state recognises ownership of personal information.
In addition, while some 100 countries have adopted data protection regulations, it appears that, with big data, individuals have no control over the data that pertain to them, or do not even know that such data exist. The important thing with big data is not knowing the name of the person behind the screen, but rather the ability to predict, with more than 90% accuracy, who is there and what they are going to want. So we are moving towards anonymous personalisation.
Lastly, Alain BENSOUSSAN introduced the concept of “privacy by design”, which consists of designing products and services with “privacy inside”, to reduce their anxiety-provoking aspect for users – an anxiety that is one of the chief criticisms levelled at Facebook.
If you want to go further, read "Live TV vs. on demand viewing: what does tomorrow’s world have in store for broadcasting?"
> Interested in our work? You will find our Future TV 2025 study in our shop.
Interested in our guests' presentations?
> Here is the general presentation by Florence Leborgne, from IDATE.
> Here is the presentation by Marc Le Dain (Associate Partner, IBM Consulting Services): http://fr.slideshare.net/DigiWorldIDATE/tv-everywhere-41808106
> Here is the presentation by Laurent Frisch (Vice-President Digital, France Televisions), "TV + Digital".
Published in COMMUNICATIONS & STRATEGIES No. 96
Interview with Jean-Louis MISSIKA, Deputy Mayor of Paris in charge of urban planning
Conducted by Yves Gassot, CEO, IDATE-DigiWorld Institute
C&S: The Smart City concept is often criticized for seeking new markets for digital technology rather than tackling the phenomena that make the management of our cities increasingly complex. What is your view?
Jean-Louis MISSIKA: I do not think it is a fair criticism. Digital technologies have undeniably created the conditions for important changes in our ways of living, inhabiting and consuming. They are now part of our everyday lives and, surely, their impact will increasingly spread throughout the multiple ways we, as humans, interact.
Beyond what they create as opportunities for individuals, digital technologies are fundamental for cities – and among them the city of Paris. Urban systems are confronted with major challenges on the economic, social and environmental fronts. Energy transition, and more generally the management of scarce resources, climate change and the biodiversity challenges drive us to analyze all the solutions available now and in the future to build a more sustainable city - the city of tomorrow. Digital technologies and, in particular, their potential in terms of coordination and rational use of scarce resources, are high on the policy agenda. This is not simply to create a market for them; this is about using all the possibilities offered by technology.
I definitely think it can be a win-win development for both the city and the companies, if these firms work with those involved in the challenges of the city, like urban planners and system operators.
Additionally, we are witnessing a boom of young, innovative companies and startups, but also of citizens themselves – both from Paris and beyond – who develop digital solutions for the city. This is clear evidence of what is at stake here: it is up to local authorities to allow the digital revolution to spread through society, so that innovation occurs not only through large companies but also thanks to citizens' initiatives.
C&S: How would you rate the strategy of Paris, using a broad comparison between the very holistic, top-down approach of projects emerging in the context of new towns and in Asia, and the more bottom-up approach that seems to be primarily based on using multiple data repositories ('open data') associated with urban systems?
J.-L. M: We are definitely leaning towards the "bottom up" approach to building Paris as a smart city.
Collective intelligence is an effective way to source the best ideas. And it does work well in Paris in part because we provide people with the appropriate means to implement projects: workspaces, coaching, financing, public spaces to experiment… and data.
This is one of the pillars of a smart and sustainable city: a place where the technology is used for people, by people, to include them in the life of the city and in the process of public decisions.
Let me refer to a recent project. In the six months since the election, we have worked to achieve greater transparency and citizen involvement in the City's operations, by creating a platform for the development, discussion and adoption of community projects. These are chosen by Parisians and financed through a participatory budget: 5% of the total investment program, which represents 426 million euros, has been earmarked for projects chosen directly, through a vote, by Parisians.
Within the next few months, Parisians will even be able to share the benefit of their expertise and creativity by suggesting investment ideas directly.
Another way to involve people is crowdsourcing. We have developed the "DansMaRue" mobile application, which Parisians use to report local problems and even identify spots for "urban greening" (buildings, walls, squares, abandoned urban places). It is this type of exchange with Parisians that we want to implement to make our City better.
This is a genuine urban revolution in the making: the role of local governments of world-cities is to understand, support and leverage the benefits of this revolution. European cities, I believe, have a major role to play in leading this transformation. Their governance is well geared towards citizen involvement and this should alleviate the risks of the "systemic city" or the "cybernetic city".
C&S: Do you have any models or at least references to guide your project for Paris?
J.-L. M: Many interesting models exist throughout the world and we are discussing extensively with many cities facing the same challenges.
That being said, from our discussions we retain one key conclusion: each of these cities has developed its own good practices within its own cultural frame. I think there is no single model of the smart city, and it would be ineffective to copy-and-paste alien models or ready-to-use solutions in a fast-changing environment.
We have our own model based on an iterative approach that uses successful experiments in Paris. We have been working for several years to make Paris a strong city in the digital sector and a breeding ground for innovation. I would say that over the last 10 years or so we have created the conditions for the emergence and development of a strong ecosystem. Thanks to all these efforts, Paris has experimented a great deal in recent years and is now a world leader in innovation, and most certainly the top European city.
There are well-known examples of successes such as Velib', Autolib' and Paris Wifi, among other experiments such as heating a residential building with the energy produced by data centers, data visualisations of the Paris transport system, and smart street furniture. Many of these locally-grown success stories are helping us to build our own smart city project and to deploy these experiments on a larger scale, as standards for the city of tomorrow.
Paris is actually creating international benchmarks for the smart city, though it is not as recognized as it should be. Through calls for innovative projects led by the Paris Region Lab at the initiative of the City, we facilitate the emergence of intelligent solutions on subjects as diverse as intelligent street furniture, energy efficiency or home support for seniors. Paris provides entrepreneurs and businesses of all sizes with a single territory open to trials. It also runs a network – an open innovation club – that organizes meetings between the largest companies and startups. We are even deploying this initiative in other French cities, at their own request.
C&S: What priority initiatives have been selected for the Smart City project in Paris?
J.-L. M: One billion euros will be invested by 2020 in order to make Paris the international benchmark for innovation related to land use, participatory democracy, sustainable development, the digital economy and energy transition.
Our smart city approach is threefold: open city (open data), digital city (potential of digital technologies and their application to improve the quality of life of Parisians) and the inventive city (which is built by transversal networks and innovation).
Each of these pillars shall contribute to our four main targets.
One of the most important is food supply: no city in the world is capable of ensuring its food self-sufficiency in the present state of our know-how, and our food is responsible for almost 40% of our ecological footprint. We have recently launched a call for projects titled "Innovative Urban Greening", which consists, among other objectives, in experimenting with the urban agriculture of the future.
Another challenge is the city's energy. 90% of the energy of the Paris metropolis is provided by fossil fuels or nuclear power; from a territorial point of view, it is an imported energy. In addition to the ongoing effort on renewable energies (with a certain success for geothermal energy), the focus is increasingly on energy recovery. We must go ahead and draw on the city's hidden resources. These resources are at the core of the circular economy: waste produced by someone is a resource for someone else.
An example in Paris is the Qarnot Computing startup, which has invented a radiator-computer: by dissipating all the energy consumed by data processors in the form of heat, its Q.rads make it possible to heat any type of building (housing, professional premises, collective buildings) free of charge and ecologically, according to the needs of their users. A low-rent housing building has been fitted out with these Q.rads radiators: the inhabitants no longer have to pay for their heating, and their ecological footprint is zero.
The third challenge is urban mobility. This can no longer be dealt with through the option of car versus collective transport. New systems of mobility are emerging: they concern the technology of vehicles (electric cars, rubber-tired tram), but above all the technology of services (rental among individuals, sharing, car-pooling, multi modal applications, etc.), and they often open the way for the emergence of new chains of values and new players.
In Paris, the massive adoption of Autolib' and Velib' shows the power of attraction of sharing and self-service.
The last challenge is planning for the future of urban spaces and architecture. In order to take into account new ways of working, living or trading, we need to be able to test multifunction buildings that combine housing, offices, community spaces, showrooms and services to people. This mixed use on the scale of a building implies more flexible Local Urban Plans and an adaptation of safety rules. The new way of working implies home offices, mobile offices, coworking and remote-working centers. The new way of living requires community spaces in the building, greater use of roofs, community gardens, shared utility rooms, services to the person, and sorting and recycling. New trading methods integrate ephemeral shops, shared showrooms and fab labs.
C&S: Paris as a city, and you in particular, have worked hard to ensure that digital is also an opportunity to redevelop business in Paris, which is threatened with becoming a purely residential city. What connection do you see between support for start-ups, incubators and nurseries, and a policy of the Smart City type?
J.-L. M: The City of Paris is an innovative city at the forefront of digital technology, as evidenced by the ranking of PricewaterhouseCoopers. The emergence of Silicon Sentier in the heart of Paris in recent years, or important events such as Futur en Seine and the Open World Forum illustrate the growing dynamism of our city in terms of digital innovation.
Notably, in our incubators, many innovations are related to digital technologies. They create value in all areas of the city and aim to serve people in a better way.
As an example, the Moov'in city competition, launched in June 2013 by the City of Paris in partnership with the RATP, SNCF, JC Decaux and Autolib', aimed at bringing out new web-based and mobile services focused on mobility in Paris and the Ile de France region. One hundred ideas were generated through this process; seven of them were awarded a prize. Among them, the Paris Moov' solution is a route-calculation application that integrates all public transport modes available in the Ile de France region and suggests activities once you arrive at your destination.
Some incubators and clusters that we support are directed specifically to the city and urban services (energy, transport, water, logistics, etc.).
This is for example the case of the Paris Innovation Massena incubator where we work with large corporations like SNCF or Renault. We help them and they accompany us to build our Smart City project.
In addition, the creation of incubators and fab labs continues with determination and displayed ambition, particularly with the converted Macdonald warehouse or the Halle Freyssinet, the future world's largest incubator (1,000 startups). New places at the forefront of innovation, combining incubators and coworking spaces, will continue to be created, and the city's innovation ecosystem will be internationalized. This is the only way for Paris to remain among the most attractive and competitive cities in the world.
C&S: How do you pilot a 'Smart City' project? (Is it through a task force outside the main city services? Or through a cross-functional structure involving all the services?) How did you structure management of the Paris project?
J.-L. M: The smart city is a cross-cutting subject, which means we have no other way to do it than keeping good interaction among the administrative units.
All large cities are confronted with the issue of finding the appropriate scale of governance and new governance tools. The model of organization of local administrations is outdated. The large vertically-organised departments (urban planning, roadways, housing, architecture, green spaces) are facing challenges – intelligent networks, project management, citizen participation – that require much more cross-cutting and horizontal coordination.
Paris has historically been organized in large vertical services to deal, for example with roads, architecture, urban planning and so on. For this reason, we have chosen to address the question of the Smart City within the City of Paris through a steering committee composed of elected officials and a cross-cutting taskforce driven at the General Secretariat - the body that oversees all directions.
This "smart city" mission is a project accelerator. Its aim is to raise awareness on this subject within and throughout the services but also to manage the relationship with our key partners of major urban infrastructure. It supports the deputy mayors on each of their missions and brings global thinking to structure a coherent overall strategy in the multiplicity of initiatives and concrete actions led by all the services.
C&S: On a more mundane level, the deployment of digital applications in the city is also organized on the basis of a telecommunications infrastructure (fiber access, 4G, WiFi, ...). Are you satisfied with the existing equipment and deployments underway at the initiative of private operators? How do you cooperate with them particularly in light of concerns over radio transmitters?
J.-L. M: While the City of Paris has no formal jurisdiction over this subject, we consider it our role to ensure that all Parisians can access clear and transparent information on the deployment of base stations, and to take their concerns into account while ensuring the development of new technologies. This led us to sign a mobile telephony charter with the telecom operators in 2003. Its latest version, signed in 2012, set maximum exposure levels for radiofrequency fields and clear procedures for consultation with residents.
Jean-Louis MISSIKA is deputy mayor of Paris in charge of urbanism, architecture, projects of Greater Paris, economic development and attractiveness. From 2008 to 2014, he was deputy mayor of Paris in charge of innovation, research and universities. Prior to his local mandates, his professional career included various managerial positions in the public and private sectors.
François Barrault, President, IDATE
Opinion piece first published in Les Echos, 28 November 2014
Every, or almost every, European has one or several mobile phones. Most have a smartphone. Despite which, the mobile revolution has only just begun.
By the end of the decade, there will be more than 9 billion mobile users on the planet. Mobile traffic will be ten times what it is today, increasing at three times the rate of wireline traffic. Mobile and wireless (Wi-Fi) systems will be the top clients for optical fibre networks. Video will account for more than 50% of mobile traffic. In Africa, cellular networks and €50 smartphones will add hundreds of millions of new Internet users to the online population…
This will create several challenges for Europe. The first concerns our telecommunications industry, which has not been spared the vicissitudes of the latest developments. At least there is an opportunity: 4G. A new, more powerful generation of mobile networks, capable of providing Internet applications with high-quality access. Massive investments, but also an opportunity to differentiate oneself and break out of a somewhat frustrating competition model that tends to boil down to price alone. In most European markets, in fact, and despite the advent of 4G, revenue continues to decline and margins are struggling to stabilise, which naturally undermines telcos’ investment capabilities. A reasonable degree of consolidation in national markets is also needed to put an end to this type of price war, while waiting for the emergence of a European market populated by somewhat more pan-European players. Let’s be optimistic: the first major M&A deals are about to be approved, and a new European Commission is almost in place. The adventure is only just beginning. We can also add that 4G is a universal standard, a first for the industry, and that its future evolution (LTE Advanced) is already in the works, offering concrete improvements including even faster connections, while the first spectacular 5G offerings will no doubt be upon us before the decade is out.
Europe’s telecom industry, which was a mobile market leader for some time, fell behind with 4G and is far from having caught up, if we compare its 4G status with that of South Korea or the United States. It would be a catastrophe for the situation to repeat itself with 5G. First, for our operators and consumers. But also for the telecommunications industry associated with it, and which continues to represent one of Europe’s far too rare digital assets thanks to companies such as Ericsson, Alcatel-Lucent, Gemalto and Oberthur.
But there is a second challenge, as well. These 4G and 4G+ networks are the first to be all IP. They will accelerate the transition from a fixed to a mobile Internet that began with 3G and Wi-Fi. This is true in both emerging economies and our own markets. And for the Internet’s top players. Google was quick to see the need to invest in Android. Facebook’s message to market analysts over the past two years has focused chiefly on the growth of its mobile users. Amazon is investing in its own tablets to protect access to its e-commerce. Netflix and YouTube have understood that video was going to account for a major percentage of mobile traffic. And even Microsoft has adopted the slogan: “mobility first!”.
At the same time, we want to believe that the future is not written in stone. A very profound transformation of the Web has begun thanks to mobile: integration of location-based solutions, the intimacy of wearable technology (glasses, watches, clothing) that can act as our wallets, monitor our health and our environment (home, car, smart city) in real time… We have seen machine-to-machine (M2M) begin to really take off in recent months, and it already represents millions of connections. The Internet of Things is becoming a reality. All of these (still tiny) waves that are building up to the future mobile Internet will be combined with the power of cloud architectures, to constitute a no doubt majority share of the data-driven economy.
These two challenges are closely intertwined, even if each will also play out separately. It would be dangerous for Europeans to become complacent in their views, or to resign themselves to a schism between the network-based economy, which could be the victim of harsh sector-specific regulation, and the economy of OTT applications which must not enjoy the impunity of offshore companies. Lastly, in addition to technological feats, we need to recognise the tremendous importance of a third challenge: namely creating trust between the industry’s players and regulators, and between those two parties and the consumer.
Chairman of IDATE
These topics will be revisited at IDATE’s 36th annual DigiWorld Summit, next year in Montpellier. Stay in touch at: www.digiworldsummit.com
At the 2014 DigiWorld Summit, IDATE unveiled its latest market report, devoted to the Internet’s evolution over the next 10 years.
As the Web undergoes massive changes brought by ubiquitous mobility and verticalised consumption, IDATE has published a report that explores the future of the Internet, through an analysis of technological trends, user habits, business models and regulation. Using a scenario-based approach, it looks at the role each of the market players will play, and delivers qualified data for the global Internet services market up to 2025.
Vincent Bonneau, Head of IDATE’s Internet Business Unit, who oversaw this report, says that: “The Internet is a fundamental disruption for the ICT industry in general and even for other (non-ICT) industries, leading new and old players to operate with lower revenues and cost per unit. The effects of Internet have already been quite impressive, capturing 229 billion EUR in 2013 and destroying value in IT, content and telecom industries, but these are merely effects and have not yet had their full-scale impact.”
Internet-related disruptions originate from an open technical environment that leverages many standards for core technologies, including networking, leading to some form of network agnosticism. The parallel shift towards digitisation is becoming a progressive softwarisation, starting with information and data but now also reaching hardware and vertical industries. Business models increasingly replicate the economics of software: expensive to produce but cheap to reproduce, with economies of scale and near-zero marginal cost leading to bigger addressable markets. This ‘perfect’ picture is challenged, though, by the development of today’s Internet, with its numerous (upper-layer) proprietary technologies, local regulations, commercial barriers and the significant costs of non-software assets and marketing.
The major uncertainties around evolutions towards 2025 are concentrated around two main questions that can help to draw the lines between four very different scenarios.
• Availability and openness of data: Personal data is at the core of the business model of many service providers, but privacy and security are also major concerns for most users. Internet users and governments are facing a trade-off between (cheap) access to innovative services, requiring advanced technologies and adequate funding, and the control and sharing of the data in an overall environment of relatively limited trust.
• Ecosystems: At the same time, the development of major platforms, developing their own technologies, is challenging the open nature of the original Internet ecosystem. Local regulations and open standards could limit the influence of platforms, as well as business models more focused on hardware and physical product sales.
The most likely scenario to prevail is, broadly speaking, a continuation of today’s ‘Platform Wars’, where leading Internet and retail platforms concentrate ever more data. Leveraging their own infrastructure and a relaxed regulatory environment, they would provide the most innovative services around a mix of advertising and hardware and product sales and capture most of the 875 billion EUR market by 2025 (CAGR of 12% for 2013-2025).
The other scenarios are more extreme options. In an ‘Open Innovation’ scenario, there are no more dominant players, thanks to an environment with plenty of interoperable solutions and stricter competition rules. Service providers combine their own technology in real time with third-party data to provide advanced innovative services, mostly based on targeted advertising, leading to a market of almost 1,077 billion EUR by 2025. In the ‘Low-cost Islands’ scenario, end users would discard services that offer limited privacy and turn instead to paid services that bring strong savings over traditional services, without sharing personal data with third parties. Numerous services would co-exist thanks to advanced standardisation, but would remain relatively little known, so trust levels would not rise.
The low-cost-centric approach would be reflected in an overall market of some 750 billion EUR in 2025. The ‘Pay per Trust’ scenario is more radical, with only a few players providing enough trust thanks to advanced and expensive security mechanisms. Revenues would come mostly not from personal data but from direct payment by users (for services, products and the like), for a grand total of some 678 billion EUR by 2025, the lowest total of all four scenarios for Internet services, though probably not for the ICT industry as a whole.
Source: IDATE, in The Future Internet in 2025, November 2014
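Taken together with the 229 billion EUR figure quoted above for 2013, the scenario totals each imply a compound annual growth rate (CAGR) over 2013-2025. A minimal sketch of that arithmetic (figures from the report; the function and variable names are ours):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values `years` apart."""
    return (end / start) ** (1 / years) - 1

base_2013 = 229.0  # billion EUR, global Internet services market in 2013
scenarios_2025 = {  # billion EUR, totals by scenario in 2025
    "Platform Wars": 875.0,
    "Open Innovation": 1077.0,
    "Low-cost Islands": 750.0,
    "Pay per Trust": 678.0,
}

for name, total in scenarios_2025.items():
    print(f"{name}: CAGR 2013-2025 = {cagr(base_2013, total, 12):.1%}")
```

The ‘Platform Wars’ figure works out to roughly 12% a year, consistent with the CAGR stated in the report.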
Would you like to buy our study about the Future Internet in 2025? This way please.
Published in COMMUNICATIONS & STRATEGIES No.95
Conducted by Yann MÉNIÈRE
Professor of economics at MINES ParisTech,
head of the Mines-Telecom Chair on "IP and Markets for Technology", France
C&S: Could you please introduce yourself and the organisation you are working for/have been working for?
Ruud PETERS: I first joined the Philips Intellectual Property & Standards (IP&S) organisation in 1977, with a background in physics. After taking various positions in the technology and consumer electronics sectors, I was appointed CEO of Philips IP&S in 1999. There I was responsible for managing Philips' worldwide IP portfolio creation and value-capturing activities, as well as for technical and formal standardization activities in the fields of consumer lifestyle, healthcare, lighting and technology, until my retirement at the end of 2013.
I remain affiliated with Philips as a Strategy & IP adviser, reporting to the board member responsible for Strategy and Innovation. I also represent Philips on the boards of various companies that Philips created, or in which it took a stake, in the past. Besides my Philips affiliation, I devote about half of my time to other governing and consultancy roles as a board member of a number of international companies and organisations related to IP.
C&S: What is your/your organisation's approach to IP and patents from a business perspective?
R.P.: Philips has an integrated approach to IP asset management. This includes trademarks, domain names and designs, which are often treated separately in other companies. Philips also has a proactive view of the role of IP as a creator of value. In this view, building an IP portfolio should not be a goal per se, but a lever to support growth and profitability. Accordingly, Philips IP&S is closely involved in the business decisions being made around IP rights. It is responsible for the creation and management of these rights, but also for anti-counterfeiting strategy, the financial aspects of licensing agreements and formal standards-setting issues.
C&S: What is your opinion about the role of the patent system in the economy, and the benefits it can bring to the society?
R.P.: Today more than ever, the economy needs people who are prepared to take the financial risk to invest in new ideas and innovative activities that contribute to welfare. Those people need a reward for the risk they take, and it is the role of the patent system to provide such incentives.
This incentive function of patents should be understood in a broad sense. Patents are highly flexible instruments that open up a broad set of strategic choices. Recouping investments by securing exclusive use of inventions is certainly one of these options, but patents can also be used more proactively. They can be opened up for use by others through licensing programmes or the creation of joint ventures, creating valuable economic activity in the process. In other words, they are the necessary currency for the exchange of ideas and for collaboration.
C&S: Recent years have seen frequent patent battles and controversy in the digital area. Is there something specific to this technology field with respect to patents and IP?
R.P.: Yes and no. On the one hand, the digital area does have some specific features with respect to patents and IP. It is first subject to a continuous trend towards higher IP density, with many devices each embodying a growing number of patented technologies. It is moreover organised around a limited number of platform products, such as operating systems, that enable devices to interoperate. These platforms are subject to strong network effects: they become more attractive the more users they have and the more compatible products are available (such as apps in the case of smartphones). They can also generate strong economies of scale in manufacturing. As a result, competition between platforms is “tippy”: only the few companies that manage to capture enough market share quickly can eventually establish a profitable business. Against this background it is not surprising that companies compete fiercely to promote their platforms. This includes, inter alia, a heavy use of patents in the initial phase. One can nevertheless expect patent battles to recede once market positions have stabilised.
On the other hand, similar evolutions may take place in other sectors – such as the automotive, healthcare or pharmaceutical industries – where digital technologies are becoming pervasive. In the future, I expect products in these sectors to reach substantially higher levels of patent density, in some cases, such as automotive, approaching those of the IT industry. Patents may then become a battleground of the competitive process in these areas too. Patent battles are indeed an inevitable consequence of translating innovative merit into a competitive advantage or, conversely, a disadvantage for the company that pays royalties for borrowing a competitor's technology. They are one part of the market forces that eventually shape industries.
C&S: What are the key challenges or trends that the patent system is currently facing?
R.P.: The key challenge for the patent system is to raise the bar for patent quality. The last decades have seen a sharp increase in patent filings around the world, causing backlogs in patent offices and a drop in patent quality. Based on the results of recent court decisions and inter partes reviews in the USA, some experts estimate that about 50% of all patents can be assumed to be invalid. As a result, one can no longer assume that a granted patent is a valid right.
This legal uncertainty fuels lawsuits, but also criticism of the patent system. I think that both can be avoided with enhanced patent quality. To raise the bar, better searches for prior art should be a priority. While various other regulations are currently being discussed, this is the most obvious and effective way to improve the patent system.
Innovative, market-based means can help patent offices to fight the abuse of low-quality patents. I am thinking, for example, of crowd sourcing based searches for prior art to help defendants against assertion of low-quality patents. Article One Partners is a good example of a company providing exactly this service.
C&S: Where are the main differences in the patents/IPR thinking and practice between both sides of the Atlantic, and between the Western world and Asia?
R.P.: The basics of the system – that is, patent law – are the same everywhere. Hence there are no significant differences in the way companies obtain IP rights. However, important differences remain at the level of the judicial system, in the way national systems are operated.
The U.S. patent system is more litigation-driven. It has a very complex judicial system, with high costs for patent users. By contrast, the European system is more balanced. It is less costly for its users despite the persistence of national patent systems. I am confident that this system will improve further in the coming years with the creation of the unitary patent and the Unified Patent Court.
Asian countries are modernising their patent systems, although not all of them are at the same stage. This is a very important evolution, especially as regards China. As of today, legal uses of IP remain less developed in this country than in the Western world. Local companies and IP institutions are less experienced, but they are catching up rapidly. I expect China to be at the same level as Europe in about five to ten years.
C&S: What will be the most important developments regarding patents for the coming 5-10 years?
R.P.: The evolution of accounting rules towards a better financial valuation of IP should be a major development in future years. Currently, these rules tend to focus on the cash benefits of licensing income while there are many other ways in which IP assets create value in the knowledge economy. IP makes it possible to protect products and markets from competition, enter new markets, facilitate deal making or create freedom to operate and thus enable higher and more profits or less cost. Because such uses of IP rights do not appear explicitly on the P&L account and the value of the IP portfolio is not on the balance sheet, companies ignore the real value of their intangibles. In practice, this means that IP assets are dealt with at the IP department only, while they should be considered as strategic assets at the board level.
Financial valuation is necessary to convince corporate executives of the real value of intellectual assets, just as for other important assets on a company's balance sheet. This requires new international accounting frameworks that better reflect the true economic importance of intangibles. This is a challenging task for the next ten to fifteen years. Eventually, better accounting rules will facilitate IP recognition within companies, but also in society. The way IP works in the knowledge economy is still not well understood. We still apply the rules of the traditional hardware based economy to the knowledge economy. As an example, courts still calculate royalties as a percentage of the cost price of products, while they should consider the value that IP brings to the product. A new framework will be needed for financial, legal, tax and competition rules in the global, knowledge economy.
I also expect the maturation of markets for IP to be an important development in the coming years. The current system of bilateral negotiation of licensing deals is quite primitive. It is especially opaque and inefficient when the same patent needs to be licensed to multiple companies, with the costs of due diligence, negotiation and monitoring replicated for each deal. A transition towards a more transparent and efficient organisation of IP markets is possible, just as happened for stock markets in the past. The creation of the international IP exchange IPXI in Chicago, with market-based pricing of unit licence rights based on centralised due diligence, is for instance an important step in this direction.
- Ruud PETERS was appointed Chief Intellectual Property Officer (CIPO) of Royal Philips in 1999, in which position he was responsible for managing the worldwide IP portfolio, and the technical and formal standardisation activities of Philips. In this role, he turned the company's IP department from a cost centre into a successful revenue-generating operation, while at the same time integrating all the different IP activities within various parts of the company into one centralised IP organisation. He further developed and introduced a new concept for intellectual asset management, in which all the different forms of IP are handled together in an integrated manner, and advanced methods and systems are used for determining the total return on IP investment by measuring direct and indirect profits. Ruud joined Philips in 1977. He retired from his role as CIPO at the end of 2013, but continues to work for the company as a part-time adviser on strategy and IP matters. He is also a board member of a number of technology/IP licensing/trading companies. Ruud has a background in physics (Delft University of Technology, The Netherlands). He was inducted into the IP Hall of Fame in 2010, and in 2014 he received an Outstanding Achievement Award for his lifetime contributions to the field of IP from MIP magazine. He frequently speaks at major international IP conferences and regularly writes articles for leading IP and business magazines.
- Yann MÉNIÈRE is professor of economics at MINES ParisTech (France) and head of the Mines-Telecom Chair on "IP and Markets for Technology". His research and expertise relate to the economics of innovation, competition and intellectual property. In recent years, he has been focusing more specifically on IP and standards, markets for technology and IP issues in climate negotiations. Besides his academic publications, he produced various policy reports for the European Commission, French government, and other private and public organisations. Outside MINES ParisTech, he teaches the economics of ICT Standards at the Imperial College Business School. He is associated as an economic expert with Microeconomix and Ecorys, two consulting firms specialised respectively in economics applied to law, and public policies.
Florence Le Borgne
Head of the TV & Digital content Practice, IDATE.
Can anyone compete against American on-demand vendors?
IDATE is releasing the latest version of its “TV and video services worldwide” market report and database. It provides readers with vital data on a market in the throes of major upheavals, analysing changes in viewer habits, TV access networks (terrestrial, satellite, cable, IPTV) and revenue sources (linear TV, pay-TV, DVD, Blu-ray, VoD) in more than 40 countries.
The report’s project manager, Florence Le Borgne, tells us that, ‘even though we are watching more video than ever before, revenue growth for the global video market is being stunted by the inexorable drop in video hard copy sales, and by the pressure that Over-the-top (OTT) distribution is putting on traditional TV business models’.
According to IDATE, television revenue worldwide will increase from 368.9 billion EUR in 2014 to 424.7 billion EUR in 2018, which translates into an average 3.6% annual growth, compared to the 5% reported between 2010 and 2013:
• pay-TV revenue growth is forecast to slow dramatically over the next few years, with average annual growth dropping to 2.8% between 2014 and 2018, compared to 6.1% between 2010 and 2013. Pay-TV will nevertheless continue to be the main source of TV revenue up to 2018, bringing in 195.9 billion EUR that year;
• advertising revenue is expected to enjoy more dynamic growth overall, in line with its trajectory in recent years: 4.8% a year up to 2018, compared to 4.6% per annum over the past four years, to reach 193 billion EUR in 2018;
• funding from TV licensing fees will continue to increase significantly: by an average 1.5% a year, to reach 36 billion EUR in 2018.
The revenue generated by video on demand (VoD) will climb to 34.4 billion EUR in 2018, thanks to solid and steady growth (+131.5% compared to 2013), and by that time will be worth more than double hard-copy sales (15.5 billion EUR in 2018).
• Online (OTT) video is expected to consolidate its dominance of the VoD market, accounting for more than 80% of on-demand revenue.
• VoD rentals will continue to be the central model on managed networks, generating 4.7 billion EUR in revenue in 2018, compared to 3.3 billion in 2014.
• The hard copy market will continue to shrink across the globe, losing close to a quarter of its value in 2014, despite the growth of Blu-ray.
TV revenue growth forecasts by market, 2014-2018 (billion EUR)
Source: IDATE, State of TV & Video Services worldwide, July 2014
Breakdown of TV revenue by source, 2010-2014 (billion EUR)
Video on Demand Focus: Increasingly competitive OTT players
• Despite the popularity of premium cable channels, Netflix now rivals top dog, HBO. Although subscriber numbers for the top premium cable channels in the US (HBO, Showtime, Starz) have remained relatively stable, and are even increasing for some – +13.3% and +14.2%, respectively, for Showtime and Starz between 2010 and 2013 – the real momentum today is behind OTT services, starting with Netflix whose customer base grew by 71.4% between 2010 and 2013. At the end of June 2014, Netflix had 36.2 million residential subscribers, including 35.1 million paying customers, compared to 28.6 million subscribers for HBO in the United States.
• Will the top American SVoD providers dominate the global market? As of October 2014, Netflix is present in 46 countries, reporting a base of 13.8 million subscribers outside the United States. This means that it alone controls two thirds of the globe’s subscription VoD customers. By way of comparison, HBO is present in 61 countries in Latin America, Asia and Europe, and has more than 35 million subscribers, a growing percentage of which subscribe to its HBO Go service. Meanwhile, iTunes leads the way in electronic sell-through (EST), earning 65% and 67% of movie and TV programme sales revenue, respectively, in 2012. Virtually all European countries have access to the company’s video rental service, while residents in eight countries – Germany, Austria, Spain, France, Italy, Ireland, the UK and Switzerland – can also download to buy from the iTunes store. Only smaller national companies are competing with these heavyweights. And while some are popular – France’s CanalPlay VoD service has 520,000 subscribers – one cannot help but wonder whether they can hold their own against these global titans.
• Also noteworthy is that Netflix outperformed HBO in terms of total SVoD revenue for the first time in Q2 2014: generating 1.146 billion USD vs. 1.141 billion USD for the premium cable channel.
Netflix share of the global SVoD market as of 31 December 2013 (%)
Source: IDATE, State of TV & Video Services worldwide, July 2014
American OTT video providers’ footprint in Europe as of October 2014
Would you like to discover our study? Go to our store.
Several of IDATE’s DigiWorld Institute members took a business trip to the United States on 18 and 19 September, to attend the 2014 edition of our Transatlantic Telecom Dialog in New York, an annual event that we co-host with our partner, CITI, which is headed by Professor Eli Noam of Columbia University.
This trip also provided an opportunity to prepare for the launch of a Collaborative Research Programme being conducted in tandem with our Members. This think tank will be held in Brussels and devoted to the topic, “Telecoms USA: role-model or counter-model?" Before attending the Dialog, we travelled to Washington D.C. to meet with several FCC representatives, as well as the Public Affairs and Regulation teams from AT&T, Alcatel-Lucent and Verizon.
Back home at IDATE, I wanted to share a few thoughts on three hot-button issues that are attracting a great deal of attention in America’s telecommunications sector:
• Superfast broadband competition rules
• Spectrum auctions and mobile market competition
• Will the net neutrality soap opera ever end?
1. How to prevent cable from having a monopoly over the supply of superfast access in a number of locations a few years from now?
We can start by remembering that, in the early 2000s, the Republicans went a long way towards defanging the Telecom Act, banking instead on intermodal competition between telcos and cablecos to sustain the construction of superfast access infrastructure. In doing so, they abandoned the idea of imposing unbundling obligations like the ones we have in Europe. As a result, the leading operators began making sizeable investments around 2005 in deploying fibre and hybrid access networks. At the same time, the cable companies, which serve 90% of households, upgraded their systems (DOCSIS 3) to deliver ever faster connections. Cable progressed quickly, whereas telcos soon shifted their focus to mobile network rollouts, particularly over the past three years as the LTE battle has heated up. The footprint of the leading carriers’ upgraded networks has expanded very little since then. And cable’s share of the broadband access market, which today stands at 60%, continues to increase steadily. In a recent talk, the Chairman of the FCC presented a graph showing that 79% of households have access to a connection of 50 Mbit/s and up, but that only 17.6% of them are covered by more than one provider.
What trump cards does the FCC hold to “encourage” telcos to step up their superfast broadband rollouts?
• Google? Of course Google does not want to do business with a single ISP. As a result, in Kansas City and later several more cities, the company began to build 1 Gbit/s networks – under the notable condition that residents in the targeted neighbourhoods explicitly express their interest in having it. Nobody thinks that Google plans to deploy fibre across the country. But its initiative has roused the interest of municipalities, in addition to helping set 1 Gbit/s as the new threshold for high-speed access.
• What about the municipalities? There had been a handful of initiatives from cities in the past, but several of them failed to reach their potential. Added to which, a number of states considered that these city-led rollouts constituted unfair competition with the private sector, and virtually forbade them. The FCC’s new chairman now wants to review these bans.
• Quid pro quo negotiations to shut down the (TDM) POTS and transition to an all-IP system. This is a sensitive and legitimate part of telcos’ development strategy, but one that the states are watching very closely, and not a little warily. The FCC authorised AT&T to test two TDM network shutdowns, one in rural Alabama and the other in suburban Florida. As in Europe, where stakeholders are talking openly about phasing out legacy copper systems (and switching to fibre), the goal is to test the problems encountered with the lines that serve lifts, security systems, etc.
• The conditions that antitrust authorities might impose on the several mega-mergers currently being examined: Comcast/Time Warner Cable, AT&T/DirecTV…
• The FCC can also play up the competitive pressure coming from 4G and 4G+ (carrier aggregation, MIMO antennae, small cells). It is encouraged, too, by the growing number of announcements in Google’s wake – by AT&T, CenturyLink and Cox in recent months – of 1 Gbit/s networks being made available here and there (but especially in areas coveted by Google). The previous FCC chairman, Julius Genachowski, had called for the deployment of at least one network per state delivering a minimum 1 Gbit/s by the end of 2015. But these recent deployments do not appear to foreshadow any great increase in wireline telcos’ capex: one market analyst in fact suggested they could be dubbed FTPR (Fiber To The Press Release) rollouts…
To finish on this point, we will underline that the growth gap (1) between the European and US markets, which until now had been confined mainly to the mobile sector, appears to be spreading into residential wireline as well. America’s two largest carriers, AT&T and Verizon, are on the verge of putting an end to ten straight years of shrinking revenue. This is a direct result of the increase in triple-play customers in their upgraded markets (U-verse and FiOS) and the $150+ ARPU they generate. Provided the video services that are central to this ARPU prove profitable, telcos could decide it is in their interest to step up their spending on wireline networks and expand their superfast access footprint. This is indeed one of the central aims of the planned merger between AT&T and DirecTV, like the one between Comcast and Time Warner Cable (2): to bolster their bargaining power with the studios when negotiating programming rights.
2. How to better monetise spectrum while removing it as a bargaining chip in M&A deals?
The AWS-3 auctions will begin on 13 November, and will be the biggest since the 700 MHz band auctions in 2008. On the block are 65 MHz across three frequency bands: 1695-1710 MHz (unpaired uplink), 1755-1780 MHz and 2155-2180 MHz (these last two to be paired for uplink/downlink operation). The FCC has set a total reserve price of $10.587 billion. This takes into account the fact that the bulk of the first two bands is currently occupied by federal government services, including the DoD, and that it will take several years to complete the handover, or to coordinate licensed shared access (LSA) (3). AT&T, Verizon, T-Mobile and Dish Network, along with local and rural operators, have all expressed interest in taking part. Not so Sprint, which, unlike its competitors, holds no AWS-3-adjacent frequencies and so will not be bidding.
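The band plan described above can be sanity-checked with a quick bit of arithmetic; the sketch below simply confirms that the three frequency ranges listed add up to the 65 MHz on the block.

```python
# Quick check that the three AWS-3 ranges described above
# add up to the 65 MHz being auctioned.
bands_mhz = [
    (1695, 1710),  # unpaired uplink
    (1755, 1780),  # uplink half of the paired spectrum
    (2155, 2180),  # downlink half of the paired spectrum
]
total = sum(high - low for low, high in bands_mhz)
print(total)  # 65
```

The first band contributes 15 MHz and each of the paired bands 25 MHz, hence the 65 MHz total.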
But discussions in recent months have focused especially on strengthening the competition safeguards that the FCC could impose on the auctions, and on the spectrum trading market. These safeguards currently make up the review criteria in the regulator’s 'spectrum screen'. In an order issued in June, the FCC expanded these provisions by stressing the particular value of lower frequency bands, i.e. those below 1 GHz, after recalling that the country’s two largest carriers today control more than 70% of allocated spectrum. For the upcoming AWS-3 auctions, which do not concern these frequencies but rather bands currently shared by a host of players, no specific conditions have been defined to limit any given company’s access to them (4). For the so-called incentive auctions in the 600 MHz band, however, which are slated for 2015, a reserve of up to 30 MHz will be set aside in each market on the block. The ultimate size of this reserve will nevertheless be contingent on the reserve price set by the FCC for that market being met. National carriers (as opposed to local and regional ones) that control more than a third of the below-1 GHz frequencies in a given market will not be able to bid for this reserved spectrum. The FCC has also moved to restrict secondary-market sales of this spectrum, to ensure that, for the duration of these restrictions, reserve-band licences cannot be acquired by parties that were not eligible to bid for them in the incentive auctions, or through sales that would give an entity control of more than a third of below-1 GHz spectrum. It should be mentioned that these provisions did not receive unanimous support within the FCC: the two Republican commissioners voted against them.
It is interesting to note that while the FCC is concerned about the future of local and rural cellular operators, which probably serve fewer than 5% of mobile users today but cover a much higher percentage of the physical landmass, more and more roaming agreements are being signed between the four national operators and these smaller regional carriers. In the race to expand their footprint, the big national operators are in fact leasing their spectrum to small rural operators so that the latter can provide LTE coverage. Each deal thus combines terms and conditions for leasing spectrum with roaming prices, which has enabled a million or so of the national carriers’ subscribers to enjoy coverage in rural areas, while the tens of thousands of users who subscribe with local operators gain access to the national operators’ infrastructure. We understand that the FCC does not currently regulate these agreements. Verizon and Sprint have apparently got a head start here, having signed, respectively, 21 agreements covering 2.3 million PoPs, including 18 for LTE in the 700 MHz and AWS-1 bands, and around 30 agreements covering 4 million PoPs in the 2.5 GHz band.
We will wrap up this quick summary of the latest news from the US mobile market by listing some of the other topics that are attracting attention:
• cable companies’ ongoing investment in Wi-Fi hotspots and homespots, with roaming agreements between operators, and the prospect of their entering the cellular market as MVNOs to complete their infrastructure;
• the debate triggered by Qualcomm on using LTE (vs. Wi-Fi) on open (i.e. licence-free) spectrum;
• confirmation of the onset of, if not a price war, increasingly lively competition in the mobile market since the Sprint/T-Mobile merger was cancelled. While we were there, Sprint rolled out an unlimited voice-SMS-data plan priced at $50 a month;
• the massive queues outside the Apple store in Manhattan, and the huge boost that VoLTE could give to iPhone 6 sales.
3. Will the net neutrality soap opera ever end?
Here again, we need to go back to the 2002 decision that classified cable modem access as an “information service” rather than a Title II service under the Telecom Act, which would have made it subject to common carriage obligations – a Title II designation that was then abolished for all access services in 2005, including telcos’ ADSL services. This decision snowballed, and the FCC’s successive bids to enforce net neutrality – Chairman Powell’s four Internet freedoms in 2004, the Internet Policy Statement in 2005 and the Open Internet Order in 2010 – all had to be defended in court, following lawsuits filed by Comcast and Verizon.
Today, following the ruling handed down early this year by the federal appeals court in Washington, there are no longer any regulatory provisions preventing an ISP from acting as a gatekeeper.
It was under these circumstances that the FCC began a 120-day consultation on the future of net neutrality this past spring. It received more than 3 million responses. The ensuing debates in the blogosphere and at industry conferences are focusing on several issues.
A legal footing needs to be found that does not leave the FCC’s core principles open to being struck down. There are two options here: either repeal the earlier decisions and reclassify Internet access as a Title II service under the Telecom Act, or make more extensive use of Section 706 of the Telecom Act, which vests the FCC with the authority to encourage the deployment of broadband infrastructure and to eliminate barriers to development and competition in this market.
In addition to these interpretations of the Telecom Act, approval for the Comcast-Time Warner Cable and AT&T-DirecTV mergers could carry case-by-case obligations aimed at preserving the Open Internet. It is worth remembering that the FCC used the Comcast-NBC merger as just such an opportunity.
Alongside these somewhat technical questions, debates over the past few weeks have also focused on the following points:
• Should mobile access also be subject to net neutrality rules (which it has always managed to avoid)?
• After this summer’s polemics over the paid peering deals struck between Netflix and the top ISPs, should interconnection between content providers and ISPs be covered by net neutrality rules?
Also noteworthy is the debate that followed AT&T’s sponsored data API proposal, i.e. having content/service providers sponsor the traffic delivered to consumers’ devices – an idea that was more or less picked up by T-Mobile.
We will end by mentioning that all of these unresolved issues are fostering a certain curiosity in how things are being handled in Europe.
Perhaps because he was once head of the cable lobby, and later of the mobile operators’ association – a fact pointed out repeatedly during his nomination hearings – the new FCC Chairman (5) has been keen to impress in his many pronouncements that he wants to strengthen competition policies. He has addressed all aspects of the debate relatively explicitly, while nonetheless running the risk of dashing some of the hopes he himself has kindled, in a complicated political and institutional situation, and one where he is regularly reminded that the FCC answers to Congress.
1 There is also a gap in terms of market structure. Even though there are four national mobile operators in the United States, AT&T and Verizon are, on the wireline side, only very large regional residential carriers. The idea of fixed-mobile convergence, typified in Europe by merger and acquisition deals such as SFR/Numéricable and Vodafone’s purchases of Kabel Deutschland and Ono, does not appear to be in the cards for the US market. Nor, as far as we can tell, are quadruple-play bundles.
2 This is not the only aim. The deal would enable immediate synergies in managing bundles, including DBS for customers not covered by U-verse. It also has an international diversification component, given DirecTV’s sizeable footprint in the South American markets that AT&T is interested in.
3 Licensed Shared Access, known as ASA (Authorised Shared Access) in the US. Worth noting is that debates continue over what form ASA will take in the 3.6 GHz band.
4 If there is no reserve spectrum in the AWS-3 auctions, certain provisions, such as dividing frequency bands into 2 X 5 MHz blocks, are aimed at satisfying the needs of smaller regional operators.
5 Tom Wheeler was nominated for Chairman of the FCC by President Obama, and confirmed by the Senate in November 2013.
Online advertising expected to account for 33% of all media advertising by 2018
IDATE has just released its report and database dedicated to the world online advertising market. This report provides an analysis of today’s key online advertising trends and technologies (including privacy issues, retargeting, VRM, new data measurement techniques, etc.) and includes an overview of the world leaders and their KPIs (Amazon, Apple, Facebook, Google, Microsoft, Twitter and Yahoo!). It takes a look at the key markets for monetising online advertising, including search, display, mobile, RTB, social networking and video, in 15 countries: Brazil, China, France, Germany, Italy, Japan, India, Russia, Spain, South Africa, South Korea, Switzerland, Turkey, the UK and the United States.
The global online advertising market will be worth more than 160 billion EUR by 2018, after enjoying an 11.4% annual growth rate from 2010 to 2014. IDATE expects this steady increase to continue in the coming years, at 9.7% annual growth up to 2018.
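These two figures let us back out the market size that the forecast implies for 2014. The 2014 base is not stated in the report summary, so the sketch below is an inference from the 2018 total and the growth rate alone, not a figure from the report.

```python
# Back-of-envelope sketch: the 2014 market size implied by a
# 160 billion EUR market in 2018 growing at 9.7% a year.
# (The 2014 base is not given in the text; this is an inference.)
revenue_2018 = 160.0   # billion EUR, forecast floor for 2018
cagr = 0.097           # expected annual growth rate, 2014-2018
years = 4
implied_2014 = revenue_2018 / (1 + cagr) ** years
print(round(implied_2014, 1))  # roughly 110.5 billion EUR
```

In other words, the forecast is consistent with a global online ad market of a little over 110 billion EUR today.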
IDATE Consultant Soichi Nakajima, who managed the production of this report, points out that “we expect the breakdown of advertising formats to remain unchanged in the five coming years. Whilst the search market is already a stable market, largely dominated by Google across the globe, overall revenue for the display market is expected to increase slightly over time, with much more competition in terms of players fighting for market share.”
Global online advertising revenue (billion EUR) and its share (%) of total media advertising revenue, 2010-2018
Major trends in the advertising market include:
• Mobile advertising is expected to account for 20% of online advertising by 2018, with a +50.1% annual growth rate from 2014 to 2018.
• Social advertising is forecast to account for 14% of online advertising by 2018, with a +39.8% annual growth rate from 2014 to 2018.
• OTT video advertising is expected to account for 9% of online advertising by 2018, with a +21.8% annual growth rate from 2014 to 2018.
• The global RTB advertising market is expected to account for 30% of display advertising, thanks to an annual growth rate of 32.7% from 2014 to 2018.
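Applying the 2018 shares above to the 160 billion EUR total forecast gives a rough euro value for each segment. RTB is left out of the sketch because it is quoted as a share of display only, not of the whole market.

```python
# Rough euro values implied by the 2018 segment shares,
# applied to the >160 billion EUR total market forecast.
total_2018 = 160.0  # billion EUR (forecast floor)
shares_2018 = {"mobile": 0.20, "social": 0.14, "OTT video": 0.09}
for segment, share in shares_2018.items():
    print(f"{segment}: ~{total_2018 * share:.1f} billion EUR")
# mobile: ~32.0, social: ~22.4, OTT video: ~14.4
```

These are lower-bound estimates, since the 160 billion EUR figure is itself a floor ("more than 160 billion EUR").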
Close-up on mobile advertising: mobile ad market an extension of the fixed
The tools and technologies of fixed online advertising can be reused for mobile browser advertising, which means that Google could easily carry its dominance of the fixed market over to mobile. In-app tools and technologies, on the other hand, have to be created from scratch. In-app advertising, which is unique to mobile, makes up 20% of the current market. Added to which, a great many apps are games that promote their own paid services rather than display actual ads.
Current global market breakdown: search vs. display advertising on the fixed and mobile Internet, in 2014
Head of "Video Distribution" Practice
IT services accelerating the transformation
The video distribution environment is changing at an incredible pace, with viewing habits becoming more and more individual and involving multiple screens. Added to which, the growing integration of IP-based solutions is only accelerating the pace of this transformation.
IDATE’s freshly updated TV and video database, and its accompanying market report, provide readers with a complete view of the latest market trends, delivering key figures for 30 countries on TV access modes (terrestrial, satellite, cable, IPTV and broadband) and connected devices (televisions, set-top boxes, home consoles, Blu-ray players, PVR, DMA/R, smartphones, tablets).
Traditional broadcasting networks are being forced to join this new environment to stay in the game. Up until now, competition between the networks had been based primarily on the pace at which infrastructures were being digitised, with satellite and IPTV leading the way over cable and the classic terrestrial network. The progress made by online video, enabled by growth in user numbers and in connection speeds, has created a whole new ball game, and one that will make it harder for digital TV to find new sources of growth.
Breakdown of TV access modes in Europe (million TV households)
A host of new, intertwined challenges is emerging for market players: the advent of new video distribution networks, with fixed and mobile LTE broadcasting; the resolution of uncertainties over the UHD TV market; and the integration of IP, which brings an expanded market for CDNs, the flexibility of cloud TV solutions, and the creation of hybrid TV or possibly all-IP broadcasting solutions.
The change in users’ viewing device of choice (connected TV, streaming sticks, tablets, smartphones) only increases the threat of veteran TV providers being cut out of the loop, as Internet giants work to leverage their platforms to secure a central role in managing the user interface.
A selection of platforms’ users and subscribers in Q1 2014 (millions) and YoY growth (%)
- If you want to read our study, go to our store.
- If you want to come to our seminar "TV Everywhere" during the DigiWorld Summit 2014, on Wednesday, 18 November.