CEO, IDATE DigiWorld
What topics are likely to make headlines in 2016?
Looking at recent trends, and after reluctantly setting aside topics such as – in no particular order – the onset of virtual reality, G.fast and DOCSIS 3.1, 5G standardisation, the IoT, AI, the introduction of the e-SIM, verticals’ digital transformation… four topics in particular come to mind:
This encompasses the explosion in data traffic on all devices, fuelled by video traffic; the proliferation of video products and new pricing formats from cellular operators – in the vein of Verizon’s Go90 and T-Mobile’s Binge On – and the associated enhancements to LTE platforms (LTE-A/B/U); Netflix’s global strategy as it marches towards the 100 million subscriber mark by the end of 2016; the race to 4K and HDR TV; and debates in Europe over harmonising copyright and limiting geoblocking…
The proliferation of threats across digital ecosystems has given birth to a market that is expected to grow twice as fast as the IT market as a whole. By the end of the year, Europe is due to have passed its new data protection regulation. Will it provide a new foundation for renegotiating the Safe Harbour agreement and making progress with the Transatlantic Trade and Investment Partnership (TTIP)? Will the ad-blocking phenomenon be made obsolete by the expected virtues of programmatic advertising?
In Europe, we will wonder just how far the consolidation trend can go as we question the hoped-for synergies, the attitude of the anti-trust authorities, the potential impact on prices, on fixed-mobile convergence, on ultrafast fixed and mobile network rollouts, etc.
Meanwhile, in the United States, we don’t yet know whether:
• a price war will break out in the mobile market;
• cable’s dominance of the superfast access market could be threatened;
• a fixed-mobile convergence trend will take hold as it has in Europe.
These represent as much opportunity for innovation as threats to the banks, which are engaged in their own digital transformation, along with hopes for the creation of a large-scale mobile banking ecosystem with strong competition between banks, Internet platforms and telcos, plus concepts that are generating a lot of buzz, such as crypto-currency and, more precisely, blockchain technology.
Here’s wishing you a very happy 2016 with IDATE DigiWorld, and we look forward to seeing you at DigiWorld Future and the DigiWorld Summit!
> Read also Yves Gassot’s opinion piece published in Les Echos: «Media-Telecoms: convergence redux?»
Senior Consultant, IDATE DigiWorld
A EUR 30 billion worldwide market driven by automotive, consumer electronics & utilities
This IDATE DigiWorld report, published alongside a worldwide database, analyses the overriding trends and changes taking place in the M2M market around the globe. It explores the driving forces behind the market's growth and transformation, including an examination of major market trends, plus volume and value forecasts up to 2019 by geographical area and for 25 countries.
Over the next few years, the M2M market will clearly be driven by three key verticals: automotive, consumer electronics and utilities.
• In recent years, the market has been driven by a few major verticals such as fleet management, industrial asset management and security. But the overall market remains small in volume, with each vertical's potential measured in tens of millions of units.
• In the upcoming years, new major verticals will emerge (including automotive, connected consumer electronics and utilities). Their potential volume is markedly higher because they expand the market towards consumer objects (billions of units) rather than industrial objects only. Moreover, regulation will stimulate automotive in Europe, and public policies will stimulate utilities in some regions worldwide. However, while these verticals will theoretically drive the market, certain barriers could obstruct their growth. In the short term, some applications in these key verticals are recurrently delayed (as with the eCall regulation in Europe, now expected to be rolled out from October 2018), which has a potential impact on the traditional M2M market. Moreover, the utilities market is seen as less attractive, with the business opportunity for telcos somewhat limited (only the concentrator will be cellular-connected). The UK is a key exception, as a cellular concentrator will be installed in almost all households (in two of the country's three main regions).
• In the future, the market will focus on emerging segments such as healthcare (remote patient monitoring) and the smart home.
The M2M market is still growing very fast
In 2014, the number of active M2M modules (all technologies included) reached 1.2 billion units. It will exceed 4.1 billion by 2019, a 29% CAGR.
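The growth rate cited above can be sanity-checked with a quick compound-growth calculation (the figures are those from the text; the exact CAGR depends on how the endpoint figures were rounded):

```python
# Sanity-check the compound annual growth rate (CAGR) implied by the
# report's figures: 1.2 billion active M2M modules in 2014, 4.1 billion in 2019.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values `years` apart."""
    return (end / start) ** (1 / years) - 1

growth = cagr(1.2e9, 4.1e9, 2019 - 2014)
print(f"Implied CAGR: {growth:.1%}")  # ~27.9%, close to the ~29% cited
```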
• In 2015, the cellular market is expected to represent 290 million modules worldwide, for a total market worth EUR 30 billion. The annual growth of the M2M market is around 10% in value and 26% in volume compared with 2014. Most revenues will come from software and IT services.
• Asia-Pacific will dominate Europe and North America in volume only; Europe will still lead in value, followed by Asia-Pacific. Since 2012, China has led the M2M world, having overtaken the USA in terms of cellular modules installed.
M2M players seeking business opportunities beyond their core expertise
M2M offers attractive opportunities for telcos: despite low and declining ARPU, projects offer high lifetime value, reduced churn and average deals representing thousands of SIM cards. M2M connectivity alone should represent more than 20% of total SIM cards for European telcos. Telcos are also trying to consolidate and reinforce their position in connectivity by looking at partnerships with LPWA providers, allowing them to address emerging applications.
Representing two thirds of the market, IT services are key in M2M, and all players along the value chain are therefore attempting to position themselves by grabbing a piece of this lucrative segment. The main players are looking at new services based on the cloud and Big Data (mainly through analytics), opening up new business opportunities.
Finally, module providers are also struggling to break even in a market where unit prices are falling. In addition to services, they are attempting to offer connectivity services that help them provide an end-to-end offering (witness the MVNO acquisitions by Sierra Wireless and the purchase of Neul by Huawei).
Find out more about the M2M market in our dedicated market report
Emerging technologies expert, IDATE DigiWorld
3D printing market revenues will skyrocket from USD 3 billion in 2013 to USD 21 billion in 2020, representing annual growth of 31.6%
Three-dimensional (3D) printing, included in the broader term ‘additive manufacturing’ (AM), refers to the various manufacturing processes that deposit or fuse materials layer by layer. The 3D-printing process dates back to the 1980s, when additive manufacturing was used for rapid prototyping in industrial applications.
Today, 3D printing (3DP) is already used across a wide range of verticals, spanning the aerospace, automotive and medical industries through to the consumer market. Rapid prototyping and tooling remain the most common applications, and selective laser sintering (SLS) and fused deposition modelling (FDM) are still the dominant 3DP technologies.
Within the 3D-printing ecosystem, 3D-printing systems are the largest value contributor, accounting for nearly 50% of total value, and this will remain the case for the next five to ten years. 3D Systems and Stratasys, as market leaders, are pursuing greater value-chain integration and vertical coverage in an attempt to reinforce their leading position in the value chain.
Value chain positioning of manufacturers in 3D printing market
Source: IDATE in 3D Printing, December 2015
Since the process is still highly fragmented, software, marketplaces and service platforms, despite their relatively lower value share, are essential players in orchestrating the activities of designers, makers, distributors and customers across the whole value chain. Strengthened partnerships among these players have thus been observed: Autodesk and Dassault Systèmes, as software leaders, are working with, for example, Sculpteo and 3D Hubs to build a smoother 3D-printing experience.
On the market side, 3DP has been experiencing a strong uptrend since 2009, when key FDM patents expired: it took the AM industry only five years (2009-2014) to produce its second billion USD in revenue. Worldwide revenue, which comprises printers, materials, software and associated services, is expected to grow at a CAGR of 31.6% between 2013 and 2020, from 3.07 billion USD in 2013 to 21 billion USD by 2020.
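The two revenue endpoints and the 31.6% CAGR are mutually consistent, which a short compounding loop confirms (figures from the text):

```python
# Check that the stated 31.6% CAGR connects the two revenue figures:
# USD 3.07 billion in 2013 growing to ~USD 21 billion by 2020 (7 years).
revenue = 3.07  # billion USD, 2013
for year in range(2014, 2021):
    revenue *= 1.316  # apply 31.6% annual growth
print(f"Projected 2020 revenue: {revenue:.1f} billion USD")  # ~21.0
```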
The industrial and enterprise markets (B2B) are the major targets for 3DP. On-demand manufacturing has become a ‘standard service’ for many established manufacturers, given its potential to disrupt traditionally centralised production for industrial and enterprise customers. Meanwhile, IDATE sees hybrid processes, which fit additive manufacturing into existing production lines, as a high priority with high potential.
Consumer 3DP, where proliferating start-ups are active, is a reality today via service platforms. However, the opportunities in consumer 3DP (B2B2C, B2C) have no settled business norms yet: as more companies aggressively enter the field, the rules of competition are constantly being rewritten. Players would do well to consider their value-chain coverage and business model for the next step, as well as their investment in printing capability.
Overall, prices will continue to fall, and standardisation and IP protection are underway. The pursuit of improved printing performance, ease of use, well-orchestrated processes and expanded global availability remains the centre of gravity for the 3DP industry. However, IDATE expects that the paradigm shift to 3D printing for the mass manufacture of final parts is very unlikely to take place within the next decade – and neither is personal at-home printing – before breakthroughs are achieved in material availability, printing speeds and productivity levels.
Find out more on the outlook for additive manufacturing & 3D printing in our dedicated market report
Future networks wrap-up
The 2015 DigiWorld Summit sessions devoted to telecoms offered a view of the move towards very high speeds in mobile, fixed and satellite networks, and of how strongly this move is supported by the EC.
Customer push for higher speeds
The discussion started with customer needs. The Gigabit race is driven by strong customer demand for speed and volume, Mr Maloberti highlighted. Customers need good network quality and experience, and that is why telcos will continue to invest in networks.
Valérie Chaillou, head of telecoms at IDATE, confirmed this trend and gave a wide panorama of VHB (very high bandwidth) fixed broadband connections worldwide. There were 265 million VHB fixed connections in the world at end-2014 according to IDATE (three main architectures are considered VHB: FTTH/B, FTTN and FTTx/DOCSIS 3.0, the last deployed by cablecos). FTTH/B is widely available in the most advanced Asian countries and is the most deployed FTTx architecture, ahead of VDSL and far ahead of FTTx/DOCSIS 3.0. At a regional level, major discrepancies remain: FTTx/DOCSIS 3.0 dominates in North America, while FTTH is the main technology deployed in other regions. Pierre-Michel Attali took a more French focus, mentioning that 3% of French people get fibre-based Internet access at home.

On the mobile side, Valérie Chaillou noted that more than 500 million LTE subscribers were registered worldwide at end-December 2014, illustrating how massive LTE adoption is among mobile operators and customers. The distribution of LTE mobile connections is not homogeneous, and the most advanced Asian countries are again leading the pack. China, the guest country of this 37th edition of the DigiWorld Summit, jumped to second place worldwide in the first half of 2015, with a total of 225 million subscriptions (compared with almost 100 million at year-end 2014). And migration to LTE is much faster than migration to 3G was.
A Gigabit race being run at a different pace across the globe, but with one thing in common: the growing involvement of local authorities
In the USA, Google fired the starting gun for the Gigabit race. Google’s very local approach attracted a great deal of attention from cities which, when they failed to be chosen as one of the company’s rollout locations, elected to become involved in deploying their own infrastructures, in some instances in partnership with other local bodies such as universities. AT&T had initially focused its efforts on VDSL but is now also deploying FTTH networks, which has enabled the carrier to introduce its 1 Gbps GigaPower plan.
In Europe, Gigabit networks are also making headlines, although the situation is very different, largely because operators there are taking more wide-ranging technological and commercial approaches. Some were quick to gain a foothold in this new market, while others are waiting for market demand to build. Although the connection-speed targets set in Europe’s Digital Agenda, mentioned by Anna K from DG Connect, are more modest, Gigabit-speed access could nevertheless become an industry standard for both public and private sector players, as local authorities begin to play a larger role in SFB/UFB network rollouts. The EC’s public consultation on the need for Internet speed and quality beyond 2020 runs until 7 December 2015.
Fixed, mobile and satellite are in the game
On the fixed side, several promising technologies exist to reach 1 Gbps. Valérie Chaillou highlighted that, theoretically, several technologies (FTTH, DOCSIS 3.1, FTTN) are capable of providing end users with a 1 Gbps connection. Naturally, end-to-end fibre connections are currently the fastest available. The technologies used by new solutions that rely in part on copper or coaxial networks, and which have recently been standardised, will become commercially available in the coming months.
A number of technologies on the mobile side should have a significant positive impact on mobile network performance:
• Licensed-Assisted Access (LAA), which will give mobile operators more flexibility to improve throughput and capacity through the use of widely available, freely usable unlicensed spectrum;
• LTE-Wi-Fi Aggregation (LWA/LTE-H), which is expected to unify LTE and Wi-Fi;
• improvements in carrier aggregation;
• the adaptation of LTE for machine-type communication;
• Device-to-Device (D2D), a mode that enables two devices to discover each other directly and communicate with or without the need for a network.
5G is expected to deliver 1 Gbps Internet access. Its basic principles are widely accepted within the industry, but it is still at an early stage. It aims to provide 1,000 times higher wireless area capacity and more varied service capabilities compared with 2010. The big change – understood to be the one that will not be compatible with the evolution of LTE – pertains to the new radio interfaces that must be developed for 5G.
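The appeal of carrier aggregation can be illustrated with simple arithmetic: each aggregated component carrier adds its own peak rate. The per-carrier figure below is an assumption for illustration (a 20 MHz carrier with 2x2 MIMO and 64-QAM, roughly LTE Category 4-class); real-world throughput sits well below these theoretical peaks.

```python
# Illustrative (not normative) peak-rate estimate for LTE-A carrier
# aggregation: aggregated component carriers add their peak rates.
PEAK_PER_20MHZ_CARRIER_MBPS = 150  # assumed: 20 MHz, 2x2 MIMO, 64-QAM

def aggregated_peak(n_carriers: int) -> int:
    """Theoretical peak downlink rate with n aggregated 20 MHz carriers."""
    return n_carriers * PEAK_PER_20MHZ_CARRIER_MBPS

for n in (1, 2, 3):
    print(f"{n} x 20 MHz carriers -> ~{aggregated_peak(n)} Mbps peak")
```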
There are currently many different industry proposals for the foundations of those new air interfaces, and there is still little consensus on the choice to be made. There are basically two types of air interface: one based on an orthogonal waveform, more or less an evolution of the OFDM currently used in LTE, and another based on non-orthogonal waveforms. To support these multiple waveforms, the frame structure will be adaptive.
But Jean-Hubert Lenotte from Eutelsat asserted that satellite is coming back into the game: “Infrastructure is the main bottleneck to more growth.”
He highlighted that satellite performs well against terrestrial standards and is also competitive in certain circumstances. “Cost for satellite is going down: divided by 5. Much more capacity on satellite, much more flexibility.”
Satellite is able to deliver high speed internet at reasonable cost in remote regions and rural areas where terrestrial infrastructure/coverage is lacking.
Is network virtualization a game changer?
In addition to the race to deploy fixed and mobile superfast systems, networks’ future evolution is directly linked to the integration of cloud architectures and virtualization solutions. The concepts of SDN, NFV and network virtualization are considered the main upcoming technological disruptions in networking architectures and are at the heart of telcos’ and equipment suppliers’ strategies.
Round-table: Reality check
Moderated by: Vincent BONNEAU, Head of Innovation Business Unit
Mervyn KELLY, EMEA Marketing Director, Ciena
François LEMARCHAND, Head of SDN product strategy, Ericsson
Michael RITTER, Vice President Technical Marketing and Analyst Relations, Adva Optical
Laurent BILLES, VP Network Architecture, Orange Labs Networks, Orange
Software Defined Networking and Network Function Virtualization are two very hot topics in the telecommunications industry. What are their benefits, how mature are the available solutions, what challenges will have to be faced and, most importantly, what is the outlook for the coming years? Those were the questions raised and answered during this session at the DigiWorld Summit 2015.
What are NFV and SDN?
SDN stands for Software Defined Networking: the decoupling of the control plane from the user plane in the network. Network Function Virtualization (NFV), on the other hand, is the softwarization of network functions so that they run on standard x86 IT servers rather than on the dedicated, proprietary hardware found in today’s mobile networks.
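The control-plane/user-plane split at the heart of SDN can be sketched in a few lines: a centralized controller with a global view computes forwarding rules and pushes them into switches, which then only match packets against their flow tables. The class and method names below are illustrative, not taken from any real controller API.

```python
# Minimal sketch of the SDN idea: controller = control plane,
# switches = user/data plane holding only flow tables.
class Switch:
    def __init__(self, name: str):
        self.name = name
        self.flow_table = {}          # destination -> output port

    def install_rule(self, dst: str, port: int) -> None:
        self.flow_table[dst] = port   # rule pushed down by the controller

    def forward(self, dst: str):
        # Data plane: pure table lookup, no path computation here.
        return self.flow_table.get(dst, "drop")

class Controller:
    """Control plane: global view, decides the path for every switch."""
    def __init__(self, switches):
        self.switches = switches

    def program_path(self, dst: str, hops) -> None:
        # hops: list of (switch, output_port) pairs along the chosen path
        for sw, port in hops:
            sw.install_rule(dst, port)

s1, s2 = Switch("s1"), Switch("s2")
ctl = Controller([s1, s2])
ctl.program_path("10.0.0.2", [(s1, 2), (s2, 1)])
print(s1.forward("10.0.0.2"))  # 2
```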
Several benefits can be listed:
- Financial savings, notably through CapEx and OpEx reductions. While OpEx savings were described as quite achievable in the near future, speakers were more cautious about possible CapEx savings, noting that these would take more time to materialize.
- Energy savings, resulting from a more efficient and flexible network in which resources can be pooled and distributed in the field depending on real usage.
- Time to market: virtualization of the network enables services to be launched more quickly, making players more competitive and reactive in the market.
- Fostering innovation: thanks to the flexibility and agility that SDN and NFV enable, new services can be launched, and operators and service providers are less dependent on infrastructure equipment providers’ development cycles.
Although a hot topic, SDN and NFV are still at an early stage of maturity. Around 30 proofs of concept have been demonstrated, some trials have been carried out and some commercial solutions are even available, but there have been very few commercial deployments so far.
What came out is that, for a greenfield operator, SDN and NFV can be deployed quite quickly and quite successfully. The challenges arise when SDN and NFV have to be integrated into existing networks.
Interoperability is a real issue today. There are many solutions from multiple vendors using very different standards. While the number of initiatives says a lot about the opportunity the industry sees in SDN and NFV, it also raises problems of its own.
In this context, open source will be a key driver to enable interoperability between all vendors and more importantly to foster innovation by enabling new players to develop their own service.
Challenges and impact on the industry
The two main challenges for the years to come will be the interoperability of solutions and the need to take security very seriously. Price, of course, is a big question: pricing should become more affordable, but to what extent? By 10%? 20%? We currently lack the hindsight to answer this question.
What is certain is that SDN and NFV will have a big impact on the industry. Operators and service providers will have to move from a telecom-centric environment to an IT-centric environment, which is no small thing for an operator. Indeed, mobile network operators will have to rethink their organization and their way of thinking. The transformation of infrastructure from a product into a service will usher in many opportunities but will surely require adaptation.
As the session warned, business plans will have to be very conservative, because we still lack hindsight: most deployments will take place between 2018 and 2020.
Senior Consultant, IDATE DigiWorld
The Gigabit race is now a reality, especially in the United States where private sector players and local authorities are all getting involved in furthering the deployment of new generation infrastructure. Elsewhere in the world, Gigabit access is already in place, notably in Asia, or, in places such as Europe, poised to become a new target product for the marketplace. This report explores regional approaches to Gigabit-speed access, and describes how the different types of players are positioned. Private and public sector stakeholders may not have adopted the same strategies, but all have a vital role to play in this search for increased network performance.
Several promising technologies but only a single solution available today
Theoretically, several technologies are capable of providing end users with a 1 Gbps connection. Naturally, end-to-end fibre connections are currently the fastest ones available. The technologies used by new solutions that rely in part on copper or coaxial networks, and which have recently been standardised, will become commercially available in the coming months. But this does not prevent ISPs wanting to implement them from already talking up how fast their new access plans will be. We should, however, be circumspect about the announcements we are hearing on Gigabit-speed networks. In the vast majority of cases, the headline speeds being announced are the absolute maximum speeds available, are not guaranteed, and depend heavily on circumstances such as where the customer is located, the technology employed, time of day, access conditions, etc.
A Gigabit race being run at a different pace across the globe, but with one thing in common: the growing involvement of local authorities
Even if Gigabit-related talk is still largely a marketing tool, most ISPs have set it as a target connection speed. It was Google that fired the starting gun for the Gigabit race in the US, as the company was tired of having to depend on ISPs’ disparate networks. Google’s very local approach attracted a great deal of attention from cities which, when they failed to be chosen as one of the company’s rollout locations, elected to become involved in deploying their own infrastructures, in some instances in partnership with other local bodies such as universities. The combined involvement of a local authority and a new competitor from the private sector has had a tremendous influence on veteran operators like AT&T, which had initially focused its efforts on VDSL but is now also deploying FTTH networks, which has enabled the carrier to introduce its 1 Gbps Gigapower plan.
Gigabit networks are also making headlines in Europe, although the situation is very different, largely because operators there are taking more wide-ranging technological and commercial approaches. Some were quick to gain a foothold in this new market, while others are waiting for market demand to build. Although the targets for connection speeds set in Europe’s Digital Agenda are more modest, Gigabit-speed access could nevertheless become an industry standard for both public and private sector players, as local authorities begin to play a larger role in SFB/UFB network rollouts.
Status of 1 Gbps plans around the world
Although users’ needs are increasing, Gigabit-speed access still seems “too much”
A great many companies, both ISPs and others, are working on developing new applications in the arenas of entertainment, video, health and education – applications that need a great deal of bandwidth to run as smoothly as possible. But the bottom line has remained the same for years now: it is the increase in user numbers and in the devices being employed that drives up demand for bandwidth.
Two main pricing strategies: charge the same as before or monetise new features
Selling a Gigabit plan allows an ISP to position itself as an innovator, technically capable of delivering ever faster connections. From a marketing and commercial standpoint, however, the strategies being adopted for Gigabit plans are not really new: either an ISP will include a Gigabit plan in its product line as the logical next step in providing superfast access, charging more or less the same prices as before, or it will work to capitalise as much as possible on its new product by billing customers for each new feature enabled by its infrastructures – a good example of this being MyRepublic in Singapore.
Find out more about Gigabit-speed access in our dedicated market report
Senior Consultant, IDATE DigiWorld
IDATE has just released its latest market report on connected cars, part of its ongoing series on the Internet of Things and M2M. The report provides an opportunity to take stock of a major market whose development appears to be accelerating, with a series of announcements: veteran industry leaders such as Mercedes talking about driverless cars, the rise of newcomers such as Tesla, and connected car projects coming out of China, as foreshadowed by the new joint venture between Internet giant Alibaba and one of China’s leading car-makers, SAIC Motor.
This is a market that every stakeholder along the value chain is gearing up for.
Most manufacturers’ strategy is to make their cars connected. The main drivers here are safety-related regulation in Europe and the underlying revenue opportunity. In the USA, GM’s recent announcement that it will embed 4G modules in all new cars is seen as a key trigger for market take-off. For telcos, the revenue opportunity could be interesting, as the connected car will generate traffic that telcos will charge for indirectly (through the automobile manufacturer).
All the main M2M mobile carriers are involved in the connected car space, as it represents one of the major markets in volume terms. In a context where their traditional mobile revenues are flat, and even declining in some regions, providing mobile connectivity in cars is a key business opportunity for telcos. Beyond driver-assistance applications, from a telco’s perspective the car can be seen as an additional cellular device with a potentially high-consumption service profile, covering usage such as the mobile Internet, entertainment on demand and mobile hotspot features. The prime business model remains the traditional wholesale relationship (B2B2C), even though some telcos, such as AT&T, are trying to address end users directly through B2C models (a retail data plan) and the integration of the car into mobile share plans.
For Internet players, the strategy is clear: the automobile is an additional connected device, just like smartphones, tablets and laptops, and needs to be addressed. However, Apple and Google do not really take the same approach: whereas Apple aims to introduce its technology as an interface with its products, Google is promoting the embedding of its technology into the car as a regular device. Google also wants to collect data to provide advertising that is as accurate as possible – nearby points of interest, for example – based mainly on location.
A market that is starting to take off
On the market side, according to IDATE, 420 million automobiles will be connected in 2020, representing a 34% CAGR on the 74 million connected vehicles of 2014. Nevertheless, this growth is not homogeneous across the categories of connected cars: embedded systems will lead the market by 2020.
Asia will lead the connected car market in 2020. Europe will benefit from a 39% CAGR by 2020, mainly thanks to the eCall regulation, which enters the market by end-2018.
In 2020, connectivity revenue for connected cars will exceed EUR 9 billion. In value, North America will be the leading zone, mainly due to higher ARPU than anywhere else in the world for both telematics and infotainment offerings. This encompasses direct connectivity through embedded systems but also indirect revenue related to smartphone usage. The major issue here is users’ real willingness to pay for such services. To encourage users to subscribe, telcos and manufacturers are already contemplating different revenue models, including share plans. All the same, adoption is likely to remain limited over the next five years.
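A back-of-the-envelope division of the two forecast figures puts the implied average annual connectivity revenue per connected car in perspective (both figures from the text; the real per-car figure varies widely by region and service mix):

```python
# Divide forecast 2020 connectivity revenue (EUR 9 billion) by the
# forecast connected-car base (420 million) to get the implied
# average annual connectivity revenue per car.
revenue_eur = 9e9
connected_cars = 420e6
arpu = revenue_eur / connected_cars
print(f"Implied connectivity revenue per car: ~{arpu:.0f} EUR/year")  # ~21
```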
Forecast for connected car evolution, by implementation technique
worldwide, 2020 (%, Million units)
The headlines are full of the self-driving vehicle, which is on everyone’s lips in the industry. Automation is commonly framed in six levels, ranging from zero autonomy to full automation. The leading manufacturers at the first steps are mainly luxury car-makers. Traditional car manufacturers are focused on the semi-autonomous route, whereas the ‘upstarts’ from the realm of the Internet, such as Google and Apple, are testing the waters of the fully autonomous car straight away. Nevertheless, many issues need to be resolved before the self-driving car market takes off. Currently, they are legal (how to handle accident responsibility), cultural (no real demand from end users yet) and economic (who will fund the infrastructure).
Find out more about connected cars in our dedicated market report
Published in Communications & Strategies n°99
IDEI, Toulouse School of Economics
Interview conducted by Marc BOURREAU,
C&S: The concept of platform is sometimes used in a very broad way in the policy debates. How would you define platform/multi-sided markets? What is the difference between a one-sided and a multi-sided market?
Bruno JULLIEN: It is difficult to provide a formal definition of a platform in economics, and there is no consensus on one. As a start, I would say that a platform is a bundle of services that are used by several economic agents in order to interact. In such situations, a side represents a particular type of user (say, sellers on a B2C marketplace, or merchants accepting a credit card). Each side's benefits depend on what the other sides are doing on the platform. Moreover, the platform may treat the various sides in a differentiated manner: some may get free services while others pay for the right to access the platform.
From a theoretical perspective, a platform is not necessarily multi-sided. To be so requires two conditions. First, the organization of the platform's services involves network externalities, i.e. the participation and other actions of one user affect other users of the platform. Second, the platform discriminates between different types of users. One criterion sometimes used to determine whether an activity is multi-sided is whether the value of the service for each user depends on the whole structure of prices.
In a multi-sided platform the customers need to consider interactions with other economic agents to evaluate the value of the good or service and determine their behavior. The final value of the service for the customer is not fully controlled by the platform but results from agents' interactions. By contrast, in a one-sided market, firms choose the product or service characteristics and customers' value depends only on that choice.
The difficulty with the concept is two-fold. First, it potentially covers a wide range of goods and services, so the multi-sided externalities must be significant enough to be relevant. Second, not all platforms are necessarily multi-sided, as this may depend on the platform's business model. Consider retailing, for instance: a chain store is typically not a multi-sided platform, but the Amazon marketplace is. The chain store decides which products to carry at which prices, and consumers then interact only with the store and don't care about suppliers. By contrast, online marketplaces let buyers and sellers jointly determine the products and prices.
The literature on multi-sided markets emerged in the early 2000s (and you were one of the first authors on the topic), but it is still vibrant. What do we learn from the recent research on platforms?
The early literature was mostly focused on price theory, explaining the differences between pricing in multi-sided and one-sided markets by emphasizing the need to coordinate users and bring all sides on board. A main contribution has been the development of the concept of opportunity cost, where the cost of providing the service to a user is adjusted to account for the benefits (or costs) accruing to other users. This, however, needs to be put to work in practice, which is part of what the literature is aiming at. The recent literature has developed along several lines. The first is the application of the concept to specific industries, as has been done for the Internet, search engines, ad-financed media or credit cards. In the case of media, for instance, the recent literature helps us understand the evolution of business models or the implications of mergers. Along the same dimension, research is trying to develop new operational tools for competition policy where traditional results don't apply; there has been work, for instance, on bundling, and on econometric models for empirical work and policy evaluation.
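The opportunity-cost adjustment can be made concrete with a standard textbook sketch, in the spirit of the monopoly two-sided pricing models developed in this literature (the notation below is illustrative, not drawn from the interview itself):

```latex
% Two sides, 1 and 2; n_i members on side i; c_i the per-member cost;
% \alpha_2 = benefit a side-2 member derives from each extra side-1 member.
% Effective (opportunity) cost of serving one more side-1 member:
\tilde{c}_1 \;=\; c_1 \;-\; \alpha_2\, n_2
% Each extra side-1 member raises side-2 willingness to pay by \alpha_2,
% value the platform can recapture through its side-2 price; when
% \alpha_2 n_2 > c_1, the profit-maximizing side-1 price can fall to
% zero or below, which is why one side is often served for free.
```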
At the theory level, what I retain mostly from recent work is the importance of participation patterns of the users (exclusivity, multiple vs single affiliation, switching) in shaping the competition between platforms.
On the other side of the coin, what do we still not know? What are the key questions where more research is still necessary?
While we have made significant progress in price theory and applications, there is a lot we don't know and a large scope for future research. For the theory I think that the main issue that we need to address is that our theories are mostly static. We need to better understand the dynamics of competition between platforms. What determines the emergence of a successful platform? What is the extent of barriers to entry? What are the respective roles of history and actual merit?
I also expect research to move away from price theory into design and organization, where most competition takes place. We need to understand when and how a platform decides to interfere in transactions. A recent concrete example is the issue of MFN (most-favored-nation) clauses for online booking systems, which prevent registered hotels from offering lower prices on competing websites or through direct sales.
For this we need more empirical work to guide research and applications. Currently, much of the available data originates from a single platform, so we may expect many studies of agents' behavior on a given platform. But we will also need empirical work on platform competition.
For competition/regulation policy, we need more work to propose operational decision tools to competition authorities and regulators. Basic questions such as market definition or tests for predation are still not resolved for platforms. We have difficulty evaluating the optimal market structure, as more competition may not raise welfare and efficiency. This will require developing research at the frontier between law and economics.
There is a hot policy debate today in Europe on the regulation of platforms. What is your opinion on this question? What are the potential market failures in platform markets, which would justify a regulatory intervention?
The issue is not to identify market failures, which occur when there are externalities between users, network effects and market power, as is usually the case with platforms. The main question is whether there is scope for efficient ex ante regulatory intervention. In some cases, ex ante rules or principles are desirable, for instance for privacy issues. But in general I would be cautious and favor ex post intervention, for several reasons. Platforms are very heterogeneous: platforms may propose very different activities, the same activities may be proposed by very different platforms, and platforms may be more or less vertically integrated. This means that it is extremely complex to define ex ante the perimeter of a regulation. Moreover, the same regulation may affect different platforms in different ways; for instance, a pay platform and a free platform are not affected in the same manner by restrictions on data usage. Finally, the markets where platforms operate are dynamic and innovative. Market power has to be evaluated from a dynamic competition perspective, and regulation should not impede this dynamic process.
Notice that it is in the broad interest of a platform to optimize the quality of interactions between its members and correct externalities, because this raises their value. The literature has put some limits on this view, but intervention should occur only for clearly identified failures. I would point out two factors that may matter here.
A key distinction should be drawn between situations involving bottlenecks and those where all users can easily switch or use several platforms. A bottleneck arises when a platform enjoys exclusive rights over the conduct of transactions with some of its users. This confers some monopoly power on these transactions, and we know that competition between platforms will not eliminate it. We may then want to reduce this market power. This is similar to the one-way access problem familiar to telecommunications regulators.
Second, platforms providing free services to some sides rely on a limited set of instruments to coordinate users, which may not be enough to address externalities. Indeed, a good coordination of the sides would require as many prices (or subsidies) as there are sides. Free platforms by nature cannot pass on the true opportunity cost to consumers, which may induce excessive usage or distort the prices charged to other sides. This may create inefficiencies and calls for special scrutiny.
Do you think that regulators and competition authorities today sufficiently take into account the specificities of multi-sided markets (provided you think they should)?
Regulators and competition authorities are now aware of the concept and its importance in some industries. However, they lack the tools and knowledge to incorporate this dimension into their analysis. I think this is one reason why we don't see as many applications to cases as we would like, and why they prefer to rely on more conventional analysis. Some cases are more obviously two-sided than others, the credit card cases for instance. But even if the concept is not explicitly mentioned in decisions, it is often present in the reasoning (an example is the FCC's 2008 approval of the merger of the satellite digital radio services Sirius and XM).
In platform markets, we observe some big multi-platform players, such as Apple, Google, Amazon, or Facebook, with distinct core businesses and overlapping activities. Do you think this multi-dimensional feature of the competition affects the ways these firms compete with each other?
I am not a specialist in strategy, but I think this is the case. These platforms started with very different objectives and business models. This affects their priorities and strategies in terms of pricing, and the choice and organization of activities. Clearly, Google Shopping is organized in a very different manner from the Amazon marketplace, reflecting their different competencies and services. I have always thought that part of the initial difference in e-book strategies between Amazon and Apple was due to Amazon's expertise in the domain of cultural goods.
Bruno JULLIEN is Senior Researcher at CNRS and the Toulouse School of Economics (TSE), and a senior member at Institut d'Economie Industrielle (IDEI). He is currently Scientific Director of TSE. His interests cover industrial organization, in particular in the domain of network economics, ICT and competition policy, as well as regulation, insurance and contract theory. He is recognized as a world-leading academic researcher on the economics of two-sided markets, which he helped to develop. Bruno Jullien has published numerous articles in renowned scientific journals such as Econometrica, Journal of Political Economy, Review of Economic Studies and RAND Journal of Economics. He is currently co-editor of the Journal of Economics and Management Strategy and associate editor of the Geneva Risk and Insurance Review. He is a Fellow of the Econometric Society, a member of the Steering Committee of the Association of Competition Economics and of the Economic Advisory Group on Competition Policy of the European Commission. He is a fellow of CEPR, CESifo and CMPO. Bruno Jullien has also been advising firms and decision makers on regulatory and competition policy issues for more than 20 years. He graduated from Ecole Polytechnique, ENSAE and EHESS, and holds a Ph.D. in economics from Harvard University. He started his career as a researcher in Paris at CEPREMAP and CREST. He was also a Professor at Ecole Polytechnique. He joined the University of Toulouse in 1996. He has been Director of the research centre GREMAQ (1997-2004) and Deputy Director of the Toulouse School of Economics (2010-2011). He received the Bronze Medal of CNRS, the "Palmes Académiques", the ACE best article award and the JIE best article award.
The Communications & Strategies No. 99 "The Economics of Platform Markets - Competition or Regulation?" is available!
DigiWorld Summit 2015
IDATE will contribute to the debate at the upcoming DigiWorld Summit on 17, 18 and 19 November, in Montpellier, with:
- Fatima BARROS, Chair, BEREC
- Carlo d'ASSARO BIONDO, President, EMEA Strategic Relationships, Google
- Bruno LASSERRE, President of the Autorité de la concurrence
- Eduardo MARTINEZ RIVERO, Head of Unit "Antitrust Telecom", DG Competition, European Commission
- Sébastien SORIANO, President of ARCEP
Information & Registration:
Europe kicks off its review of the regulatory framework for electronic communications: issues and challenges
CEO, IDATE DigiWorld
The European Commission has officially begun a consultation on a new regulatory framework adapted to changes in the electronic communications sector.
This review process, which is a regular occurrence in the life of European directives, is taking an unusual turn for at least five reasons:
1. It follows a less than glorious period during which the attractive idea of accelerating developments to achieve a single market for telecommunications led the Commission and its partners, the EU Parliament and Council, to devote many long months to endless negotiations over net neutrality and roaming in Europe. Ultimately, the Council approved a compromise-laden agreement, under terms that should also enable Parliament to bring its own Connected Continent negotiations to a close.
2. This review also comes at the end of a period of steadily decreasing revenues for telcos in Europe. IDATE forecasts that, even by 2020, the sector will not be back to 2008 levels. While we appear to be seeing the first hints of a recovery, revenues are still on a slight downhill slope in virtually every major European market. We could argue that this is due to disinflation and digital productivity factors, and point out, quite rightly, that it is margins that count. Margins are indeed getting back on track, but it would be difficult to argue that their decline over the past several years affected only dividends, and not investments as well. The top five European telcos' combined CapEx on mobile systems in 2013 was only just over 50% of what carriers in the United States spent that year, for a population of the same size.
3. These problems, coupled with the restrictions inherent in deploying ultra-fast fixed and mobile networks, resulted in a series of primarily in-market mobile/mobile and fixed/mobile mergers and acquisitions, under financially favourable conditions. These deals did relieve competitive pressure to some degree, should help to put an end to crumbling margins and should bolster operators' spending. To date, they have not accelerated the creation of a single European market by triggering a wave of cross-border M&A deals, as the previous Commission had more or less explicitly hoped. We can nevertheless point out that, over the past several months, the sector's regulation has been established largely during merger deals, and more as a result of decisions from DG Competition and national competition authorities than from the work of DG Connect and national regulatory authorities (NRAs). It is worth mentioning that this has often been the case in the United States as well: one recent example being the approval given to AT&T's takeover of DirecTV, which was contingent on the carrier stepping up its fixed ultra-fast broadband network rollouts.
4. The previous review, and especially several EU recommendations, focused on working out the details of the transition from regulation centred on access to legacy copper networks to regulation adapted to access to new infrastructures. This issue is becoming more and more central, and is clearly the core concern of the current review. First, we are far from having achieved the objectives of the Digital Agenda, e.g. >30Mbps access for all Europeans by 2020, and a connection speed of 100Mbps and up for more than 50% of them. So the emphasis needs to be on provisions that stimulate competition, but also on those that give market players incentives to invest. Second, the tremendous diversity in market configurations and technical options means that, in terms of methodology, any regulatory guidelines that Europe adopts on ultra-fast broadband can only apply in a very general way. There are countries, such as Belgium, Denmark and the Netherlands, for instance, where competition comes down to a duopoly of cable versus the incumbent carrier's superfast access networks, with all of the complex wholesale access issues this entails (on VDSL with vectoring, on cable systems, and in relation to new entrants' investments in the ADSL unbundling market). Then there are other countries that have managed to increase connection speeds in part by investing in upgrades to the incumbent carrier's network, while maintaining equal access rules for these infrastructures. Such is the case in the UK where, even though pushed to do so by BT's rivals accusing the telco of a margin squeeze, a review conducted by Ofcom led the regulator to reconsider the principles of a managerial separation for Openreach.
There is also the relatively singular case of France – even if there are some overlaps with the situation in Spain and Portugal – where the focus is on symmetrical regulation which, at least in the country’s largest cities, limits access to the indoor portion of FTTH networks, and promotes a sort of distribution of coverage initiatives depending on the location. Another singularity of the situation in France is the very high percentage of FTTH coverage achieved thanks to public-private partnerships. We should add that the review will not only have to take this patchwork of national circumstances into consideration, but also the emergence of very high-speed cellular networks with the launch of LTE Advanced and the first forays into 5G.
5. Lastly, the context has also been changed by the accelerated pace of the digital transformation in every vertical sector, and the growing influence of game changers such as mobility, the cloud, big data, etc. This new environment goes beyond the more or less clearly defined topic of the level playing field, which had been confined to complaints over the imbalance in regulatory restrictions imposed on telcos and OTT players. We can sense that regulatory imperatives are torn between, on the one hand, the desire to keep ex ante regulation confined to telecom markets, where it can be lifted once effective competition is in place, and, on the other, a tendency to want to apply this approach to all of the stakeholders along the value chain.
Clearly, there are many reasons to be interested in the regulatory review process that is underway. Talk will also focus on significant market power (SMP) and the three criteria used to test it, on harmonising spectrum policies, OTT-telco relations… But the process is proving more open and more complex than the last time around.
IDATE will contribute to the debate in various ways, through its publications (*), its market reports and its series of annual events. Let me take this opportunity to mention the outstanding panel of guests that will be on hand to discuss these very issues at the upcoming DigiWorld Summit on 17, 18 and 19 November in Montpellier:
• Fatima BARROS, Chair, BEREC
• Sébastien SORIANO, President of ARCEP
• Eduardo MARTINEZ RIVERO, Head of Unit "Antitrust Telecom", DG Competition, European Commission
• Bruno LASSERRE, President of the Autorité de la concurrence.
Hope to see you there!
Director of the Innovation Business unit, IDATE
VOIP and instant messaging have not harmed EU telcos
IDATE has today published a new report, which shows:
• The introduction of VOIP and instant messaging has not harmed traditional European telcos or their overall revenues
• In fact, there appears to be a small net benefit: losses in SMS revenues have been balanced by overall increases in revenue from data tariffs, driven by demand for services such as VOIP and instant messaging
• While there have undoubtedly been tough challenges for traditional telcos in Europe over the last 10 years, this report shows that the biggest challenges have come from EU regulation and internal competition in the telecom industry, especially for voice calls (mobile termination, roaming, the transition of telcos to managed VOIP, etc.)
The report does acknowledge that there has been some impact in two specific areas in Europe:
• In countries where SMS prices were artificially high (in some cases more than 10 times the price of SMS in other European countries), the decline in SMS revenues was accelerated by instant messaging services such as WhatsApp. However, in countries where SMS has been cheaper or provided as part of an unlimited tariff, WhatsApp and other instant messaging services have had a negligible impact on carrier revenues.
• VOIP calls have eaten into international voice calls, but the relative losses here are small and, in some cases, the competing VOIP services have been provided by the carriers themselves.
> Written with financial support from Google, the report is available here.
> In-depth market elements can also be found in reports regularly published in the DigiWorld Research catalogue from IDATE: “Communication Services”
Senior Consultant, DigiWorld IDATE
A fast-growing market with 42 billion connected objects in 2015 and the promise of +14% annual growth up to 2025
IDATE has published its analysis of and forecasts for the global Internet of Things (IoT) market. An opportunity to deliver a synthesis of the Institute’s many reports on the matter (smart cars, M2M, smart grids, smart cities, smart toys…) to examine a market which, although developing rapidly, still raises a host of questions: is it really taking off, and how fast? Which business models seem the most reliable? Which market players and countries are in the best position to benefit from this new stage in the Internet’s evolution?
Although the Internet of Things is a powerful concept, it is not necessarily a market in and of itself. IoT encompasses a very disparate array of fields that need to be examined separately, to obtain an accurate understanding of their particular features, and their true growth potential.
IDATE forecasts that the global IoT market will grow from a base of 42 billion objects in 2015 to 155 billion in 2025, which translates into an average annual increase of 14%.
• Unsurprisingly, the Internet of Objects (IoO) represents the bulk of the IoT market (80%), thanks to its widespread adoption by a number of sectors, and to the very low cost of tags.
• The Connected information devices segment is the second largest in terms of volume, representing 13% of connected things, and set to grow by an average 13% a year up to 2025.
• M2M (machine to machine) represents only 6% of connected things today.
• And the smallest market in terms of volume is also the newest: Wearables & connected objects with 1% in 2015. But it is also the market that will grow the fastest over the next 10 years: by an average 30% per annum up to 2025.
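As a quick sanity check on the headline figures, the growth rates implied by the 2015 and 2025 volumes can be recomputed (a minimal sketch; the volumes and segment shares are those given above):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by two end-point volumes."""
    return (end / start) ** (1 / years) - 1

# Headline forecast: 42 billion connected objects in 2015, 155 billion in 2025.
total_2015, total_2025 = 42e9, 155e9
overall = cagr(total_2015, total_2025, 10)  # close to the 14% headline rate

# Wearables & connected objects: 1% of the 2015 base, growing ~30% per year.
wearables_2015 = 0.01 * total_2015
wearables_2025 = wearables_2015 * 1.30 ** 10

print(f"overall CAGR: {overall:.1%}")
print(f"wearables in 2025: {wearables_2025 / 1e9:.1f} billion objects")
```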
World Internet of Things market, 2013-2025
Source: IDATE DigiWorld, “The Internet of Things”, October 2015
Compared to the size of the Internet of Objects and Connected information devices segments, the rest of the market is splintered between a host of vertical markets:
• the utilities market is reporting rapid growth, stimulated by regulations and public policies;
• the electronic equipment and automotive markets are also among the largest today, while the consumer electronics industry is incorporating connectivity into more and more traditional products, such as cameras.
The different sectors’ contribution to the global Internet of Things market, in 2015
Source: IDATE DigiWorld, “The Internet of Things”, October 2015
Is the IoT market changing shape?
To provide a clearer strategic analysis of this disparate set, IDATE has chosen to break down the Internet of Things market into four key areas. A distinction can be drawn between consumer and business products, on the one hand, and between the different types of connectivity, on the other:
• silo connectivity: a closed loop of dedicated links between objects and servers, using direct connectivity or a hub, e.g. a smart meter or a payment terminal;
• interconnected connectivity: different types of communication between the objects themselves, mainly through the same hub, e.g. home appliances such as a washing machine that signals the end of its cycle on the TV screen.
The report provides a detailed analysis of the resulting, four key IoT markets:
• M2M, which covers production loops and closed loops based on applications;
• Wearables and connected objects which, by definition, do not talk to each other;
• Industrial Internet, which refers to the smart factory concept, with interactions between multiple applications that need to optimise their internal processes;
• The smart home, a concept under which applications can communicate with one another without having to go through the Internet.
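Reading the two axes (consumer vs. business products, silo vs. interconnected connectivity) against these four markets, the segmentation can be sketched as a simple 2x2 matrix (a hypothetical rendering of the taxonomy described above, not IDATE's own classification code):

```python
# IDATE's four key IoT markets, as the cross of two axes:
# who the product targets, and how its objects are connected.
SEGMENTS = {
    ("business", "silo"): "M2M",
    ("consumer", "silo"): "Wearables & connected objects",
    ("business", "interconnected"): "Industrial Internet",
    ("consumer", "interconnected"): "Smart home",
}

def classify(audience: str, connectivity: str) -> str:
    """Return the IoT market segment for an (audience, connectivity) pair."""
    return SEGMENTS[(audience, connectivity)]

# A washing machine signalling the end of its cycle on the TV screen:
print(classify("consumer", "interconnected"))  # Smart home
```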
The Internet of Things market
Source: IDATE DigiWorld, “The Internet of Things”, October 2015
Connected Things Forum
The many facets of these topics will be explored at the Connected Things Forum on 18 November 2015, as part of the DigiWorld Summit, with:
• David d'AMORIM, Director of Innovation, La Poste
• Ezio ARMANDO, Managing Director in charge of Emerging Technologies, Accenture
• Xavier BOIDEVEZI, Vice President Business Development & Digital, SEB
• Bernardo CABRERA, Head of M2M Marketing & Projects Management, Bouygues Telecom
• Vincent CHAMPAIN, Operations Director, GE Corporate France
• Andreas FIER, Head of Academic Relations, Deutsche Telekom AG
• Didier GUILLOT, Director, Innovation and Multi-utilities Division, Sagemcom
• Thibault KLEINER, Head of the Network technologies unit DG Connect, European Commission
• Ludovic LE MOAN, CEO, Sigfox
• Soline OLSZANSKI, VP Strategy & Innovation, Hub One
• Olivier ROUXEL, in charge of RFID & IOT assignments, DGE
• Marcus WELLER, Founder & CEO, Skully
#DWS15 and on Twitter @DigiWorldIDATE