The Platform Law Blog

Ofcom’s discussion document on media plurality and online news: Lessons for future regulatory interventions in the UK and beyond

Photo by Dominika Gregušová on Pexels.com

On 16 November, Ofcom published a discussion document on media plurality and online news. The discussion document sets out Ofcom’s understanding of the role that online platforms, such as social media, search engines and news aggregators, play in the UK news ecosystem. 

Ofcom’s discussion document focuses on hot (yet understudied) topics. Even though it is now widely acknowledged that platforms may significantly influence the range and quality of news that citizens see, policymakers have not sufficiently engaged with empirical research on (a) the role of platforms in the news value chain, and (b) how citizens consume news in platform environments. The discussion document contributes to a better understanding of these issues, as it is largely based on the News Consumption Survey, which Ofcom conducts on an annual basis.

The discussion document poses a series of questions to interested stakeholders in order to facilitate exchanges between Ofcom and the industry. Such exchanges are expected to inform the recommendations that Ofcom will make to the UK Government on the reform of the existing framework. In other words, it is likely that UK media regulation will be revised to take account of the impact of platforms on plurality. 

Though the discussion document and the News Consumption Survey clearly focus on the UK market, Ofcom’s findings are arguably relevant to other jurisdictions that have recently adopted or are in the process of designing regulation that seeks to promote media pluralism, notably the EU and its Member States. This blog post summarises Ofcom’s main findings and discusses how they can inform both regulation that is still in the making (e.g., the European Media Freedom Act) and the implementation of regulation that has already been adopted (e.g., the Digital Markets Act and national prominence rules). It also assesses Ofcom’s proposals for future regulatory interventions against the backdrop of consumption patterns in digital markets.

Before we take a closer look at the discussion document, it is worth noting that Ofcom’s work on media plurality remains important. Even though audiences can access immeasurably more information than they could in the analogue environment, the changes brought about by digital technologies have not rendered outdated the discussion over how media plurality can be effectively protected. On the contrary, alongside concerns that policymakers have attempted to address in the past, such as concentration of ownership and interference with editorial freedom, which are still relevant, new (and arguably more complex) problems have arisen. As Ofcom puts it:

“[T]he sheer scale of options may be as overwhelming as they are informative, with trustworthy content fighting for space and attention alongside more sensationalist and unreliable material. As [online platforms] increasingly play the role of gatekeepers, curating or recommending news content to online audiences, it is not clear that people are aware of the choices being made on their behalf, or their impact.”

Put differently, the real question for media policymakers is not whether plurality still warrants protection, but whether the legal framework is appropriate to address the challenges posed by actors that have emerged in recent years, notably online platforms (see my work on this issue here).

Ofcom’s main findings concerning the effects of online platforms on media plurality 

A word of caution regarding the scope of Ofcom’s initiative is warranted. The discussion document focuses on the impact of online platforms as it is experienced by citizens. Ofcom has not analysed the effects of platforms on the editorial incentives of news publishers, for example in terms of the stories they cover. Moreover, Ofcom has not discussed the effects of online platforms on the sustainability of news production, that is, issues relating to competition in digital advertising markets and the bargaining relationship between platforms and news providers over payment for the provision of news content. This does not mean that the above issues are not key to ensuring media plurality (they are). However, Ofcom will likely focus on the former in the near future, whereas the latter is expected to be dealt with under the upcoming Digital Markets Unit (“DMU”) regime.

Returning to the discussion document, Ofcom’s main findings can be briefly summarised as follows: 

Online platforms play an important role in the news supply chain

Newspaper publishers and broadcasters remain the creators of news content. However, social media platforms, search engines, and news aggregators play an important role in the curation, discovery and monetisation of news. 

In terms of consumption patterns, it is striking that one in seven people told Ofcom they now only use online sources for news, while Facebook has become the third most popular news source overall in the UK after the BBC and ITV.

There are concerns about the effects of online platforms, and in particular social media platforms, on media plurality 

Based on the data gathered by Ofcom, citizens who primarily rely on social media to access news content are, compared with people who primarily use traditional media for news consumption, (a) less likely to correctly identify important factual information; (b) more likely to feel antipathy towards people who hold different political views (the so-called “echo chambers” effect); and (c) less trusting of democratic institutions.

It is noteworthy that Ofcom found that these outcomes are not generally associated with the use of search engines or news aggregators, both of which are perceived by users to provide higher quality and more accurate news than social media. These findings are supported by research undertaken in other markets. 

People are not always clear about the extent to which online platforms influence the news they consume, or how they are able to do so

Even if audiences value the variety of news content they find online, they usually do not know why certain stories are served to them. Notably, large numbers of people have very little understanding of the key role that personalisation and the use of their data play in determining the news they see. Some participants in Ofcom’s focus groups discovered the nature and scale of personalisation on platforms for the first time during those sessions. Moreover, when introduced to existing media plurality regulation, participants were surprised (and concerned) that current plurality rules do not apply to platforms.

Lessons for the future of plurality rules in the UK and beyond

Ofcom’s findings are aligned with the results of empirical research conducted elsewhere. In other words, the UK market is no different from others in terms of how online platforms influence the news value chain. The interesting part lies in Ofcom’s reflections on how its findings should inform regulatory initiatives. Those reflections are arguably relevant to other jurisdictions and markets. Two issues stand out in my view, namely the scope of media ownership restrictions and the options for future regulatory interventions. Both are discussed in more detail below.

The scope of rules setting restrictions on media ownership: Lessons for the EU 

Ofcom acknowledges the limitations of UK media ownership rules. Those rules, which establish (a) restrictions on certain entities holding broadcast licences, and (b) a Public Interest Test enabling the Secretary of State to intervene in media mergers that meet certain thresholds, apply to “traditional” media. Ofcom notes with respect to the existing regime that: 

“if it is to remain effective, a regulatory framework must have sufficient tools to address new risks to media plurality as they arise. […] [M]ore and more news is accessed via [platforms]. Further, […] although there are significant gaps in the evidence, we think there is a risk that certain online [platforms] may cause or facilitate political polarisation, give people a distorted range of viewpoints or expose them to harmful levels of misinformation.”

This should alert the EU legislator. I have already discussed in a previous post that the European Commission has recently proposed a European Media Freedom Act (“EMFA”), which would require Member States to establish substantive and procedural rules for assessing “media market concentrations” that could have a significant impact on media pluralism and editorial independence (Article 21). A key issue that arises from the proposal is the term “media market concentration”, which is defined as a concentration within the meaning of the EU Merger Regulation that involves “at least one media service provider” (Article 2(13)). In turn, the term “media service providers” does not cover platforms, which are distinguished from the former and which are subject to a different set of rules. As I have argued elsewhere, this approach fails to consider that platforms exercise editorial control over news content (e.g., by moderating, removing and ranking media content). Unless platforms are brought within the scope of anti-concentration rules, the EMFA will suffer from two significant drawbacks. First, it will not reflect market realities. In other words, it will not be capable of addressing the problem it seeks to remedy. Second, platforms will remain subject to horizontal merger control rules (and, to the extent they qualify as “gatekeepers”, a mere reporting obligation under the Digital Markets Act), whereas “traditional media” will be subject to strict anti-concentration rules. As a result, the EMFA will widen the regulatory asymmetries between media service providers and platforms. This can be expected to harm both competition and media pluralism.

The solutions considered by Ofcom: Lessons for all

According to Ofcom, in addition to existing rules (e.g., ownership restrictions), there are other tools that could address harms to media plurality. Ofcom divides those tools into four broad categories: increasing transparency, empowering user choice, direct interventions to secure the maintenance of plurality, and sustainability of news providers. As already mentioned, Ofcom did not focus on matters relating to the sustainability of news providers. It did, however, explore a range of solutions falling under the other categories.

Starting from transparency, it is worth noting that Ofcom distinguishes between transparency vis-à-vis the public (including news publishers) and transparency vis-à-vis the regulator. With respect to the former, Ofcom notes that greater transparency over how online intermediaries deliver news content may help users make informed choices about where they get their news. For publishers, it will provide insight into how platforms’ systems affect the visibility and accessibility of their content. Establishing transparency rules to advance the understanding of how platforms influence news content cannot harm. But it is not likely to help either. 

A good example is the “privacy paradox” whereby users claim to care about their privacy, but their behaviour suggests otherwise (because they always agree to the privacy policies of the providers they use). Concretely, for consent to be compatible with the General Data Protection Regulation, it must be “informed” (Article 4(11)). Even where privacy policies are transparent enough to ensure that the consent that is extracted is informed, most users do not read them. In other words, transparency has not contributed greatly to users making informed decisions about how their data is processed. Even if users participating in Ofcom’s focus groups stated that plurality should be protected, there is nothing to suggest that users will read the Terms and Conditions explaining that news served to them is personalised or that it is promoted in ways that limit choice. Put differently, considering how users behave, there is likely a “plurality paradox” and, if we were to rely on transparency rules to solve the problem, the problem would remain unaddressed.

As regards transparency vis-à-vis the regulator, Ofcom explores a range of solutions that could be used to assess the impact platforms have on media plurality. These include: analysis of aggregated usage and traffic data (e.g., detailed data about the amount of time users spend reading news, which would enable Ofcom to assess the importance of each platform in the news ecosystem and how different groups are impacted); analysis of individual user data, which would enable a more complete assessment of concerns around issues such as echo chambers, misinformation and algorithmic bias; algorithmic audits, which would enable Ofcom to determine, inter alia, whether there is an explicit bias within an algorithm or whether there is the scope for manipulation of ranking systems in a way which could be harmful to media plurality; and A/B testing (i.e., experiments which compare two versions of a service to evaluate which achieves a goal more effectively), which could be used to assess whether differences in choice architecture or recommender systems could affect outcomes such as the diversity of news that people are exposed to. 
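
To make the A/B-testing idea more concrete, below is a deliberately simplified sketch of how a regulator might compare the diversity of news sources that users are exposed to under two variants of a recommender system. It is my own illustration rather than anything set out in Ofcom’s document, and every element of it (the toy exposure logs, the source names and the entropy-based diversity metric) is a hypothetical assumption.

```python
# Purely illustrative sketch: a toy A/B comparison of how two recommender
# variants affect the diversity of news sources users see. The data, the
# source names and the choice of metric are hypothetical assumptions, not
# anything prescribed in Ofcom's discussion document.
import math
from statistics import mean

def source_diversity(exposures):
    """Shannon entropy (in bits) of the distribution of news sources a user saw."""
    total = len(exposures)
    counts = {}
    for source in exposures:
        counts[source] = counts.get(source, 0) + 1
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical per-user exposure logs for each experimental arm.
arm_a = [  # control ranking
    ["BBC", "BBC", "BBC", "TabloidX"],
    ["TabloidX", "TabloidX", "BBC"],
]
arm_b = [  # variant with a diversity-boosting tweak to the ranking
    ["BBC", "ITV", "Guardian", "TabloidX"],
    ["ITV", "BBC", "Guardian"],
]

diversity_a = mean(source_diversity(user) for user in arm_a)
diversity_b = mean(source_diversity(user) for user in arm_b)

print(f"Mean source diversity, control: {diversity_a:.2f} bits")
print(f"Mean source diversity, variant: {diversity_b:.2f} bits")
```

Any real exercise would of course involve far larger samples, carefully designed metrics and proper significance testing, which is precisely why access to platform data and sufficient technical expertise matter so much for this kind of oversight.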

These proposals are worth considering for two reasons. First, they are likely to be more effective than rules that would enhance transparency vis-à-vis the public. Unlike users, a media regulator has a mandate to protect plurality. Second, having access to data about consumption patterns, online choice architecture, and algorithms would enable the regulator to tailor solutions to the (type of) platform under consideration. For example, one of the main conclusions drawn by Ofcom is that news aggregators are different from social media platforms because the former are perceived by users to provide higher quality and more accurate news than the latter. Of course, the solutions discussed above require sufficient financial resources and industry expertise. Ofcom may meet these conditions, but the same does not necessarily apply to smaller regulators in other parts of the world.

In addition to transparency, Ofcom further discusses solutions empowering the user. Ofcom refers to customisation of recommendations (e.g., allowing users to easily change default settings, offering a choice of interoperable third-party recommender systems within their services so people are able to select the system that best suits them); customisation of choice architecture (e.g., varying how choices are presented to their users and periodically providing them with options about the overall design of their news feeds); and measures to promote conscious choice (e.g., flagging content identified as misinformation). 

Such measures could certainly prove useful, but only to a limited extent. This is because digital markets, including digital media markets, suffer from user stickiness and customer inertia. This was examined, inter alia, in the Google Android decision (recently upheld by the General Court of the EU) and the Impact Assessment for the DMA. Based on the results of the research it conducted, Ofcom itself notes that some respondents said that they did not want increased choice and that they do not always make use of the control they already have. Put differently, relying on measures that seek to empower the user in order to protect plurality is unlikely to suffice. 

Ofcom appears to acknowledge that transparency rules and measures to empower user choice are not likely to address harms to pluralism, which is why it explores a further class of interventions that would be based on a more prescriptive approach to the news that users (should) see. Those interventions include prominence rules, which are not new to media regulation. Prominence rules are arguably more complex in the online sphere (e.g., how does one determine what information is in the general interest in a digital environment?). However, several EU Member States (e.g., Germany, Italy, France) have recently revised their frameworks to ensure that prominence obligations are imposed on platforms. Though it may be early days to assess the adequacy of each national initiative to promote pluralism, prominence regulation may be one way forward if we agree that transparency and empowering the user are simply not enough to prevent platforms from engaging in practices that are harmful to media plurality.

Conclusions 

Protecting and promoting media plurality in online markets is far from an easy task. To determine what news users (should) consume, complex methodological questions arise (e.g., how can we define “trusted news”? How can we calculate shares of consumption?). Ofcom is certainly asking (and attempting to answer) these difficult questions through empirical work. Other regulators should follow suit. 

What is clear is that the measures examined in Ofcom’s document should go hand in hand with those that will be established under the DMU regime, which will seek to address, inter alia, the imbalances in bargaining power between platforms and news providers and level the playing field in digital advertising markets. 

Some additional thoughts on the General Court’s judgment in Google Android

This is the second post on the General Court’s judgment in Google Android (T-604/18) delivered earlier this year (which Google has in the meantime appealed to the Court of Justice). As in the first post (available here), I would like to discuss some issues that caught my attention while reading the judgment, rather than summarize each and every aspect of the ruling. In the first post I focused on the MADA pre-installation conditions, which the Commission had analysed as a form of tying that had the effect of strengthening Google’s dominant position in general search. In this post I would like to focus on the part of the judgment dealing with the so-called Anti-Fragmentation Agreements entered into between Google and OEMs. I will then lay down some brief remarks on how the Court approached the issue of market definition with respect to mobile ecosystems.

The anti-fragmentation obligations

By way of background, Google required OEMs wishing to pre-install on their devices the GMS Suite (which included Google apps like Search and Chrome) to first enter into an Anti-Fragmentation Agreement (“AFA”). Among others, the AFAs required OEMs to observe a minimum compatibility standard, as defined by Google. In practice, this meant that AFA signatories were prohibited from commercializing devices running on Android versions (“forks”) not approved by Google (“non-compatible” forks). Importantly, this obligation extended to all devices marketed by the OEM, including devices on which no GMS app was pre-installed.

In the contested decision the Commission did not consider the AFAs abusive as such (as e.g., exclusive dealing arrangements); instead, the Commission took issue with the fact that Google made the grant of licenses for the Play Store and Google Search conditional on OEMs accepting the anti-fragmentation obligations included in the AFAs. In other words, the Commission framed this as a tie-like leveraging abuse, whereby Google was using its market power in Android app stores / general search to impose additional obligations (=the anti-fragmentation obligations) that were capable of restricting competition. It is in this last part of the reasoning that the Commission examined the effect of the anti-fragmentation obligations on competition.

As observed by the Court, the Commission did not dispute Google’s right to impose compatibility requirements in respect of devices on which its apps were installed. Instead, the Commission took issue with the anti-fragmentation obligations only insofar as these prohibited OEMs from marketing devices with non-compatible Android forks on which no Google apps were pre-installed. In other words, the Commission considered that, while it was legitimate for Google to impose compatibility requirements for devices featuring its apps, Google crossed the line by requiring OEMs to observe such requirements for all their devices.

The Court summarized the Commission’s theory of harm as follows:

Non-compatible Android forks constituted a competitive threat to Google, and in fact a greater competitive threat compared to compatible Android forks.

The anti-fragmentation obligations hindered the development of non-compatible Android forks.

The capability of Google’s conduct to restrict competition was reinforced by the unavailability of Google’s proprietary APIs to fork developers.

Google’s conduct maintained and strengthened its dominant position in the national markets for general search services, deterred innovation and tended to harm, directly or indirectly, consumers.

The Court essentially upheld the Commission’s analysis. The following points are of interest:

The anticompetitive nature of the objective pursued: Interestingly, when examining the anticompetitive nature of the practice in question, the Court started its analysis by discussing whether Google pursued an anticompetitive objective (paras. 837-841), which it distinguished from the issue of whether Google’s conduct actually restricted competition – despite there being no equivalent section in the Commission’s decision.

Relying on internal documents cited in the decision and Google’s own statements, the Court concluded that Google pursued an anticompetitive objective, in that its conduct was “knowingly implemented with the aim of limiting market access of non-compatible Android forks” (para. 841).

The potential threat from non-compatible forks: According to the Commission, non-compatible Android forks constituted a “credible” competitive threat to Google (and in fact a greater threat compared to compatible Android forks). This was a pre-requisite to the Commission’s analysis, since, if non-compatible Android forks posed no threat, then their exclusion could not amount to a restriction of competition.

The Court endorsed the Commission’s analysis, with one minor exception: it held that it was irrelevant whether non-compatible Android forks pose a greater competitive threat to Google compared to compatible Android forks; it sufficed to show that “the non-compatible Android forks would have been competitors of Android on the market for licensable OSs, which Google does not dispute” (para. 844). In that regard, the Court noted that non-compatible Android forks are licensable OSs, hence they are likely to compete with Android in the market for licensable OSs (para. 844).

Importantly, the Court set a rather low threshold for the Commission, as it held that Google did not establish that non-compatible Android forks could not in any event have constituted a competitive threat to it (para. 847).

Actual exclusion of non-compatible forks and causal link: The Court noted it was common ground that, during the infringement period, no non-compatible Android fork was able to exist on a lasting basis on the market. The Commission attributed this to Google’s conduct, while Google argued that the commercial failure of non-compatible forks was because of their inherent weakness and lack of commercial interest.

In that regard, the Court noted that Google did not dispute the evidence in the contested decision concerning the coverage of the AFAs; as such, the Court considered it was established that, during the infringement period, “the largest economic operators capable of offering a commercial market to developers of non-compatible Android forks were prevented from doing so by the AFAs.”

As regards the commercial failure of non-compatible forks (and in particular FireOS and AliyunOS), the Court was satisfied that the anti-fragmentation obligations played a role in their failure (paras. 850-851). In doing so, the Court implicitly rejected the need to distinguish the effects of Google’s conduct from those of other factors (e.g., the quality of the non-compatible Android fork). It sufficed that the AFAs were one of the reasons for the commercial failure of non-compatible Android forks. Put another way, Google had not managed to show that the commercial failure of such forks was exclusively the result of other factors.

The relevance of Google’s proprietary APIs: The Commission had argued that the effects of the AFAs were reinforced by the unavailability of Google’s proprietary APIs for non-compatible Android forks. The Commission did not dispute, as such, Google’s proprietary rights to the APIs it has developed (para. 853). The Court agreed with the Commission. On the one hand, the Court recited well-rehearsed case-law, whereby the exercise of an exclusive right linked to an IPR cannot in itself constitute an abuse of a dominant position (para. 854). Even so, the Court held that the commercial policy of Google as regards the availability of its APIs must be taken into consideration as a contextual element in assessing the effect of the anti-fragmentation obligations (para. 855). Indeed, Google’s commercial policy in relation to its APIs constituted an incentive to enter into an AFA (para. 856).

Objective justification

Perhaps the most interesting part of the judgment is the section discussing Google’s proffered objective justifications. At a high-level, and at the risk of some simplification, the Court was willing to consider that Google’s conduct could be justified to the extent it was limited within the “Android ecosystem”, understood as the version of Android ‘controlled’ by Google. Even so, the Court did not accept the same to the extent Google’s conduct produced effects beyond the “Android ecosystem” and prevented the emergence of rival ecosystems. While Google argued that the latter (preventing the emergence of rival ecosystems) was necessary to achieve the survival of Android as a whole, the Court disagreed.

The need to protect compatibility within the Android ecosystem: Google argued that the conduct at issue was necessary to ensure compatibility within the Android ecosystem. The Court rejected this, recalling that the Commission did not take issue with Google’s measures to ensure compatibility of Android forks where the GMS suite was installed; it only challenged the AFAs insofar as they prevented OEMs from offering non-compatible Android forks; as such, Google’s justification was considered unrelated to the abuse and thus irrelevant (para. 878).

The need to prevent fragmentation to ensure the survival of Android: Google argued that the conduct at issue was necessary to prevent fragmentation, which would threaten the very survival of Android (para. 879). The Court rejected this argument based on the superior market power of the ‘Android ecosystem’ (para. 880). The Court admitted that, at the time of its launch, Android’s situation could have been likened to that of pre-existing open-source OSs which suffered from fragmentation (e.g., Symbian); however, “the extremely rapid growth of the ‘Android ecosystem’ from the early 2010s onwards makes Google’s claims regarding the hypothetical risk that the threat it describes to the very survival of that ‘ecosystem’ could have continued throughout the infringement period implausible” (Id).

In other words, the Court seems to be in principle open to the argument that the anti-fragmentation obligations may have been necessary to ensure the survival of Android in its early days, when it was first launched and its success was far from certain; however, once Android grew rapidly and attained a dominant position, Google could not credibly argue that fragmentation continued to pose an existential threat, and could thus not justify the anti-fragmentation obligations on this ground.

Freeriding: Google argued that the anti-fragmentation obligations are necessary to limit the “windfall effects” of its technology being made available to third parties. In essence, Google argued that developers of non-compatible Android forks would free ride on Google’s investments on Android.

The Court dismissed Google’s argument, holding that the right of an undertaking to reap the economic benefits linked to the services it develops should not be interpreted as a right to prevent any competitors from existing on the market (para. 868).

Moreover, the Court agreed with the Commission that it is inherent to open-source software that information related to it can be used to develop forked versions thereof (Id). In essence, the argument is that, insofar as Google itself chose an open-source model for Android – from which it profited – it could not rely on the risk of third parties benefiting from the disclosure of its technology.

Consideration of pro- and anticompetitive effects of the anti-fragmentation obligations: Google argued that the Commission had failed to weigh the pro- and anticompetitive effects of the anti-fragmentation obligations.

The Court recalled that the Commission did not dispute Google’s right to ensure compatibility within the Android ecosystem, and the pro-competitive effects of such compatibility, such as increased competition within the ecosystem (para. 889). Instead, the Commission took issue with the AFAs only insofar as they erected barriers to the development of non-compatible Android forks, which lay outside the Android ecosystem; as the Court put it, “the obstacle in question [posed by the AFAs] produces its effects outwith the ‘Android ecosystem’” (para. 890).

To borrow some analogies from vertical restraints, one could say that Google was trying to justify a restriction (in fact, an elimination) of inter-ecosystem competition relying on benefits to intra-ecosystem competition.

The Court did not rule out the possibility that such an argument could be raised as a matter of law; however, it held that Google had not established that restricting inter-ecosystem competition was necessary for Google to ensure compatibility within the Android ecosystem; as such, there was no need to engage in a balancing exercise, as suggested by Google (para. 891).

Market definition and dominance in mobile ecosystems

The part of the judgment dealing with market definition and dominance is not particularly groundbreaking, hence I will not dwell much on it. However, it does raise a couple of interesting issues.

By way of background, in its decision the Commission considered that Google held a dominant position in the markets for, among others, (a) licensable smart mobile OSs (with Android) and (b) Android app stores (with the Play Store). In both cases, Google faulted the Commission for failing to take proper account of the competitive constraint exerted by Apple and its ecosystem.

Dominance in the market for licensable OSs: The Court largely sided with the Commission, holding that the latter was entitled to take the view that, while Apple did pose an “indirect” constraint on Google (indirect in the sense that it was exercised at the level of app users and developers, not OEMs), such constraint was not sufficient to call into question Google’s dominance in the market for licensable OSs. In this context, the Commission was right to rely on a number of factors, including (i) the high user loyalty to OS; (ii) the relatively low user sensitivity to OS quality; (iii) the switching costs dissuading users from switching OS; (iv) Apple’s pricing policy; and (v) the behaviour of app developers.

There is nothing particularly surprising in this part of the judgment (which is in line with the recent findings of the CMA on the limited substitutability between Android and iOS), save perhaps for the fact that the Court endorsed the Commission’s application of a so-called SSNDQ (Small but Significant Non-Transitory Decrease in Quality) test to examine the reaction of users and app developers to a hypothetical deterioration in the quality of Android. The Court held that, in the case of a product which is very unlikely to lend itself to the classic hypothetical monopolist test (e.g., because competition on the market takes place on the basis of quality rather than price), the SSNDQ test constitutes relevant evidence for the purpose of market definition and assessing dominance, there being no need to define a precise quantitative standard of quality degradation. According to the Court, “[a]ll that matters is that the quality degradation remains small, albeit significant and non-transitory” (para. 180).
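
For readers less familiar with the hypothetical monopolist framework, the contrast between the two tests can be sketched as follows. This is a textbook-style simplification of my own, drawing on standard critical loss analysis, not a formula taken from the judgment or the decision. Under the classic SSNIP test, a hypothetical monopolist of the candidate market raises price by a small but significant amount, and the question is whether enough customers would switch away to make the increase unprofitable:

$$
L_{\text{actual}} < L_{\text{critical}} = \frac{x}{x + m}, \qquad x = \frac{\Delta p}{p}, \quad m = \frac{p - c}{p}
$$

where L is the fraction of sales lost, x the percentage price rise and m the monopolist’s percentage margin; the price increase is profitable only if the actual loss stays below the critical loss. The SSNDQ variant replaces the price rise with a small but significant, non-transitory degradation of quality. Because quality has no natural unit, there is no equivalent closed-form threshold, which is why the Court could treat the test as relevant evidence while dispensing with any precise quantitative standard of degradation; the evidentiary work is instead done by factors such as user loyalty, switching costs and the behaviour of app developers.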

Dominance in Android app stores: Of interest, the Court did not engage in a separate examination of Google’s arguments concerning the relationship between the Play Store and the App Store. Instead, the Court held that the relationship between the Play Store and the App Store could not be disentangled from the relationship between Android and iOS: “to assess the competitive constraint exerted by the App Store on the Play Store is effectively to consider the competitive constraint exerted by iOS on Android” (para. 248). This leads to an assessment of competition between systems (para. 250). The merits of Google’s arguments concerning the App Store thus depended on the merits of Google’s arguments concerning iOS; since the Court rejected the latter as unfounded, it also rejected the former (para. 253).

The CMA’s investigation of competition restrictions regarding browsers

On 22 November 2022, the CMA launched a market investigation into cloud gaming and mobile browsers. In this post, I focus on this investigation as it relates to mobile browsers. This blog has already discussed cloud gaming in an earlier post, and we will return to the topic in the future.

CMA market investigations must typically be concluded within 18 months from the date that the reference is made. They consider whether there are features of a market that have an adverse effect on competition (“AEC”), in which case the CMA has the power to impose its own remedies on businesses and it can also make recommendations to other bodies such as sectoral regulators or the government. A market investigation is thus a powerful instrument.

In the Final Report of its Market Study on Mobile Ecosystems (the “Final Report”), the CMA devoted a full chapter to “mobile browser and mobile browser engine competition” (Chapter 5), in which it identified a variety of competition issues that should be addressed. The findings of the CMA were widely supported by the submissions made by third parties. Further analysis is provided in Appendix F of the same report.

Browsers are one of the most important and widely used apps on mobile devices and they represent a critical gateway to access the web on such devices. Browsers comprise two elements: (i) a browser engine, which transforms web source code into web pages, and (ii) a user interface, which is responsible for user-facing functionality. Web content can also be accessed through the in-app browsers of native apps, which can for instance be found in chat apps or social networks.

Mobile devices typically have either Chrome or Safari pre-installed and set as default at purchase, which gives Apple and Google an important advantage over other vendors. Browsers also generally come with a default search engine, which is the main source of monetization of browsers either because the browser vendor owns a search engine (e.g., Google Chrome) or sells the search default (e.g., Apple or Mozilla).

In its Final Report, the CMA made the following important findings:

First, the browser market is an effective duopoly. The combined share of supply for Apple’s and Google’s browsers on mobile devices in the UK is around 90%. Moreover, in 2021, 97% of all mobile web browsing in the UK was performed on top of the browser engine of Apple (WebKit) or Google (Blink). As a result, Apple and Google enjoy substantial market power in mobile browsers and browser engines. That is a source of concern.

Second, Apple’s requirement that all browsers on iOS use its WebKit browser engine is problematic:

Apple effectively dictates the features that browsers on iOS can offer. This impedes the ability of rival browsers to differentiate themselves from Safari on factors such as speed and functionality. As a result, Safari faces less competition.

As WebKit lags behind other browser engines in terms of the developer features it supports and its user-facing performance and capabilities, it limits the capability of all browsers on iOS devices, potentially depriving iOS users of useful innovations they might otherwise benefit from.

Apple inhibits the functionality of web apps (websites resembling native apps), which raises developers’ costs, deprives consumers of innovative apps and limits the competitive constraint web apps could exert on native apps (which is ironic considering that Apple and its lawyers often argue that Apple does not have market power on the market for the distribution of apps on iOS devices as the App Store is constrained by the availability of web apps).

While Apple’s WebKit restriction benefits Apple, the CMA considered that it impedes competition and innovation to the detriment of users.

Third, Apple’s justifications for the WebKit restriction, which are (as usual) pretextually based on security and privacy considerations, lack credibility. The CMA finds that, according to evidence from security experts that it consulted, Apple’s ban on the use of alternative browser engines is not necessary in order to provide secure browsing. In certain respects, the ban could also potentially prove harmful to security, in that it limits competition to improve security.

Fourth, there are features that are available to Safari which are not available to other mobile browsers on iOS devices. This once again limits the ability of other browsers to compete with Safari.

Unsurprisingly, Apple rejected the findings of the Final Report in its response to the CMA consultation on a market investigation reference (“MIR”) for mobile browsers and cloud gaming, alleging that the evidence shows that, “with respect to mobile browsers, WebKit and Safari have pioneered innovation, enhanced user choice, and prompted responses from competitors”. Apple also submitted that the findings underpinning the CMA’s proposal “are based on a partial and erroneous analysis of the evidence submitted to the CMA during the market study.”

This submission apparently failed to impress the CMA and was directly at odds with the submissions of multiple market actors. In particular, the Open Web Advocacy’s extensive submission – which is truly worth reading – observed that “For the past decade, severe underfunding of Apple’s browser Safari combined with a ban of competitive browsers on iOS has removed competitive pressure and resulted in an unstable platform missing critical functionality ensuring that only vendor-specific native apps are competitive. Intervention is essential not only to the future of competition between Browsers but also to deliver a free and open, universal, interoperable application development and distribution platform.”

In its reference decision to make a MIR, the CMA observes that the examination of the four criteria set in its guidance on making MIRs – the scale of the suspected problem, the availability of appropriate remedies, the availability of undertakings in lieu of a market reference, and the presence of alternative powers available to the CMA or to sectoral regulators – justified making a MIR.

We note that a MIR will allow the CMA to deliver faster results than through the DMU regime, considering the CMA is still waiting for the powers that will allow it to adopt codes of conduct and pro-competition interventions, although we recently heard some positive news. In any event, the CMA will likely use the MIR to lay the groundwork for the upcoming DMU regime with respect to mobile browsers.

As far as remedies are concerned, the reference decision suggests that potential measures could include: removing Apple’s restrictions on competing browser engines on iOS devices; mandating access to certain functionality for browsers (including supporting web apps); requiring Apple and Google to provide equal access to functionality through APIs for rival browsers; requirements that make it more straightforward for users to change the default browser within their device settings; and choice screens to overcome the distortive effects of pre-installation.

The CMA is not wasting any time: almost immediately after launching its MIR, it started sending requests for information to what appears to be a large number of companies regarding their reliance on browsers.

Of course, the CMA’s MIR is not the only worry for Apple and Google as the Digital Markets Act also contains a series of (browser-related) obligations, and there is no question that they will be designated as gatekeepers with respect to their web browser services.

As noted above, browsers are critical gateways allowing users to access the web. More competition and innovation in this sector will bring significant benefits to all of us.

One final word. While I am satisfied that the MIR covers browsers and cloud gaming, I am disappointed that it does not cover Apple’s App Tracking Transparency (“ATT”) framework, which is nothing but a cynical way for Apple to destroy under the guise of privacy the ability of app developers to monetize their content through targeted advertising at the very time Apple is developing its own mobile advertising business. To the extent Apple is not applying to itself the same restrictions it applies to others, it is currently being investigated by the French, German and Polish competition authorities. Considering the findings the CMA made in its Final Report, which are critical of Apple (see in particular Appendix J), it is disappointing that the CMA is not further investigating the matter (at least for now).

Photo by Dan Nelson on Unsplash

Full steam ahead for the UK Digital Markets Unit

UK politics has been in a parlous state. Today the UK’s Chancellor of the Exchequer attempted to stabilize the country’s finances by announcing some eye-watering tax rises and spending cuts in the Autumn Statement. However, of more relevance to readers of this blog, he also made a significant announcement about the legislation required to implement the Digital Markets Unit (“DMU”) regime in the UK.

Digital Markets Unit

In May 2022, the Government’s position was that it would legislate “as soon as parliamentary time allows” and it would publish a draft Bill during the current session of Parliament, i.e. before May 2023, with the implication that it might be formally debated in Parliament in the following session of Parliament. It was a vague promise and, in any case, that was two prime ministers ago. We did not know the current administration’s stance until today.

Today’s Autumn Statement has revealed that the full Bill will be brought before Parliament during this current session, thereby skipping the “pre-legislative scrutiny” stage altogether.

The Chancellor said:

“Competition is fundamental for growth and productivity. The government will bring forward the Digital Markets, Competition and Consumer Bill in the third Parliamentary session to provide the Competition and Markets Authority with new powers to promote and tackle anti-competitive practice in digital markets. Opening these markets to greater competition will encourage new challenger firms, spur innovation, and provide consumers with higher quality products and greater choice.”

In more good news for the Competition and Markets Authority (“CMA”), he also confirmed that the legislation will include the additional competition law and consumer law enforcement powers that the CMA has been asking for:

Bringing forward the Digital Markets, Competition and Consumer Bill – The government is bringing forward the Digital Markets, Competition and Consumer Bill to provide new powers to the ‘Digital Markets Unit’ (DMU) in the CMA to foster more competitive digital markets; make changes to the competition framework that will include streamlined decision making and updating merger and fine thresholds; and protect consumers in fast-moving markets by tackling ‘subscription traps’ and fake reviews online.

This announcement probably brings forward the implementation of the new regime by at least a year, even on an optimistic interpretation of the previous position. More crucially, it shows that the Government is throwing its weight behind the Bill as a key pillar of its growth strategy.

Online safety

The DMU bill is not the only major piece of proposed tech regulation in the UK. Separately, the Online Safety Bill will impose an extensive system of content regulation to be overseen by the telecoms regulator, Ofcom.

This bill is ahead of the DMU bill, as it has already received some detailed parliamentary debate. However, the new administration under Prime Minister Rishi Sunak has paused its progress while it considers how to deal with “legal but harmful” content without undermining free speech. It seems likely to continue its progress through Parliament, perhaps with some revisions.

Innovation and growth

After a decade of poor economic performance, the issue of innovation and growth is now central to British politics. I believe that the proposed rules to be overseen by the DMU will further that aim. This is a regime that proponents of free markets and a small state should welcome with open arms. Indeed, as even Hayek said in his famous Road to Serfdom, first published in 1944:

to create conditions in which competition will be as effective as possible, to prevent fraud and deception, to break up monopolies – these tasks provide a wide and unquestioned field for state activity.

All advanced economies are grappling with the issues raised by the tech giants. There are various legislative proposals currently being debated in the US, and the EU has already passed its Digital Markets Act. If the UK acts in line with the timings announced today, it now has the opportunity to resume its place in helping to lead the debate internationally.

To keep to these timings, the lawyers at the Department for Digital, Culture, Media and Sport will be working some long hours over the next couple of months. The CMA will also need to ramp up its preparations, for example by launching its market study into e-commerce and consulting on draft codes of conduct for the gatekeeper firms.

Why it is now clear that the Australian ex ante regime will be much closer to the proposed UK regime than the EU Digital Markets Act

On 11 November 2022, the Australian Competition and Consumer Commission (“ACCC”) released the fifth interim report for the Digital Platform Services inquiry (the “Interim Report”). This report recommends a range of new measures to address harms from digital platforms to Australian consumers, small businesses and competition.

The ACCC’s diagnosis is no different from what can already be found in similar reports produced in Europe and the United States. The ACCC notes that its analysis had “identified significant consumer and competition harms across a range of digital platform services” and that the “conduct causing these harms is widespread, entrenched and systemic.”

The ACCC also found that existing laws, and in particular competition law, had “proven insufficient in Australia and overseas to address such conduct quickly or effectively, further increasing the risk and magnitude of harm.” Hence, it “recommends new and strengthened laws to better protect Australian consumers and small businesses, who are increasingly reliant on digital platforms, and new measures to promote competition in the supply of digital platform services.”

The most interesting question is, of course, what these new and strengthened laws will eventually look like. On 28 February 2022, the ACCC published a discussion paper outlining various options that could be taken to address the problems identified above and in reading this new report I was particularly interested in finding which of these options the ACCC recommends.

So far, we have observed three different approaches to ex ante regulation:

The EU Digital Markets Act (“DMA”), which seeks to promote contestability and fairness in digital markets. With that aim in mind, the DMA applies a set of obligations to designated gatekeepers (with designation being based primarily on quantitative criteria), independently of their business model.

The UK ex ante regime, laid out in the UK Government’s consultation “A new pro-competition regime for digital markets”, suggests that Parliament should adopt legislation setting high-level “objectives”, with the detailed “principles” to be written by the Digital Markets Unit (“DMU”) embedded in the CMA. The regime would apply to firms with Strategic Market Status (“SMS”), with each SMS firm getting its own principles and guidance encapsulated in a Code of Conduct.

The US approach is more fragmented as it revolves around several individual bills, such as the American Innovation and Choice Online Act (“AICO”) and the Open App Markets Act (“OAMA”), whose legislative fate is unclear.

Of these approaches, I have always felt that the UK model was the most promising one because it was more flexible and could account for differences in business models. Unfortunately, progress has been impeded by the fact that the Government has not yet tabled the enabling legislation to Parliament.

The ACCC’s fifth report makes it clear that the proposed regime would be conceptually close to the proposed UK regime (with some differences, however). Its main components would be as follows:

The first component would be the passage of primary legislation with three main elements: (i) a power given to the relevant regulator to make service-specific mandatory codes of conduct for Designated Digital Platforms; (ii) broad principles to guide the scope of these codes, and (iii) a power for a decision maker (either the relevant regulator or a government minister) to designate digital platform firms in respect of the provision of particular services, alongside clear criteria for making this designation decision.

Once empowered to do so by the new legislation, the relevant regulator would initiate the code development process for one or more codes. Each code would set out detailed obligations within the scope of the principles in the primary legislation. These obligations would be specific to, and tailored to, the type of digital platform service the code applies to.

The designation decision “would likely be made in parallel to, or after, a relevant code of conduct has been developed. However, the designation of a digital platform firm would not by itself apply any new obligations to that platform until or unless a relevant code has taken effect.”

Thus, as in the case of the proposed UK regime, the regulatory obligations applying to designated firms would be included in codes of conduct, which would implement in greater detail the broad principles comprised in legislation. The difference with the UK regime would be that these codes would not concern a given company, but a given type of digital platform service. For instance, the Interim Report evokes a “code of conduct for search services”, a “code of conduct for ad tech services”, etc. These codes of conduct would then apply to the designated platforms for such services.

The Interim Report discusses a series of targeted obligations that could be included in the codes of conduct to address the following sources of harm:

anti-competitive self-preferencing;

anti-competitive tying;

exclusive pre-installation and default agreements that hinder competition;

impediments to consumer switching;

impediments to interoperability;

data-related barriers to entry and expansion, where privacy impacts can be managed;

a lack of transparency;

unfair dealings with business users; and

exclusivity and price parity clauses in contracts with business users.

The types of interventions that the Interim Report suggests are very much in line with the obligations contained in the DMA, which the Interim Report cites as a possible source of inspiration.

One of the oddities of the Australian approach is the incredibly long timeline for the regulation of digital platforms, with the Digital Platform Services Inquiry running from 2020 to 2025. After issuing six reports during that period, the ACCC will issue its final report on 31 March 2025. If we consider that it will take some additional time to adopt legislation and to design codes of conduct, the first obligations binding digital platforms will only come in 2027-28 under an optimistic scenario. This is peculiar considering that the ACCC was one of the first authorities to recognize that some of the conduct of these platforms can create significant harm and that urgent intervention is needed.

Photo by Johnny Bhalla on Unsplash

The Commission’s proposals for AI Liability Rules

As part of its ongoing work to regulate the digital economy, the European Commission (“the Commission”) recently put forward the proposal for a Revised Product Liability Directive (“RPLD”) and the proposal for the Artificial Intelligence (“AI”) Liability Directive. Both proposals are relevant for the regulation of AI systems in Europe, each approaching this issue from a different perspective.

Although these initiatives are presented as part of the same legislative package in that they seek to address damages caused by AI systems, the two proposals are different. The RPLD proposal seeks to modernize an existing strict liability system (also called “no-fault” liability because it does not require proof of intention or negligence). The AI Liability Directive is a brand-new initiative addressing fault-based liability issues (where intention or negligence must be proven). Since they present different approaches to liability, there is no overlap between the claims which can be brought under them.  This blog post will consider each proposal separately, discussing their key provisions and how they interact with one another and the AI Act (discussed in an earlier blog post).

The Revised Product Liability Directive Proposal (“RPLD Proposal”)

As the title suggests, this proposal seeks to modernize an existing legislative instrument, the Product Liability Directive (“PLD”), adopted in 1985. The purpose of this tool was to harmonize liability rules for defective products across Member States to avoid any potential obstacles to free movement of goods as well as any distortions of competition. It establishes a strict liability regime to the effect that injured persons are only required to prove that they suffered damage and that this damage was caused by a defective product (Article 4, PLD). The PLD revolves around the main elements of a claim, namely the existence of a product; the damage suffered by the injured party; the defect of the product; and the causal link between the defective product and the damage suffered. The RPLD Proposal is significant because it aims to expand the meaning of each of these key concepts in order to bring them in line with the challenges and realities of the digital age. These concepts are discussed below.

Software to fall under the definition of “product”

Starting from the concept of “product”, the RPLD Proposal lays down that software and digital manufacturing files fall under the definition of a product (Article 4(1)). Recital (3) provides further details on what a “product” will be under the new liability regime, making it clear that AI systems will also be included. According to the same Recital, this will encourage the roll-out and uptake of AI, while ensuring that claimants can enjoy the same level of protection irrespective of the technology involved.

Leaving AI aside, the fact that “software” in general is included in the definition of “product” significantly expands the scope of the RPLD, ensuring it will also cover software included in products (such as the software used in cars) and stand-alone software such as mobile apps. Consequently, the Revised Product Liability Directive is not only a key instrument for the regulation of AI but is also expected to play an important role in the regulation of the digital age more generally.

Loss or corruption of data as a type of damage

Another important change suggested by the RPLD Proposal is the expansion of the meaning of “damage”. Under the current system, damage is limited to death, personal injuries and damage to property which amounts to more than 500 euros (Article 9 PLD). According to the RPLD Proposal, the concept of damage should be extended to cover not only “medically recognized harm to psychological health” but also material loss resulting from the loss or corruption of data that is not used exclusively for professional purposes (Articles 6(a) and 6(c)). The 500 EUR threshold would also be removed. The RPLD Proposal provides a concrete example of a situation where this type of damage would be relevant, that is, the deletion of data saved on a hard drive (RPLD Recital, (16)). Finally, the wording of this provision, which only excludes data used exclusively for professional purposes, suggests a broad scope of application.

Systems that learn after development

Other than showing that they suffered the relevant type of damage covered by the PLD, a claimant must also prove that the product in question was defective. Article 6 of the RPLD Proposal stipulates (as does the PLD) that a product will be considered defective when it does not “provide the safety which the public at large is entitled to expect”, taking all circumstances into account. The RPLD Proposal expands the list of circumstances that should be considered, including the ability of a product to learn after deployment (Article 6(1)(c)), and the “moment in time when the product left the control of the manufacturer”, that is, after the product is placed on the market (Article 6(1)(e)). The proposal explains that these provisions have been specifically included to address the ability of algorithms to learn after being deployed (RPLD Recital, (23)).

Defectiveness, causality and rebuttable presumptions

Article 9 addresses the burden of proof, setting out several rebuttable presumptions concerning both defectiveness and causality. First, Article 9(2) lists three instances where the defectiveness of the product will be presumed: (a) failure of the defendant to disclose evidence required under Article 8(1); (b) failure to comply with mandatory safety requirements laid down in Union or national law; and (c) the presence of an obvious malfunction. Subsequently, Article 9(3) creates a presumption of causality where it has been established that the product is defective and the damage caused is of a kind typically consistent with the defect in question.

The second presumption of defectiveness discussed above (failure to comply with mandatory safety requirements laid down in Union or national law) deserves particular attention because it is linked to the AI Act. Concretely, the AI Act lists several requirements that high-risk AI systems must meet. According to Article 9 of the RPLD Proposal, where a claimant can show that a high-risk AI system did not comply with the mandatory requirements listed under the AI Act, they will benefit from a rebuttable presumption that the said system was defective. While this link strengthens the claimant’s position, it may also raise implementation challenges, given the difficulties claimants may face in proving non-compliance with the AI Act.

Furthermore, Article 9(4) includes another rebuttable presumption meant to address the instances where a claimant faces “excessive difficulties, due to technical or scientific complexity” in proving defectiveness of a product, causality, or both. The same article gives national courts the power to decide when this presumption is to be applied, but stipulates that, to benefit from it, a claimant will still have to prove that (a) the product contributed to the damage and (b) the product was defective or that its defectiveness is a likely cause of the damage. As Recital (34) recognizes, this presumption is particularly relevant in the case of AI systems. But, given the requirements it sets for a successful claim, its applicability could remain limited in practice.

The AI Liability Directive Proposal

Main provisions

The purpose of the AI Liability Directive proposal is to simplify the legal process for victims when it comes to proving that someone’s fault led to damage. To that end, it introduces a right of access to evidence and a rebuttable presumption of causality.

Before discussing these novelties, it is worth making a few remarks on the scope of, and the definitions laid down in, the AI Liability Directive Proposal (Articles 1 and 2 respectively). According to Article 1, the Directive would only apply to claims brought under national fault-based liability regimes. Article 2, which defines the term “claimant”, lays down that the term includes not only the person who was directly injured but also a person acting on behalf of one or more injured persons. This is significant because it explicitly recognizes the ability to bring collective actions for harms caused by AI systems, in accordance with Union or national law.

Moving on to the substantive provisions, Article 3 deals with the disclosure of evidence and a rebuttable presumption of non-compliance (not to be confused with the rebuttable presumption of causality, covered in the subsequent article). Put simply, this provision introduces a qualified right of access to relevant evidence before trial. The right is qualified because the potential claimant must still prove the plausibility of their claim and show they have undertaken “all proportionate attempts at gathering the relevant evidence from the defendant” (Article 3(2)). The presumption of non-compliance applies where a potential defendant fails to comply with a disclosure order made by a national court (Article 3(5)). In other words, where no access to evidence is granted, it will be presumed that there is a breach of a relevant duty. This is significant because proving non-compliance is likely to be one of the biggest challenges that potential claimants will face.

The second main feature of the AI Liability Directive is the introduction of a rebuttable presumption of causality in the case of fault (Article 4). Where this presumption applies, the claimant will no longer need to prove that the output produced by an AI system (or its failure to produce an output) which led to damage was caused by the fault of the defendant.

However, for this presumption of causality to apply, three conditions must be met: (1) the defendant failed to comply with a duty of care, either as proved by the claimant or as presumed by the court under Article 3(5) discussed above; (2) this failure influenced the output produced by the AI system (or the failure to produce an output); and (3) there is a causal link between the output (or lack thereof) and the damage. In cases where claims are brought in relation to high-risk AI systems, the first condition will only be met where the claimant can show failure to comply with the specific mandatory requirements for high-risk AI systems laid down in the AI Act (Articles 4(2) and 4(3)).

Article 4 then limits the scope of application of this presumption by establishing that it will not apply to non-high-risk AI systems (save where the national court considers it “excessively difficult” for the claimant to prove a causal link) (Article 4(5)), or where the defendant uses an AI system for personal purposes (unless the defendant materially interfered with the conditions of operation of the system) (Article 4(6)). Finally, the presumption will also not apply to high-risk AI systems where the defendant can show that sufficient evidence and expertise is “reasonably accessible” for the claimant to prove the causal link (Article 4(4)). It remains to be seen how these exceptions will apply in practice. Concepts such as “reasonably accessible evidence” or “excessively difficult to prove” can be interpreted broadly, and significant differences may arise between the interpretations of domestic courts.

Interaction with the AI Act and the RPLD

In terms of how the AI Liability Directive proposal will interact with the AI Act, the Explanatory Memorandum of the proposal sets out that safety (pursued by the AI Act) and liability (addressed by the AI Liability Directive) are two sides of the same coin. The proposal explains that the AI Act intends to reduce the risks posed to safety and fundamental rights through ex ante regulation, establishing several requirements that high-risk AI systems must meet before being placed on the market. However, it also recognizes that the AI Act does not provide any compensation for injured persons who suffered damage caused by an AI system. This is where the AI Liability Directive comes into play, offering remedies for the inevitable cases where AI systems will cause harm.

The Explanatory Memorandum also sets out the relationship between the AI Liability Directive and the Revised Product Liability Directive. It explains that the AI Liability Directive covers national claims mainly based on fault, with a view to compensating “any type of damage and any type of victim”. This is important because it suggests that the AI Liability Directive could be used in cases of algorithmic discrimination. Indeed, the press release from the Commission makes this point directly, arguing that this instrument will make it easier to obtain compensation if someone has been “discriminated in a recruitment process involving AI technology”. Given the popularity of AI-based solutions not only in recruitment but also in assessments determining access to benefits or loans, this is a significant development.

Looking ahead

The proposed changes are welcome efforts to regulate AI because they would empower individuals to bring claims where they have suffered damage. While these two instruments will make the process of bringing a claim against AI systems easier, injured parties will still have to overcome significant challenges (e.g., they will need to demonstrate non-compliance with the AI Act to benefit from the presumption of causality in Article 4 of the AI Liability Directive). Combined with the relatively broad exceptions that apply, bringing a successful claim before a court may still prove difficult.

The interplay between the AI Act and the two proposals discussed in this blog post is another important aspect to consider. For example, systems deemed to be high-risk will not only receive different treatment under the AI Act, but also under the AI Liability Directive and the RPLD.

This blog post was authored together with Ms. Konstantina Bania.

Photo by Christian Lue on Unsplash

Big Tech’s financial services activities – and their forthcoming regulatory attention

Apple, Google and Amazon (and to a lesser extent, Meta) have been tentatively expanding into the financial services sector for years now. It has been very interesting to watch their strategies and it seems things are hotting up.

Damien and I wrote an article on this topic, which was published in the European Competition Journal last year.

The FCA’s role in digital markets

The UK’s financial regulator, the Financial Conduct Authority (“FCA”), has announced a programme of work on the issue and is inviting input from interested parties. It has published a discussion paper on the Big Tech firms’ impact on four retail sectors: payments, deposit taking, consumer credit and insurance.

There are various directions in which the FCA’s work could go. For example, it could feed into the rules under the forthcoming Digital Markets Unit (“DMU”) regime, it could lead to regulatory action or rule-making by the FCA itself, or it could refer the issue to the Competition and Markets Authority (“CMA”) for a full market investigation at the end of which the CMA would use its order-making powers.

It is notable that some of the big issues that have paved the way for the DMA and DMU regimes are very relevant to financial services – for example, Apple’s denial of third-party access to its near-field communications chip, and Apple and Google’s restrictions on app developers’ ability to use competing payments providers. Yet, the FCA has not been at the forefront in the area until now. It did not even join the Digital Regulation Co-operation Forum when it was first set up.

The FCA has a large competition team and it has the same full legal powers to enforce the competition rules as the CMA. It also has its own wide-ranging regulatory powers. Perhaps now the FCA will start having more of an impact in digital markets.

Big Tech’s financial services activities

The four firms’ financial services activities are still a minor part of their businesses, and they seem to retreat as often as they progress. However, there have been some notable recent product launches from Apple and Amazon. To summarize:

Apple has recently announced a partnership with Goldman Sachs for a savings account in the US, adding to its existing US credit card and cash transfer products. It also offers its Apple Pay payments product worldwide and has announced a buy-now-pay-later product. It offers aftermarket breakdown insurance for its devices.

Amazon has announced its entry into offering price comparison services for insurance products in the UK (where such services are popular). It already offers a buy-now-pay-later product in partnership with the British bank, Barclays, as well as a credit card, certain insurance products, small business loans, and a payments product.

Google has arguably been less successful in financial services thus far. Its own price comparison service was discontinued in 2016, and it backed away from its current account product about a year ago, but it offers its payments product Google Pay worldwide and a cash transfer product in the US. It seems unlikely that this is the end of Google’s financial services ambitions.

Meta tried to build its Diem (previously known as Libra) digital currency, but that seems to have been shelved. It has plans for Meta Pay to act as a digital currency in the metaverse.

In FCA terms, all four firms have some payments permissions, and Google and Meta also have some e‑money permissions. Apple and Amazon have some consumer credit and insurance permissions.

The four firms seem unwilling to become fully regulated banks (meaning, for example, that they cannot take deposits or issue mortgages). But perhaps they have reached such a size that, if they want to continue growing, they cannot ignore a significant and potentially lucrative sector like financial services, which is in many ways adjacent to their existing business activities. They seem to be entering markets through partnerships with regulated financial services firms, although Apple’s buy-now-pay-later product is an exception because Apple says it will hold the loans on its own balance sheet.

In last year’s article, Damien and I argued that consumers can benefit from the innovations of the Big Tech firms and others without suffering the long-run effects of their further accumulation of market power. However, new rules should ensure that they do not benefit from an asymmetry of regulatory obligations whereby they are not subject to the same rules as their financial services competitors. They should not be able to leverage their market power from core activities into financial services such that their financial services competitors are hindered in reacting to their competitive threat.

The FCA’s proposed work is very much in line with our recommendations. For example, it is concerned that, “if Big Tech firms can exploit their ecosystems by attracting consumers to their financial services products, and later lock consumers in, this could be a credible way to gain market power and use it to lessen competition and harm consumers. Across all four sectors that we have studied, Big Tech firms may be able to lock consumers into their ecosystems, thus reducing competition.”

The FCA is also concerned about, “the access to, and use of, consumer data” whereby “Big Tech firms may be able to act as data providers to incumbents and fintechs, and potential entrants, in existing financial services” and also, “Big Tech firms may use financial services and other data themselves in ways which harm competition and consumers.” The FCA says it would “be concerned if data can be used exclusively by Big Tech firms, who are also able to place data access restrictions on incumbent providers or potential entrants. Big Tech firms’ access to unparalleled data, and an ability to combine data across their ecosystems provides them with a unique competitive advantage that incumbents and fintechs do not possess.”

The FCA concludes that, “[i]n the longer term, there is a risk that the competition benefits from Big Tech entry in financial services could be eroded if these firms can create and exploit entrenched market power to harm healthy competition and worsen consumer outcomes.”

Conclusion

The financial sector (including the fintech sector) is of huge importance to the UK economy, so it would not be surprising if the DMU focuses to a greater extent on financial services than the EU’s Digital Markets Act (“DMA”) does. The proposed structure of the DMU, whereby each gatekeeper firm will be subject to its own bespoke code of conduct, will be well suited to preserving a level playing field in these activities. The regime would benefit from the FCA’s insights. And these insights will be useful internationally too.

Industry players such as banks, payments firms and fintechs should make their views known to the FCA, especially if they are in the four retail sectors highlighted by the FCA: payments, deposit taking, consumer credit and insurance. If they do not, the risk is that the Big Tech firms will eventually take over the most lucrative parts of their industry and use their gatekeeper positions to disintermediate them from their customers.

The DMA has been published: Now the real challenges start

The Digital Markets Act (“DMA”) has been published today. It is a remarkable instrument in many ways. Since the publication of the Commission proposal in December 2020, it took less than 1.5 years for the Council and the Parliament to agree on the final text. The supersonic adoption of the DMA was due to several factors, including: the quality of the Commission proposal; the desire of key Member States, such as France and Germany, to regulate digital gatekeepers; and the leadership of Andreas Schwab in the Parliament. The DMA is also one of the very few instances where the text adopted by the Council and the Parliament is stricter than the text initially proposed by the Commission.

Yet, it is now that the real challenges start with the implementation and enforcement of the DMA. The DMA will only be as good as its implementation and enforcement, and it is expected that gatekeepers will fight their corner. The task of the Commission will not be easy, and it is questionable whether it will have enough resources. That is a key question considering that, unlike in the case of EU competition law, the DMA provides for centralized enforcement. Companies that rely on the core platform services (“CPS”) of a gatekeeper will get frustrated if implementation and enforcement are inadequate, and they may resort to national courts to enforce the DMA privately. The success of the DMA will also depend on the ability of the Commission to enforce it in a timely manner, while at the same time respecting due process.

The DMA will also pose significant challenges for gatekeepers. The DMA follows a relatively rigid model, which does not (sufficiently) take into account the differences between the various business models gatekeepers may pursue. Some of the DMA obligations are not always clear, and reasonable people may disagree over their interpretation. In some instances, these obligations will force gatekeepers to revisit their business model, and in-house lawyers may meet resistance from their business colleagues. Thousands of hours will have to be spent preparing compliance with the DMA.

It is against this background that my co-bloggers and I will publish a series of posts on the DMA in the coming days. The topics we will deal with include:

Designation metrics: The DMA sets quantitative thresholds, such as the number of (end and business) users of a core platform service, that if exceeded give rise to a presumption that the undertaking in question is a gatekeeper. However, calculating user numbers is easier said than done. Although the DMA includes an Annex that sets out how gatekeepers should make such calculations, the metrics identified are not always clear. For example, in the case of virtual assistants, the Annex states that gatekeepers should submit the “number of unique developers who offered at least one virtual assistant software application or a functionality to make an existing software application accessible through the virtual assistant during the year”. Does this mean that certification of a specific app by the gatekeeper platform is required for the software developer to qualify as “business user” or is that aspect not relevant? What is more, again according to the Annex, gatekeepers are responsible for identifying the metric that best reflects “engagement with the platform”. Combined with the fact that the DMA refers to several metrics that may be relevant for measuring engagement with a specific category of core platform services (e.g., a provider offering online intermediation services may rely on clicks, queries, transactions, etc.), this leaves significant room for designing different methodologies. What will be deemed acceptable and what not remains to be seen.

Rebuttal: Another question relating to designation concerns the ability of an undertaking that meets the thresholds with respect to a core platform service to rebut the presumption that it is a gatekeeper. Recital (23) limits the range and type of arguments that may be put forward when submitting a rebuttal to quantitative criteria (e.g., by how much the user thresholds are exceeded). However, Article 3(8), which enables the Commission to designate as gatekeepers undertakings that do not meet the thresholds, refers to several qualitative parameters that can be taken into account when conducting this assessment (e.g., switching costs and multi-homing, network effects). The question then arises whether these diverging approaches expose designation decisions to legal challenge.

Delineation of CPS: The delineation of the core platform service that will fall in scope is another complex matter that will determine the scope of designation. The DMA gives us a mixed message. On the one hand, according to the anti-circumvention rule, the gatekeeper should not artificially segment the core platform service. On the other hand, the Annex notes that, if services are offered in an integrated way and are of the same category (e.g., they are all online intermediation services), those services should be segmented if they are used for different purposes. As a result, how core platform services will be delineated for the purposes of designation decisions (and compliance plans) remains to be seen.

DMA Litigation: One question is whether we should expect significant litigation. The answer is yes as companies that are designated as gatekeepers will likely challenge the designation decision. In the process, they may seek to challenge the DMA as a whole or some of its provisions. This will not delay the implementation of the DMA as appeals to the General Court do not have suspensive effects, but the judgments of that Court will have significant implications. The enforcement of the DMA is also expected to trigger litigation. The case-law of the EU courts will certainly shape the boundaries of the DMA in the years to come.

Cooperation of the gatekeepers: It is hard to predict the attitude of the gatekeepers. Will they collaborate with the Commission or engage in obstruction? Probably a bit of both. Seeking to obstruct processes in a systematic manner is a dangerous strategy as the DMA is here to stay, hence this will be a long game. But at the same time, each gatekeeper will have some red lines over which they may engage in protracted fights.

Enforcement resources: The Commission is aware that the DMA is an extensive and complex piece of legislation that will require significant resources. The initial number of 80 FTEs is not sufficient, and even if this number is somewhat increased, it will remain insufficient. There will be a clear asymmetry of resources between the Commission and the gatekeepers. One additional challenge relates to the availability of technical expertise. The Commission will not be able to match the salaries of the private sector for good engineers or data scientists. On the other hand, the model of the Chief Economist Team (“CET”) has allowed the recruitment of talented economists at all levels, in that spending time with the CET enhances a CV.

Enforcement: role of the NCAs: The DMA provides for a centralized enforcement system in the hands of the Commission. The national competition authorities (“NCAs”) are allowed to have a role – for instance, they may launch an investigation into cases of possible non-compliance on their territory – but only the Commission will be able to take decisions. This raises the question of whether the NCAs will be willing to devote resources to this exercise where they have at best a secondary role. Time will tell.

Enforcement: private actions: As noted, companies that rely on gatekeepers may resort to national courts to seek injunctive relief if the Commission inadequately implements and enforces the DMA. But in addition, DMA breaches – whether confirmed by the Commission or simply alleged – may form the basis for damages actions.

Coordination with competition law: Many of the practices addressed in the DMA find their basis in competition law (self-preferencing, MFNs, fair and non-discriminatory access conditions). Although the DMA regulates gatekeepers ex ante, where a breach of the DMA is established, this may also qualify as a competition infringement. How will the Commission and the NCAs prioritize between these tools? Will the existence of ex ante regulation dampen competition enforcement (as happens in other regulated sectors) and stunt the development of the case-law on the concepts of “abuse” and “restriction of competition”?

The interplay between the DMA and other regulatory rules: The DMA will not apply in a vacuum; it will interact with existing EU and national rules that establish obligations for (gatekeeper) platforms. The DMA explicitly provides that it will apply “without prejudice” to several instruments, suggesting that all legislative instruments that govern the conduct of platforms (in the EU and domestically) will harmoniously co-exist and complement each other. However, it is doubtful whether the DMA will indeed apply without prejudice to (i.e., without detriment to any existing right or claim enshrined in) all the rules which have recently been revised or adopted to regulate platform practices. In certain cases, the DMA may qualify as lex specialis, thereby prevailing over other rules. In other cases, based on the principle of supremacy of EU law, the DMA may override national rules that pursue objectives other than fairness and contestability. In such cases, despite a “without prejudice” clause, the DMA would not necessarily complement (but could possibly endanger) the effectiveness of existing rules.

General Court largely upholds the Commission’s decision in Google Android (Case T-604/18)

Earlier this month the General Court of the EU (the “Court”) delivered its much-anticipated judgment on Google’s action for annulment of the Commission’s decision in Case AT.40099 (Google Android) – the decision currently holding the record for the highest antitrust fine ever imposed, namely EUR 4.3 billion.

By way of reminder, the Commission found that Google had infringed Article 102 TFEU by entering into a number of agreements with OEMs and MNOs relating to the installation of its mobile apps and the licensing of the Android OS, namely: (1) the Anti-Fragmentation Agreements (“AFAs”), which OEMs and MNOs had to sign before being eligible to distribute Google apps on their smartphones, and which prohibited them from marketing devices running Android versions (known as “forks”) not approved by Google; (2) the Mobile Application Distribution Agreements (“MADAs”), which required OEMs/MNOs wishing to pre-install Google Play on their devices to also pre-install the Google Search and Chrome apps; and (3) the portfolio-based Revenue Sharing Agreements (“RSAs”), according to which Google provided payments to OEMs/MNOs in return for the Google Search app being exclusively pre-installed on a given portfolio of smart mobile devices. The Commission considered that Google’s practices formed a single and continuous infringement, in that they all served the same overarching objective of protecting and strengthening Google’s dominance in general search. Specifically, Google’s practices were said to be part of an overall strategy aimed at anticipating the effects from consumers’ shift to mobile devices, and the risks such shift posed to Google’s business model centring around its eponymous search engine.

As predicted last year, the Court largely dismissed Google’s action, but annulled the decision (on both procedural and substantive grounds) insofar as it condemned Google’s portfolio-based RSAs (this is the third case in a row in which the Court has annulled a Commission decision on exclusivity payments, the other two being Intel (RENV) and Qualcomm). Exercising its unlimited jurisdiction, the Court decided to vary the amount of the fine, setting it at EUR 4.125 billion, but not without first engaging in a fairly aggressive analysis of Google’s conduct.

I shall not endeavour to summarize every aspect of this landmark ruling – this is a very long and detailed judgment, and the more one reads it the more one discovers new elements. Rather, I would like to discuss several issues that caught my attention while reading the judgment, including issues of principle and of broader significance for the application of EU competition law in digital markets.

In this post, I will focus on the first abuse identified in the contested decision, namely the MADA pre-installation conditions (in a subsequent post I will discuss the AFAs and market definition/dominance), which the Commission had analysed as a form of tying arrangement.

Threshold issue 1: the concept of exclusionary effects

After citing well-rehearsed case law on the concept of abuse, the Court recalled that not every exclusionary effect is necessarily detrimental to competition, in that competition on the merits may by definition lead to the marginalisation or departure of less efficient competitors (para. 278, citing Intel). Then, in paragraph 281, the Court stated the following with respect to the concept of exclusionary effects:

“Exclusionary effects characterize situations in which effective access of actual or potential competitors to markets or to their components is hampered or eliminated as a result of the conduct of the dominant undertaking, thus allowing that undertaking negatively to influence, to its own advantage and to the detriment of consumers, the various parameters of competition, such as price, production, innovation, variety or quality of goods or services.”

This is a rich passage which neatly summarizes the case-law, and which is worth breaking down:

First, the Court recalls that exclusionary effects relate to the effective access of actual or potential competitors to markets or their components. For such an effect to arise, it suffices that effective access of rivals is made harder (“hampered”), there being no need to show that access is eliminated. This is what the Court of Justice held in TeliaSonera (see para. 63, referring to conduct capable of making market entry “more difficult, or impossible”).

Second, the Court recalls that the exclusionary effects must be the result of the conduct of the dominant undertaking (causal link).

Third, the Court distinguishes between (i) exclusionary effects and (ii) consumer harm, the latter being captured by the second part of the passage, referring to the dominant undertaking “negatively [influencing], to its own advantage and to the detriment of consumers, the various parameters of competition such as price, production, innovation, variety or quality of goods or services”. This is in line with the view in the case-law (see e.g., Google Shopping, paragraph 443) that a restriction of competition is presumed to result in consumer harm in the form of e.g., higher prices, less choice or lower quality.

Later on, the Court supplemented the above by adding that exclusionary effects are to be distinguished from the existence of a competitive advantage. The Court nevertheless held that the contested decision was careful to draw this distinction, in that it established, first, the existence of an advantage linked to the MADA pre-installation conditions that cannot be offset by competitors and, second, the anticompetitive effects of that advantage (para. 564).

Threshold issue 2: the legal test for tying and actual vs potential effects

Next, the Court considered the legal test for tying. In essence, the Court adopted the legal test laid down in Microsoft, where the Court of First Instance assessed the bundling by reference to the four conditions analysed in the Commission decision, including the condition that the bundling “forecloses competition”.

In Google Android, the Commission considered that the condition relating to foreclosure of competition is satisfied if the conduct of the dominant undertaking is “capable of restricting competition” (to that end, the Commission had cited the Court’s judgment in Microsoft, para. 867). The Court noted that in reality, however, the Commission sought to establish the actual exclusionary effects of Google’s conduct (para. 290). By way of example, the Commission found that Google’s conduct had the effect of, among others (i) making it harder for rival search engines to gain search queries and the related revenue/data needed to improve their services; (ii) increasing barriers to entry in the market for general search services; and (iii) reducing the incentives of rivals to innovate (para. 294).

The Court endorsed the Commission’s approach of examining actual effects. In fact, the Court went beyond simply endorsing it; it held that the Commission was required to engage in “a close examination of the actual effects” (para. 295), and this for the following reasons:

This was not a case of classical tying since users could easily download rival search or browser apps (see paras. 292-293). The Court drew an analogy with Microsoft, where the Commission took the view that it could not assume that the tying of Windows Media Player had by its nature foreclosure effects, since consumers could download rival media playing software through the Internet; as such, the Commission engaged in a “close examination of the actual effects” of the bundling arrangement on the relevant market.

Another consideration is that the practices at issue took place over a long period (para. 296); this is not a case of the Commission carrying out a prospective analysis based on effects that will arise in light of assumptions that cannot yet be verified (citing Generics). Contrast this with Google Shopping, where the Court rejected Google’s argument that the Commission had to demonstrate actual (as opposed to potential) effects because of the duration of the practices at issue (see paras. 426 and 442).

From a policy perspective, the Court noted that close examination of the actual effects has two advantages (para. 295): (i) first, it reduces the risk of penalizing conduct which is not actually detrimental to competition on the merits; and (ii) second, it serves to further clarify the gravity of the conduct in question, which will facilitate the penalty calculation.

Now, it is rather unlikely that the Court’s approach can be easily transposed to other practices; after all, the case law is clear that, for the purposes of establishing an infringement of Article 102 TFEU, it suffices, in principle, for the Commission to show potential effects. Rather, the Court’s approach is best understood by having regard to the circumstances of Google Android, and in particular the fact that users could easily download rival software.

The effects of the MADA pre-installation conditions

Against this background, the Court examined the Commission’s analysis of the MADA pre-installation conditions. The Commission’s reasoning for the Search-Play Store bundle involved two steps (the Commission used similar reasoning for the Chrome-Play Store/Search bundle):

First, the Google Search-Play Store bundle provides Google with a significant competitive advantage which rivals cannot offset. This is because pre-installation is an important distribution channel, as it can significantly increase, on a lasting basis, the usage of an app due to the “status quo bias”. Rivals cannot offset such an advantage, whether through downloads, agreements with search engine developers or pre-installation agreements.

Second, the bundle helps Google maintain and strengthen its dominant position in general search by increasing barriers to entry and deterring innovation. This is because Google’s conduct makes it harder for rivals to gain search queries and the respective revenues and data needed to improve their services.

The Court by and large upheld the Commission’s reasoning. The following points are worth noting.

The “status quo bias” linked to pre-installation

Much of the first part of the Commission’s reasoning was predicated on the “status quo bias” linked to pre-installation, a concept from behavioural economics which refers to consumers’ tendency to stick with what is offered to them. Google, for its part, argued that the pre-installation conditions did not create a “status quo bias”, and that the evidence in the contested decision in fact concerned the setting of defaults, not pre-installation.

The Court noted that Google agreed that, like any form of promotion, pre-installation increases the likelihood of users trying the pre-installed app; as such pre-installation has at least a promotional value for Google (para. 331). In the present case, pre-installation of Search and Chrome had significant consequences in quantitative terms (paras. 336-339).

The Court dismissed Google’s arguments seeking to call into question the evidence relied on by the Commission in the contested decision. It also held that the Commission was entitled to rely on the evolution of usage shares to support its theory of harm built around the “status quo bias” effect (para. 574).

Causal link between the MADA pre-installation conditions and evolution of usage shares

On this point, Google argued that the contested decision did not demonstrate that the increase in Google’s usage shares was caused by the pre-installation conditions, instead of the superior quality of Google’s products. In essence, Google argued that consumers made greater use of its products because these were better, not because they were pre-installed.

The Court dismissed Google’s argument, holding that the Commission was not required to determine precisely whether Google’s usage shares could also be explained by the alleged superior quality of Google’s products (para. 575). Rather, it was for Google to show that its usage shares were linked to the alleged superior quality of its products (para. 575). However, the evidence relied on by Google was insufficient (para. 576).

Interestingly, the Court expressed in passing certain doubts over the alleged superior quality of Google’s products:

The Court observed that the needs of consumers are not necessarily met by the technically best solution; variables such as privacy protection or the account taken of specific linguistic features also play a role (para. 578).

The Court agreed with the Commission that the alleged quality advantage is not borne out by the ratings given to products of Google and rivals in the Play Store (para. 579). The Commission was entitled to rely on this to find that the various rivals offered a service capable of meeting consumer demand (para. 582).

Google’s counterfactual argument

Google argued that the Commission failed to assess whether the MADA pre-installation conditions were capable of restricting competition that would have existed in their absence, having regard to their legal and economic context. According to Google, the MADA pre-installation conditions were part of the free licensing model for the Android platform; they thus created, rather than removed, opportunities for rivals (para. 585).

This is essentially an argument about the counterfactual (i.e., absent the MADA conditions, Google would not have been able to develop and maintain the free and open Android platform, in which case there would have been less competition). The Court notes that the Commission acknowledged that the Android platform increased opportunities for Google’s competitors (para. 590).

Rather disappointingly, the Court sidestepped Google’s arguments on the basis that the Commission did not take issue with the MADA as a whole (or the open and free licensing system developed by Google), but with a specific aspect of them, namely the pre-installation conditions (paras. 591 et seq). The Court also refrained from framing this as a counterfactual issue, instead preferring to state that the Commission considered all the relevant circumstances when assessing Google’s conduct (para. 596).

One may express doubts as to whether this approach is correct; one could argue that the pre-installation conditions were an integral part of the MADA. If this is correct, then taking issue with the pre-installation conditions would be tantamount to taking issue with the MADA as a whole. I expect Google to come back on this point if it lodges an appeal against the Court’s ruling.

Google’s objective justifications

Google had argued that the MADA pre-installation conditions were justified, in that (i) they enabled it to monetize the Android platform through the advertising revenue generated by Google Search; and (ii) they enabled it to offer the Play Store free of charge to OEMs. The above arguments had prompted commentators to consider that Google Android was a rather exceptional case, in that the Commission was interfering with the business model of a dominant undertaking and the way in which it sought to monetize its platform.

Even so, the Court dismissed Google’s arguments, holding that Google had failed to discharge its burden of proof.

In the first place, Google had not demonstrated that the bundles were necessary for it to monetise its investment in Android. The Court relied on the following considerations:

Google has always been in the position of having significant revenue sources to finance its investments (para. 608).

Google would have still (i) generated revenues from the Play Store; (ii) generated PC search advertising revenue; and (iii) captured valuable user data on Android (paras. 609-610).

It is reasonable to consider that Google would have the incentive to develop and maintain the Android platform without even being certain that it would recoup its expenditure by the revenues generated by that platform, in order to counter the risks to its search-advertising business model resulting from the switch to smart mobile devices (para. 613). In other words, the Court was not convinced that, from an ex ante perspective, Google would not have the incentive to invest in the Android platform.

In the second place, the Court was not convinced that Google’s conduct was justified because it allegedly enabled it to offer the Play Store free of charge to OEMs. In fact, the Court held that Google’s preferred solution (free licensing in return for pre-installation) did not preclude the other solutions envisaged by the Commission in the contested decision (e.g., licensing against a fee) (para. 617).

That’s all for the moment – stay tuned for Part II of this post!

Photo by Adrien on Unsplash

Unravelling the Media Freedom Act proposal: Ambitious yet underwhelming?

On 16 September, the European Commission (“Commission”) published its much-anticipated proposal for a Media Freedom Act (“MFA”). The proposed MFA is an ambitious initiative. It includes rules that would apply to all actors of the media ecosystem (Member States, broadcasters, press publishers, on-demand players, online platforms). It also seeks to address several complex issues which have been facing the media sector for decades and which have become more pronounced in recent years. Those include political interference with editorial decisions, the independence of public service media (“PSM”), media concentration, and platforms’ arbitrary decisions over media content. Against this background, proposing a regulatory tool that may contribute to the protection of media freedom and media pluralism, which are prerequisites for a well-functioning democracy, is a worthy initiative. That said, this initiative raises doubts as to the adequacy of the means it proposes to safeguard those principles (e.g., it may exacerbate the regulatory asymmetries between platforms and media service providers). This blog post discusses whether the MFA proposal can tackle the issues it is set to address in an effective and legitimate manner. 

Brief description of the MFA proposal 

In terms of the substantive matters it tackles, the MFA proposal can be divided into five different parts: 

The first part establishes safeguards for media freedom, such as the Member States’ obligation to respect the editorial independence of media service providers; the Member States’ obligation to set up a process for the appointment (and dismissal) of PSM management that is governed by transparent and non-discriminatory criteria; and the obligation of media service providers to ensure disclosure of any conflict of interest that may affect the provision of news and current affairs content (Articles 3-6).

The second part is about the institutional set up. It entrusts national media regulators with the task of ensuring the effective implementation of the MFA and creates a mechanism of cooperation between them. It further establishes the European Board for Media Services, which will replace and succeed the European Regulators Group for Audiovisual Media Services. The task of the Board will be to promote the effective and consistent application of the MFA by, inter alia, supporting the Commission through technical expertise (Articles 7-16). 

The third part establishes a set of obligations for platforms, such as procedural requirements that must apply to a platform’s decision to suspend content offered by a media service provider (Articles 17-19).  

The fourth part establishes a framework for the well-functioning of the media market, including the Member States’ obligation to have in place rules for the assessment of media market concentrations that may have a significant impact on media pluralism (Articles 20-22). 

The fifth part establishes rules to ensure a transparent and fair allocation of resources, such as the obligation to ensure that audience measurement systems comply with the principles of transparency and non-discrimination and the Member States’ obligation to ensure that public funds granted to media service providers for the purposes of advertising are awarded according to transparent and objective criteria (Articles 23-24). 

The MFA proposal is accompanied by a Recommendation. As its name suggests, this document does not have binding legal force. Moreover, the Recommendation focuses on internal safeguards for editorial independence and ownership transparency. In other words, it does not cover the full set of issues tackled by the MFA proposal (e.g., how media market concentrations should be assessed). 

On the face of it, the MFA proposal is as multi-faceted as it should be. It takes account of the wide range of actors that may pose challenges to media freedom and media pluralism by proposing obligations for Member States, “media service providers” (a broad term that covers TV and radio broadcasters, on-demand media content providers, and press publishers), and platforms. However, many gaps remain to be filled. Notably, the Commission has arguably not made a convincing case that the proposal is aligned with the principle of conferral (which governs the limits to EU competences), nor has it explained how exactly the obligations established in the MFA have “teeth” to improve the status quo ante. What is more, as it currently stands, the text is likely to exacerbate the regulatory asymmetries between media service providers and online platforms and largely misses the target as to how platforms should be regulated to protect media pluralism. Within the sphere of “traditional” media services, certain issues remain unaddressed (e.g., effective supervision of specific media service providers). Finally, further reflection is needed on the institutional set up in terms of the tasks assigned to the regulators that will oversee compliance with the MFA (e.g., monitoring the press sector) and on ensuring that the competent regulators have sufficient resources to perform their mission effectively. I address these issues below. 

Legal basis, subsidiarity and proportionality

As I have explained in a previous blog post, the EU has limited law-making powers in the areas of media freedom and media pluralism. Pursuant to Articles 167(1) and 6(c) TFEU, the EU may merely carry out actions to support, coordinate or supplement action taken at the national level in order to promote media policies. In that regard, Article 167(5) TFEU provides that the EU may adopt incentive measures and recommendations, but not instruments that would harmonise national media laws and regulations. Post-Lisbon, the Charter of Fundamental Rights of the EU (“CFREU”) became legally binding. Though the Charter establishes the (EU’s) obligation to “respect” media freedom and media pluralism (Article 11(2)), it also explicitly provides that it does not afford new powers or tasks to the EU in the field of fundamental rights and principles. In other words, Member States remain primarily responsible for protecting these values at the domestic level. The rationale for this division of competences is that Member States are better placed to design media regulation in accordance with their traditions, community needs, and specificities of domestic markets. 

Against this background, the legal basis on which the MFA proposal rests is Article 114 TFEU, the provision on which the EU relies to “adopt measures for the approximation of the provisions laid down by law, regulation or administrative action in Member States, which have as their objective the establishment and functioning of the internal market”. In light of the competence limitations introduced by the TFEU and the CFREU discussed above, one may wonder about the choice of legal basis given that the MFA will seek to harmonise a range of issues pertaining to national media policies. 

However, choosing Article 114 TFEU does not mean that the proposal is doomed from the outset. For example, the Audiovisual Media Services (AVMS) Directive, which regulates the activities of TV broadcasters, on-demand audiovisual service providers and video-sharing platforms, also relies on Article 114 TFEU. Yet, there is an important difference between the two instruments. In addition to establishing rules specific to media content (e.g., obligations concerning the fight against hate speech and the protection of minors), the AVMS Directive has a clear “internal market component”, namely the “country of origin” principle whereby providers that operate legitimately in the Member State where they are established can offer their services to the audiences of other Member States. It is currently unclear what the “internal market component” of the MFA proposal is. The Explanatory Memorandum is restricted to mentioning that the initiative “aims to address the fragmented national regulatory approaches related to media freedom and pluralism and editorial independence”. It then lists the issues which hinder the completion of the internal market (e.g., State interference with editorial decisions, obstacles to distributing media content on very large online platforms) and merely notes that “the objectives of the intervention cannot be achieved by Member States acting alone, as the problems are increasingly of a cross-border nature and not limited to individual Member States or to a subset of Member States”.  Similar remarks are made in Recitals (4) and (5) of the MFA proposal.

The above seems hardly enough to justify intervention on the basis of Article 114 TFEU. There is no detailed discussion of the national measures that effectively restrict the freedom to provide media services, nor does any evidence-based analysis take place to set out the extent to which affected providers are discouraged from penetrating the market of another Member State. In other words, the MFA proposal appears to lack context, and the justifications put forward by the Commission could be relevant to any initiative that seeks to harmonise domestic regulations. Moreover, unless a more robust internal market rationale for the MFA is offered, it is dubious whether reliance on Article 114 TFEU would be supported by the Court of Justice of the EU (“CJEU”). When assessing the validity of the legal basis on which an instrument rests, the CJEU takes account of the “aim and content” of the measure (i.e., the primary objective that the instrument concerned pursues, the principles that underlie it and its ideological premises). The legal test is strict: “recourse to Article [114] is not justified where the measure to be adopted has only the incidental effect of harmonising market conditions within the [EU]” [emphasis added]. For the reasons discussed above, it is dubious whether the MFA proposal would pass this test as it currently stands. 

As a result, in order to survive judicial review, a more convincing case needs to be made as to the chosen legal basis and how the MFA complies with the principle of subsidiarity.

The same remark can be made about compliance with the principle of proportionality. The Explanatory Memorandum simply notes that the MFA proposal “is limited to issues on which Member States cannot achieve satisfactory solutions on their own”. Aside from the fact that the point on national issues is not sufficiently developed, the Commission refrains from discussing how the MFA would remedy problems that may not be resolved by the existing EU toolkit, including competition and State aid rules, the “Rule of Law” mechanism, and the AVMS Directive (see my analysis here). 

All in all, unless the lacunae discussed above are addressed, the MFA will be vulnerable to legal challenge.  

Exacerbating the regulatory asymmetries between media service providers and online platforms? 

It is widely known that contrary to “traditional” media, online platforms have been subject to “light-touch” regulation. This is gradually changing as the legislator has begun to acknowledge the role that platforms play in determining the variety and quality of content users access. For example, the revised AVMS Directive establishes obligations for video-sharing platforms. The DSM Copyright Directive establishes an ancillary copyright for press publishers for the online use of their publications. Such changes not only reflect users’ increasing use of platforms in order to engage with media content, they also attempt to address the regulatory asymmetries that are pervasive in the media landscape. Those regulatory asymmetries can distort competition. Media service providers and platforms often compete for the same audiences and the former must incur significant costs in preventing risk (e.g., by ensuring that their services are aligned with content standards and ownership restrictions) and complying with decisions adopted by the competent regulator. Instead of narrowing that gap, the MFA is likely to exacerbate the existing asymmetries. I will illustrate this point by referring to the rules included in the MFA proposal for the assessment of media market concentrations.   

One of the major changes that the MFA will bring about is the Member States’ obligation to provide, in their national legal systems, substantive and procedural rules which ensure an assessment of “media market concentrations” that could have a significant impact on media pluralism and editorial independence (Article 21). A key issue that arises from this obligation is the term “media market concentration”, which is defined as a concentration within the meaning of the EU Merger Regulation that involves “at least one media service provider” (Article 2(13)). In turn, the term “media service providers” does not cover platforms, which are distinguished from the former and which are subject to a different set of rules. As I have argued elsewhere, this approach fails to consider that platforms exercise editorial control by moderating, removing and ranking media content. Using a provocative example, if Facebook and Twitter merged, this would not qualify as a “media market concentration” for the purposes of the MFA (i.e., it would not be subject to an assessment that would examine the impact of the merger on opinion-forming).   

Unless the term “media market concentration” is defined in a more nuanced manner, the MFA is likely to widen the regulatory asymmetries between media service providers and platforms. In order to understand the implications of the Commission’s suggested approach, the following needs to be considered. In recent years, several Member States that had in place rules to prevent media concentration have abolished them in order to enable national media to scale in a more globalised landscape dominated by platforms. If the proposal moves forward, those Member States would need to re-introduce such rules to comply with the MFA. Other Member States have not revised them (to the effect that they still bind traditional media only). Against this background, platforms are (still) only subject to horizontal merger control rules (we all know how well that worked) and, following the adoption of the Digital Markets Act (“DMA”), gatekeeper platforms will only be subject to a reporting obligation. The approach adopted by the MFA proposal needs to change. 

Obligations for platforms: Does the MFA proposal miss the target?

The Commission suggests rules that will govern the relations between VLOPs (within the meaning of the Digital Services Act) and media service providers. Those rules will apply to the extent that a media service provider submits a declaration that it is editorially independent from Member States and third countries and that it is subject to (hard, self-, or co-) regulation (Article 17(1)). 

As the proposal currently stands, the obligations for platforms may miss the mark. For example, under Article 17(2), where VLOPs decide to suspend the provision of their online intermediation services in relation to content provided by a media service provider on the grounds that such content is incompatible with their terms and conditions, they must take all possible measures to communicate to the media service provider concerned the statement of reasons accompanying that decision, as required by Article 4(1) of the platform-to-business (“P2B”) Regulation. It is not clear why this obligation needed to be included in sector-specific regulation. VLOPs providing online intermediation services for the purposes of the MFA are also “online intermediation service providers” within the meaning of the P2B Regulation. Similarly, media service providers that rely on such VLOPs to reach audiences are “business users” for the purposes of the P2B Regulation. In other words, it is not clear what Article 17(2) seeks to achieve, especially given that the P2B Regulation applies irrespective of a platform’s size (i.e., it is not limited to VLOPs). Moreover, under the DMA, social networks and video-sharing platforms are not “online intermediation services”; they are distinguished from online intermediation services and constitute two separate categories of core platform services. These definitional inconsistencies must be addressed. 

Furthermore, pursuant to Article 17(3), VLOPs are required to take all the necessary technical and organisational measures to ensure that the complaints submitted by media service providers in the context of Article 11 of the P2B Regulation are processed and decided upon with priority and without undue delay. As a reminder, Article 11 of the P2B Regulation establishes the obligation to set up a complaint-handling mechanism and requires online intermediation service providers to handle complaints “swiftly and effectively”, which arguably already prevents undue delays in complaint processing. In other words, the only added value of Article 17(3) is that it establishes that complaints submitted by media service providers deserve differentiated treatment in the sense that they must be decided upon with priority. This is justified by the fact that media service providers distribute information about matters of common concern, including news and current affairs content. Such content is perishable and, by the time a complaint is handled, it may have lost its relevance to the public. Moreover, this provision can contribute to the fight against the spread of disinformation. However, without specifying what “with priority” means, the provision may not make a significant difference in practice. In that regard, it is worth noting that timeframes are not new to content regulation. For example, the Terrorist Content Online Regulation sets specific time limits within which platforms must process a removal order. A similar solution could be envisaged to ensure that socially relevant content finds its way to audiences. 

The other provisions that apply to platforms refrain from setting any strict obligations that could effectively prevent abuses of power over the distribution of media content. For example, in the case of frequent suspensions, the VLOP concerned must engage in a meaningful and effective dialogue with the media service provider (Article 17(4)); VLOPs are bound by a reporting obligation that requires them to disclose the number of instances of restriction or suspension as well as the grounds for those decisions (Article 17(5)); and VLOPs are expected to participate in the structured dialogue between interested stakeholders that the Board will organise to discuss experience in the application of the above obligations. 

In sum, further reflection is needed throughout the legislative process to ensure that the MFA proposal is fit for addressing platform practices that affect the distribution of media content. Above all, it remains unclear why the obligations discussed above would only bind VLOPs. It should be recalled that the definition of a VLOP is quite rigid, covering platforms with at least 45 million average monthly active recipients of the service in the EU. No other criteria, including qualitative parameters, apply for a platform to qualify as a VLOP. As a result, the approach suggested by the MFA proposal ignores the fact that platforms which do not reach that threshold may nonetheless be popular for the consumption of news content, and that such platforms may vary from one Member State to another.  

Complex problems require “simple” solutions? 

The MFA proposal seeks to address controversial issues. This may explain why several provisions are drafted in “high-level” terms. Combined with the competence limitations discussed above, this may be justified. However, it also runs the risk of not putting any meaningful rules on the table. This concerns, for instance, Article 20, which establishes, inter alia, that any measure taken by a Member State that is liable to affect the operation of media service providers in the internal market must be proportionate, reasoned, transparent, objective and non-discriminatory. Similarly, Article 3 enshrines users’ right to receive a plurality of news and current affairs content, produced with respect for the editorial freedom of media service providers, to the benefit of public discourse. Though such rules confirm the EU’s commitment to media freedom and media pluralism, they also raise the question whether they add anything to the CFREU (which is primary EU law and binding on Member States) and the long line of case law of the European Court of Human Rights. To prevent such rules from becoming political declarations, an adequate enforcement mechanism is needed (more on that below). 

Another issue that arises from the sensitive nature of the matters the MFA proposal tackles is that certain problems are simply not addressed. For example, it is not clear why the transparency safeguards that the MFA proposal suggests for the allocation of State advertising (Article 24) do not apply to other schemes, such as schemes supporting PSM or other media service providers. Relatedly, it is also unclear why the MFA proposal merely requires Member States to have in place independence safeguards for the appointment of the head of management and the members of the board of PSM, without suggesting any independence safeguards for the bodies that are entrusted with assessing compliance with the PSM remit (which may not necessarily be national media regulators). 

The institutional set-up 

The implementation of the MFA will rely on a number of actors that will oversee compliance with the obligations it establishes, including national regulators, potentially national competition authorities (that may be entrusted with examining the impact of media market concentrations on media pluralism), the European Board for Media Services (“the Board”), and the Commission. 

There are several issues arising from the institutional set-up of the MFA proposal, which are worthy of another blog post. However, two issues struck me after a first reading of the text. First, national media regulators (mainly regulators monitoring audiovisual media) and the Board, which will replace and succeed the European Regulators Group for Audiovisual Media Services, will oversee compliance with the MFA. But the MFA will cover several players, including press publishers, which have traditionally been subject to self-regulatory codes of practice. The implications of this change for the free press can be significant, and this must be reflected upon in the legislative process. Secondly, the remit of national media regulators is becoming ever more demanding. For instance, in addition to overseeing compliance with national media regulation and the laws transposing the AVMS Directive, national media regulators will also need to contribute to the effective implementation of the DSA and, apparently, the MFA. Though Article 7(3) of the MFA proposal provides that “Member States shall ensure that the national regulatory authorities or bodies have adequate financial, human and technical resources to carry out their tasks under this Regulation”, this is easier said than done in the light of the avalanche of recently adopted regulations that add to their to-do list. 

Conclusions 

The MFA proposal is an ambitious and complex initiative. This blog post has only scratched the surface, but it has illustrated that, as the legislative process unfolds, several issues need to be addressed for the MFA to (a) survive judicial review, and (b) establish a set of obligations which add meaningfully to the existing toolkit.  

Authored by Konstantina Bania

Image Source Pixabay

Google’s latest attempts to squeeze app developers in the face of regulation: When principles and coherence no longer matter

As the readers of this blog know, app developers selling digital content have been unhappy for many years with Apple’s App Store policies, which force them to use its in-app payment solution (“IAP”) and charge them a 30% commission, reduced to 15% in limited circumstances. These policies have landed Apple in trouble in the Netherlands, the UK and South Korea.

Until recently, Google took a softer stance by allowing many app developers to use their own in-app payment system. However, in September 2020, Google decided to adopt a more aggressive approach to enforcing the obligation imposed on app developers offering in-app purchases of digital goods on Google Play to use Google Play Billing (“GPB”) and pay the associated 30% commission (see our blog post on this development here). Google announced that new apps submitted after 20 January 2021 would need to comply, whereas the deadline for existing apps that needed to be updated was extended to 30 September 2021. In July 2021, however, Google announced that it was giving developers the option to request a 6-month extension, thus allowing them until 31 March 2022 to comply with its payment policy.

This policy shift by Google came at the very moment the EU institutions were seeking to finalise the formal adoption of the Digital Markets Act (“DMA”), which contains a provision banning gatekeeper app stores from imposing their own in-app payment solution on app developers. It also came at a time when Apple had been found by the Dutch Competition Authority (“ACM”) to be in breach of Article 102 TFEU for imposing its in-app payment solution (“IAP”) on dating app developers, and when the obligation imposed by major app stores on app developers to use their in-app payment solution had been declared illegal in South Korea. Google’s policy change regarding GPB was thus a poke in the eye of regulators.

As reported in the press, it therefore did not take long for DG COMP to initiate a preliminary investigation into Google’s conduct based on Article 102 TFEU. The CMA also announced that it was “investigating Google’s conduct in relation to Google’s distribution of apps on Android devices in the UK, in particular Google’s Play Store rules which oblige certain app developers to use Google’s own payment system (Google Play Billing) for in-app purchases.”

Threatened by these investigations, Google suddenly decided to amend its new GPB policies, although, as we explain below, this is nothing more than a PR stunt designed to appease regulators.

First, in a blog post of 19 July 2022, Google stated that, as part of its efforts to comply with the EU Digital Markets Act (although the DMA has not yet entered into force), it was

“announcing a new program to support billing alternatives for EEA users. This will mean developers of non-gaming apps can offer their users in the EEA an alternative to Google Play’s billing system when they are paying for digital content and services.

Developers who choose to use an alternative billing system will need to meet appropriate user protection requirements, and service fees and conditions will continue to apply in order to support our investments in Android and Play. When a consumer uses an alternative billing system, the service fee the developer pays will be reduced by 3%.”

By contrast, “Google Play’s billing system will continue to be required for apps and games distributed via Play to users outside the EEA, and for games distributed to users within the EEA.”

According to Google’s support center, alternative billing systems would have to comply with various requirements, some of which were still to be defined. Additionally:

“For participants in this program, service fees, which support our investments in Play and Android, will continue to apply. Developers must pay Google the applicable service fees. When a consumer uses an alternative billing system, the service fee the developer pays will be reduced by 3%.”

More recently, Google indicated that, starting 1 September 2022, app developers who meet certain eligibility requirements may join a pilot scheme “designed to test offering an alternative billing option next to Google Play’s billing system and to help us explore offering this choice to users.” Calling this a “pilot” is of course misleading since, before Google’s recent GPB policy changes, some app developers were already allowed to use their own in-app payment solution alongside GPB. There is therefore no need for a pilot scheme: Google has been able to assess the impact of “user choice billing” for years.

Now, if one looks at the conditions of eligibility, one finds a series of oddities.

First, to be eligible for the pilot, the app must be a non-gaming app. No explanation is given as to why this is the case. Now, the only plausible reason why gaming apps are treated differently is that they represent the bulk of Google’s Play Store revenues. Google rewards its best customers by discriminating against them!

Second, developers participating in the pilot can only offer user choice billing to mobile and/or tablet users in announced pilot countries, currently the European Economic Area (EEA) countries, Australia, India, Indonesia, and Japan. I am not sure about Indonesia, but the jurisdictions where the pilot scheme is allowed include those where Google is under regulatory pressure. This should incentivize app developers to maintain or even increase the pressure. Now, were I a UK or US regulator, I would not be impressed by Google’s selective approach.

Third, app developers participating in the pilot will be subject to various (technical) requirements that are not entirely clear yet. As the devil is in the detail, it is hard to tell at this stage whether these requirements will be reasonable or designed to frustrate the process. Apple’s efforts to frustrate the implementation of the ACM’s order forcing it to allow dating app developers to use their own in-app payment solution are a reminder that regulators need to pay attention to the details, as even a small amount of friction can disincentivize users from selecting an alternative payment option.

Finally, “for participants in this pilot, service fees, which support our investments in Play and Android, will continue to apply. Developers must pay Google the applicable service fees. When a consumer chooses to use an alternative billing system, the service fee the developer pays will be reduced by 4%.” This last condition is both interesting and important.

It is interesting for the following reasons. First, Google seems to have increased the fee reduction granted to app developers from 3% (the amount announced in July for app developers operating in the EEA) to 4% (as was already the case in South Korea). Second, the Q&A accompanying the Google announcement offers an explanation as to why it intends to charge a 26% commission to app developers using an alternative payment solution. This commission is said to reflect “the value provided by Android and Play and supports our continued investments across Android and Google Play, allowing for the user and developer features that people count on.” This explanation fails to convince. For instance, why should Uber, which provides a match-making service that is no different from a dating app, pay no commission while it benefits from Android and Google Play in the same manner as a dating app? And why do super-profitable apps like Instagram pay nothing, while some small app developers are saddled with a massive commission?

The payment of a 26% commission is also important because it shows that Google’s recent announcements regarding GPB are nothing but a PR stunt. The reason is that a fee reduction of a mere 4% will not be sufficient to allow app developers to use the in-app payment solution of their choice. Payment processing represents a material cost for app developers (with – although this may vary among countries – credit card operators such as AMEX and payment processors such as PayPal and Stripe charging fees between 3 and 4%). To this, one needs to add the cost of engineering (as developing your own in-app payment solution requires work) and customer care. For most operators, the cost of using alternative payment solutions may thus exceed 4%.
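To make the arithmetic concrete, the sketch below compares what a developer nets per transaction under GPB and under the alternative-billing route. It is a minimal back-of-the-envelope illustration: the 26% commission and the 3-4% processing-fee range come from the figures cited above, while the €10 price and the per-transaction allowance for engineering and customer care are assumptions of mine, not Google’s published terms for any particular developer.

```python
# Back-of-the-envelope comparison of what a developer nets per transaction,
# under Google Play Billing versus an alternative billing system.
# All figures are illustrative assumptions, not Google's published terms.

PRICE = 10.00                # price of an in-app purchase, in EUR (assumed)
GPB_COMMISSION = 0.30        # standard Google Play Billing commission
ALT_COMMISSION = 0.26        # commission under the alternative-billing programme (30% - 4%)
EXTERNAL_PSP_FEE = 0.035     # assumed external payment-processing fee, within the 3-4% range cited above
ENGINEERING_AND_CARE = 0.10  # assumed per-transaction allowance for engineering and customer care, in EUR


def net_with_gpb(price: float) -> float:
    """Developer's net revenue when the purchase runs through Google Play Billing."""
    return price * (1 - GPB_COMMISSION)


def net_with_alternative(price: float) -> float:
    """Developer's net revenue when the purchase runs through an alternative billing system."""
    return price * (1 - ALT_COMMISSION - EXTERNAL_PSP_FEE) - ENGINEERING_AND_CARE


if __name__ == "__main__":
    print(f"Net via Google Play Billing: EUR {net_with_gpb(PRICE):.2f}")          # EUR 7.00
    print(f"Net via alternative billing: EUR {net_with_alternative(PRICE):.2f}")  # EUR 6.95
```

On these assumptions the developer ends up slightly worse off using its own billing system, which is precisely the point made above: the 4% reduction barely covers external processing fees before any engineering or support costs are counted.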

In sum, Google’s GPB policy is now totally fragmented:

while all apps benefit from the same app store services, most apps do not pay any commission, and a limited number of apps are hammered with a 30%/15% fee;

whether or not an app developer can take part in the so-called pilot depends not only on the type of apps it operates (e.g., gaming apps are excluded), but also on where its app operates (as the benefit of the pilot only applies in a limited number of countries).

This illustrates that the policy is entirely unprincipled. There is no logic or coherence. Google seeks to maintain its privileges as a monopolist wherever it can and for as long as it can.

With the entry into force of the DMA, Google’s strategy will, however, be short-lived, as the DMA requires gatekeeper app stores to allow app developers to use the in-app payment solution of their choice (and there will be no scope to make an exception for “gaming apps”). Moreover, it seems very unlikely that Google’s 30%/15% commission will survive Article 6(12) of the DMA, which requires the gatekeeper to “apply fair, reasonable, and non-discriminatory general conditions of access for business users to its software application stores.” I will come back to this topic in a later post.

Disclosure: I advise a variety of app developers on competition and regulatory issues. I am also the outside EU competition counsel of the Coalition for App Fairness.

Amazon/iRobot: The flywheel spins once more

It was reported last week that the US Federal Trade Commission has started investigating Amazon’s $1.7 billion acquisition of iRobot, the maker of Roomba smart vacuum cleaners. It seems to us that the UK and EU authorities will also have jurisdiction to investigate. Why should anyone (apart from iRobot’s shareholders) care about a large retailer buying a smart vacuum cleaner manufacturer?

A recap on current merger control policy

Merger control has come a long way in recent years, and competition authorities on both sides of the Atlantic nowadays really do worry about just about any Big Tech acquisition (see e.g. the FTC’s recent opposition to the Meta/Within deal). 

To caricature slightly, a competition authority used to define some relevant markets (e.g., the manufacturing of smart vacuum cleaners) and look at the merging parties’ positions in those markets. In this case, it would find that iRobot has a strong position in smart vacuum cleaners; for example, it reportedly sells three-quarters of them in the US. Amazon, however, would have a zero market share in such a market. Its strong market position sits downstream at the retail level and in adjacent markets for smart home products such as smart speakers, connected TVs, etc. The competition authority would have concluded that this was mostly a merger between companies operating at different levels of the supply chain and in different relevant markets. “Vertical” and “conglomerate” mergers were rarely seen as problematic or even subjected to an in-depth investigation, particularly in the US.

Competition authorities are nowadays criticised for having “missed the wood for the trees” by taking such a narrowly siloed approach (actually, much of the criticism comes from the current leaders of the authorities themselves).

Competition authorities still use traditional horizontal and vertical theories of harm (and the European Commission unusually blocked a vertical merger earlier this week), but in the case of Big Tech they nowadays also examine how the firm’s various activities will interrelate with the acquired business, especially where the firm has “gatekeeping” activities that give it a privileged position in the economy. Authorities are willing to attempt to predict the future by looking at the dynamic elements of the industry.

The UK’s Competition and Markets Authority (“CMA”) has arguably been the most aggressive competition authority on these “dynamic” issues. One of the main reasons behind last year’s update to the Merger Assessment Guidelines (the “MAGs”) was to improve the CMA’s ability to deal with this type of situation. The introduction to the MAGs notes that the Big Tech firms have made over 400 acquisitions in recent years, none of which had at that time been blocked and few of which had even been reviewed by competition authorities, risking particular harm to consumers. The same paragraph of the MAGs refers with approval to the Stigler Center report, which recommends that competition authorities recalibrate how they assess digital mergers because of the risk of underenforcement.

The CMA has also asked the UK Government for additional powers to tackle Big Tech mergers (see Digital Markets Taskforce report, paragraph 4.120 et seq), citing concerns that digital mergers have been underenforced and that vertical mergers have rarely been challenged, although the Government recently said it was not minded to give these powers, perhaps because the CMA does not currently seem overly restricted in blocking large mergers under the existing rules.

This does not, however, mean that competition authorities have the ability to block Big Tech acquisitions on vague grounds that they are too large or they are paying a seemingly large purchase price for a seemingly small target company. They need to prove the alleged post-merger anti-competitive effects using specific evidence rather than general theories. They need to show how the Big Tech firm’s various activities will interrelate to exclude competitors or exploit customers in a way that could not happen before the target company is added to the ecosystem. They also need to show that the proven harm of a merger outweighs the benefits brought about by the merger in terms of strategic synergies or cost savings. If the Big Tech firm has a clear strategic rationale for an acquisition, which is not anti-competitive, the authority’s job ought to become much more difficult in this regard.

Whether it’s under the US or a European legal system, if a competition authority wants to intervene in a company’s rights to commercial freedom and property ownership, it needs to prove its case to a high standard.

Amazon’s flywheels

Blocking a Big Tech acquisition is therefore not straightforward. However, it is less difficult in Amazon’s case because there is a ready-made mechanism that ties together the disparate parts of its business and explains why each extra acquisition will harm competition and consumers. Amazon calls its membership system, Amazon Prime, and its voice assistant, Alexa, its “flywheels” (see e.g. pages 308-309 of the House Antitrust Sub-Committee report).

Bear in mind that more than half of British households now have Prime membership, and that, in the US, Prime members spend more than twice as much on the platform as non-members (see page 260 of the House report).  

Amazon has a significant presence in many markets including smart home devices, online commerce, video streaming, TV and film production, music streaming, video advertising, voice assistants, cloud storage, print books, audio books, and e-books. It is well known that Amazon views its business as a whole, with Amazon Prime membership and Alexa at its core. The more power Amazon gains in its smart home business, the more products it sells on its e-commerce platform, which in turn further reinforces its smart home business. It bundles its products together in a mutually reinforcing strategy that was summed up in Jeff Bezos’ famous quote (at the Vox Conference 2016):

We get to monetize [Prime Video] content in a very unusual way, because when we win a Golden Globe, it helps us sell more shoes. And it does that in a very direct way because, when people, if you look at Prime members, they buy more on Amazon than non-Prime members.”

The flywheel theory of harm would posit that Amazon’s market power grows with each acquisition because its ecosystem insulates it from competition. It strengthens Amazon’s ability and incentive to impose anti-competitive and discriminatory terms on its customers and competitors for whom it is an unavoidable trading partner. The acquisition of a valuable asset such as iRobot harms Amazon’s competitors (and ultimately consumers) in each discrete business line that Amazon offers.

The effects would flow through Amazon’s sprawling business. For example, the integration of Amazon’s voice assistant, Alexa, into Roomba products and the increased promotion of Roomba products on Amazon’s e-commerce platform may increase their attractiveness and sales. Their increased attractiveness will in turn increase the attractiveness of Amazon’s connected TVs and smart speakers, and its smart home solutions more generally, as Amazon finds ways to link the products together. The addition of Roomba products will arguably make Amazon’s smart home bundle more attractive even if no technical enhancements are made using Alexa and no technical links are made between the different products. This will strengthen Amazon’s market position in the emerging internet-of-things sector. In this way, the iRobot acquisition is possibly reminiscent of Google/DoubleClick, if a competition authority can find evidence that Amazon believes iRobot fills a gap in its portfolio (smart home devices in this case; the ad tech stack in Google’s case) where it otherwise has a very strong position.

However, the addition of iRobot will also increase customer lock-in for Prime as customers accumulate more and more Amazon smart home devices and become more immersed in Amazon’s ecosystem. This will in turn strengthen its TV streaming service and its e-commerce platform (amongst others). Some effects are more remote than others – for example, competing smart vacuum, smart speaker or connected TV companies are perhaps more directly harmed by this merger than TV streaming companies, online retailers or book publishers, but the effects spread throughout the ecosystem through the mechanisms of Alexa, Prime membership and its increasingly locked-in customers. A consumer whose home is operated through Amazon’s products and who has Prime membership is more likely to buy her next novel from Amazon than a competing bookseller, whether it is due to nudges from Alexa software that is installed in the home products or simply because the consumer is increasingly immersed in Amazon products; indeed, Alexa and Amazon’s product listings may also nudge her towards one of Amazon Publishing’s own book titles. Each small increment of market power acquired by Amazon may ultimately feed through into the commercial terms offered to book publishers, for whom sales on Amazon’s platform represent the majority of their business.

Maybe it is possible to argue that the acquisition of iRobot would have a negligible effect throughout the ecosystem. But a competition authority might worry that, at some point, Amazon will surely have bought itself too much market power. The digital economy is suffering death by a thousand cuts.

These competition problems arise even without more esoteric concerns regarding Amazon’s access to data about the layout of our homes, although one can conceive of the increased data playing an important role in making it ever more difficult for smart home competitors, and perhaps also online retailers, to compete against Amazon.

But don’t forget the traditional theories of harm

You do not need to subscribe to the flywheel theory of harm to be worried about the iRobot acquisition.

In some respects, the acquisition of iRobot looks like the previous acquisition of Ring, the leading maker of home security systems. In that case, it seems that Amazon’s motivation was primarily to acquire a leading market position and integrate its own technology into it.

Jeff Bezos stated to the US House Judiciary Committee in July 2020:

There are multiple reasons that we might buy a company. Sometimes we’re trying to buy some technology or IP, sometimes it’s a talent acquisition. But the most common case is market position.

As regards the Ring acquisition, Bezos stated in an email published by the committee:

To be clear, my view here is that we’re buying market position — not technology. And that market position and momentum is very valuable.

Amazon has not built its leading position in smart home devices by its own engineering genius alone. It bought Ring (2018), Blink (2017), and Evi Technologies (2013) to help develop its door security, cameras and voice assistant respectively.

The iRobot deal would be a logical next step.

First, it would combine Amazon’s extremely strong position at the retail level (for example, almost 90% of UK shoppers use Amazon) with a leading position in the upstream smart vacuum manufacturing market. A traditional vertical effects theory of harm could make competition authorities worry about the ongoing viability of competing smart vacuum products. Even without Amazon’s flywheel effect, competing manufacturers like Eufy, Roborock, Dyson or Neato would have every reason to worry about their post-merger access to online consumers. What terms would Amazon give them for selling through its unavoidable platform? In addition to the explicit terms, they may experience other, more subtle disadvantages. For example, where would these competitors’ products appear in Amazon’s product listings? They may well find themselves dropping down the page and may therefore decide they must pay Amazon more in advertising fees simply to maintain the level of prominence that their products received pre-merger. (Remember, Amazon’s ad revenues are now huge, and the same size as the entire global newspaper industry’s ad revenues, so it seems that plenty of sellers are making that type of decision.)

Smart vacuum competitors will also worry about Amazon pricing Roomba products below cost. This strategy may not have been available to iRobot pre-merger, because it would have needed to make a profit from its principal product line. Once it becomes part of Amazon’s wider business, there would be no need to make a short-term profit on smart vacuum products. A strategy of pricing below cost would make a lot more sense in the long term, and indeed Amazon is widely alleged to have done exactly this for its Echo smart speakers and its Kindle e-readers.

Second, as well as vertical effects, the iRobot acquisition would help Amazon to build an unmatchable portfolio of smart home products. This would be a fairly traditional horizontal effects theory of harm. A competition authority should investigate to what extent consumers see smart home products as a portfolio, and whether they would even be interested in buying them as a bundle if they were linked via Alexa, such that a leading position across the related smart home products would help Amazon monopolize the sector in the longer term. We certainly know that Amazon thinks long term.

One acquisition too far?

This blog has previously argued that Amazon is due to move to center stage after some years of Google, Apple and Meta having received more attention from competition authorities.

Amazon’s merger activity has certainly escaped attention so far. Arguably, the types of concerns discussed above also arose in Amazon’s acquisition of MGM Studios, for which it paid $8.45 billion last year. However, in that case, competition authorities sat on the side-lines. Perhaps competition authorities took the view that a bundle of creative content, even if it is really valuable content, is unlikely to lead to exclusionary effects because it cannot represent a bottleneck or a unique asset, and it is unlikely to significantly alter the evolution of the industry. The TV industry is well accustomed to certain content being exclusive to certain platforms, and there are other well-resourced companies who are creating their own content all the time.

Some readers might have difficulty getting worked up about the acquisition of a vacuum cleaner company. However, at some point, competition authorities may feel that Amazon’s accretion of market power across many closely-related markets surely becomes detrimental to consumers in the longer term. An acquisition that helps Amazon to dominate the emerging internet-of-things sector could be worrisome enough for competition authorities to intervene.