Advocate on Record, Supreme Court of India & Litigation & Trial Attorney, Los Angeles, USA
Non-Resident Fellow (Law), Dhirubhai Ambani School of Law, India
In today’s digital economy, artificial intelligence (AI) and algorithmic pricing have transformed how markets operate. Price coordination, once a human-driven affair, has entered a new era in which algorithms can learn, adapt, and, potentially, collude. This phenomenon, known as algorithmic collusion, challenges the very foundation of antitrust law in jurisdictions such as the United States, the European Union, and India. As courts and regulators grapple with this invisible form of coordination, a question arises: can existing competition laws handle algorithmic collusion?
Algorithmic collusion occurs when firms use AI-driven pricing tools that, intentionally or unintentionally, align prices in a way that harms competition. Traditionally, antitrust laws require evidence of an agreement or a meeting of minds between competitors to establish collusion. In algorithmic markets, however, human intervention may be minimal or absent altogether. Self-learning systems built on reinforcement-learning techniques such as Q-learning can observe competitors’ behavior, adjust prices dynamically, and reach outcomes that mirror cartel-like coordination, all without any direct communication.
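To make the mechanism concrete, the following is a minimal Python sketch of two independent Q-learning pricing agents in a repeated duopoly, in the spirit of the economics simulation literature. It is purely illustrative: the price grid, demand rule, and learning parameters are all simplifying assumptions, not a description of any real pricing system.

```python
import random
from collections import defaultdict

PRICES = [1, 2, 3, 4, 5]  # illustrative discrete price grid
COST = 1                  # marginal cost; pricing at cost is the classic competitive benchmark

def profits(p1, p2):
    """Toy Bertrand demand: the cheaper firm serves the whole market; a tie splits it."""
    if p1 < p2:
        return (p1 - COST), 0.0
    if p2 < p1:
        return 0.0, (p2 - COST)
    return (p1 - COST) / 2, (p2 - COST) / 2

def pick(q, state, eps):
    """Epsilon-greedy choice from one agent's own Q-table."""
    if random.random() < eps:
        return random.choice(PRICES)
    return max(PRICES, key=lambda p: q[(state, p)])

def train(periods=200_000, alpha=0.1, gamma=0.95, eps=0.1):
    # Each agent observes only last period's price pair; the Q-tables are never shared.
    q1, q2 = defaultdict(float), defaultdict(float)
    state = (random.choice(PRICES), random.choice(PRICES))
    for _ in range(periods):
        p1, p2 = pick(q1, state, eps), pick(q2, state, eps)
        r1, r2 = profits(p1, p2)
        nxt = (p1, p2)
        for q, p, r in ((q1, p1, r1), (q2, p2, r2)):
            best_next = max(q[(nxt, a)] for a in PRICES)
            q[(state, p)] += alpha * (r + gamma * best_next - q[(state, p)])
        state = nxt
    return q1, q2, state

q1, q2, state = train()
for _ in range(5):  # greedy play after learning; in many runs prices settle above cost
    p1 = max(PRICES, key=lambda p: q1[(state, p)])
    p2 = max(PRICES, key=lambda p: q2[(state, p)])
    print(p1, p2)
    state = (p1, p2)
```

The legally salient feature is that nothing here is an agreement in the traditional sense: each agent maximizes only its own reward, yet prices above the competitive level can emerge and persist without any channel of communication between the two systems.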
This raises a critical question: if machines can collude without human intent, can law, built around human behavior, still apply?
Section 1 of the Sherman Act prohibits agreements that unreasonably restrain trade, and U.S. courts have traditionally looked for evidence of explicit communication among competitors. The landmark United States v. Airline Tariff Publishing Co. (1994) case was one of the first to expose how digital systems could facilitate collusion: airlines used a shared electronic fare-dissemination system to signal future price changes, amounting to indirect coordination.
Two decades later, United States v. David Topkins (2015), known as the “Poster Cartel” case, became the first criminal prosecution of algorithmic price-fixing on Amazon Marketplace. The sellers programmed their repricing algorithms not to undercut one another, keeping prices artificially stable. Although the collusion was executed through code, the human agreement behind the algorithms made prosecution straightforward.
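The defendants’ actual code has not been made public, so the following Python fragment is a hypothetical reconstruction of the kind of rule at issue: an ordinary repricer that undercuts strangers but is hard-coded never to undercut a named co-conspirator. The seller identifiers and the one-cent undercut are invented for illustration.

```python
CONSPIRATORS = {"seller_b"}  # hypothetical ID of a seller covered by the agreement

def reprice(my_price: float, offers: dict[str, float]) -> float:
    """Undercut ordinary rivals by a cent, but never undercut a co-conspirator.
    The second branch is the 'agreement': it lives entirely in the code."""
    if not offers:
        return my_price
    cheapest_seller = min(offers, key=offers.get)
    cheapest = offers[cheapest_seller]
    if cheapest_seller in CONSPIRATORS:
        return round(cheapest, 2)      # match, never beat, the co-conspirator
    return round(cheapest - 0.01, 2)   # normal competitive undercutting

print(reprice(24.99, {"stranger": 22.50}))  # -> 22.49 (undercuts an outsider)
print(reprice(24.99, {"seller_b": 22.50}))  # -> 22.5 (only matches the ally)
```

The doctrinal point is that liability in Topkins turned on the human agreement to adopt such rules, not on the use of repricing software as such.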
More complex was Meyer v. Kalanick (2016), in which Uber’s surge-pricing algorithm was alleged to facilitate a “hub-and-spoke” cartel among drivers. While the court acknowledged the potential for algorithmic coordination, establishing liability still demanded proof of an explicit conspiracy, highlighting the limits of U.S. antitrust law when algorithms act autonomously.
Overall, U.S. enforcement agencies like the Federal Trade Commission (FTC) have maintained that using algorithms is not illegal per se; what matters is intent and coordination. Yet, as algorithms grow more autonomous, this focus on human intent may become inadequate.
The European Union (EU) takes a broader approach. Article 101 of the Treaty on the Functioning of the European Union (TFEU) prohibits not just explicit agreements but also “concerted practices” that distort competition. This gives EU regulators more leeway to address algorithmic collusion, even in the absence of direct communication.
The Eturas UAB v. Lietuvos Respublikos konkurencijos taryba case illustrates this flexibility. A Lithuanian booking platform restricted travel agencies’ discount rates through a software update embedded in its system. The European Court of Justice (ECJ) held that agencies aware of the restriction could be presumed to have joined a concerted practice under Article 101 unless they publicly distanced themselves from it, recognizing that awareness, not explicit consent, can suffice for liability in digital settings.
Another landmark came with the Electronic Goods Manufacturers decisions (2018), in which the European Commission fined manufacturers including Asus and Philips for using algorithms to monitor resale prices and penalize retailers who deviated from set prices. Here, algorithms acted as enforcement tools for resale price maintenance, an explicit violation of EU competition law.
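The Commission’s decisions describe this monitoring software only at a high level, so the short Python sketch below is a hypothetical illustration of what such an enforcement tool might reduce to: scan retail listings and flag any seller pricing below the manufacturer’s set price. The SKUs, prices, and field names are all invented.

```python
MSRP = {"BLENDER-X1": 79.99, "TOASTER-Z3": 49.99}  # invented manufacturer price floors

def flag_deviations(listings: list[dict]) -> list[dict]:
    """Return every retail listing priced below the manufacturer's set price,
    so the sales team can 'follow up' with the discounter."""
    return [
        offer for offer in listings
        if offer["sku"] in MSRP and offer["price"] < MSRP[offer["sku"]]
    ]

listings = [
    {"retailer": "ShopA", "sku": "BLENDER-X1", "price": 79.99},  # compliant
    {"retailer": "ShopB", "sku": "BLENDER-X1", "price": 69.00},  # deviating
]
print(flag_deviations(listings))  # -> only ShopB's listing is flagged
```

Trivial as the logic is, automation is what gives such monitoring its bite: deviations can be detected and acted on far faster than any manual compliance program could manage.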
The Google Shopping case (2017) further expanded the EU’s interpretation. The Commission found that Google had engaged in “self-preferencing,” using its algorithms to favor its own comparison-shopping service, and treated this as an abuse of dominance under Article 102 TFEU. Unlike the U.S. FTC, which had viewed such practices as legitimate product improvements, the EU emphasized the “special responsibility” of dominant firms to maintain fair competition.
Collectively, EU jurisprudence signals a readiness to adapt existing legal principles to algorithmic contexts. By focusing on outcomes and market effects rather than human intent, the EU framework is better equipped to capture the subtleties of algorithmic coordination.
India’s Competition Act, 2002 prohibits anti-competitive agreements and abuse of dominance. Under Section 3(3), price-fixing among competitors is presumed anti-competitive. Yet, as with other jurisdictions, enforcement faces hurdles when dealing with algorithms.
The Matrimony.com Ltd. v. Google LLC decision (2018) marked a turning point. The Competition Commission of India (CCI) found that Google had engaged in search bias, holding that the company leveraged its algorithms to favor its own services. Drawing on the EU’s reasoning, the CCI emphasized Google’s “special responsibility” as a dominant platform to ensure fairness and transparency.
Conversely, in Samir Agarwal v. ANI Technologies (the Ola and Uber case), the CCI declined to find a hub-and-spoke cartel, reasoning that algorithmic pricing determined by the platform did not amount to collusion among drivers. Critics argue that this reasoning underestimated the potential for tacit algorithmic coordination, a gap the Competition (Amendment) Act, 2023 seeks to address by extending cartel liability to facilitators that are not themselves competitors and by empowering the CCI to investigate digital markets more effectively.
India is now moving toward proactive regulation, with discussions around a dedicated Digital Competition Act and the establishment of a Digital Markets Unit within the CCI, paralleling the EU’s Digital Markets Act. This reflects a recognition that traditional enforcement tools must evolve to meet algorithm-driven challenges.
Across jurisdictions, a pattern emerges: courts and regulators acknowledge that algorithmic pricing can facilitate collusion, yet proving it under existing laws remains difficult. The absence of direct human communication blurs the line between lawful tacit coordination and illegal collusion. Regulators, moreover, often lack access to proprietary pricing data and the technical expertise needed to analyze complex algorithms.
To address these challenges, scholars advocate a risk-based regulatory approach combining legal, technological, and ethical oversight. Governments must require transparency in algorithmic design, mandate audits of pricing software, and establish accountability mechanisms for AI-driven decision-making; a sketch of what such an audit might screen for follows below. As the EU’s Digital Markets Act and similar legislative efforts demonstrate, regulation should balance innovation with fairness.
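One concrete, if deliberately crude, example of an audit screen: the Python fragment below computes the correlation between two rivals’ period-to-period price changes, a rough parallelism indicator an auditor might use to decide which pricing systems deserve closer human review. The data are invented, and a high score is a red flag warranting further inquiry, not proof of collusion.

```python
from statistics import correlation  # available in Python 3.10+

def parallelism_score(prices_a: list[float], prices_b: list[float]) -> float:
    """Correlation of two firms' period-to-period price *changes*.
    Values near 1.0, sustained while prices sit well above cost,
    would flag the pair for closer review by a human investigator."""
    changes_a = [later - earlier for earlier, later in zip(prices_a, prices_a[1:])]
    changes_b = [later - earlier for earlier, later in zip(prices_b, prices_b[1:])]
    return correlation(changes_a, changes_b)

# Invented example series: the two firms move almost in lockstep.
firm_a = [9.99, 10.49, 10.49, 10.99, 10.79, 11.29]
firm_b = [9.89, 10.39, 10.44, 10.94, 10.74, 11.19]
print(parallelism_score(firm_a, firm_b))  # close to 1.0 -> worth a look
```

Real enforcement screens are far more sophisticated, but the point stands: auditability presupposes access to pricing data and logs, which is precisely what transparency mandates would secure.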
Algorithmic collusion represents a new frontier in competition law. As markets become more digital and data-driven, algorithms can learn to cooperate—sometimes in ways their creators never intended. While the U.S. relies on intent, the EU focuses on market outcomes, and India is cautiously catching up, all jurisdictions face the same fundamental challenge: laws designed for human conspiracies must now govern machines.
The solution lies not in rewriting antitrust law entirely but in reinterpreting it for the algorithmic age—emphasizing transparency, accountability, and technological literacy. As the European Commission’s Margrethe Vestager aptly warned, “The idea of automated systems reaching a meeting of minds may still be science fiction, but we must be ready when science fiction becomes reality.”
