INVISIBLE AGREEMENTS, VISIBLE HARM: CAN INDIA REGULATE ALGORITHMIC CARTELS?
- RFMLR RGNUL

This post is authored by Kabir Kumar, a third-year B.B.A. LL.B. (Honours) student at OP Jindal Global University.
INTRODUCTION
The world is undergoing the fourth industrial revolution, in which Artificial Intelligence (“AI”) is not only a technological innovation but a structural force shaping modern digital markets. A recent study by the Competition Commission of India (“CCI”) on AI and its effects identified risks, analysed the regulatory frameworks of the US and the UK, and offered recommendations.
One of the major challenges identified by the CCI from the study was algorithmic collusion, where firms achieve cartel-like outcomes by using self-learning algorithms powered by AI. These algorithms learn from a wide range of data sources such as general information, competitors’ prices and market trends.
This phenomenon creates a fundamental tension in competition law under Section 3(3) of the Competition Act, 2002, which requires an “agreement” for cartels to be penalised under the law. This piece argues that India’s framework under the Competition Act, 2002, is presently ill-equipped to address self-learning algorithmic collusion. It first explains the concept of algorithmic collusion, with particular focus on self-learning pricing algorithms. It then examines the limitations of the existing framework under the Competition Act, 2002, especially the agreement requirement and the law’s reliance on plus factors to distinguish collusion from parallel conduct. The piece further highlights the enforcement and evidentiary difficulties such conduct creates for the Competition Commission of India, and concludes by proposing reforms better suited to AI-driven markets.
WHAT IS ALGORITHMIC COLLUSION?
For the purposes of this article, algorithmic collusion refers to collusion caused by self-learning (machine-learning) algorithms. A pricing algorithm is a set of instructions fed into a computer program to make decisions about the pricing of products. Algorithms accomplish this by analysing extensive data on market conditions, such as competitors’ prices, past prices and the firm’s warehousing and production costs. Companies use pricing algorithms to make pricing dynamic, personalised, and competitive. Online platforms such as Uber, Ola and Amazon utilise such algorithms.
Pricing algorithms can be used by enterprises to collude by fixing market prices and exchanging information. This problem is amplified by self-learning algorithms that adjust prices automatically based on extensive data analysis.
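The dynamic described above can be made concrete with a deliberately simplified sketch. The firms, prices and matching rule below are hypothetical and far cruder than any real self-learning repricer, but even this toy version shows how two independently coded algorithms can settle on the same elevated price without any communication between the firms:

```python
# Illustrative sketch only: two independent rule-based repricers that, without
# any agreement or communication, converge on the same supra-competitive price
# simply by matching each other's public prices. All firms, prices and rules
# are hypothetical.

def reprice(rival_price: float, floor: float) -> float:
    """Match the rival's publicly visible price, but never price below cost."""
    return max(rival_price, floor)

def simulate(p_a: float, p_b: float, floor: float, rounds: int = 10):
    """Each firm alternately observes the other's price and reprices."""
    for _ in range(rounds):
        p_a = reprice(p_b, floor)
        p_b = reprice(p_a, floor)
    return p_a, p_b

# Starting from different prices, both algorithms settle at the higher price:
final_a, final_b = simulate(p_a=100.0, p_b=120.0, floor=60.0)
print(final_a, final_b)  # both converge to 120.0
```

The point of the sketch is evidentiary: the stable parallel pricing it produces would look identical to tacit collusion, yet there is no meeting, message, or document for an investigator to find.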
This leads to tacit collusion, which is very difficult to prove. Although Section 3(3) requires an “agreement”, authorities usually look for intent or other signs of coordination while distinguishing collusion from mere parallel conduct. This becomes harder in cases of self-learning algorithms, where similar pricing may emerge without clear human communication. This creates a gap in how Section 3(3) is presently applied.
CRITICAL ANALYSIS: WHY DOES INDIA’S FRAMEWORK FAIL TO CAPTURE ALGORITHMIC COLLUSION?
India’s framework fails to capture algorithmic collusion for two principal reasons:
A. Human-Centric Agreement
Section 3(3) of the Competition Act, 2002, is still applied through a largely human-centric lens. Although Section 2(b) defines “agreement” broadly to include any arrangement, understanding or action in concert, authorities usually look for signs of intentional coordination while distinguishing collusion from mere parallel conduct. This is why parallel conduct alone is often treated as insufficient and must be supported by plus factors. In that sense, plus factors matter because they help infer human coordination, communication, or conscious alignment. The difficulty in algorithmic collusion is that similar pricing may emerge without these usual human indicators, which creates a gap in the present application of Section 3(3).
In Re M/s Sheth & Co., the CCI held that mere parallel conduct is not sufficient to establish collusion. It must be supported by plus factors that point towards coordination. This shows that the present framework works best where there is some evidence from which human intent or coordination can be inferred.
This premise is further solidified in the Maruti Suzuki case, where the CCI held that anti-competitive agreements often lack formal documentation and may be forged through informal interactions and subtle gestures.
A joint reading of these two cases reveals an underlying assumption of human interaction in the law’s approach to collusion. Plus factors such as informal interactions and gestures are an attempt to bridge the gap between parallel conduct and intentional coordination.
In algorithmic collusion, coordination may emerge without any human communication, as these algorithms can pattern-match and, in effect, exchange information with other algorithms. In such a scenario, the meeting of minds occurs not between humans but between algorithms, exposing a blind spot in the method by which the Act recognises collusion.
For example, in the Cement Manufacturers case, the CCI treated the exchange of sensitive commercial information as a plus factor supporting an inference of collusion. The difficulty in algorithmic collusion, however, is that self-learning algorithms may produce similar pricing without these usual indicators. They may respond to market conditions and rival behaviour without any direct communication, meeting, or data exchange between firms. This creates a gap in the present application of Section 3(3), since existing evidentiary tools are better suited to cases involving human coordination.
B. Enforcement and Evidentiary Challenges
Machine-learning and deep-learning pricing algorithms operate as opaque “black boxes”. They are called black boxes because, unlike traditional algorithms coded from explicit rules, they derive pricing strategies from data through hidden layers of neural networks. These algorithms also generate massive, high-frequency datasets: in industries such as e-commerce, millions of price changes occur daily.
This creates enforcement and evidentiary challenges for the CCI. While investigating cartels the CCI usually relies on circumstantial evidence for proof of collusion when direct evidence is not available. The greater difficulty in cases of algorithmic collusion lies in establishing sufficient circumstantial evidence, since traditional indicators of collusion such as communications, meetings, or coordinated conduct may be harder to identify when pricing coordination occurs through self-learning algorithms.
The CCI lacks the investigative toolkit to assess such algorithms, as they generate massive, high-frequency datasets that manual or traditional review cannot handle. Collusive patterns may emerge and dissipate within a single day. Meaningful analysis of such datasets requires continuous, automated data collection, which the CCI presently lacks. The CCI also has no statutory power to conduct algorithmic audits, access training data, interrogate model architecture, or employ computational tools at scale to combat algorithmic collusion.
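To illustrate the kind of automated screening such investigation would require, the toy script below flags how often two firms’ intra-day price series move in the same direction within the same interval. The data, threshold and logic are hypothetical, not an actual CCI tool; real screens would use far richer statistical tests:

```python
# Hypothetical screening sketch: measure how often two high-frequency price
# series move in the same direction. A persistently high share of parallel
# moves is not proof of collusion, only a signal for closer scrutiny.

def parallel_move_share(prices_a, prices_b):
    """Share of intervals in which both series moved in the same direction."""
    moves = 0
    parallel = 0
    for i in range(1, min(len(prices_a), len(prices_b))):
        da = prices_a[i] - prices_a[i - 1]
        db = prices_b[i] - prices_b[i - 1]
        if da == 0 or db == 0:
            continue  # ignore intervals where either price was unchanged
        moves += 1
        if (da > 0) == (db > 0):
            parallel += 1
    return parallel / moves if moves else 0.0

# Two hypothetical intra-day price series that move almost in lockstep:
firm_a = [100, 102, 101, 104, 106, 105]
firm_b = [99, 101, 100, 103, 105, 104]
print(f"parallel moves: {parallel_move_share(firm_a, firm_b):.0%}")  # 100%
```

Even this trivial screen presupposes what the piece identifies as missing: continuous, machine-readable access to firms’ pricing data.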
Algorithmic collusion also raises questions of attribution under the Competition Act, 2002. Section 3 is broad in scope and is not confined to agreements between enterprises; it also extends to persons and associations of persons. Even so, when pricing coordination occurs through self-learning algorithms, it becomes harder to determine how liability should be attributed in practice, especially between the enterprise deploying the system and those involved in its design or operation. The difficulty, therefore, lies less in the text of Section 3 itself and more in its interpretation and application to evolving forms of algorithmic coordination. In the absence of sufficient technical capacity and clear attribution standards, such conduct may still create an evidentiary and enforcement blind spot.
RECOMMENDATIONS FOR THE CCI AND LAWMAKERS
The CCI must adopt a multi-pronged approach to prevent algorithmic collusion arising from the use of machine-learning models. This approach would include the following:
A. Gradual Shift from Ex Post Enforcement to an Ex Ante Approach
A more effective response to algorithmic collusion would be a gradual shift from a purely ex post enforcement model towards a limited ex ante approach in markets where algorithmic coordination is reasonably apprehended. At present, competition law usually intervenes only after anti-competitive effects have materialised and evidence of collusion can be gathered. In the context of self-learning or constantly responsive pricing algorithms, that delay may be costly because coordinated outcomes can emerge and stabilise quickly while remaining difficult to detect through conventional evidentiary methods. The CCI should therefore make greater use of its suo motu powers under Section 19 in sectors where pricing algorithms, automated repricing tools, or common AI intermediaries create an early risk of coordinated market behaviour. This would not mean penalising firms without proof of contravention; rather, it would allow the Commission to initiate timely scrutiny, seek information, conduct market studies, and monitor patterns before competitive harm becomes deeply entrenched.
The EU’s Digital Markets Act (“DMA”) offers a useful jurisdictional example of this preventive approach. It reflects the broader regulatory insight that waiting for fully realised anti-competitive harm in fast-moving digital markets may often be too slow, especially where automated decision-making, data advantages, and platform power can distort market conditions before traditional ex post proceedings conclude. To address this, the DMA adopts an asymmetric ex ante model for designated gatekeepers, laying down advance obligations and prohibitions under Articles 5 to 7, supported by a compliance framework under Article 8 and market investigation powers under Article 19 to identify unfair or non-contestable practices not yet effectively covered by the Regulation. Although the DMA is not aimed specifically at algorithmic collusion, it remains instructive because it shows how digital competition law can move beyond a purely reactive model towards earlier scrutiny, continuous monitoring, and predefined obligations in structurally risky markets.
B. Self-Audit of AI systems for compliance
The Competition Act, 2002, should statutorily mandate regular internal audits of algorithms by businesses. These audits must be documented so as to capture the algorithm’s decision-making process, its objectives, and the data sources it draws from. This would act as a proactive measure to identify potential competition concerns. The audits should be conducted every six months to ensure that the algorithm’s adaptive behaviour has not resulted in anti-competitive conduct.
The self-audit should take the form of a checklist submitted to the CCI at regular intervals. The checklist would test for specific risks, such as whether the algorithm adjusts its prices in response to competitors’ prices or to a rival’s algorithm.
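One such checklist item could be operationalised as a simple, reproducible test. The sketch below uses a hypothetical pricing function as a stand-in for a firm’s actual system; the question it answers is the one posed above, namely whether the algorithm reacts to a rival’s price at all:

```python
# Sketch of one self-audit checklist item: hold all inputs fixed except the
# rival's price and record whether the output changes. `pricing_model` is a
# hypothetical stand-in for a firm's deployed system.

def pricing_model(cost: float, demand: float, rival_price: float) -> float:
    # Hypothetical model that (perhaps unintentionally) tracks the rival.
    return max(cost * 1.2, rival_price * 0.98)

def reacts_to_rival(model, cost: float = 50.0, demand: float = 1.0) -> bool:
    """Vary only the rival's price; any change in output must be documented."""
    base = model(cost, demand, rival_price=100.0)
    shifted = model(cost, demand, rival_price=150.0)
    return base != shifted

print(reacts_to_rival(pricing_model))  # True: an item to flag in the audit
```

A positive result would not itself establish a contravention; it would simply be documented in the checklist so that the regulator knows the algorithm is rival-responsive.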
C. Expansion of CCI’s investigative powers
The CCI’s investigative powers should be expanded to include empirical and technical audits, along with greater data-gathering powers. Since machine-learning algorithms are usually considered “black boxes”, business entities should be required to disclose details such as (1) the design logic of their algorithms, (2) the data sources used for training, and (3) the parameters for price setting.
This would allow the CCI to conduct risk assessments. Similar to Article 6 of the European Union’s (EU) AI Act, pricing algorithms should be classified as “high-risk” systems. This is because pricing algorithms can influence market-wide prices at scale, operate opaquely, and create a serious risk of anti-competitive harm that may be difficult to detect in time. This expansion of investigative powers, coupled with self-audits, would assist the CCI in effectively identifying anti-competitive algorithms.
CONCLUSION
Algorithmic collusion challenges the very architecture of Indian competition law. As market coordination shifts from human intent to machine logic, the definition of “agreement” under Section 2(b), as presently applied, struggles to capture such conduct. Machine-learning models pose a real risk of collusion, and the law will fail to recognise it if the framework remains static.
The Competition Act, 2002 must move from a purely ex post enforcement model towards a limited ex ante approach in markets where algorithmic coordination is reasonably apprehended. In other words, the law should enable earlier scrutiny and timely intervention in such cases instead of waiting until anti-competitive harm has fully materialised before acting.
Ultimately, the challenge for the CCI is not to slow AI-driven markets, but to future-proof antitrust enforcement. In an economy increasingly driven by machine-learning algorithms, technology must not become a tool for undetectable collusion.