[By Shreya Jha]
The author is a fourth-year student at Amity Law School, Delhi, and can be reached at shreya.jha78@gmail.com.
Introduction
In general parlance, an “algorithm” refers to a set of rules to be followed in order to carry out a certain task. Algorithms can be represented in the form of plain language, diagrams, code, programmes, etc. The advent of the digital economy has led to the increasing use of algorithms in business to improve decision-making and predictive analytics, because algorithms can process and create value out of large data sets in the form of targeted advertising, data-driven innovation, product recommendations, etc. For example, firms like Amazon and Flipkart employ “dynamic pricing”, which allows them to monitor and alter the prices of goods in response to changes in demand and supply. Similarly, Uber uses algorithms to adjust the price of car rides based on the demand for cab services and the supply of drivers. Given this increasing use, firms’ reliance on algorithms has garnered the attention of antitrust authorities across jurisdictions.
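To make the idea concrete, the following is a minimal, purely hypothetical sketch of a dynamic-pricing rule of the kind described above; the function, figures and thresholds are illustrative assumptions, not any firm’s actual system.

```python
# Hypothetical dynamic-pricing rule: the price rises when demand outstrips
# supply and falls otherwise, clamped between a floor and a ceiling.
def dynamic_price(base_price: float, demand: int, supply: int,
                  floor: float = 0.8, ceiling: float = 2.0) -> float:
    """Scale the base price by the demand/supply ratio, kept within [floor, ceiling]."""
    if supply <= 0:
        multiplier = ceiling              # no supply at all: charge the maximum allowed
    else:
        multiplier = min(max(demand / supply, floor), ceiling)
    return round(base_price * multiplier, 2)

# Example: a ride with a base fare of 100, 150 ride requests and 100 available drivers
print(dynamic_price(100.0, demand=150, supply=100))   # -> 150.0
```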
In India, the competition watchdog, the Competition Commission of India (“CCI“), is conducting a study to understand and examine algorithmic trends in the digital market and their antitrust implications. The use of such algorithms to set prices might lead to unintended coordination, as similar prices are set across the board, resulting in “tacit” collusion.
How Can Algorithms Be Used to Collude?
According to Ariel Ezrachi and Maurice Stucke, there are four ways in which algorithms may be used for collusion.
First is the “Messenger”, where specific algorithms are used to implement the will of human beings who agree to collude. An example of this is the Poster Cartel case, in which David Topkins, the founder of Poster Revolution, and his co-conspirators were prosecuted by the US antitrust authorities for agreeing to coordinate prices by adopting specific algorithms for the sale of posters on the Amazon marketplace.
The second type of algorithmic collusion is “Hub and Spoke”, where the same algorithm is adopted by the market players. In this type of collusion, the spokes are the colluding competitors and the hub is the facilitator of their collusion. The horizontal agreement among the spokes is referred to as the rim, as it connects the spokes. An example of this is United States v. Masonite Corp., in which Masonite, a patent-holder for hardboard, entered into agency agreements with nine competitors for the sale of Masonite hardboards. Under these agreements, each agent knew that the others were entering into identical agreements with Masonite. The Court therefore inferred a horizontal agreement among the agents.
Third, in the “Predictable Agent” type of collusion, there is no agreement among competitors. Each firm unilaterally adopts its own pricing algorithm, and these algorithms act as predictable agents that monitor and adjust to each other’s prices. Therefore, even though the competitors do not use the same algorithm, tacit collusion is effected by programming the algorithms to adjust to each other’s prices.
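A minimal, hypothetical sketch of such a “predictable agent” is set out below; the price levels, cost floor and undercut margin are illustrative assumptions and do not reproduce any real firm’s algorithm.

```python
# Hypothetical "predictable agent": the firm's algorithm watches rivals' prices
# and moves its own price just below the cheapest rival, never dropping below
# its own cost floor. If every firm runs a rule like this, prices can settle
# at a predictable level without any agreement between the firms.
def adjust_price(rival_prices: list[float], cost_floor: float,
                 undercut: float = 0.01) -> float:
    """Price just below the cheapest rival, but never below cost."""
    cheapest_rival = min(rival_prices)
    target = cheapest_rival * (1 - undercut)
    return max(target, cost_floor)

# Example: rivals currently price at 105 and 110; our cost floor is 90
print(adjust_price(rival_prices=[105.0, 110.0], cost_floor=90.0))   # -> 103.95
```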
Fourth is “Digital Eye” collusion, which involves machine-learning algorithms that are not programmed to adjust to each other’s prices or to market data but, by virtue of self-learning, collude on their own. According to an OECD report, it is not clear how machine-learning algorithms may reach a collusive outcome; however, where market conditions are prone to collusion, it is likely that algorithms, which learn faster than humans, will be able to achieve a cooperative equilibrium.
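The sketch below is a rough, hypothetical illustration of the “Digital Eye” setting (the toy demand figures, price levels and learning parameters are all assumptions, not the OECD’s model). Its point is that the programmer specifies only a profit-maximising learning rule; nothing in the code refers to rivals’ algorithms or to any agreement, which is precisely what makes these scenarios hard to analyse.

```python
# Hypothetical self-learning pricing agents: each agent only ever observes its
# own realised profit and updates its estimate of how profitable each price
# level is. Whether such agents end up sustaining supra-competitive prices is
# the open question competition authorities are studying.
import random

PRICES = [1.5, 2.0]      # the two price levels an agent may charge
ALPHA = 0.1              # learning rate
EPSILON = 0.1            # probability of experimenting with a random price

class PricingAgent:
    """A simple learner that never sees rivals' code, only its own profit."""

    def __init__(self) -> None:
        self.q = [0.0, 0.0]              # estimated profit for each price level

    def choose_price(self) -> int:
        if random.random() < EPSILON:    # occasionally experiment
            return random.randrange(len(PRICES))
        return max(range(len(PRICES)), key=lambda a: self.q[a])

    def learn(self, action: int, profit: float) -> None:
        self.q[action] += ALPHA * (profit - self.q[action])

def market_profits(p1: float, p2: float, cost: float = 0.5) -> tuple[float, float]:
    """Toy demand: 10 buyers, the cheaper firm attracts 9 of them; a tie splits them."""
    if p1 < p2:
        q1, q2 = 9, 1
    elif p1 > p2:
        q1, q2 = 1, 9
    else:
        q1 = q2 = 5
    return (p1 - cost) * q1, (p2 - cost) * q2

agents = [PricingAgent(), PricingAgent()]
for _ in range(10_000):                  # repeated interaction in the same market
    a1, a2 = agents[0].choose_price(), agents[1].choose_price()
    r1, r2 = market_profits(PRICES[a1], PRICES[a2])
    agents[0].learn(a1, r1)
    agents[1].learn(a2, r2)

print([agent.q for agent in agents])     # each agent's learned profit estimates
```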
Legal Framework
Section 3(3) of the Indian Competition Act, 2002 (“the Act“) prohibits collusion. The section has a broad scope, as it covers both horizontal agreements and practices carried on in a collusive manner. Section 3(3) can be broken down into three components:
- “agreement entered into”, “a practice carried on” or “decision taken”;
- by persons, an association of persons, enterprises or association of enterprises which directly or indirectly determines the purchase or sale prices;
- shall be presumed to have an appreciable adverse effect on competition.
Applying the Legal Framework to Algorithmic Express Collusion
Express collusion occurs when an anti-competitive price is achieved through “direct and express communication” about an agreement, i.e., there exists a mutual understanding among the competitors in the market. Section 3(3) of the Act applies readily in the first two scenarios, which involve express collusion, as the algorithm merely implements the collusive structure.
- The Messenger Scenario
In this case, algorithms are employed simply to implement anti-competitive agreements previously entered into by the human market players. The programmers feed in specific instructions to achieve collusive outcomes. Hence, these fall under “agreements” entered into by “persons”.
- The Hub and Spoke Scenario
In the Eturas case, the Court of Justice of the European Union considered the coordination of discount rates by travel agencies through a common electronic platform operated by a third-party intermediary. The intermediary had sent a notice to the travel agencies asking them to vote on discount rates. Even though no agency replied, the intermediary unilaterally capped the discount at 3%. The Court held that this behaviour could constitute a concerted practice under Article 101 of the TFEU. Following Eturas, where a common third-party algorithm is used for price coordination, the persons concerned can be held responsible for infringing Section 3(3) of the Act.
Applying the Legal Framework to Algorithmic Tacit Collusion
Algorithms are capable, without human interference, of tacit collusion, where a substantial part of the collusive outcome is achieved without express communication. A recent example is the sudden rise in airfares for flights between Delhi and Chandigarh during the Jat agitation, which was attributed to collusion among self-learning algorithms. This has been a cause of concern among competition authorities.
- The Predictable Agent Scenario
The act of programming algorithms to respond to market stimuli in a particular way would amount to an “action in concert” and a “practice”. Further, in the predictable agent scenario, the algorithms merely reflect the logic of the competitors deploying them. Thus, the competitors can be indirectly brought within the ambit of “persons”.
- The Digital Eye Scenario
In this scenario, the self-learning algorithms’ act of pricing fares at a certain level in response to the prices fixed by the algorithms of other competitors would amount to an “action in concert”. However, whether “algorithms” fall within the definition of “person” is a contentious issue.
In this regard, reference can be made to the objective of competition law, i.e., the protection of consumer welfare. Read in light of that objective, the non-exhaustive definition of “person” under the Act could be interpreted to include algorithms.
Determining Liability under the Digital Eye Scenario
Section 48 of the Act deals with the personal liability of employees where a company has acted in violation of the Act, but whether the anti-competitive act of an employee can be attributed to the company is a question which is yet to be answered. This matters because, in the Predictable Agent and Digital Eye scenarios, the algorithms have taken on the role of the competitors’ employees. Thus, one of the major issues before the CCI is whether, and to what extent, competitors are liable for the acts of their algorithms.
In the Predictable Agent scenario, the competitors will themselves be liable. However, the problem arises in the Digital Eye scenario.
Self-learning algorithms in the Digital Eye scenario are capable of processing huge volumes of data and creating value. These algorithms are designed to “maximize profit”, but they may learn that collusion is the best way to do so. It is pertinent to note that, while programming the algorithm, the competitor had no express or implied intention to collude with other market players; on that basis, it cannot be held liable. On the other hand, it can be argued that competitors’ knowledge that self-learning algorithms would ultimately result in collusion is enough to hold them liable. Thus, the extent of competitors’ liability is yet to be decided by the CCI.
Conclusion
The preceding paragraphs demonstrate that algorithmic collusion presents a challenging frontier for competition authorities. There exists a legislative gap in enforcement against algorithmic collusion. To address it, a careful study of both the pro-competitive and anti-competitive effects of algorithms should be undertaken, and an attempt should be made to create legal certainty for market players through unambiguous criteria for illegal tacit collusion.