AI Pricing and Evidence Avoidance: Competition Law Risks
Summary
JD Supra highlights emerging competition law risks associated with AI-powered pricing and evidence avoidance tools. The guidance warns companies that traditional antitrust principles apply to algorithmic conduct, citing enforcement actions in the EU, UK, and US that have resulted in significant fines.
What changed
This guidance from JD Supra addresses the increasing competition law risks posed by AI and algorithmic pricing tools. It explains how authorities in the EU, UK, and US are applying established antitrust principles to AI-driven conduct, covering scenarios that range from cartels implemented by algorithm to hub-and-spoke coordination and autonomous algorithmic collusion. The document notes that these actions have already led to substantial fines and settlements, emphasising that algorithmic implementation offers no shield from liability where an underlying human agreement exists.
Companies using algorithmic pricing systems must understand that these tools can lead to violations of competition law, including algorithmic collusion and resale price maintenance. Furthermore, AI-powered compliance tools may inadvertently create new regulatory exposures. The guidance advises companies to review their practices and implement strategies to navigate this evolving landscape and avoid potential enforcement actions and penalties.
What to do next
- Review AI and algorithmic pricing systems for potential antitrust violations.
- Assess compliance tools for inadvertent regulatory exposure.
- Develop strategies to ensure adherence to traditional antitrust principles in AI-driven conduct.
Penalties
Fines and settlements valued in the hundreds of millions.
Source document (simplified)
March 27, 2026
Algorithmic Pricing and AI-Powered Evidence Avoidance: Competition Law Risks and Compliance Strategies
Artificial intelligence (AI) and algorithmic pricing tools are transforming competition law enforcement at an unprecedented pace. What competition authorities once dismissed as theoretical concerns have become the focus of major enforcement actions across the European Union (EU), the United Kingdom (UK), and the United States (US), resulting in fines and settlements valued in the hundreds of millions.
For companies deploying algorithmic pricing systems, the message is clear: Traditional antitrust principles can apply with full force to AI-driven conduct. Enforcement actions demonstrate that authorities will pursue algorithmic collusion, algorithmic resale price maintenance, and algorithmic self-preferencing as aggressively as their analogue predecessors. At the same time, a new frontier is emerging around AI-powered compliance tools that may inadvertently create additional regulatory exposure.
This alert examines recent enforcement trends, analyses how established legal principles apply to algorithmic conduct, and provides practical guidance for companies navigating this rapidly evolving landscape.
The Evolution of Algorithmic Pricing: Market Transparency and Coordination Risk
Digital markets enable unprecedented price transparency. Real-time price monitoring that once required manual effort now occurs automatically through web scraping and data aggregation. While economic theory suggests transparency should benefit consumers through easier price comparison, empirical evidence reveals a more complex reality: the same visibility that aids comparison shopping can also make coordination easier to sustain.
When analysing algorithmic pricing cases, one can identify three distinct scenarios, each presenting different legal challenges.
First, traditional cartels may use algorithms to implement preexisting human agreements to fix prices. Here, the algorithm serves merely as a tool for executing a conspiracy; the criminal intent remains decidedly human. This scenario fits comfortably within existing legal frameworks because the core requirement of agreement or concerted practice is clearly satisfied.
Second, hub-and-spoke coordination can emerge when multiple competitors knowingly and intentionally coordinate on pricing by using the same algorithmic pricing software and delegate pricing decisions to that common algorithm. Coordination may occur without direct communication among the competitors themselves.
Third, autonomous algorithmic collusion occurs when AI systems trained merely to maximise profits independently discover that coordination produces better outcomes than competition. Through reinforcement learning, algorithms can converge on supracompetitive prices without any human instruction to collude. This scenario presents a fundamental challenge to competition law, which has historically required some form of agreement or concerted practice between competitors.
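The autonomous-collusion dynamic can be illustrated with a toy simulation. The sketch below is a stylised model in the spirit of academic experiments on algorithmic pricing, not any real system: the price grid, demand rule, and learning parameters are all illustrative assumptions. Two Q-learning sellers repeatedly set prices in a simple Bertrand-style market, each conditioning on last round's prices, with no instruction to coordinate:

```python
import random

PRICES = [1, 2, 3, 4, 5]           # discrete price grid; cost normalised to 0
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # learning rate, discount factor, exploration

def profits(p1, p2):
    # Simple Bertrand-style demand: the lower price takes the market, ties split it.
    if p1 < p2:
        return p1 * 1.0, 0.0
    if p2 < p1:
        return 0.0, p2 * 1.0
    return p1 * 0.5, p2 * 0.5

def run(episodes=20000, seed=0):
    rng = random.Random(seed)
    # State = both sellers' prices last round; each agent keeps its own Q-table.
    q = [{}, {}]
    state = (PRICES[0], PRICES[0])

    def choose(agent, s):
        row = q[agent].setdefault(s, {p: 0.0 for p in PRICES})
        if rng.random() < EPS:
            return rng.choice(PRICES)          # occasional exploration
        return max(row, key=row.get)           # otherwise act greedily

    for _ in range(episodes):
        a1, a2 = choose(0, state), choose(1, state)
        r1, r2 = profits(a1, a2)
        nxt = (a1, a2)
        for agent, act, rew in ((0, a1, r1), (1, a2, r2)):
            row = q[agent].setdefault(state, {p: 0.0 for p in PRICES})
            nrow = q[agent].setdefault(nxt, {p: 0.0 for p in PRICES})
            # Standard Q-learning update toward reward plus discounted best future value.
            row[act] += ALPHA * (rew + GAMMA * max(nrow.values()) - row[act])
        state = nxt
    return state  # the final price pair

final = run()
```

Whether agents of this kind settle on competitive or supracompetitive prices is sensitive to parameters such as the discount factor and exploration rate, which is precisely why the scenario is difficult to police through rules built around proof of human agreement.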
Lessons From Recent Cases
UK: Amazon Marketplace Sellers
In 2016, the UK Competition and Markets Authority (CMA) pursued two sellers of celebrity posters on Amazon’s UK marketplace. The companies had agreed not to undercut each other and programmed their repricing software to implement this arrangement, specifically instructing their algorithms to undercut all rivals except one another. When the software occasionally malfunctioned, employees exchanged emails that made the coordination explicit: “Presume your software is broken, so had to remove you from ignore list,” complained one employee. “You just switching ignore each time is not doing either of us any good …” read another message.
The case demonstrates that algorithmic implementation provides no shield from liability when underlying human agreement exists. More critically, for compliance purposes, it shows that even companies using sophisticated automation resort to human communications when systems malfunction, creating the documentary evidence that remains central to enforcement. Companies deploying repricing software must ensure employees understand that discussions about competitor treatment create significant antitrust exposure.
EU: Consumer Electronics Manufacturers
The European Commission and the Netherlands Authority for Consumers and Markets brought multiple cases against major consumer electronics manufacturers, including Asus, Denon & Marantz, Philips, Pioneer, and Samsung. These companies had deployed sophisticated monitoring software that continuously tracked online retailer pricing. When retailers discounted items below recommended levels, manufacturers intervened with alleged threats or sanctions to force prices upward.
The European Commission extracted more than 110 million euros in fines, while the Dutch authority imposed 40 million euros on Samsung alone. The cases establish that monitoring technology combined with enforcement against discounters constitutes resale price maintenance under European laws regardless of the algorithmic nature of the monitoring. Moreover, because retailers themselves often use algorithms to automatically match competitor prices, unlawful restrictions imposed on one low-price retailer potentially can cascade throughout the market.
These cases carry particular significance for manufacturers in digital distribution channels. The fact that price monitoring occurs through automated web scraping rather than manual surveillance does not necessarily insulate the conduct from scrutiny. If the monitoring feeds into a system of maintaining prices through pressure on retailers, the full apparatus can become suspect under European laws.
EU: Google Search
Perhaps no case better illustrates algorithmic liability for dominant firms than the European Commission’s Google Search decision, which resulted in a 2.4 billion euro fine subsequently upheld by EU courts. At the heart of the case was Google’s algorithm for ranking search results, which systematically demoted competing comparison-shopping services while favouring Google’s own service.
The statistics were stark: Users clicked first-page results 95% of the time, meaning algorithmic demotion to page four constituted a commercial death sentence. The case established that dominant firms can bear responsibility for how their algorithms affect competition. Algorithmic decision-making does not eliminate liability for unlawful self-preferencing or exclusionary conduct. When algorithms shape consumer choice and market outcomes, their design and implementation must comply with applicable competition laws.
How Competition Law Applies to Algorithms
Article 101 TFEU: The Agreement Requirement
European competition law prohibits agreements and concerted practices that restrict competition. The European Court of Justice has consistently held that economic operators must independently determine their conduct on the market, precluding direct or indirect contact that influences competitive behaviour.
The Eturas case, which involved a Lithuanian travel booking platform that sent messages to travel agencies about discount caps, established the framework for proving algorithmic coordination. Competition authorities can establish a concerted practice when objective evidence demonstrates that parties were aware of a common system, tacitly assented to it, and failed to distance themselves publicly. Once the authority meets its initial burden, the evidential presumption shifts to accused parties to demonstrate innocence through systematic deviation from the coordination or public objection to it. Eturas is also instructive in redirecting analytical focus toward the design of the system itself: When a pricing rule is built into the technical infrastructure of a platform, the central legal inquiry becomes less about whether competitors exchanged information and more about whether the architecture as a whole serves to enforce a shared commercial constraint.
This framework may work reasonably well for hub-and-spoke scenarios involving shared algorithmic tools where competitors can be shown to be aware, or reasonably expected to be aware, of a coordination mechanism, since liability under Eturas depends on inferred awareness and the absence of distancing rather than mere participation in the system. However, the framework does not address autonomous algorithmic collusion. When AI systems independently converge on supracompetitive prices through reinforcement learning, without any human agreement or even awareness of coordination, Article 101 of the Treaty on the Functioning of the European Union (TFEU) finds no purchase. This represents the classic oligopoly problem in digital form: parallel conduct that harms consumers but cannot be classified as concertation. The legal framework, conceived in an era of smoke-filled rooms and handwritten notes, strains to reach conduct occurring entirely within silicon circuits.
Article 102 TFEU and the Digital Markets Act
For dominant firms, Article 102 TFEU prohibits abusing market power through exclusionary or exploitative conduct. The Google Search case confirms this applies fully to algorithmic self-preferencing and discriminatory ranking. The newly effective Digital Markets Act reinforces these obligations, requiring designated “gatekeepers” to apply transparent, fair, and nondiscriminatory conditions to ranking and refrain from self-preferencing.
An emerging question concerns AI-driven personalised pricing. While Article 102(c) TFEU prohibits price discrimination, that provision has historically addressed discrimination among trading partners rather than between end consumers. Moreover, Article 102 TFEU requires dominance, which most firms deploying personalised pricing lack. Other regulatory instruments — including the General Data Protection Regulation, the Digital Services Act, the Consumer Rights Directive (amended by Directive (EU) 2019/2161), and the newly adopted AI Act — fill some gaps, but their interplay remains uncertain. Companies implementing personalised pricing should monitor guidance as these frameworks develop.
US Antitrust Framework
US authorities apply broadly similar principles under the Sherman Antitrust Act. Section 1 requires an agreement or conspiracy, with standards that have traditionally demanded more explicit evidence of coordination than the EU "concerted practice" doctrine.
There appears to be a practical dividing line for Section 1 analysis. Liability does not lie where competitors merely use shared data or algorithms. More is required, and courts have considered whether the software architecture at issue operationalises coordination by, for example, ingesting competitors’ nonpublic, current data to set individual competitor prices or embedding design features that steer users to aligned pricing. When a platform centralises sensitive inputs and standardises outputs, enforcers increasingly argue it may function as a coordination hub. By contrast, when tools offer nonbinding, overridable recommendations and do not commingle rivals’ confidential, commercially sensitive data, courts have been far less receptive to Section 1 theories at the pleadings stage.
Section 2 addresses unilateral conduct by monopolists and applies to algorithmic self-preferencing and exclusion. However, US monopolisation standards generally require both monopoly power and exclusionary conduct, creating a higher bar than EU dominance standards. The Federal Trade Commission has signalled increased attention to algorithmic pricing and personalisation in its ongoing rulemaking and enforcement agenda.
Emerging Risk: AI-Powered Compliance Tools and Evidence Avoidance
A new category of legal technology tools has emerged that uses AI to scan outbound business communications and flag language that might later constitute competition law evidence. These systems analyse draft emails and messages in real time, alerting senders when content references competitor discussions, pricing coordination, market allocation, or other antitrust red flags. The technology enables senders to rephrase potentially problematic language before communications leave the organisation.
Research confirms these systems work effectively. Even freely available language models can identify communications that create antitrust risk and suggest sanitised alternatives. The technology is neither expensive nor difficult to deploy, making it accessible to organisations of all sizes. And it can be valuable for avoiding language that may inadvertently suggest unlawful intent or behaviour where there is none.
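As a crude illustration of the detection step these tools perform, the sketch below screens draft text against a few regular-expression patterns. Real products use language models rather than keyword lists, and the categories and patterns here are invented for illustration; as this alert discusses later, such screening is defensible when it feeds review and remediation, not when it is used to sanitise the record:

```python
import re

# Illustrative patterns only; real tools use language models, not keyword lists.
RED_FLAGS = {
    "pricing coordination": r"\b(match|align|fix|hold)\b.{0,40}\bprices?\b",
    "market allocation":    r"\b(split|divide|allocate)\b.{0,40}\b(market|territor)",
    "competitor contact":   r"\b(call|spoke|agreed?)\b.{0,40}\bcompetitors?\b",
}

def flag(message: str) -> list[str]:
    """Return the names of any red-flag categories a draft message trips."""
    text = message.lower()
    return [name for name, pat in RED_FLAGS.items() if re.search(pat, text)]

print(flag("Let's agree to hold prices at current levels until Q3."))
# → ['pricing coordination']
```

In a compliance-supporting deployment, a hit would route the draft to legal review rather than silently suggest softer wording to the sender.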
Email evidence has been central to virtually every major cartel prosecution over the past two decades. From London Interbank Offered Rate manipulation to truck cartel cases to allegations against technology companies, the documentary record has provided the factual foundation for enforcement. If AI systems systematically eliminate potentially incriminating evidence before it is created, the implications cascade throughout the enforcement ecosystem.
Discovery costs multiply as authorities must deploy more resource-intensive investigative techniques to compensate for documentary evidence that does not exist. Dawn raids, witness interviews, forensic analysis of deleted files, and economic modelling become necessary in cases that previously would have turned on clear email trails of unlawful behaviour. Facts become harder to establish when the documentary record has been systematically cleansed of candour. Economic theory must increasingly substitute for direct evidence, turning many cases into a battle between competing expert witnesses whose models and assumptions can be endlessly debated. Enforcement can become prohibitively expensive except for the most egregious violations, while marginal anticompetitive conduct may proliferate unchecked where actors have used these sanitising tools to conceal unlawful conduct.
Reputational and Practical Risks
Beyond regulatory exposure, companies adopting evidence-avoidance tools to hide unlawful conduct could face significant reputational risks. When tool usage becomes public through litigation discovery, whistleblower disclosure, or journalistic investigation, the optics are decidedly poor. The revelation that a company systematically deployed AI to scrub its communications could create an inference of wrongdoing that may prove more damaging than whatever underlying conduct the tools were meant to obscure.
Moreover, these tools may prove less effective than anticipated. Sophisticated forensic analysis can often detect when communications have been systematically sanitised. Patterns in language, metadata anomalies, and testimony from employees about tool usage can reveal evidence suppression even when the underlying communications appear innocuous. The attempted cover-up may ultimately create more problems than it solves.
Emerging International Enforcement Landscape
The enforcement trajectory is already visible across multiple jurisdictions. Within the EU, national competition authorities are contributing their own perspectives, both through policy initiatives and enforcement action.
In early 2026, the Portuguese Autoridade da Concorrência published a paper examining competition issues associated with access to chips for training and running AI models, highlighting infrastructure-level concerns that sit upstream of pricing conduct itself. Meanwhile, the French Autorité de la concurrence launched a public consultation on AI agents, signalling that the competitive dynamics of agentic AI systems are becoming a distinct regulatory priority.
These policy developments sit alongside concrete enforcement activity. In September 2025, Poland’s antitrust authority confirmed that it was investigating potential collusion involving algorithmic pricing tools in the banking and pharmaceutical sectors. Earlier that year, the Netherlands Authority for Consumers and Markets launched a market investigation into algorithmic pricing practices in the airline industry, followed by Italy’s competition authority which indicated that it was engaging with the European Commission on ways to improve the price comparison of airline fares.
The UK appears likely to follow a similar trajectory. In September 2025, CMA Chief Executive Sarah Cardell stated that the authority is “watching and learning” from its “friends over in the US” as it intensifies scrutiny of how algorithms and generative AI may influence pricing behaviour. Reinforcing this direction, in its “Draft Annual Plan 2026 to 2027,” and as part of implementing its 2026 to 2029 strategy, the CMA has identified deterring algorithmic price collusion as a priority area. This increased focus has translated into concrete enforcement action.
On 24 February 2026, the CMA launched an investigation into suspected sharing of competitively sensitive information among competing hotel chains (Hilton, IHG Hotels, and Marriott) through the use of a hotel data analytics tool. In its press release, the CMA was careful to set out the broader context for this action. It acknowledged that companies use a wide range of data analytics tools and algorithms to support commercial decision-making, which can deliver significant benefits, including more intense competition, lower costs, and faster price adjustments to reflect changes in supply and demand. At the same time, the CMA emphasised that, when rival businesses share competitively sensitive information, whether directly or via a third-party data analytics provider, the uncertainty that normally exists between competitors is reduced. That reduction in uncertainty can weaken competitive pressure by making it easier for companies to predict one another’s behaviour and, ultimately, coordinate their conduct.
This enforcement action should, however, be read alongside the CMA’s broader and more nuanced position on algorithmic tools. The CMA published a research paper in 2021 examining the potential adverse effects of algorithmic pricing on competition and has continued to keep the use and impact of algorithms under its active review. Its guidance on horizontal agreements, published in August 2023, recognised that algorithms are not inherently anticompetitive. This position is reinforced in the CMA’s more recent publications, such as “Agentic AI and consumers” and “AI and collusion: frontiers, opportunities and challenges,” which were both released in March 2026. In these materials, the CMA goes further by emphasising that businesses (i) remain responsible for pricing and commercial outcomes shaped by AI systems; (ii) must take proactive steps to understand, test, and govern the technologies they deploy; and (iii) should audit input data and statistical methodologies, whether they are developed in-house or sourced from third parties.
In North America, the Canadian Competition Bureau published a report in January 2026 highlighting public feedback on algorithmic pricing and competition, reflecting growing attention to this issue across the region. The breadth of international scrutiny is further underscored by developments in the Asia-Pacific region. In October 2025, the Indian Competition Commission published a market study on AI and competition, addressing pricing-related practices arising in the AI market.
Together, these developments confirm that regulatory attention to algorithmic pricing conduct is not confined to any single jurisdiction.
Practical Compliance Guidance
Companies deploying algorithmic pricing should treat these systems as creating antitrust exposure requiring active management rather than as technical tools outside the scope of legal oversight. The starting point is comprehensive documentation. Organisations should maintain detailed records of algorithm design, including the inputs each algorithm considers, the business logic it applies, the outputs it generates, and the decision-making process used in its development. This documentation serves dual purposes: It enables internal compliance assessment and provides evidence of lawful intent should questions arise.
Periodic auditing can examine whether algorithms could facilitate coordination either directly or through common platforms. When a third-party pricing tool is used, companies should understand how many competitors employ the same system, demand transparency about its operation, and make clear in writing that the company is deciding to use the system for its own individual purposes, not because others also are using it. Companies also should get clarity on what guardrails are in place to protect individual competitor data in designing, training, and operationalising the system and to ensure individualised output from the system. Hub-and-spoke coordination risks increase dramatically when multiple market participants delegate pricing to a common algorithm that does not produce individualised outputs. Companies should consider whether customised solutions might reduce risk compared to off-the-shelf tools used industry-wide.
For dominant firms or platforms, algorithm audits should specifically assess whether ranking or pricing algorithms systematically advantage the platform’s own services over competitors’, or whether they apply discriminatory conditions to different market participants. The Google Search precedent makes clear that algorithmic implementation provides little to no defence to self-preferencing by dominant firms.
Technical controls should be embedded into algorithmic design from inception rather than retrofitted after deployment. This “compliance by design” approach makes lawful conduct the default rather than an afterthought. For example, algorithms can be designed with technical guardrails that prevent them from responding to competitor signals or ensure their ranking decisions apply consistent criteria across competing services. Building such controls into initial architecture proves far more effective than attempting to audit and constrain systems after deployment.
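One way to make such a guardrail concrete is to whitelist the inputs a pricing function may consume, so that competitor signals are rejected at the interface rather than filtered out downstream. The sketch below is a minimal illustration of the idea; the field names and the pricing rule itself are invented assumptions, not a recommended model:

```python
# A sketch of "compliance by design": the pricing function accepts only
# whitelisted inputs, so competitor-price signals cannot reach the model.
ALLOWED_INPUTS = {"unit_cost", "inventory", "demand_forecast"}

def compute_price(signals: dict) -> float:
    blocked = set(signals) - ALLOWED_INPUTS
    if blocked:
        # Fail loudly and auditably rather than silently using the data.
        raise ValueError(f"disallowed pricing inputs: {sorted(blocked)}")
    cost = signals["unit_cost"]
    # Illustrative rule: margin moves with the demand/inventory balance,
    # clamped to a fixed band so output stays within governed limits.
    margin = 0.25 + 0.10 * (signals["demand_forecast"] - signals["inventory"]) / max(signals["inventory"], 1)
    return round(cost * (1 + max(0.05, min(margin, 0.60))), 2)

price = compute_price({"unit_cost": 10.0, "inventory": 100, "demand_forecast": 120})
# 12.7
```

The design choice here is to reject disallowed inputs with an explicit error rather than drop them quietly: the failure itself becomes part of the audit trail that documentation-focused compliance programmes rely on.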
Contemporary documentation of business decisions remains critical even in algorithmic contexts. Companies should maintain records demonstrating that pricing and other competitive decisions reflect independent business judgment rather than coordination signals. This may require technical documentation showing how algorithms process information and make decisions, supplemented by business records explaining strategic rationale. Such documentation proves invaluable if authorities question whether observed price parallelism reflects genuine independent decision-making or tacit coordination.
Employee training must address antitrust risks specific to algorithmic tools. Staff deploying or maintaining pricing algorithms should understand that discussions with competitors about algorithm design or operation can create substantial risk even when lawful because such communications may be misconstrued as collusive. Clear policies should prohibit coordination even through intermediaries or shared technological infrastructure.
AI-Powered Compliance Tools: Proceed With Caution
Companies considering AI-powered compliance tools should distinguish between systems that identify risks for remediation and those that modify communications before they are sent.
Tools that generate post-sending alerts, notify legal or compliance teams, integrate with voluntary disclosure processes, or trigger escalation procedures can support effective compliance. These systems are designed to help organisations detect potential issues, investigate them, and address root causes before they result in harm.
Other tools operate earlier in the communication process by scanning outbound messages and suggesting or applying revisions in real time. These systems can help reduce the risk of ambiguous or problematic wording and may prevent improper communications from being sent externally. When used appropriately, they can contribute to risk prevention and clearer internal communication.
However, their use also raises important considerations. If such tools are primarily used to remove or alter language that could later be interpreted as evidence of misconduct, without addressing the underlying behaviour, they may be viewed as limiting transparency rather than supporting substantive compliance. In these cases, regulatory and reputational risks may outweigh potential litigation advantages, depending on how the tools are implemented and governed.
Companies should therefore prioritise building a strong compliance culture. Effective compliance programmes typically include clear policies prohibiting unlawful conduct, regular training for employees in higher-risk roles, channels for raising concerns without retaliation, and accountability mechanisms to address violations. Technology is most effective when it supports these elements by improving risk detection and enabling timely remediation.
Organisations should also monitor regulatory developments in this area. Competition authorities are actively assessing the role of AI in compliance, and guidance is likely to emerge. Some jurisdictions may introduce restrictions on certain uses of these tools. Companies that have already implemented them should periodically reassess their use, ensuring that their approach to compliance remains aligned with evolving regulatory expectations and risk tolerance.
The Anticipated Regulatory Response
While no competition authority has yet issued formal guidance on AI-enabled compliance tools, their likely approach can be inferred from existing enforcement priorities and public statements. Authorities are likely to examine closely any tools that could be used to limit the creation or preservation of evidence, particularly where such use may affect the detection or investigation of potential anticompetitive conduct.
In this context, early adopters may face increased scrutiny, especially if the purpose or effect of a tool appears to extend beyond risk management and into limiting transparency. The adoption of such systems, depending on how they are designed and used, could be interpreted by authorities as a relevant factor in assessing intent or compliance culture, even in the absence of a proven substantive violation.
Competition agencies may also issue guidance clarifying acceptable and unacceptable uses of these technologies. In some jurisdictions, this could include restrictions on functionalities that are seen as undermining record-keeping or investigative processes. Where authorities conclude that tools have been used to impede fact-finding or obscure relevant information, this may be taken into account in enforcement decisions, including the assessment of penalties.
Recent enforcement trends, such as increased attention by the European Commission and US agencies to evolving technologies and novel theories of harm, suggest that the treatment of AI-enabled compliance tools will continue to develop. As with other aspects of compliance, the specific regulatory response is likely to depend on how these tools are implemented, governed, and integrated into broader compliance frameworks.
Overlapping Regulatory Frameworks
Companies using algorithms must navigate an increasingly complex web of overlapping requirements. The Digital Markets Act imposes specific obligations on designated gatekeepers regarding algorithmic ranking, self-preferencing, and data access. The AI Act establishes risk-based requirements for high-risk AI systems, potentially including those that materially affect competition or consumer outcomes. Sector-specific regulations in financial services, healthcare, and other industries impose additional algorithmic accountability requirements. Companies operating across multiple sectors and jurisdictions need coordinated compliance strategies that account for these intersecting frameworks rather than treating each in isolation.
International Divergence
Competition authorities are developing divergent approaches to algorithmic pricing and AI. The EU has moved toward expanding substantive rules through the Digital Markets Act and sector-specific regulations, supplemented by continued enforcement under traditional competition provisions.
US authorities are focusing primarily on enforcement against specific practices, with ongoing debates about whether existing guidelines require updating for algorithmic contexts.
The UK is developing its distinct post-Brexit approach through the Digital Markets, Competition and Consumers Act as well as the CMA’s evolving enforcement priorities.
In the Asia-Pacific region, China, Japan, Korea, and other jurisdictions are rapidly developing regulatory frameworks that blend competition law with data protection and consumer protection concerns.
This divergence creates challenges for multinational companies that cannot simply design global algorithmic systems to the highest common denominator. Different jurisdictions emphasise different concerns, require different documentation, and apply different enforcement approaches. Companies need sophisticated compliance strategies that account for these variations while maintaining operational efficiency.
Anticipated Enforcement Priorities
Based on authority statements and recent actions, several areas appear likely to attract increased enforcement attention. Hub-and-spoke coordination through common algorithmic platforms will likely see continued scrutiny. Enforcers will likely not call off the inquiry if an intermediary formally describes its output as a “recommendation”; instead, they are likely to analyse whether the system’s design — through defaults, delegation mechanisms, or behavioural inducements — displaces the independent pricing autonomy of participating firms such that coordinated outcomes are intentionally implemented as a matter of course.
Personalised pricing will face questions about potential discrimination, though the legal framework remains unsettled. Authorities will need to distinguish exploitative personalisation by dominant platforms from ordinary commercial pricing, and the interaction between algorithmic pricing and equality or data protection law is likely to generate enforcement activity that crosses regulatory boundaries. Cases in which the personalisation mechanism is opaque and affected consumers lack any meaningful ability to contest the differentiation will attract the greatest scrutiny.
Algorithmic exclusion by platforms may see increased enforcement as authorities build on the Google Search precedent. Consistent with the implementation-control framework developed above, liability will attach not to discrete exclusionary acts but to platform architectures, such as ranking systems, self-preferencing mechanisms, and interoperability restrictions, that are designed so exclusionary outcomes are produced structurally rather than episodically.
Finally, AI-powered tools used to eliminate evidence of unlawful behaviour will likely attract specific regulatory attention and possible prohibition. When such tools are adopted with the intent to frustrate investigative discovery, regulators may treat them not merely as obstruction instruments but as components of the cartel management architecture itself, potentially attracting facilitator liability and serving as an aggravating factor in penalty calculations.
Conclusion
The rapid evolution of AI and algorithmic pricing creates significant competition law risks, but these risks are manageable through proactive compliance measures. Companies should begin by auditing existing algorithmic systems for competition law risks, paying particular attention to pricing algorithms, ranking systems, and tools that incorporate competitor information. When third-party pricing tools are used, organisations should assess hub-and-spoke coordination potential and consider whether alternative approaches might reduce exposure.
Employee training should specifically address antitrust risks created by algorithmic tools, ensuring that technical staff understand the legal implications of algorithm design choices and that business personnel recognise when algorithmic conduct creates regulatory exposure. Clear policies should establish permissible and prohibited uses of pricing algorithms, with particular attention to any systems that respond to competitor signals or coordinate through common platforms.
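One way such a policy could be operationalised is a periodic inventory audit that flags pricing rules exhibiting the two risk factors the guidance highlights: inputs derived from competitor signals, and reliance on a shared third-party platform (hub-and-spoke exposure). The sketch below is purely illustrative; the `PricingRule` structure, the signal category names, and the vendor field are all hypothetical, and a real audit would depend on how an organisation actually catalogues its algorithms.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PricingRule:
    """Hypothetical audit record describing one pricing algorithm."""
    name: str
    inputs: list          # data sources the rule consumes
    vendor: Optional[str] = None  # shared third-party platform, if any

# Illustrative input categories that would warrant legal review.
COMPETITOR_SIGNALS = {"competitor_price", "rival_listing", "market_leader_price"}

def flag_for_review(rules):
    """Return (rule name, reasons) pairs for rules needing legal review."""
    flagged = []
    for rule in rules:
        reasons = []
        if COMPETITOR_SIGNALS & set(rule.inputs):
            reasons.append("responds to competitor signals")
        if rule.vendor is not None:
            reasons.append(f"runs on shared platform: {rule.vendor}")
        if reasons:
            flagged.append((rule.name, reasons))
    return flagged
```

A check like this does not establish lawfulness; it only routes higher-risk systems to counsel, consistent with the guidance that technology should support, not replace, compliance judgment.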
Organisations should exercise extreme caution regarding AI tools used to eliminate evidence of unlawful business conduct. While technology can support compliance programs through risk identification and escalation procedures, systems aimed primarily at evidence avoidance create significant regulatory and reputational risks that likely exceed any benefits. Companies should focus instead on building genuine compliance culture supported, not replaced, by technology.
Strategic priorities include building compliance-by-design approaches into algorithmic development processes from inception, maintaining robust documentation of independent business decision-making, monitoring regulatory developments across relevant jurisdictions, and fostering organisational culture that treats competition law compliance as a business priority rather than an obstacle to overcome.
Competition authorities have made it clear that they will pursue anticompetitive conduct regardless of whether it occurs through traditional means or sophisticated algorithms. The companies that thrive in this environment will be those that embrace genuine compliance, deploy technology thoughtfully in support of lawful conduct, and recognise that the most sophisticated algorithm cannot substitute for principled business judgment.
DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.
Attorney Advertising.
© Goodwin 2026
Written by: Maria Belen Gravano and Stephen Mavroghenis, Goodwin