The European Union (EU) has taken a significant step in artificial intelligence (AI) regulation with the approval of the Artificial Intelligence Act (AI Act), a groundbreaking piece of legislation aimed at ensuring the safe, ethical, and responsible development and use of AI within its territory. This unique regulatory framework not only establishes safeguards to protect fundamental rights but also profoundly impacts how tech companies must manage their AI models and systems.
Risk levels of AI applications
The AI Act classifies AI applications according to their level of risk to society and individual rights. This tiered structure allows for an adaptive approach to regulation (a simplified sketch follows the list below):
- Unacceptable Risk: Completely prohibited uses, such as social scoring systems akin to those in China or the use of AI to exploit user vulnerabilities.
- High Risk: Includes systems such as medical diagnostic tools, hiring-process tools, and biometric identification algorithms. These must meet strict requirements, such as:
  - Conformity assessments before deployment
  - Guarantees of data security and quality
  - Transparency in how they operate
- Limited Risk: Refers to applications that must provide clear information to the user, such as chatbots or recommendation systems.
- Minimal Risk: AI tools such as video games or spam filters, which require little to no additional regulation.
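To make the tiers concrete, here is a minimal sketch in Python of how a company might label an internal AI inventory by risk tier. The tier names mirror the Act; the example system names, the EXAMPLE_INVENTORY mapping, and the requires_conformity_assessment helper are illustrative assumptions, not anything prescribed by the regulation.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g. social scoring)
    HIGH = "high"                  # strict requirements (e.g. medical diagnostics, hiring)
    LIMITED = "limited"            # transparency duties (e.g. chatbots)
    MINIMAL = "minimal"            # little to no additional regulation (e.g. spam filters)

# Hypothetical mapping of internal systems to tiers, for illustration only;
# a real classification requires legal analysis of the Act and its annexes.
EXAMPLE_INVENTORY = {
    "social-scoring-engine": RiskTier.UNACCEPTABLE,
    "cv-screening-model": RiskTier.HIGH,
    "support-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

def requires_conformity_assessment(system_name: str) -> bool:
    """High-risk systems must pass a conformity assessment before deployment."""
    return EXAMPLE_INVENTORY[system_name] is RiskTier.HIGH

print(requires_conformity_assessment("cv-screening-model"))  # True
```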
Recommendations for safe practices under the AI Act
One of the cornerstones of this law is the obligation of transparency and oversight to ensure that companies adopt ethical and safe practices. Among the main requirements are:
- Model Registry: Companies should maintain a centralised repository of their AI models (see the sketch after this list).
- Meaningful Explanations: Users have the right to receive clear explanations about how and why an AI makes certain decisions, fostering trust and understanding of the system.
- Continuous Supervision: National authorities and a European AI Office have been established to monitor the implementation of these regulations and adapt the rules as technology evolves.
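As one way to picture the model-registry obligation above, the sketch below records each model in a small data structure. Every field name (risk_tier, conformity_assessed, explanation_doc, and so on) is a hypothetical choice made for illustration; the Act does not prescribe a registry format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    """One entry in a hypothetical internal AI model registry."""
    name: str
    risk_tier: str             # e.g. "high", "limited", "minimal"
    purpose: str               # intended use, in plain language
    conformity_assessed: bool  # required before deployment for high-risk systems
    last_reviewed: date        # supports continuous supervision
    explanation_doc: str = ""  # link to a user-facing explanation of decisions

# Example entry for a high-risk hiring tool (all values invented).
registry: list[ModelRecord] = [
    ModelRecord(
        name="cv-screening-model",
        risk_tier="high",
        purpose="Rank job applications for recruiter review",
        conformity_assessed=True,
        last_reviewed=date(2024, 6, 1),
        explanation_doc="https://example.com/docs/cv-screening-explainer",
    )
]
```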
Prohibitions to protect fundamental rights
The Act prohibits applications considered too dangerous to fundamental rights, such as:
- Real-time biometric identification systems in public spaces, with narrow exceptions such as specific terrorist threats.
- AI designed to manipulate personal decisions or exploit the vulnerabilities of specific groups.
- Use of AI for behavioural control through social scoring systems, which can unfairly discriminate against individuals based on personal characteristics or past behaviour.
These restrictions seek to prevent the abuse of technology, promoting its ethical and respectful use.
How will the AI Act impact technology companies?
The law introduces stiff fines for non-compliance, reaching up to €35 million or 7% of annual global turnover, whichever is higher, for the most serious infringements; a quick illustration of this exposure follows the list below. Companies must therefore restructure their operations around AI by adopting robust compliance practices. This includes:
- Proactive Risk Management: Identify risk areas in your systems and ensure they are aligned with AI Act requirements.
- Investments in Governance: Implement specialised teams and tools to continuously monitor compliance with the law.
- Technological Adaptation: Ensure that AI systems are auditable and can be easily integrated into proposed monitoring mechanisms.
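To illustrate the financial stakes, here is a back-of-the-envelope sketch of maximum exposure under the Act's headline penalty for the most serious infringements (€35 million or 7% of worldwide annual turnover, whichever is higher). The €2 billion turnover figure is invented for the example.

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Maximum fine for the most serious AI Act infringements:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * annual_global_turnover_eur)

# Hypothetical company with EUR 2 billion in annual global turnover:
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```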
This new framework may entail high upfront costs, but it offers long-term benefits by building consumer confidence and aligning businesses with the demands of the global marketplace.
Regulating AI for responsible innovation
While the primary objective of the AI Act is to ensure safety and ethics, it is also designed to encourage technological innovation. By establishing a regulated and predictable environment, companies can develop solutions with greater confidence, knowing that they meet clear standards. In addition, the EU seeks to position itself as a global leader in AI regulation, creating a model for other regions to follow.
Entry into force of the Artificial Intelligence Act
The law will enter into force in mid-2024, with transition periods ranging from 6 to 36 months depending on the risk category and specific requirements. This gives companies time to adapt, but also underlines the need to start compliance efforts immediately.
Opportunities for Technology Mergers and Acquisitions (M&As)
For advisors specialising in technology mergers and acquisitions, the AI Act introduces new and strategic dynamics. For example:
- Extended Due Diligence: AI compliance reviews will be crucial to assess the value and risks associated with target companies.
- Promoting Compliant Startups: Startups that develop technologies compatible with the new regulations may attract more interest from investors and buyers.
- Asset Restructuring: Companies with non-compliant high-risk AI systems could face devaluations or strategic adjustments to their operations.
You can check our M&A Academy section, where we discuss how to prepare your company for due diligence and avoid legal issues.
The future of AI in the European Union
The approval of the AI Act by the European Union represents a decisive step towards a future where artificial intelligence not only drives innovation but also respects fundamental rights and values. For tech companies, this regulatory framework presents both challenges and opportunities, requiring a firm commitment to transparency, security, and accountability. Adapting to this new reality is not optional, but essential for long-term competitiveness.
At the Tech M&A area of Baker Tilly, we advise our clients in the process of buying and selling technology companies. We ensure that your organization complies with all legal terms before starting a business sale process. Contact our M&A experts with no obligation.