First requirements of AI Act come into effect as Clifford Chance prepares for Paris AI Fringe

Yesterday (2 February) the first requirements under the European Union AI Act came into effect, banning AI systems that involve ‘prohibited AI practices’ and requiring providers and deployers of AI systems to ensure that those operating them have sufficient AI literacy.

Guidance from the European Commission is, to the frustration of many, still pending, and companies have been designing and implementing their own strategies and compliance plans in its absence.

Law firms are engaging more than ever in knowledge-sharing forums: Clifford Chance is preparing for the global AI Action Summit in Paris, and on 11 February at the satellite AI Fringe, Dessislava Savova, partner and head of Clifford Chance’s Continental Europe tech group, will moderate a discussion on how to deliver trustworthy AI in challenging times. Clifford Chance is a partner of the Fringe event. The panel includes Brendan Kelleher, chief compliance officer at SoftBank; Laurent Daudet, co-CEO and co-founder of LightOn; and Michel Combot, director of technology and innovation at CNIL. You can register to attend here: https://lnkd.in/ebwFhuKm

The EU AI Act came into force on 1 August 2024, and its requirements are taking effect under a staggered timeline, with the majority of its provisions applying from 2 August 2026. The Act covers AI systems placed on the market or used in the EU, regardless of where the provider is established.

The literacy requirements oblige providers and deployers of AI systems to take suitable measures to ensure that their staff, and anyone else involved in operating their AI systems, have sufficient skills, knowledge and understanding, and are aware of the risks and harm AI can cause.

Prohibited AI practices, meanwhile, are practices considered harmful and abusive, contradicting Union values, the rule of law and fundamental rights. The prohibitions apply to providers and deployers of AI systems and ban AI systems that are manipulative or deceptive; that exploit vulnerabilities (eg age or socio-economic status); that ‘score’ people based on their behaviour or personality characteristics; that profile people to predict criminal behaviour; that create or expand facial recognition databases through untargeted scraping of the internet or CCTV; that infer emotions in workplaces and educational settings; or that categorise people based on the likes of race or political leaning.

For more on the bans that came into effect yesterday, this paper from Mayer Brown is helpful: https://www.mayerbrown.com/en/insights/publications/2025/01/eu-ai-act-ban-on-certain-ai-practices-and-requirements-for-ai-literacy-come-into-effect

To find out more about the AI Fringe, visit https://aifringe.org/