
Europe’s AI Act demands extensive "logs" - targets biometrics, bias, black boxes

Emotion recognition banned in workplaces, classrooms.

CC-BY-4.0: © European Union 2022: Source, European Parliament.

The European Parliament has formally adopted the EU AI Act – agreeing on a sweeping set of new rules that ban certain AI applications including “emotion recognition” systems in the workplace and schools. (For those searching the tortuous bowels of the EU’s many confusing websites for the formally adopted text, we’re here to help: the 459-page PDF is here.)

The rules also lay out cybersecurity requirements and strict rules on biometric use (with concessions for cybersecurity and authentication providers). As Dr Nils Rauer of law firm Pinsent Masons highlighted, under the AI Act, the area in which an AI system is applied could trigger a “high-risk” designation – think deployments across the educational sector, critical infrastructure, justice and democratic processes, for example.

See also: A bootleg API, AI’s RAM needs, cat prompts and GPU shortages: Lessons from scaling ChatGPT

He said: “Where AI systems are categorised as ‘high-risk’, the technology would need to meet mandatory requirements around issues such as risk management, data quality, transparency, human oversight and accuracy, for example, while the providers and deployers of those systems would face a range of duties, including around registration, quality management, monitoring, record-keeping, and incident reporting. Further obligations will apply to importers and distributors of high-risk AI systems.”

It also demands that deployers of high-risk AI systems "keep the logs automatically generated by that high-risk AI system (to the extent such logs are under their control) for... at least six months."

AI Act: What does it mean by "logs"?

When it comes to logs, the EU AI Act specifies that it means:

"a) recording of the period of each use of the system (start date and time and end date and time of each use);
(b) the reference database against which input data has been checked by the system;
(c) the input data for which the search has led to a match;
(d) the identification of the natural persons involved in the verification of the results"
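
The Act does not prescribe a storage format for these records. As a minimal, purely illustrative sketch – field names, the data structure and the retention check below are our assumptions, not the Act's text – a deployer's log entry covering points (a) to (d) might look like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

# Illustrative retention window for the Act's "at least six months" requirement.
RETENTION_PERIOD = timedelta(days=183)

@dataclass
class HighRiskAIUsageLog:
    """Hypothetical log record for one use of a high-risk AI system."""
    use_start: datetime                    # (a) start date and time of the use
    use_end: datetime                      # (a) end date and time of the use
    reference_database: str                # (b) database the input data was checked against
    matched_inputs: List[str] = field(default_factory=list)     # (c) input data that led to a match
    verifying_persons: List[str] = field(default_factory=list)  # (d) natural persons who verified the results

def must_still_be_kept(record: HighRiskAIUsageLog, now: datetime) -> bool:
    """Return True while the record is still inside the illustrative retention window."""
    return now - record.use_end < RETENTION_PERIOD
```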

Its (somewhat loose and non-specific, to The Stack's reading) cybersecurity requirements come as a report published on March 11 by a team from Google DeepMind, ETH Zurich, the University of Washington, OpenAI, and McGill University demonstrated the ability of hackers to extract “precise, nontrivial information from black-box production language models like OpenAI’s ChatGPT or Google’s PaLM-2”, whilst other recent publications demonstrate ongoing work on commoditising prompt injections.

The EU AI Act was first proposed by the European Commission in April 2021. After various rounds of scrutiny, the text has now been formally adopted by MEPs. The European Council must also adopt the EU AI Act for it to become EU law, but is expected to follow suit soon.

See also: OpenAI, peers warn in Red Team report that AI could support “misuse” of biology

Under the AI Act, even those behind “general purpose AI models” need to publish detailed summaries about the content used to train them. Quite how robust the assessment of these summaries, and indeed enforcement, will be remains a hugely open question, with the Act not yet on the statute books.

“[Training] data sets should… have the appropriate statistical properties, including as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used, with specific attention to the mitigation of possible biases in the data sets, that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination… especially where data outputs influence inputs for future operations (feedback loops),” Europe’s AI Act adds.
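
The Act does not say how those “appropriate statistical properties” should be verified. One illustrative sketch – the function, threshold and sample data are our assumptions, not anything the Act specifies – is to compare each group's share of the training data against the population the system is intended to be used on:

```python
from collections import Counter
from typing import Dict, Iterable

def group_representation_gap(
    group_labels: Iterable[str],
    reference_shares: Dict[str, float],
) -> Dict[str, float]:
    """
    Compare each group's share of the training data with a reference share
    (e.g. the population the system is intended to serve) and return the gap
    per group. Positive values mean over-representation. Purely illustrative;
    the Act does not prescribe any particular metric.
    """
    labels = list(group_labels)
    counts = Counter(labels)
    total = len(labels)
    return {
        group: counts.get(group, 0) / total - share
        for group, share in reference_shares.items()
    }

# Example: flag groups whose share deviates by more than 5 percentage points.
gaps = group_representation_gap(
    ["A", "A", "A", "B"],        # group label per training record (made-up data)
    {"A": 0.5, "B": 0.5},        # intended deployment population (made-up shares)
)
flagged = {g: d for g, d in gaps.items() if abs(d) > 0.05}
print(flagged)  # {'A': 0.25, 'B': -0.25}
```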

The EU AI Act was passed overwhelmingly by Parliament: 523 MEPs voted in favour, 46 voted against, and there were 49 abstentions.

Agur Jõgi, CTO, Pipedrive, told The Stack they anticipate the Act’s passage will “have a huge effect for industries across the globe, as the ‘Brussels effect’ kickstarts legislative changes across international borders.”

Ray Eitel-Porter, Responsible AI Lead at Accenture UKIA, added that “the most significant step to regulating AI in the world [is] not just the chance for companies to ensure their high-risk AI systems pass the safety test. Instead, they should take this moment to build a bedrock of responsible AI across their enterprise and foster a culture of safety and innovation – because they don’t need to be at odds with each other. Leaders can take steps to deploy AI tools governed by strong principles and controls that promote fairness, transparency, safety and privacy, for powerful technology to support their people, customers, and society in a positive way. Implementation can take at least two years for a large company, the full extent of the grace period allowed… for high-risk AI systems.”
