The EU’s Artificial Intelligence Act

Source: This post is based on the article “The EU’s Artificial Intelligence Act” published in The Hindu on 4th May 2023.

What is the News?

Members of the European Parliament have reached a preliminary deal on a new draft of the European Union’s ambitious Artificial Intelligence Act.

Why regulate Artificial Intelligence?

Artificial intelligence technologies are capable of performing a wide variety of tasks, including providing voice assistance, recommending music, driving cars, detecting cancer and even deciding whether you get shortlisted for a job.

But many of these AI tools are essentially black boxes, meaning that even those who designed them cannot explain how they generate a particular output.

For instance, complex and unexplainable AI tools have already led to wrongful arrests through AI-enabled facial recognition, and to discrimination as societal biases seep into AI outputs. Most recently, chatbots like ChatGPT have been generating versatile, human-competitive and genuine-looking content that may nonetheless be inaccurate or infringe copyright.

What is the purpose of the EU’s Draft AI Act?

The EU’s AI Act aims to bring transparency, trust and accountability to AI and create a framework to mitigate risks to the safety, health, fundamental rights and democratic values of the EU. 

What are the key features of the EU’s AI Act?

The Act defines AI as software developed with one or more specified techniques that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.

The Act’s central approach is the classification of AI technologies based on the level of risk they pose to the “health and safety or fundamental rights” of a person. There are four risk categories in the Act — unacceptable, high, limited and minimal.

The Act prohibits using AI technologies in the unacceptable risk category. These include the use of real-time facial and biometric identification systems in public spaces; systems of social scoring of citizens by governments; subliminal techniques to distort a person’s behaviour and technologies which can exploit vulnerabilities of the young or elderly.

The Act lays substantial focus on AI in the high-risk category, prescribing a number of pre- and post-market requirements for developers and users of such systems. Systems in this category include biometric identification and categorisation of natural persons, and AI used in healthcare, education, employment, law enforcement and justice delivery, among others.
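The tiered scheme described above can be pictured as a simple lookup from use case to risk category. The sketch below is purely illustrative (the enum names and the mapping are hypothetical constructs, not taken from the Act's legal text); the tier assignments follow the examples mentioned in this post.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories in the draft Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # pre- and post-market requirements apply
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping of example use cases (as named in this post) to tiers.
USE_CASE_TIERS = {
    "real-time biometric identification in public spaces": RiskTier.UNACCEPTABLE,
    "government social scoring of citizens": RiskTier.UNACCEPTABLE,
    "subliminal behaviour-distortion techniques": RiskTier.UNACCEPTABLE,
    "AI in healthcare": RiskTier.HIGH,
    "AI in employment decisions": RiskTier.HIGH,
    "AI in law enforcement": RiskTier.HIGH,
}

def is_prohibited(use_case: str) -> bool:
    """Only systems in the unacceptable tier are banned outright."""
    return USE_CASE_TIERS.get(use_case) is RiskTier.UNACCEPTABLE
```

Under this sketch, a banned practice such as government social scoring maps to the unacceptable tier, while healthcare AI is permitted but subject to the high-risk compliance requirements.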

The Act also envisages establishing an EU-wide database of high-risk AI systems and setting parameters so that future technologies or those under development can be included if they meet the high-risk criteria.

How has the AI industry reacted?

Some industry players have welcomed the legislation, but others have warned that broad and strict rules could stifle innovation. Companies have also raised concerns about the transparency requirements, fearing they could force the divulging of trade secrets.
