AI Under Watch: The EU AI Act
Artificial Intelligence (AI) is transforming many sectors and reshaping how people live, work, and interact, and its influence on society continues to grow. However, this rapid development and widespread adoption of AI raises many concerns, including bias, privacy protection, and other potential risks.
What is the EU AI Act?
To address these issues and the need for ethical and responsible AI development, the European Union (EU) has developed the EU AI Act, the world's first law that intends to create a comprehensive legal framework for regulating AI systems.
The EU AI Act was first proposed by the European Commission in April 2021 and then approved by the EU Parliament on June 14, 2023, representing a big first step toward regulating AI.
Following this agreement, the Council of the European Union gave final approval to the AI Act on May 21, 2024, signifying a critical step toward the legislation's full implementation.
Then, on August 1, 2024, the highly anticipated Artificial Intelligence (AI) Act officially came into force, marking a monumental milestone in EU (and global) legislation.
The Motivation Behind the EU AI Act
The EU has shown it is committed to developing AI rules because it is aware of the possible risks the technology brings and is dedicated to protecting people's rights. Given AI's potential to invade privacy, discriminate, and undermine human control, the EU aims to promote responsible and ethical development and use of AI by setting clear rules and standards.
Furthermore, by addressing the potential risks and promoting transparency, accountability, and human oversight, Europe aims to build confidence among individuals and encourage the adoption of AI technologies in a manner that is safe and beneficial for society.
"The AI Act is designed to ensure that AI developed and used in the EU is trustworthy, with safeguards to protect people's fundamental rights. The regulation aims to establish a harmonised internal market for AI in the EU, encouraging the uptake of this technology and creating a supportive environment for innovation and investment." - stated the European Commission in its announcement.
The Risk-Based System of the AI Act
The EU AI Act outlines a risk-based approach: the higher the risk, the stricter the rules. This approach divides AI systems into different categories depending on their potential impact:
- Unacceptable Risk - AI systems that pose a clear threat to safety, livelihoods, and rights are prohibited. This includes systems that use subliminal techniques, exploit vulnerabilities of specific groups, or allow social scoring by governments.
- High Risk - Systems used in critical sectors (e.g. healthcare, transport, law enforcement, judiciary) are subject to strict requirements. These include rigorous testing, risk management, and adherence to transparency and accountability standards.
- Limited Risk - AI with limited risk must comply with transparency obligations, such as informing users that they are interacting with an AI system.
- Minimal Risk - These systems, such as AI in video games or spam filters, have minimal obligations but must comply with general EU laws.
Some AI systems that are considered risky, such as facial recognition in public places, predictive policing tools, and social scoring systems, will be completely banned. The Act also requires transparency, i.e., informing people when the content they see is AI-generated and ensuring that illegal content is not generated.
On the other hand, AI systems that are less risky, like spam filters, will not have as many rules to follow. The goal of the AI Act is to strike a balance between controlling risky AI systems and leaving space for innovation in safer ones.
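The four tiers above can be sketched as a simple lookup. This is an illustrative sketch only: the system names and their tier assignments are taken from the examples in this article, not from a legal reading of the Act, and real classification requires analysis of the regulation itself.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers described by the EU AI Act."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements: testing, risk management, transparency"
    LIMITED = "transparency obligations (disclose AI interaction)"
    MINIMAL = "general EU law only"


# Hypothetical examples drawn from the article's descriptions of each tier.
EXAMPLE_SYSTEMS = {
    "social scoring by governments": RiskTier.UNACCEPTABLE,
    "AI-assisted medical diagnosis": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}


def obligations(system: str) -> str:
    """Return a one-line summary of the obligations for an example system."""
    tier = EXAMPLE_SYSTEMS[system]
    return f"{system}: {tier.name} risk -> {tier.value}"


if __name__ == "__main__":
    for name in EXAMPLE_SYSTEMS:
        print(obligations(name))
```

The point of the mapping is that obligations attach to the tier, not to the individual system: once a use case is classified, its compliance burden follows mechanically.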
Prohibited and Regulated AI Systems
Certain AI systems considered risky are completely banned. These include:
- Ban on Emotion-Recognition AI - The use of AI to identify people's emotions in policing, schools, and workplaces is prohibited. While AI-based facial detection and analysis have not been banned, contention surrounding this matter is anticipated.
- Ban on Real-Time Biometrics and Predictive Policing - Facial recognition and other biometric tools cannot be used for tracking individuals or predicting individual behavior in public spaces. This has sparked controversy, with some arguing these technologies are necessary for ensuring public safety and efficient law enforcement.
- Ban on Social Scoring - The practice of social scoring, which uses individuals' social behavior data to create generalizations and profiles, is prohibited. This aligns with the EU's commitment to protecting individual freedoms and preventing discrimination.
New Restrictions and Compliance Requirements
The Act also imposes new rules and restrictions for other AI applications to ensure they operate ethically and transparently:
- New Restrictions for Generative AI - The Act introduces new rules for generative AI, including requirements around the use of copyrighted material in training large language models.
- Stricter Regulations on Recommendation Algorithms - The EU AI Act enforces stricter regulations for recommendation algorithms used on social media platforms. It classifies these algorithms as "high risk," which means they will be closely monitored and examined more carefully.
Regulations for General-Purpose AI Models
The AI Act also addresses general-purpose AI systems, such as large language models, which can be used in a variety of applications. These models are subject to specific requirements to ensure they are used responsibly. This includes:
- Transparency and Disclosure - Developers must provide clear information about the capabilities and limitations of general-purpose AI models. Users must be informed when they are interacting with such models.
- Risk Management - Developers of general-purpose AI models are required to establish comprehensive risk management protocols aimed at identifying and mitigating potential harms. This entails conducting regular assessments and implementing updates to effectively address emerging risks.
- Ethical Use of Data - Training data for general-purpose AI models must be ethically sourced and compliant with data protection laws. This ensures the prevention of biases and the preservation of user privacy within the models.
The EU AI Act Enforcement and Penalties
Failure to comply with the EU AI Act may result in fines of up to €35 million or 7% of global annual turnover for the most serious violations, down to €7.5 million or 1.5% of turnover for lesser breaches, depending on the nature of the violation and the size of the company. These fines are higher than those set by the General Data Protection Regulation (GDPR), under which non-compliance may result in fines of up to €20 million or 4% of an organization's annual revenue.
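The arithmetic behind these caps can be made concrete. The sketch below assumes a "whichever is higher" rule, i.e., the effective cap is the greater of the fixed amount and the turnover percentage; the function name and example turnover figures are illustrative, and this is not legal advice.

```python
def max_fine(annual_turnover_eur: float,
             fixed_cap_eur: float,
             turnover_pct: float) -> float:
    """Upper bound of a fine under a 'whichever is higher' rule:
    the greater of a fixed cap and a percentage of global annual turnover.
    Illustrative arithmetic only.
    """
    return max(fixed_cap_eur, annual_turnover_eur * turnover_pct)


if __name__ == "__main__":
    # Most serious violations: up to EUR 35 million or 7% of turnover.
    # For a firm with EUR 1 billion turnover, the percentage dominates.
    print(max_fine(1_000_000_000, 35_000_000, 0.07))
    # For a smaller firm with EUR 100 million turnover, the fixed cap dominates.
    print(max_fine(100_000_000, 35_000_000, 0.07))
```

This structure explains why the Act's fines bite harder on large companies than the GDPR's: for any firm with turnover above €500 million, the 7% branch exceeds the fixed €35 million cap.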
Aligning with the EU AI Act: ISO/IEC 42001
While the EU AI Act emphasizes responsible development and ethical use of AI, businesses aiming to comply with the Act can benefit from implementing a robust Artificial Intelligence Management System (AIMS). This is where the ISO/IEC 42001 standard comes in.
ISO/IEC 42001 provides a framework for organizations to establish, implement, maintain, and continually improve an AIMS. It guides organizations on developing responsible AI practices that address key aspects aligned with the EU AI Act, such as risk management, transparency and accountability, data governance, and human oversight.
PECB offers ISO/IEC 42001 training courses that equip individuals with the competencies needed to plan, develop, implement, maintain, and improve an AIMS within organizations.
Conclusion
The EU AI Act breaks new ground by prioritizing ethical considerations and responsible development in AI regulation. By implementing a risk-based system, the Act aims to ensure that AI development is both safe and ethical, protecting individuals' rights while fostering innovation. Europe's rules for AI could significantly influence AI providers to act responsibly and maintain transparency, provided regulators have sufficient technical resources to enforce these rules effectively.
About the Author
Vlerë Hyseni is the Senior Digital Content Specialist at PECB. She is in charge of doing research, creating, and developing digital content for a variety of industries. If you have any questions, please do not hesitate to contact her at: support@pecb.com.