Artificial Intelligence (AI) is transforming many sectors and changing the way people live, work, and interact, and its influence on society is only growing stronger. However, the rapid spread and development of AI raises many concerns, including bias, privacy protection, and other potential risks.
To address these issues and the need for ethical and responsible AI development, the European Union (EU) has developed the EU AI Act, the world’s first law that aims to create a comprehensive legal framework for regulating AI systems.
The EU AI Act was first proposed by the European Commission in April 2021 and approved by the European Parliament on June 14, 2023, representing a major first step toward regulating AI.
Following this agreement, the Council of the European Union gave final approval to the AI Act on May 21, 2024, signifying a critical step toward the legislation’s full implementation.
Then, on August 1, 2024, the highly anticipated AI Act officially came into force, marking a monumental milestone in EU (and global) legislation.
The EU has shown it is committed to developing AI rules because it is aware of the possible risks the technology brings and is dedicated to protecting people’s rights. Given AI’s potential to invade privacy, discriminate, and undermine human control, the EU aims to promote the responsible and ethical development and use of AI by setting clear rules and standards.
Furthermore, by addressing the potential risks and promoting transparency, accountability, and human oversight, Europe aims to build confidence among individuals and encourage the adoption of AI technologies in a manner that is safe and beneficial for society.
“The AI Act is designed to ensure that AI developed and used in the EU is trustworthy, with safeguards to protect people’s fundamental rights. The regulation aims to establish a harmonised internal market for AI in the EU, encouraging the uptake of this technology and creating a supportive environment for innovation and investment,” stated the European Commission in its announcement.
The EU AI Act outlines a risk-based approach: the higher the risk, the stricter the rules. It divides AI systems into different categories depending on their potential impact:
Some AI systems considered too risky, such as facial recognition in public places, predictive policing tools, and social scoring systems, will be completely banned. The Act also requires transparency, e.g., informing people when the content they see is AI-generated and ensuring that illegal content is not generated.
On the other hand, AI systems that pose less risk, like spam filters, will face fewer rules. The goal of the AI Act is to strike a balance between controlling risky AI systems and still leaving room for innovation in safer ones.
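The tiered logic described above can be sketched as a small lookup. This is an illustrative sketch only: the tier names and example systems are assumptions drawn from this article’s summary, not a legal mapping of the Act’s categories.

```python
# Illustrative sketch of the Act's risk-based approach (not legal advice).
# Tier assignments follow the examples given in this article.
RISK_TIERS = {
    "social scoring": "unacceptable",              # banned outright
    "predictive policing": "unacceptable",         # banned outright
    "facial recognition in public": "unacceptable",
    "spam filter": "minimal",                      # few obligations
}

def obligations(system: str) -> str:
    """Map an example system to a rough description of its treatment."""
    tier = RISK_TIERS.get(system, "unclassified")
    return {
        "unacceptable": "prohibited",
        "minimal": "largely unregulated",
    }.get(tier, "assess case by case")

print(obligations("spam filter"))    # largely unregulated
print(obligations("social scoring")) # prohibited
```

In practice the Act also defines intermediate tiers (e.g., high-risk systems with conformity obligations), so any real assessment must go case by case rather than through a fixed table like this.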
Certain AI systems considered risky are completely banned. These include:
The Act also imposes new rules and restrictions for other AI applications to ensure they operate ethically and transparently:
The AI Act also addresses general-purpose AI systems, such as large language models, which can be used in a variety of applications. These models are subject to specific requirements to ensure they are used responsibly. This includes:
Failure to comply with the EU AI Act may result in fines ranging from €7.5 million or 1.5% of global turnover up to €35 million or 7% of global turnover, depending on the nature of the violation and the size of the company. These fines exceed those set by the General Data Protection Regulation (GDPR), under which non-compliance may result in fines of up to €20 million or 4% of an organization’s annual revenue.
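As a rough illustration of how the top-tier penalty scales, the sketch below assumes the common reading that the applicable maximum for a large company is the fixed cap or the turnover percentage, whichever is higher; the figures used are examples, not a compliance calculation.

```python
def max_fine_eur(annual_turnover_eur: float,
                 cap_eur: float = 35_000_000,
                 pct: float = 0.07) -> float:
    """Illustrative upper bound for the top penalty tier: the fixed
    cap or the percentage of global annual turnover, whichever is
    higher (assumption based on this article's summary)."""
    return max(cap_eur, pct * annual_turnover_eur)

# For a company with €1 billion in global turnover, 7% of turnover
# (€70 million) exceeds the €35 million cap, so the percentage governs.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

For smaller companies the fixed cap dominates: at €100 million turnover, 7% is only €7 million, so the €35 million figure sets the ceiling under this reading.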
While the EU AI Act emphasizes the responsible development and ethical use of AI, businesses aiming to comply with the Act can benefit from implementing a robust Artificial Intelligence Management System (AIMS). This is where the ISO/IEC 42001 standard comes in.
ISO/IEC 42001 provides a framework for organizations to establish, implement, maintain, and continually improve an AIMS. It guides organizations on developing responsible AI practices that address key aspects aligned with the EU AI Act, such as risk management, transparency and accountability, data governance, and human oversight.
PECB offers ISO/IEC 42001 training courses that equip individuals with the competencies needed to plan, develop, implement, maintain, and improve an AIMS within organizations. These training courses include:
The EU AI Act breaks new ground by prioritizing ethical considerations and responsible development in AI regulation. By implementing a risk-based system, the Act aims to ensure that AI development is both safe and ethical, protecting individuals’ rights while fostering innovation. Europe’s AI rules could go a long way toward holding AI providers responsible for maintaining transparency, provided regulators have sufficient technical resources to enforce the rules effectively.
About the Author
Vlerë Hyseni is the Senior Digital Content Specialist at PECB. She is in charge of researching, creating, and developing digital content for a variety of industries. If you have any questions, please do not hesitate to contact her at: support@pecb.com.
©2025 Professional Evaluation and Certification Board. All rights reserved.