
This is "The AI Economy," a weekly LinkedIn-first newsletter about AI's influence on business, work, society, and tech, written by Ken Yeung. Sign up here.
While U.S. lawmakers continue to hem and haw over how to regulate the tech industry, their colleagues in the European Union have once again passed historic regulations aimed at protecting human rights while claiming to foster innovation. In this special edition of “The AI Economy,” learn about the Artificial Intelligence Act, adopted on Wednesday with overwhelming support within the EU Parliament.
The Prompt
Europe first passed the General Data Protection Regulation (GDPR) in 2016. Then came its Digital Markets Act (DMA) in 2022. Now, the bloc has agreed on legislation to ensure AI doesn't harm humans. In a 523-46 vote, with 49 abstentions, the EU Parliament endorsed rules banning certain AI applications and mandating that high-risk systems adhere to obligations protecting health, safety, fundamental rights, the environment, democracy, and the rule of law.
Though the law broadly regulates AI, its rules vary depending on the level of risk and impact each individual system creates.
Proponents of the law say these protections will enhance the European Union’s competitiveness, ensure a safe and trustworthy society, and promote digital innovation.
As such, EU member countries are required to establish regulatory sandboxes giving startups and small- and medium-sized businesses the opportunity to develop and train their AI models before releasing them to the public.
Read the full text of the AI Act
Banned Applications
According to the EU, AI applications that threaten citizens' rights are prohibited, including biometric systems based on sensitive characteristics and the untargeted scraping of facial images from the internet or CCTV footage. Emotion recognition technology in the workplace and schools, social scoring, predictive policing, and AI that manipulates human behavior are also banned.
The Levels of AI Risk

The EU acknowledges many AI systems will likely "pose minimal risk." Nevertheless, it has broken the technology into four distinct categories. The lowest category comprises systems that can operate freely without restriction.
Above that is "AI with specific transparency obligations," which applies to systems such as bots that could impersonate humans. Systems in this category are permitted under the AI Act but must adhere to the EU's information and transparency obligations.
At the upper echelon are two levels where strict regulations would be imposed, or systems would be banned.
High risk: Systems that negatively affect safety or fundamental rights. These are divided into two categories: AI used in toys, aviation, cars, medical devices, and other products that fall under the EU's product safety legislation; and AI systems that must be registered in an EU database, including those used for:
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Assistance in legal interpretation and application of the law
Unacceptable risk: Systems considered a threat to people; these are banned outright, including:
- Cognitive behavioral manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behavior in children
- Social scoring: classifying people based on behavior, socioeconomic status or personal characteristics
- Biometric identification and categorization of people
- Real-time and remote biometric identification systems, such as facial recognition
Obligations For High-Risk AI
To operate in the European Union, developers of high-risk AI will need to adhere to certain requirements, including the following:
- Use high-quality training, validation and testing data
- Establish documentation and design logging features to ensure traceability and suitability
- Provide an appropriate degree of transparency and educate users on how to use the system
- Enact human oversight
- Ensure robustness, accuracy and cybersecurity
The new law lays out additional obligations for providers and even users.

Penalties for failing to comply with these obligations can be severe: according to one report, companies found to violate the AI Act could face fines of up to 7 percent of global turnover or €35 million, whichever is higher.
Law Enforcement AI Carveout
The AI Act does include some exemptions for AI usage by law enforcement, though in limited situations. Using biometric identification in "real time" is permitted if "strict safeguards are met," e.g., its use is limited in time and geographic scope and police obtain prior judicial or administrative authorization. Using such a system after the fact, on the other hand, is deemed a high-risk use case and requires judicial authorization linked to a crime.
Push for Transparency
All general-purpose AI systems and the models they’re based on must meet EU standards for transparency, including compliance with the bloc’s copyright law.
Artificial or manipulated images, audio or video content (i.e., deepfakes) must be clearly labeled.
What’s Next
Although the AI Act text has been approved by the EU Parliament, several additional steps must be completed before it becomes fully effective, including approval from the EU Council. The law is expected to be "fully applicable" 24 months after entering into force, though some parts will take effect sooner:
- The ban on AI systems posing unacceptable risks will apply six months after the entry into force
- Codes of practice will apply nine months after entry into force
- Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force
- High-risk systems will have more time to comply, as the obligations concerning them will become applicable 36 months after entry into force
It's not a perfect law, but it offers a template on which other governments can build to regulate AI. It continues to face opposition from politicians who argue it will stifle innovation in the bloc and from activists who argue it doesn't go far enough to protect human rights.
Read More
- How the U.S., EU and China are going about regulating AI (Bloomberg)
- How the AI Act impacts the enterprise (CIO)
- What comes next after the passage of the EU AI Act (The Verge)
Quote This
“We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected…We ensured that human beings and European values are at the very centre of AI’s development.”
— Brando Benifei, MEP for Italy and Internal Market Committee co-rapporteur
“While EU policymakers are hailing the AI Act as a global paragon for AI regulation, the legislation fails to take basic human rights principles on board.”
— Mher Hakobyan, Amnesty International’s Advocacy Advisor on Artificial Intelligence
End Output
Thanks for reading. Be sure to subscribe so you don’t miss any future issues of this newsletter.
Did you miss any AI articles this week? Fret not; I’m curating the big stories in my Flipboard Magazine, “The AI Economy.”

Connect with me on LinkedIn and check out my blog to read more insights and thoughts on business and technology.
Do you have a story you think would be a great fit for “The AI Economy”? Awesome! Shoot me a message – I’m all ears!
Until next time, stay curious!
Subscribe to “The AI Economy”
New issues published on Fridays, exclusively on LinkedIn