You know the moment. Someone shares an article in the group chat. "EU AI Act: what you need to know." You open it. You read three paragraphs of legal prose. You close it again.
Understandable. But this is going to affect you.
The essentials in three sentences
The EU AI Act classifies AI applications based on risk. The higher the risk, the stricter the requirements. Most business applications fall into the "limited risk" category and require transparency: you must make clear that AI is being used and be able to explain what it does.
That is it. No hundred-page legal framework required. Three questions cover ninety percent of what you need to know.
Three questions
Does this system make decisions that directly affect people? Think: assessing job applications, estimating creditworthiness, granting access to essential services. If the answer is yes, the system falls into the high-risk category. That means human oversight, documentation, and transparency become legal obligations.
Do your customers and employees know they are dealing with AI? The Act requires you to communicate this. Not in small print. Clear communication. When someone interacts with an AI system, that must be visible.
Can you explain why the system does what it does? Not technically. In plain language. "The system classifies complaints by urgency based on keywords and historical patterns" is sufficient. "It uses AI" is not.
If you can answer these three questions, you are largely compliant. The details matter, but the foundation is simpler than most lawyers make it appear.
Why it matters (even if you are not a lawyer)
The AI Act is legislation. But the impact is not in the fines. The impact is in trust.
Organisations that can explain how their AI works, why certain choices were made, and where the limits lie are the ones that build trust with customers, employees, and partners. Organisations that cannot are the ones building risk.
The interesting shift: the AI Act forces conversations that organisations should be having anyway. Who is responsible for the output? Where does the system's autonomy end? How do you know it is still working correctly? These are not compliance questions. They are governance questions that make every AI project better.
What you can do now
Four steps you can take this month.
Inventory. Which AI systems are running in your organisation? Including the unofficial ones: the Shadow AI that employees use on their own initiative. You cannot be compliant with systems you do not know exist.
Classify. For each system: high risk (decisions that directly affect people) or limited risk (most business applications)? The classification determines which requirements apply.
Document. For each system, a brief description: what it does, what data it uses, who is responsible, and where the limits are. Not a thirty-page report. One page per system is enough.
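To make that concrete, here is one way such a one-page record could look if you keep it as structured data. The field names and the example entry are illustrative, not a template prescribed by the Act; the example reuses the complaints system from the third question.

```python
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    """A one-page description of a single AI system."""
    name: str
    purpose: str           # what it does, in plain language
    data_used: list[str]   # what data it uses
    owner: str             # who is responsible for the output
    limits: str            # where the system stops and a person takes over
    risk_class: str        # "high risk" or "limited risk"


# Example entry for the complaint-classification system mentioned earlier:
complaints_triage = AISystemRecord(
    name="Complaints triage",
    purpose="Classifies complaints by urgency based on keywords and historical patterns",
    data_used=["complaint text", "historical handling data"],
    owner="Customer service lead",
    limits="Suggests an urgency label only; a person confirms before escalation",
    risk_class="limited risk",
)
```

One record like this per system answers the classification question and most of the documentation question in a form you can keep up to date.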
Build it in. The most effective compliance is compliance that lives inside the product, not pasted on top. Every automation you build has three layers: what it does, where it stops, and why it does what it does. If you include that from the start, compliance becomes a by-product of good design.
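A minimal sketch of what "building it in" could look like. The keywords, the escalation rule, and the wording are invented for illustration; the point is that the three layers live in the code itself rather than in a separate report.

```python
def triage_complaint(text: str) -> dict:
    """A toy complaint-triage step with the three layers built in."""
    urgent_keywords = {"outage", "data breach", "deadline", "injury"}
    hits = sorted(w for w in urgent_keywords if w in text.lower())

    # Layer 1: what it does. A keyword-based urgency label.
    label = "urgent" if hits else "routine"

    # Layer 2: where it stops. The system only suggests; urgent cases
    # are routed to a person who confirms before anything escalates.
    requires_human_confirmation = label == "urgent"

    # Layer 3: why it does what it does. A plain-language explanation
    # that can be logged, audited, and shown on request.
    explanation = (
        f"Flagged as urgent because the text mentions: {', '.join(hits)}."
        if hits
        else "No urgency keywords found, so the complaint was marked routine."
    )

    return {
        "label": label,
        "requires_human_confirmation": requires_human_confirmation,
        "explanation": explanation,
        "source": "AI-assisted triage",  # the transparency signal for question two
    }
```

The explanation string is the same sentence you would use to answer the third question, and the confirmation flag is the human oversight a high-risk classification would demand. Compliance as a by-product of design, not a layer pasted on top.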
Most organisations wait until enforcement begins and then handle it under pressure. The smart ones handle it now, as part of how they design AI. It costs less, it produces better systems, and it builds the trust you will need when the regulator calls.