AI Regulation
Inside the EU AI Act: Exclusive Insights from Lead Author, Gabriele Mazzini

Apr 10, 2025 | 15 min read
On August 1, 2024, the EU AI Act officially came into force, establishing the world’s first comprehensive legal framework for regulating AI technology. In this exclusive interview, we speak with Gabriele Mazzini, the architect and lead author of the Act, to gain an insider’s perspective on its development. Mazzini offers a behind-the-scenes look at the complex policy-writing process, discussing how various stakeholders were consulted, and how consensus was reached on the Act’s risk-based approach. He also provides crucial advice for business leaders navigating compliance, shares important updates since the law took effect, and discusses the global implications of the Act. Most importantly, Mazzini reassures companies that now is not the time to panic, but to prepare for the future of AI regulation.
What motivated you to take on the role of the lead author of the EU AI Act? How did your background in law influence your policy-writing process?
I realized from the get-go that AI policy was fascinating. I have been passionate about it from the beginning, notably in trying to understand the intersection between AI as a technology and law as a tool to govern technology. I drafted a fairly comprehensive paper on the intersection between AI and EU law in 2018, well before the Commission started working on the AI Act. At the time, I was working in a Commission department that was not the one that ultimately led the work on the Act; it was mostly focused on the liability implications of AI, and we were reflecting on whether the EU’s liability regime needed to change to accommodate AI. My background in law, and the effort I put into understanding the complexity of the intersection between AI and EU law, was essential for the work I did afterwards on the AI Act. When working in policymaking as a regulator, it is essential to think holistically, especially in a field like AI, where the implications are manifold and broad and where regulatory action takes the form of a horizontal legal framework like the AI Act, which applies across all sectors.
How did you engage with various stakeholders during the development process? What role did their input play in shaping the Act?
It’s a privilege to interact with many stakeholders as a policymaker and to listen to many different views. You also start seeing how society views your work, and whether people see opportunities or risks. At the same time, it’s a major responsibility, because you have to make sure that whatever choices you make as a policymaker are grounded in facts and evidence and that you have, as far as possible, an up-to-date understanding of the matter you regulate.
It’s both a privilege and a responsibility. I’ve always treated that role with great respect, not as a tick-the-box exercise where the job is done after meeting X number of stakeholders; consulting and engaging with stakeholders is much more than that. On an individual basis, I’ve always had an open-door policy and was willing to meet with whoever was interested in talking to me. The institution as a whole has, of course, also engaged with stakeholders in a structured way.
This goes back to a time when the Act was not even in the conception phase. The Commission started engaging with stakeholders as early as 2018 and 2019, when it set up an expert group on artificial intelligence composed of 52 individuals from different backgrounds: industry, academia, NGOs, and civil society. That group already gave a broad perspective on the emergence of AI and its policy implications. It also developed the Ethics Guidelines for Trustworthy AI, which were a deliverable not of the European Commission but of this separate expert group. That work initiated a structured dialogue between the European Commission and stakeholders.
That work was complemented by the establishment of an online platform, the AI Alliance, where citizens and any interested party could provide feedback and suggestions. Another important set of consultations took place after the adoption of the White Paper on Artificial Intelligence. Before the Commission came up with the actual legal framework in 2021, it adopted the White Paper in February 2020, essentially to put forward a number of potential ideas for the ultimate draft legal framework and to catalyze feedback on them. That was another important way we consulted widely with stakeholders.
Can you share any particularly challenging moments during the writing process? How did you balance competing interests and priorities to reach a consensus?
No process is perfect. It’s challenging to deal with a legal framework this complex and large and to ensure everyone fully understands what you are trying to do. Any stakeholder typically looks at the unfolding policy work from a particular perspective, linked to the needs and interests they represent. When you are trying to build something horizontal, the input you receive from individual stakeholders does not always fit the overall picture. So the skill of the policymaker is to merge those narrow perspectives into the ultimate goal, which in this case is a broader framework.
What led to the risk-based framework of the EU AI Act?
It was pretty clear to me from the beginning that regulating every AI application, or AI technology as such, did not make sense. At the same time, even for those applications that did deserve regulation, it did not seem warranted to apply the same type of rules to all of them. Hence the idea of a ‘pyramid’-like approach tailored to the actual use case.
This idea was quite fascinating because we realized that we did not want to regulate AI as a technology, nor to regulate every AI application as if AI always creates risks. To create a balanced legal framework that does not hinder development and intervenes only when necessary, you need to focus on the application level and the use case. The risk-based approach was exactly that solution: depending on the type of risk an application generates, the rules differ. We identified three risk levels where binding legal rules apply, plus a fourth level for which no binding rules are foreseen but certain forms of voluntary compliance are possible. Of course, this choice was not carved in stone; there is no ontological value in the risk levels, and they could have been articulated differently. But I think it was an interesting and groundbreaking idea.
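To make the tiering concrete, here is a minimal illustrative sketch in Python. It is not taken from the Act itself: the tier labels, the example use cases, and the `triage` helper are simplified assumptions, and a real classification depends on the Act’s annexes and legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified four-tier model inspired by the AI Act's risk pyramid."""
    UNACCEPTABLE = "prohibited practices (e.g. social scoring)"
    HIGH = "binding requirements (e.g. conformity assessment)"
    LIMITED = "transparency obligations (e.g. disclosing chatbots)"
    MINIMAL = "no binding rules; voluntary codes of conduct possible"

def triage(use_case: str) -> RiskTier:
    """Toy classifier mapping a use-case description to a risk tier.

    The keyword matching is purely illustrative; an actual assessment
    requires legal review against the Regulation's annexes.
    """
    text = use_case.lower()
    if "social scoring" in text:
        return RiskTier.UNACCEPTABLE
    if any(k in text for k in ("hiring", "credit scoring", "medical")):
        return RiskTier.HIGH
    if "chatbot" in text:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("CV screening for hiring"))  # RiskTier.HIGH
```

The point of the pyramid, as Mazzini describes it, is visible even in this toy: most use cases fall through to the minimal tier, and binding obligations attach only where the use case raises them.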
The EU AI Act officially came into force on August 1st. What significant updates or events have unfolded since then that business leaders should take note of?
The fact that the Act has entered into force doesn’t mean it is immediately applicable. The Act is law, so it is binding, but it does not apply in its entirety until three years after entry into force.
There is a so-called transition period. The first rules companies need to comply with are the prohibitions, the top of the risk pyramid, if you like, which apply from 2 February 2025. The second set of rules concerns general-purpose AI models and applies from 2 August 2025, one year after entry into force. A year after that, on 2 August 2026, all the other rules of the AI Act become applicable, except for certain provisions regarding high-risk systems, which follow in 2027.
Business leaders need to understand the timeline in which the rules become applicable.
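As an illustrative aid, here is a minimal sketch of how a compliance team might encode that phase-in schedule. The milestone dates are those set out in the final Regulation; the data structure and the `upcoming` helper are hypothetical, not an official tool.

```python
from datetime import date

# Phase-in milestones of Regulation (EU) 2024/1689 (the AI Act).
AI_ACT_MILESTONES = {
    date(2024, 8, 1): "entry into force",
    date(2025, 2, 2): "prohibitions apply",
    date(2025, 8, 2): "rules on general-purpose AI models apply",
    date(2026, 8, 2): "most remaining provisions apply",
    date(2027, 8, 2): "extended deadline for certain high-risk provisions",
}

def upcoming(today: date) -> list[str]:
    """Return the milestones that have not yet passed, in order."""
    return [f"{d.isoformat()}: {label}"
            for d, label in sorted(AI_ACT_MILESTONES.items()) if d >= today]

for line in upcoming(date(2025, 4, 10)):  # this article's publication date
    print(line)
```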
What has happened since the publication of the Act is that administrations, both in the Commission and in the Member States, have started to set up internal processes and structures to ensure enforcement. Business leaders, notably those who may be concerned by the rules applicable to general-purpose AI models, should pay attention to the work that has already started on the Code of Practice at EU level, facilitated by the Commission. The Code of Practice should be finalized before the relevant chapter of the AI Act enters into application, which means before 2 August 2025.
Another important fact business leaders should keep in mind is that the Act is not 100% clear in all its provisions. The European Commission will have to adopt a range of implementing acts and delegated acts, as well as guidelines and templates, covering about 70 items. There are still many areas where clarification is needed, which is not ideal.
Therefore, there is an opportunity for business leaders and companies to shape the process of fine-tuning and clarifying the AI Act, and thereby to help determine the actual extent to which certain rules apply to them. In other words, it is time to make their voices heard. They should be active in the implementation phase: the legislative phase is finalized, but so much is still to be clarified.
With penalties for non-compliance potentially reaching up to 35 million euros or 7% of annual turnover, what immediate steps should businesses take to ensure they are not at risk?
They should not consider themselves to be at the receiving end of a process they cannot influence. Instead, now is the time to engage critically with the provisions, especially where the rules leave a certain margin of appreciation. Companies need to engage proactively with regulators and suggest interpretations, positions, and ideas to make sure those rules are applied reasonably and sensibly. This is one of the challenges of regulating technology, where there is a knowledge gap between regulators and the companies that develop these technologies.
Of course, it goes without saying that regulators should not depend only on companies’ views. Although it was not obvious in our case, especially at the beginning of the process, regulators should invest heavily in building deep in-house expertise on the matters they intend to regulate. You need to know what you want to regulate in order to do it well. Only with your own technical expertise can you engage constructively with external stakeholders while retaining the independence of judgment needed to take broader societal considerations into account. On the other hand, those who develop the technology and the products must have a say in suggesting the best ways to comply. This exchange needs to happen. I understand that companies, especially smaller ones, sometimes lack the resources to engage extensively with regulators, but at a time when so much still needs to be clarified, it is an exercise worth doing. It doesn’t have to be individual companies; it could be industry associations.
Many companies are facing a shortage of AI talent. How do you think this skills gap will impact the successful adoption of the EU AI Act?
Those skills are scarce, so companies need to strengthen their AI-related capabilities. The concern is that, as I mentioned before, companies at this stage may have to invest more in compliance than in AI skills. That may affect a company’s ability to compete in the AI space.
If you spend more money on compliance than on research and development or on AI engineers, who are also scarce, there is a risk of imbalance. The same may happen with authorities: they must ensure compliance with all these rules and need to equip themselves with substantial technical skills.
I hope this set of rules will be clarified as soon as possible so that companies can shift more of their budget to AI skills rather than AI compliance. In my view, the successful adoption of AI in Europe depends on getting this legal framework, and the tools needed to implement it, working effectively and sensibly as fast as possible. So there is still important work to do.
Who holds the primary responsibility for implementing and enforcing the EU AI Act within organizations?
It should be a team effort. The Act does not foresee a figure like the data protection officer (DPO) in privacy legislation; it does not require companies to appoint, for instance, a Chief AI Officer. The obligations the Act establishes fall on the economic actor, the provider or the deployer, so on the company itself. This means companies can organize themselves as they wish, depending on their size, and I don’t think there is necessarily only one model. Ultimately, the legal responsibility lies with the company: if there is a lack of compliance, the company will have to pay the fine.
How do you see the EU AI Act influencing AI regulation in other parts of the world?
There is huge interest around the world. Since I left the Commission, I have traveled from South America to Asia, and I have witnessed a growing interest in understanding this piece of legislation. That is quite normal at this stage, because AI governance and regulation are of global interest, and governments are wondering how to deal with the ‘AI wave’.
This interest is also reflected in collective efforts at the international level. For instance, UN agencies are investing heavily in work on AI governance frameworks. As the EU is the first regional actor to come up with such a comprehensive legal framework on AI, it is natural that countries around the world are looking at that framework with interest and asking themselves whether they should draw inspiration from it.
It’s too early to say whether the Act will turn into a regulatory model for other regions around the world. There is a need to understand whether those choices fit the socioeconomic or legal context in those countries. The capacity to implement a framework like the AI Act also differs from country to country. A legal framework is not just a piece of paper. It requires human resources, skills, funding, and structures to turn it into an effective tool that can achieve the objectives it was designed for. It needs to be managed and brought to life. Not all countries are in the same position, and they would be well-advised to consider questions of implementation and enforcement from the get-go, not after the law has been agreed.
Are there any specific areas where you believe the Act could have a significant global impact?
I hope the risk-based approach can be considered as one of the foundational elements. The idea is to consider AI as a tool that has both benefits and risks and is not necessarily dangerous by its nature. It’s a technology with different risk levels depending on how it’s used. I’d like to see this risk-based approach adopted widely.
The extent to which certain areas of the AI Act have an impact beyond EU borders will also depend on companies’ own choices. Companies that sell their products and services in the EU may align their compliance systems with the EU legal framework simply because they want access to that market.
Those companies may then decide to adopt the same or a similar compliance structure when selling their products outside the EU. It is up to each company whether to run two systems, one for the EU market and one for the rest of the world; it is not for me to say what is economically convenient. But these considerations may determine whether we see broader or narrower adoption of certain areas of the Act.
What are the key trends or developments shaping the AI landscape in the coming years? How might the Act need to evolve to address these future challenges?
It will be interesting to see whether the trend in generative AI continues along the lines we have seen so far. The trend towards ever larger models that require more data and more computing power rests on certain underlying architectural choices. Perhaps intelligence will come from other foundational choices that do not rely on ever-growing datasets or computing power. That will ultimately shape investment in the technology stack needed to support it.
From a regulatory and policy point of view, keeping regulation up to date is a challenge, but not an impossible one. When I think about the AI Act, making sure it was future-proof was one of my main concerns from the beginning. However, certain choices made after the adoption of the Commission proposal, such as regulating foundation models or deleting the possibility of updating the AI definition, do not, in my view, necessarily go in that direction. We will see whether the Act can stand the test of future developments.
Currently, I’m more concerned about ensuring the Act works now to enable trustworthy innovation in Europe. This is where the Act will prove its value. It should be applied in a way that is accessible, easy to understand, and provides legal certainty to companies so that they can rely on a stable legal framework and focus on building the products.
*The interview answers have been edited for length and clarity.