AI Act

The EU has developed a legal framework for the development and use of AI technologies, called the 'AI Act'. In this article we describe what it is and why you should care.


The AI Act

Artificial Intelligence (AI) is advancing so rapidly worldwide that it can sometimes feel overwhelming to keep up. Global studies from PwC estimate that by 2030, AI will contribute $15.7 trillion to the global economy.

The potential for AI seems almost limitless, but with it comes a range of risks – both ethical and technical. To address this, the EU has developed the AI regulation (AI Act) that will have a significant impact on how companies approach AI in both the private and public sectors.


What is it?

Simply put, the AI Act is a regulation that establishes a common legal framework for the development and use of AI technologies. It is the first comprehensive legislation worldwide for artificial intelligence.

Join our AI Act event

At Kraftvaerk we're hosting an event about the AI Act for companies on May 22, 2025 (in Danish) - sign up here

Why should you care?

The AI Act will significantly impact anyone looking to engage with AI at various levels. This includes everyone involved in the development, use, import, distribution, or production of AI models.

However, the impact of the AI Act will vary depending on the specific projects and their degree of AI involvement. The regulation is built on a risk-based approach that categorizes AI models according to their risk level.

The implementation of the regulation is staggered based on the level of risk: 

  • Unacceptable Risk (Effective Date: February 2, 2025)

    This covers systems that pose a clear threat to citizens' safety, rights or dignity. Examples include systems used for mass surveillance or social scoring.

  • High Risk (Effective Date: August 2, 2026)

    These are systems that represent a significant risk and are therefore subject to strict requirements, such as risk management, detailed documentation and monitoring. These systems often form part of critical infrastructure or involve sensitive personal data.

  • Limited Risk (Effective Date: August 2, 2025)

    Systems categorized as limited risk include, for example, chatbots. These systems are required to ensure transparency so that users are always aware of AI usage. A user must not be placed in a situation where they are unsure whether something is generated with the help of AI. A large share of the AI systems in everyday use will fall into this category.

  • Ensuring AI-related skills (Effective Date: February 2, 2025)

    Additionally, the regulation requires organizations to ensure, as far as possible, that their staff and others involved in the operation and use of AI systems possess sufficient AI-related skills (AI literacy).
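The staggered timeline above can be sketched in a few lines of code. This is a purely illustrative mapping of the tiers and effective dates listed in this article (the names `RiskTier`, `EFFECTIVE_DATES`, and `rules_apply` are ours, not anything defined by the regulation); it is not legal guidance.

```python
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """Risk tiers as listed above (illustrative names)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"


# Effective dates per tier, taken from the list above.
EFFECTIVE_DATES = {
    RiskTier.UNACCEPTABLE: date(2025, 2, 2),
    RiskTier.HIGH: date(2026, 8, 2),
    RiskTier.LIMITED: date(2025, 8, 2),
}


def rules_apply(tier: RiskTier, today: date) -> bool:
    """True once the obligations for a tier are in force on a given date."""
    return today >= EFFECTIVE_DATES[tier]
```

For example, on June 1, 2025 the ban on unacceptable-risk systems is already in force, while the high-risk obligations are not yet.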

Violations of the AI Act come with significant sanctions, calculated as either a fixed amount or a percentage of the previous financial year's total worldwide annual turnover, whichever is higher.

The sanction rates are as follows: 

  • €35 million or 7% for violations involving prohibited AI applications 
  • €15 million or 3% for breaches of obligations under the AI Act 
  • €7.5 million or 1.5% for providing incorrect information

For startups and SMEs, more proportionate caps have been set. Nevertheless, it is a good idea to be well-versed in the legislation before engaging with artificial intelligence, as non-compliance could be costly. 
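The "whichever is higher" rule above is simple arithmetic, which a short sketch makes concrete. The function name `aia_fine_cap` and the `sme` flag are our own illustrative constructs; the assumption that the lower of the two amounts applies to SMEs is a simplification of the regulation's proportionality provisions, not legal advice.

```python
def aia_fine_cap(fixed_eur: float, pct: float, turnover_eur: float,
                 sme: bool = False) -> float:
    """Maximum fine per the rates above: the higher of the fixed amount
    or pct of prior-year worldwide turnover. For SMEs we assume the
    lower of the two applies (illustrative simplification)."""
    pick = min if sme else max
    return pick(fixed_eur, pct * turnover_eur)


# Prohibited-practice tier for a company with €1bn turnover:
# max(€35m, 7% of €1bn) = €70m
print(aia_fine_cap(35_000_000, 0.07, 1_000_000_000))
```

The same call with `sme=True` would instead yield the €35 million fixed amount, since that is the lower of the two.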

In Denmark, the Danish Agency for Digital Government (Digitaliseringsstyrelsen) has been appointed to oversee compliance with the AI Act. The agency is also responsible for the mandatory evaluation of AI systems in the 'High Risk' category before they can be made available. 

How do we approach the AI Act at Kraftvaerk?

At Kraftvaerk, we build AI solutions with a clear awareness of the EU’s upcoming AI Act. We don’t offer legal advice – but we make sure that compliance is embedded in the way we design and implement AI. That means proactively identifying potential risk categories, ensuring transparency in user interfaces, and documenting how AI models are integrated into systems. 

When needed, we work closely with our clients’ legal departments or bring in our legal partner Bird & Bird, who are experts on AI regulation. This ensures that our clients’ solutions live up to the legal requirements – both now and in the future. 

More inspiration

Buy vs. Build: How to make the right decisions when applying AI to your business


Just as with any choice of technology, our clients often ask whether to roll their own or buy off the shelf. In this blog post we look into the pros and cons of buying versus building AI.

Ditch the AI strategy


In this article you can read why we recommend ditching the AI strategy and instead focusing on discovery and starting small.