AI Act: What Swiss companies must and must not do now

As is well known, Switzerland is not an EU member state. Nevertheless, EU regulations have a significant impact on Swiss companies. This was most recently seen with the General Data Protection Regulation (GDPR), whose influence shaped the revised Swiss DPA (blog post). Many Swiss companies also have to comply with the GDPR itself, because personal data of EU citizens is processed through their business relationships. The same is likely to apply to the AI Act, so we provide an overview:

1. Timeline

After many years of discussion, the AI Act finally came into force on August 1, 2024. However, transitional provisions apply to the entry into force of the individual regulations. The timeline at a glance:

August 1, 2024: Entry into force of the AI Act (Art. 113 sentence 1 AI Act)

February 2, 2025: Prohibition of certain AI practices (Art. 5) and obligation to ensure AI literacy among employees (Art. 4)

August 2, 2025: Start of application of the rules on general-purpose AI models (Chapter V), notifying authorities (Art. 28 et seq.), governance (Chapter VII) and penalties (Chapter XII)

August 2, 2026: Full application of the AI Act, including the rules on high-risk AI systems (with one exception, see below)

August 2, 2027: Application of Art. 6 para. 1 regarding the classification rules for high-risk AI systems

My opinion: The timeline seems confusing at first glance, but it is intended to ensure that providers and operators of AI systems have time to prepare for the new obligations. This is in line with the constitutional requirement that those affected by new legislation must be able to prepare for it.

2. When does the AI Act apply to Swiss companies?

The AI Act does not automatically apply to Swiss companies. However, Swiss companies may still fall within its scope if they

  • place AI systems (*) on the market or put them into service in the EU (Art. 2 para. 1 a), 1st alt.), or

  • place general-purpose AI models (*) on the market, regardless of where their place of business is located (Art. 2 para. 1 a), 2nd alt.), or

  • provide or operate AI systems whose output is used in the EU (Art. 2 para. 1 c)), or

  • import AI systems into the EU or distribute them there (Art. 2 para. 1 d)).

While research and development on and with AI systems remains largely permissible in Switzerland, companies must check whether there is an EU nexus at the latest when a system is placed on the market or distributed. If there is and the AI Act applies, far-reaching legal requirements follow, and violations are subject to significant penalties (up to EUR 35 million or 7% of annual global turnover, whichever is higher).
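
For illustration, using a hypothetical figure: for a group with an annual global turnover of EUR 1 billion, 7% corresponds to EUR 70 million. Since the higher of the two amounts forms the upper limit for the most serious violations (such as prohibited AI practices), a fine of up to EUR 70 million would be possible in this example.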

Switzerland, for its part, intends to issue its own AI ordinance in the fourth quarter of this year (see: link to LinkedIn article).

My assessment: It is unlikely that Switzerland will adopt the EU's AI regulations 1:1. However, there could be regulations with regard to the protection of fundamental rights and democracy (keyword: AI surveillance systems and deepfakes). Major associations have also already spoken out against strict AI regulations in Switzerland (see: link to LinkedIn post).

 

3. Why is there an AI Act?

By regulating the market for AI, the EU wants to ensure the trustworthy use of AI, promote innovation, protect fundamental rights and guarantee transparency and control of AI systems. Providers and operators of AI systems are therefore subject to different transparency, information and notification obligations, depending on how the risk of the AI system is assessed. For this purpose, the Act follows the so-called AI cascade, which distinguishes between

  • minimal risk (e.g. spam filter),

  • limited risk (e.g. chatbots),

  • high risk (e.g. medical AI products) and

  • prohibited AI systems (e.g. social scoring systems).

The requirements placed on providers and operators increase considerably with this classification. For low-risk AI systems, such as digital assistants or spam filters, the AI Act stipulates less stringent requirements. Nevertheless, these systems must also meet certain transparency and security standards in order to protect the rights and freedoms of citizens. General-purpose AI models (*) are likewise subject to transparency requirements and information obligations regarding their functionality, purpose and risks. For high-risk AI systems, there are not only conformity assessment (certification) obligations, but also considerable transparency obligations (e.g. the provision of comprehensible instructions for use) and monitoring obligations.

Authorities are set up in each EU country to carry out certification and monitoring.

My assessment: As with the GDPR, there is a great deal of uncertainty about the right approach, and this uncertainty is sometimes deliberately fueled. However, as with the GDPR, the German saying applies: "Nothing is eaten as hot as it is cooked." In other words, the law is rarely enforced as strictly as its wording suggests. In addition, the implementation requirements and risks can be assessed and addressed with expert legal support from the development process onwards. Ultimately, the ECJ will interpret the AI Act in many regulatory areas, and companies can take advantage of this.

4. Recommendations for action

-> Check applicability: Every company that offers or operates AI systems should clarify whether it falls under the AI Act, since there are various exceptions. For example, the AI Act does not apply to AI systems used exclusively for military, defense or national security purposes.

-> Train AI literacy: The first provisions, including those on AI literacy, apply from February 2025. They should be observed not only because of the AI Act, but also for reasons of AI governance and corporate compliance. We therefore recommend drawing up AI governance guidelines and training employees at an early stage.

-> Observe regulation: From August 2025, the rules for general-purpose AI models will apply, including documentation requirements. In order to avoid penalties, the documentation and transparency obligations must be fulfilled. Models released under free and open-source licenses benefit from certain exemptions from these obligations.

-> Special requirements for high-risk systems: High-risk AI systems are subject to extensive additional requirements, which we will present separately in a later article.

-> Check EU representative: For providers established outside the EU, Articles 22, 54 and 60 of the AI Act require the appointment of an authorized representative established in the EU. As our law firm also has an office in Berlin, we can provide you with an EU authorized representative from a single source.

-> No hysteria: Last but not least, the same applies here: no hysteria, but good preparation. Every company that uses AI systems should prepare for the AI Act in good time.

You can find out more about this topic in a column by Sven Kohlmeier for Inside-IT, in which he gives his assessment from a legal and European perspective and discusses the implications for Switzerland: Column by Sven Kohlmeier.

If you have any suggestions or questions on this topic, please contact Sven Kohlmeier.

(*) Definitions according to the AI Act:

AI system (Art. 3 no. 1): "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments";

General-purpose AI model (Art. 3 no. 63): "an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market".