Use of AI in law firms and companies

Efficiency and good work organization are essential in modern legal practice. The legal industry is undergoing a transformation that is significantly influenced by the integration of artificial intelligence (AI). Law firms face the challenge of implementing effective and secure AI solutions that increase their efficiency while improving the quality of their services.

Applications of AI solutions in law firms are therefore becoming increasingly widespread, both with regard to the actual activities of lawyers and internal law firm processes. The following article outlines the special considerations that should be taken into account. The considerations can of course also be applied to other sectors and companies.

Data protection

One of the first considerations when using AI is usually data protection. It is undisputed that the Swiss Federal Act on Data Protection (FADP) also applies to AI applications. The much stricter GDPR may even apply, which must be examined on a case-by-case basis.

The Federal Data Protection and Information Commissioner stated in a media release that when developing AI applications and planning their use, it must be ensured that the data subjects have the highest possible degree of digital self-determination. This raises the question of whether a data protection impact assessment (DPIA) must always be carried out when using AI.

If there is a high risk to the data subject or their fundamental rights, the controller must first carry out a DPIA (Art. 22 para. 1 FADP). A high risk arises in the case of extensive processing of particularly sensitive personal data (in addition to data on personal views, activities, health, privacy, etc., this also includes data on administrative and criminal prosecution or sanctions) or if extensive public areas are systematically monitored (Art. 22 para. 2 FADP). The legal text is clear in this respect and supports Rosenthal's view, at least for private data processors. Federal bodies will have to carry out a DPIA if fundamental rights are affected (Dispatch on the FADP, p. 7059).

The Dispatch on the FADP (p. 7059 f.) focuses (as does the FDPIC) on informational self-determination and the right to privacy:

"With regard to data, autonomy means in particular being able to dispose of personal data independently and not having to assume that it is held in unknown quantities by a large number of third parties who can dispose of it without restriction."

However, this is precisely the case with the input of personal data of a specific person into an AI, especially since the AI is able to merge and categorize data of a specific person and thus create a comprehensive picture of a person.

I continue quoting from the Dispatch and turn into the legal home stretch:

"A high risk is generally assumed if the specific characteristics of the planned data processing suggest that the data subject's freedom of disposal over their data is or may be severely restricted. The high risk may result, for example, from the type of data processed or its content (e.g. particularly sensitive data), the type and purpose of the data processing (e.g. profiling), the amount of data processed, the transfer to third countries (e.g. if foreign legislation does not guarantee adequate protection) or if a large or even unlimited number of people can access the data."

Accordingly, a DPIA would have to be carried out if profiling by AI systems (i.e. the automated processing of personal data in order to assess key aspects of a person's personality) is possible and if data is transferred to a third country without an adequate level of data protection. Since Switzerland considers that the USA (where most AI systems are probably hosted) does NOT guarantee an adequate level of data protection (as of 01/2024), a DPIA is required in these cases.

Assessment - Data protection

If the AI is not used to process personal data but only factual or technical data, there is no risk under data protection law. If, however, personal data is processed with AI systems, it must be checked in each case whether a DPIA needs to be carried out. In my opinion, there is a high risk for the data subject, as personal data is stored and could be used for subsequent profiling: the technical possibilities of profiling and data dominance through AI will advance considerably, and in my opinion it cannot be ruled out that personal data, once entered, will later be used and merged, because data remains "the gold of the 21st century". Art. 22 para. 2 FADP further specifies the cases in which a DPIA must be carried out. The result of the DPIA in a specific case then depends on the specific area of application, the amount of personal data, the technology used, etc.

Takeaways - Data protection

  • Before using an AI system, obtain legal clarification as to whether a DPIA is required

  • Check data transfer to third countries (adequate level of data protection?)

  • Check whether AI agents can be used without data transfer, e.g. on premise in the company, so that data sovereignty is maintained

  • Information on the processing of personal data must be provided in advance (data protection declaration); consent must be obtained from the client for particularly sensitive personal data

  • The FADP protects the right to informational self-determination - every company should take this into account
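The takeaways on data sovereignty and minimizing data transfers can be illustrated with a small, purely hypothetical sketch: before a prompt leaves the company, personal data is replaced by placeholders and only restored locally once the AI response has arrived. The patterns and function below are illustrative assumptions, not a production-ready (let alone legally sufficient) anonymization solution:

```python
import re

# Illustrative patterns only; robust pseudonymization needs far more
# thorough detection (e.g. named-entity recognition) plus legal review.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
}

def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace matches with placeholders; return the redacted text and a
    mapping so the originals can be restored locally after the AI reply."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping

redacted, mapping = pseudonymize(
    "Please summarize the contract of max.muster@example.ch, tel. +41 44 123 45 67."
)
print(redacted)
# → Please summarize the contract of <EMAIL_0>, tel. <PHONE_0>.
```

Only the redacted text would then be sent to the external AI tool; the placeholders can be swapped back from `mapping` on the company's own systems, so the personal data never leaves the house.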

Copyright

Another topic that is of concern in connection with AI solutions is copyright in the training, use and output of AI tools.

The question of whether the use of data as training data constitutes an infringement of copyright has not only arisen since the New York Times and writers such as John Grisham and George R.R. Martin filed lawsuits. The question is unresolved and will ultimately have to be decided by the courts.

Interestingly, ChatGPT has responded to this discussion in its Terms & Conditions (T&C). While the March 2023 version still stated "you own all input" (which is, of course, legally and factually unconvincing), since November 2023 it has read: "You are responsible for Content, including ensuring that it does not violate any applicable law...".

The copyright or usage rights in third-party works (e.g. images, music, written works, software code) must of course be observed when entering such works into an AI tool and using them. The problem is that the infringed party can hardly prove which works have been used illegally for AI training. However, there is no apparent reason why copyright law should not apply to AI tools, especially as further evaluation and value creation take place through them.

According to the current, relatively unanimous opinion, the output of an AI-generated result does not enjoy copyright protection, as it is not a human creation and in many cases the requisite level of creativity is likely to be questionable. The level of creativity represents the creative element: only a work that displays a certain individual character can enjoy copyright protection.

Something else may follow from the fact that a human provides a creative input (prompt) and thus makes the AI result possible in the first place. Since the Federal Supreme Court (BGE 130 III 168, E. 5.2) ruled that a photo of the well-known singer Bob Marley at a concert can enjoy copyright protection, this case law should also be applied to AI input. The photographer of the portrait pressed the shutter release at the right moment and thus created an impressive image. This convinced the federal judges:

"... that the photograph of Bob Marley is appealing and interesting, and describes the reason for this as the special facial expressions and posture of the person depicted, especially the flying Rasta curls and their sculpture-like forms, whereby a special accent is set by the shadow that a horizontally flying curl casts on the face. [...] This is manifested in the choice of framing and the timing of the shutter release during a particular movement of the singer."

In my opinion, pressing a camera's shutter release at the right moment (Bob Marley) is just as creative as a creative input in an AI tool intended to achieve a certain, perhaps already envisaged, result - camera and AI tool are then merely the instruments of creative creation.

A further aid to interpretation could be the case law on works created with a computer (computer as a tool), e.g. in the field of graphic design (BGer 4C.120/2002, E. 2).

Assessment - Copyright

The AI output can enjoy copyright protection if the input is sufficiently creative and reaches the requisite level of creativity. The author would be the person who entered the input/prompt; the AI tool is merely the instrument of the creative creator, so in our view the output can enjoy copyright protection.

Takeaways - Copyright

  • When using AI tools, observe the copyright and usage rights to the works.

  • It is currently unclear whether the training of AI tools with copyrighted works is permissible or impermissible. There is legal uncertainty here.

  • The output can enjoy copyright protection if it is based on a particularly creative input and itself reaches the requisite level of creativity.

  • Copyright is likely to be one of the most exciting issues in the use of AI tools. But that has always been the case (just think of cassettes, CDs and digitization).

Contract law and duty to inform

Let's get this out of the way first: there is no statutory requirement for the contractor to inform the client about the use of AI tools, BUT: the Data Protection Act does, of course, prescribe information obligations that also extend to the use of AI tools (see the FDPIC media release).

A duty to provide information and clarification can, of course, be expressly regulated in a contract. In the case of IT contracts in particular, comprehensive disclosure and information obligations often already exist today. Ultimately, however, in my opinion, there may also be an ancillary contractual obligation to provide the client, unprompted, with essential information relating to the execution of the contract. This ancillary obligation may arise from the requirement to act in good faith (Art. 2 para. 1 ZGB). Ancillary duties derived from this requirement include, in particular, duties of care, custody, clarification, information and advice (BSK ZGB-I-Lehmann/Honsell, Art. 2 N16).

An obligation to provide unsolicited information about the use of AI applications could exist for the contractor if the client assumes that the service will be provided personally, which transparency in good and fair business cooperation may also require. It should be sufficient to inform the client in general terms; the client can then make further inquiries if interested. Where available, compliance frameworks and internal work instructions should specify whether and how customers and clients are informed about the use of AI tools.

After all, transparency creates trust. So why not proactively inform contractual partners and clients about the use of AI tools? It is probably also in line with clients' expectations that their lawyer works efficiently and uses AI tools.

Probably rather unconsciously, the Federal Supreme Court (BGer 4A_305/2021) decided in 2021, when AI was not yet widely discussed, that the use of automated aids does not constitute an impermissible substitution under contract law. According to Art. 398 para. 3 CO, the contractor must perform the mandate personally unless he is authorized to transfer it to a third party. The Federal Supreme Court concludes that "third parties" within the meaning of Art. 398 para. 3 CO means other natural or legal persons:

"The use of aids, such as a computer with corresponding software that carries out market making automatically, does not constitute substitution, as these aids do not have legal personality." (E. 7.3.1)

Applied to AI tools, this case law means that their use does not constitute impermissible substitution; in my opinion, however, the same applies as stated above with regard to the duty to inform and provide information.

Takeaways - contract law and notification obligations

  • When using AI tools, an obligation to provide information may arise from the FADP.

  • An obligation to provide information and clarification may arise if this has been contractually agreed.

  • In my opinion, an obligation to provide information may also arise as an ancillary contractual obligation if the client expects the mandate to be carried out personally.

  • The use of AI tools (in law firms or companies) does not constitute an impermissible substitution under Art. 398 para. 3 CO.

Protection of professional secrecy

Professional secrecy (e.g. for lawyers and doctors) poses a particular challenge when using AI tools. Beyond the considerations above, clients place special trust in confidentiality, and a breach of professional secrecy can even be punished under criminal law. Uploading a PDF document, e.g. a client's contract, to an online translator without first obtaining the client's consent is already inadmissible and constitutes a breach of confidentiality. It is therefore imperative for lawyers to observe professional secrecy, also for the sake of their own reputation. We suggest introducing internal regulations in the law firm/company for the use of AI tools and the protection of client data.

Takeaways - Professional secrecy

  • When using AI tools, professional secrecy must be maintained or the client must be asked for consent in advance

  • Ideally, law firms and companies should have AI usage regulations

Conclusion

AI is here to stay and, in our opinion, will have a significant impact on the world of work and life. Its procurement and use must be legally secure in order to avoid bad investments and to preserve trust in the legal profession. Despite new technologies, law firms will have to preserve professional traditions and attorney-client privilege. However, the way they work will change considerably: from an often effort-based to a results-oriented model. This offers opportunities for law firms and creates space for innovative and new ideas. New markets and clients are opening up for small and medium-sized law firms in particular, as they can work more agilely and offer new services and products with the support of AI technologies.

Together with Katja Böttcher, Sven Kohlmeier has written a guide to the integration of artificial intelligence (AI) in law firms. If you have any questions or suggestions on this topic, please do not hesitate to contact Sven Kohlmeier.