Procurement Conference 2023: Use of AI in the public sector

The procurement conference at the end of August 2023 was a complete success for the organizers and visitors. Held under the motto "Everything new? The changing world of procurement", the conference had a lot to offer in terms of content: cloud services, data protection, AI and sustainability. As a speaker, our lawyer and specialist attorney for IT law (DE) Sven Kohlmeier provided insight into the procurement of AI for public administration and institutions.

Use cases of AI in administration

The use of AI tools in administration is not entirely new. For example, the commercial register in St. Gallen uses a chatbot on its website, and the migration office of the canton of Zurich also relies on a digital chatbot. Both are possible areas of application for AI tools. However, with the public attention surrounding ChatGPT (by OpenAI) and its counterpart Bard (by Google), the possibilities for simplification in many areas of administration seem to have reached a new level: the evaluation and analysis of large amounts of data, the analysis of court decisions, the AI-automated review of applications, decisions and opinions, and the preparation of rulings. AI tools can also be a valuable asset in cybersecurity and criminal investigation work. For example, instead of investigative staff having to look at criminally relevant material on the internet, this can be done by an AI tool.

The advantages are thus clear: better decision-making, time and cost savings, and increased efficiency in administrative activities. On the other hand, AI applications also pose challenges for the administration: there may be risks in using AI with confidential or personal data, and trust in government institutions could also suffer if an emotionless AI tool decides instead of a human official.

Legal aspects of the procurement of AI applications

1. procurement and contracting law

In principle, the administration and authorities may procure AI applications within the framework of demand management, provided the general authorization to do so is laid down in the substantive laws. Public procurement law applies to the procurement procedure (at the federal level the BöB/VöB; otherwise cantonal procurement law). In this context, the state is less free when outsourcing activities, as it is bound in particular by fundamental rights and must base its actions on a legal foundation (SURY, Digital in Law, Informatikrecht, Bern 2021, p. 289).

2. data protection

The use of AI tools must comply with the (revised) FADP [https://www.wickipartners.ch/news/datenschutzrechtsrevision-teil-1-bersicht-der-nderungen]. There is a duty to inform when processing personal data, and a duty to obtain prior consent when processing particularly sensitive personal data. The Zurich IDG [https://www.zh.ch/de/news-uebersicht/medienmitteilungen/2023/08/kanton-zuerich-modernisiert-gesetz-ueber-information-und-datenschutz.html] also aims to regulate the handling of AI. Risks exist in particular where citizens' data are fed into AI tools and later reused by the AI as training data.

3. liability

In addition to fines under data protection law for violations of the FADP (Art. 60 et seq. FADP [https://www.wickipartners.ch/news/bussen-im-neuen-datenschutzrecht-bis-chf-25000-fr-private-person-mglich]), proper performance is also owed when auxiliary means such as AI software are used. Although the state is not liable for poor performance in the exercise of sovereign duties (unlike in a contractual relationship under civil law), the employee must still perform his or her work properly vis-à-vis the employer, be it a public authority, hospital or institution, even if AI is used. Whether there is an obligation to disclose the use of AI software has not yet been decided. At least where a highly personal service is to be provided, this could be a secondary obligation (under civil law); for the administration, considerations of transparency could result in an obligation to disclose the use of AI software.

4. intellectual property

When using AI tools, questions arise as to the intellectual property rights in the resulting work as well as the right to use the data involved. It is already debatable whether the use of AI training data requires a right of use. Only recently, for example, well-known authors such as John Grisham, Jonathan Franzen and George R.R. Martin filed a lawsuit against OpenAI in New York for copyright infringement [https://www.reuters.com/legal/john-grisham-other-top-us-authors-sue-openai-over-copyrights-2023-09-20/].

AI output as such enjoys no copyright protection because it is not a human creation; moreover, in many cases the required level of originality is likely to be questionable. According to Sven Kohlmeier, the situation may differ where a human being makes a creative input (prompt) that makes the AI result possible in the first place. Insofar as the Federal Supreme Court has ruled (BGE 130 III 168 E. 5.2) that a photograph of the well-known singer Bob Marley at a concert can enjoy copyright protection, this case law should also be applied to AI input: pressing a camera's shutter at the right moment (Bob Marley) is just as creative as a creative input in an AI tool.

Insofar as copyrighted material or works are used as AI input, the user must hold the rights of use. In this respect, it is inaccurate for OpenAI's General Terms and Conditions to state that "you own all Input". OpenAI's General Terms and Conditions cannot simply transfer the "intellectual property" in the input to the user; rights of use can be granted only by authors or authorized users. OpenAI is not the author if, for example, the user uses the text of another author. In the area of copyright, therefore, interesting legal questions will arise in the future that the public administration and its employees must also take into account if they do not want to infringe intellectual property.

5. professional secrecy

If professional secrecy holders (e.g. doctors or lawyers) work in the public institution, e.g. in a hospital or in the legal service, the relevant regulations on confidentiality must be observed. Anyone who feeds the data entrusted to him or her into an AI tool without consent not only violates the professional duty of confidentiality, but may also be fined under Art. 62 (1) FADP.

6. observance of public law

Under the Federal Constitution, the right to be heard enshrined in Art. 29 para. 2 BV may give rise to a requirement that official decisions be made by human beings. Art. 29 para. 2 BV reads:

"The parties are entitled to be heard."

As the Federal Supreme Court has consistently ruled, the authority has a duty to give reasons for its rulings and decisions (BGE 129 I 232 E. 3.2):

"The principle of the right to be heard as a personal right of participation requires that the authority actually hears the submissions of the person affected by the decision in his legal position, examines them carefully and seriously and takes them into account in reaching its decision. This results in a fundamental obligation on the part of the authorities to give reasons for their decisions. The citizen should know why the authority has decided contrary to his request. The reasons for a decision must therefore be drafted in such a way that the person concerned can challenge it properly if necessary. This is only possible if both the citizen and the appellate authority are able to understand the implications of the decision. In this sense, the considerations on which the authority was guided and on which its decision is based must be mentioned at least briefly (BGE 126 I 97 E. 2b with references)."

It is questionable whether, according to the requirements of the Federal Supreme Court, the right to be heard is granted if it is not the authority that examines the complainant's arguments, but an autonomous AI tool. Can this still be qualified as an authority in the sense of the legislative concern?

The exercise of discretion by an authority also raises the question of whether it can be replaced by an AI tool. Discretion is misused if it is not exercised dutifully, is guided by extraneous criteria or is "unmotivated" (VGr, 22.10.2004, VB.2004.00297, E. 2.3). Even if not every human official decision is specifically motivated, with the use of AI there may be no "motivation" at all. There may also be a failure to exercise discretion. This is the case if the authority does not exhaust the discretion granted to it and, for example, completely or partially waives the discretion to which it is entitled (VGr, 9.11.2011, VB.2011.00573, E. 2), if special circumstances are not taken into account although the law provides for this (VGr, 8.2.2007, VB.2006.000369, E. 6), or if the authority treats cases schematically in the same way although the legislator requires differentiated treatment of certain issues (VGr, 19.5.2004, VB.2004.00123, E. 4.3.1). This could be the case with the use of AI tools. An incorrect exercise of discretion by an authority may render the decision invalid.

Ethical aspects of the procurement of AI applications

1. morals

The decision-making of an AI application may differ depending on the societal and moral compass reflected in the data with which it was trained. For example, while in Europe children are often given higher priority than the elderly, in Asia it is the other way around. An interesting insight into this is provided by the study of various research institutions and universities, available at www.moralmachine.mit.edu. The ethical question is: who programs and trains the AI application, and does this impose values on decision-making?

2. deployment of autonomous weapon systems

The ethical issue in the use of autonomous weapon systems is how much "residual" human control is required in the use of AI weapon systems. It is also questionable on the basis of which criteria the weapons - as a rule - independently select their targets. Is skin color or a full beard enough? Finally, AI-based weapon systems also raise the question of whether they need to be outlawed, as is the case with landmines. The Federal Department of Defence, Civil Protection and Sport (DDPS) sees "military advantages" in the use of autonomous weapon systems and hopes for better compliance with international law. At the same time, however, it notes: "Numerous (international) legal, political and ethical questions arise." (Federal Department of Foreign Affairs (FDFA), Disarmament and Non-Proliferation, Conventional Weapons, last updated 01.01.2023, available: here).

3. co-working

In the area of employment law, another ethical question arises: What influence does it have on employees if the decision is made or controlled by an AI application? This can have an impact not only on the motivation of employees if the AI is "always right", but also on the interpersonal exchange.

Guidance on the procurement of AI applications in public administration

According to attorney Sven Kohlmeier, the procurement and use of AI applications in public administration requires the following specifications: Transparency, guidelines and uniformity.

1. transparency: when procuring AI systems, the call for tenders must include requirements for:

  • Disclosure of and transparency about the origin of the (training) data

  • Transparency about the decision-making algorithm

  • Ideally, disclosure of the source code, at least to the authority

  • Disclosure in the decision itself, for example by means of a "digital footprint", that it was made using AI systems; this enables citizens to review the decision and the tools used.
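What such a "digital footprint" might look like in practice is not specified in the tender requirements above; as a purely hypothetical sketch, it could be a small machine-readable record attached to each decision, carrying the disclosure fields and a deterministic fingerprint over them. The `AIDecisionRecord` structure and all field names below are illustrative assumptions, not an existing standard.

```python
from dataclasses import dataclass, asdict
import hashlib
import json


@dataclass
class AIDecisionRecord:
    """Hypothetical 'digital footprint' attached to an administrative decision."""
    decision_id: str       # file number of the decision
    ai_tool: str           # name and version of the AI system used
    training_data_origin: str  # declared origin of the training data
    human_reviewed: bool   # whether a human official reviewed the AI output

    def footprint(self) -> str:
        """Deterministic SHA-256 fingerprint over the disclosure fields.

        Serializing with sorted keys guarantees the same record always
        yields the same hash, so citizens can verify the disclosure.
        """
        payload = json.dumps(asdict(self), sort_keys=True).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()


record = AIDecisionRecord(
    decision_id="2023-0815",
    ai_tool="ExampleLLM v1.2",
    training_data_origin="public administrative rulings (declared)",
    human_reviewed=True,
)
# A short prefix of the fingerprint could be printed on the decision itself.
print(record.footprint()[:16])
```

The design choice here is that the fingerprint changes whenever any disclosed field changes, so a decision document and its AI disclosure cannot silently diverge.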

2. guidelines: the specifications for the AI software to be procured must be derived from internal administrative guidelines. These guidelines must be drawn up before the procurement and should address the essential points on the use of AI applications:

  • Fulfillment of the triage checklist for AI systems (Checklist 1, Final Report Deployment of AI, Canton of Zurich, PDF).

  • Fulfillment of transparency requirements (Checklist 2, Final Report Deployment of AI, Canton of Zurich, PDF).

  • Technical specifications on how decisions disclose the AI tools used through a signature or "digital footprint".

  • Specifications about the training data used and its origin

  • Requirements for compliance with laws (including data protection, copyright)

3. uniformity: there is a need for a uniform administrative practice for the tendering of AI systems by the Confederation and the cantons with regard to:

  • Ethical principles

  • Compliance with laws (e.g. data protection and copyright, etc.)

  • Description of the system to be procured (e.g., scope of services, hosting, training data origin and training data customization requirements, etc.).

At Wicki Partners AG, Sven Kohlmeier deals with issues relating to AI, IT and data protection. Due to his professional past, he also has a deep insight into administrative processes and the digitalization of administration. If you have any questions on the topic, please feel free to contact Sven Kohlmeier.

 

The presentation given at the procurement conference can be found here: (PDF)

A video of the presentation can be found here: (YouTube)