The generic term “artificial intelligence” covers numerous applications of machine learning, characterised by the ability to develop solutions dynamically. Their behaviour is no longer fully determined by programmed requirements, but by an overall system comprising the database, the training and testing environment, and the actual learning procedure, such as artificial neural networks (deep learning). In some cases, this confronts the legal system with entirely new challenges, as the behaviour of such systems is often extremely difficult, if not impossible, to understand or predict. At the same time, learning systems can often uncover findings with striking clarity and take control decisions in matters involving complex cause-and-effect relationships, such as automated driving or the smart grid. The project will first work out the specific opportunities and hazards associated with the phenomenon of “artificial intelligence”, in order then to search for suitable regulatory approaches that ensure its use is in harmony with the principles of privacy protection, legal certainty and orientation towards the common good. Conceivable approaches include standardised requirements (e.g. in connection with auditing procedures), supervisory authorities to oversee particular forms of use, legal provisions for particular sectors, and an overall (also ethical) concept for how society should handle artificial intelligence.
Moreover, what is especially interesting, and to a large extent still unclear, is which regulatory instruments the legislature should apply, and how and above what critical threshold they should be applied. The programme area will therefore ask in particular which thresholds must be crossed, and with what legal consequences, and whether only particular sectors of economic life should be subject to regulation.
In addition, numerous proposals for regulatory authorities need to be examined with regard to their legal feasibility and reasonableness. Related to this are considerations of how much room for manoeuvre Germany retains under the GDPR, and whether the Federal Republic can, where necessary, make use of innovative regulatory techniques such as experimentation clauses. Another focus will be on technical methods and organisational measures that make it possible to verify the lawfulness of computer programmes, or at least render their decisions explicable. In addressing these research questions, the programme area can build on the extensive findings of the predecessor project “Algorithm control as a regulatory task”, which resulted, among other things, in the monograph “Blackbox Algorithmus – Grundfragen einer Regulierung Künstlicher Intelligenz” (Springer, Heidelberg 2019).
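One family of technical methods for rendering opaque decisions explicable is model-agnostic feature attribution, for instance permutation importance: shuffle one input feature at a time and measure how much the system's outputs change. The following is a minimal sketch of that idea; the model, feature data and weights are purely illustrative assumptions standing in for an opaque learned system, not any method from the project itself:

```python
import random

# Hypothetical stand-in for an opaque learned system (weights are
# illustrative assumptions, not a real trained model).
WEIGHTS = [0.7, 0.2, 0.1]

def model(features):
    """Toy scoring function whose inner workings we pretend not to know."""
    return sum(w * x for w, x in zip(WEIGHTS, features))

def permutation_importance(data, trials=100, seed=0):
    """Estimate each feature's influence by shuffling that feature's
    column and averaging how far the model's outputs move."""
    rng = random.Random(seed)
    baseline = [model(row) for row in data]
    importances = []
    for i in range(len(data[0])):
        total_shift = 0.0
        for _ in range(trials):
            column = [row[i] for row in data]
            rng.shuffle(column)  # break the link between feature i and output
            shuffled = [row[:i] + [column[j]] + row[i + 1:]
                        for j, row in enumerate(data)]
            total_shift += sum(abs(model(r) - b)
                               for r, b in zip(shuffled, baseline)) / len(data)
        importances.append(total_shift / trials)
    return importances

# Illustrative input data (four observations, three features).
data = [[1.0, 5.0, 2.0], [3.0, 1.0, 4.0], [2.0, 2.0, 2.0], [0.0, 4.0, 1.0]]
scores = permutation_importance(data)
```

For this toy model, the heavily weighted first feature yields the largest average output shift, so an auditor can rank the features driving the decisions without inspecting the model internally. Real auditing procedures would apply the same idea to a genuine black-box system and a held-out dataset.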
An important sub-aspect is the question of how the state can harness the potential of learning software applications to optimise the fulfilment of its tasks, for example through the use of chatbots in public administration or the control of public data streams (for instance, in a smart city). Of particular interest here is how governance systems change when an increasingly “intelligent” technical agent sits next to a person in the control centre: what pitfalls must be taken into account if the state increasingly steers social processes through software? Attention will also be paid to the constitutional limits of deploying artificial intelligence in public administration.
Prof. Dr. Mario Martini