Uploader: 高宏飞 · Shared on 2025-11-28
Author: Natascha Windholz et al.
Compliant Usage of Artificial Intelligence in the Private and Public Sectors

Publish Year: 2025
Language: English
Pages: 489
File Format: PDF
File Size: 3.9 MB
Text Preview (First 20 pages)
[Cover] Natascha Windholz et al.
The AI Act Handbook
Compliant Usage of Artificial Intelligence in the Private and Public Sectors
Accessibility disclaimer: Carl Hanser Verlag goes to great lengths to make its products accessible. This also includes making images or tables accessible for blind and visually impaired people. This is achieved through additional descriptive texts (alternative texts) that are integrated into the data. The alternative texts can be read aloud by assistive technologies (e.g. screen readers). An AI supports our teams of authors in the creation of these texts. Responsibility for the content remains with the editors and authors.
Windholz et al. The AI Act Handbook
Natascha Windholz et al.
The AI Act Handbook
Compliant Usage of Artificial Intelligence in the Private and Public Sectors
With Contributions from Kristina Altrichter, Gabriele Bolek-Fügl, Karin Bruckmüller, Alexandra Ciarnau, Veronica Cretu, Julia Eisner, Julia Fuith, Valerie Hafez, Isabella Hinterleitner, Manuela Machner, Renate Rechinger, Sabine Singer, Merve Taner, Theresa Tisch, Natascha Windholz, Carina Zehetmaier, Klaudia Zotzmann-Koch
Hanser Publishers, Munich
Print-ISBN: 978-1-56990-314-8
E-Book-ISBN: 978-1-56990-324-7
Epub-ISBN: 978-1-56990-324-7

All information, procedures, and illustrations contained in this work have been compiled to the best of our knowledge and are believed to be true and accurate at the time of going to press. Nevertheless, errors and omissions are possible. Neither the authors, editors, nor publisher assume any responsibility for possible consequences of such errors or omissions. The information contained in this work is not associated with any obligation or guarantee of any kind. The authors, editors, and publisher accept no responsibility and do not assume any liability, consequential or otherwise, arising in any way from the use of this information – or any part thereof. Neither do the authors, editors, and publisher guarantee that the described processes, etc., are free of third-party intellectual property rights. The reproduction of common names, trade names, product names, etc., in this work, even without special identification, does not justify the assumption that such names are to be considered free in the sense of trademark and brand protection legislation and may therefore be used by anyone. The final determination of the suitability of any information for the use contemplated for a given application remains the sole responsibility of the user.

Bibliographic information of the German National Library: The German National Library lists this publication in the German National Bibliography; detailed bibliographic data are available on the Internet at http://dnb.d-nb.de.

This work is protected by copyright. It was machine-translated and subsequently checked and edited by the authors. All rights, including those of translation, reprint, and reproduction of the work, or parts thereof, are reserved. No part of this work may be reproduced in any form (photocopy, microfilm, or any other process) or processed, duplicated, transmitted, or distributed using electronic systems, even for the purpose of teaching – with the exception of the special cases mentioned in §§ 53, 54 UrhG (German Copyright Law) – without the written consent of the publisher. No part of the work may be used for the purposes of text and data mining without the written consent of the publisher, in accordance with § 44b UrhG (German Copyright Law).

© 2025 Carl Hanser Verlag GmbH & Co. KG, Munich
Kolbergerstraße 22 | 81679 Munich | info@hanser.de
www.hanserpublications.com
www.hanser-fachbuch.de

Editor: Sylvia Hasselbach
Production Management: le-tex publishing services GmbH, Leipzig
Cover concept: Marc Müller-Bremer, www.rebranding.de, München
Cover design: Thomas West
Cover picture: © AdobeStock / Maxim_Kazmin
Printed and bound by Elanders Waiblingen GmbH, Waiblingen
Typesetting: Eberl & Koesel Studio, Kempten
Printed in Germany
Table of Contents

Foreword  XV

1  What is AI and How Do Data Science and Data Analytics Differ?  1
   Gabriele Bolek-Fügl
   1.1  The Cornerstones of AI  2
      1.1.1  Data  3
      1.1.2  Algorithms  5
      1.1.3  Computing Power  6
      1.1.4  Storage  6
      1.1.5  Measurement and Model Optimization  7
      1.1.6  Interfaces for Interaction  7
      1.1.7  Security and Data Protection  8
   1.2  Data Science and Data Analytics  9
   1.3  Development of AI in SMEs  10

2  Geopolitics of Artificial Intelligence  15
   Veronica Cretu
   2.1  Emerging Landscape of AI Regulations  16
   2.2  The Race for AI Regulation – the Big Three  20
3  AI Act: Rights and Obligations  29
   Gabriele Bolek-Fügl, Veronica Cretu, Julia Fuith, Merve Taner, Natascha Windholz, Carina Zehetmaier
   3.1  Introduction to the AI Act  30
      3.1.1  Definition of AI Systems  34
      3.1.2  Roles of Natural or Legal Persons  36
      3.1.3  Market Launch Phases  37
      3.1.4  Terms for the Use of AI Systems  38
      3.1.5  Data-related Designations  40
      3.1.6  AI Literacy  43
   3.2  AI Literacy for Providers  44
      3.2.1  Introduction  44
      3.2.2  Definition of AI Literacy  47
      3.2.3  AI Literacy and the Provisions of the AI Act  48
      3.2.4  Proposal for a Maturity Framework for AI Providers  50
   3.3  Risk-based Approach  55
      3.3.1  Prohibited AI Systems  55
      3.3.2  High-risk AI Systems  61
         3.3.2.1  Classification of AI as a High-risk AI System  61
         3.3.2.2  Annex III  65
         3.3.2.3  Requirements for High-risk AI Systems  71
   3.4  Fundamental Rights Impact Assessment  86
      3.4.1  AI Act and Fundamental Rights  86
         3.4.1.1  Implementation of the Fundamental Rights Impact Assessment  88
         3.4.1.2  Impact Assessment as Part of AI Governance  97
         3.4.1.3  Existing Tools for Fundamental Rights Impact Assessments  97
   3.5  Harmonized Standards, Conformity Assessment, Certificates and Registration  99
      3.5.1  Harmonized Standards and CE Marking  101
      3.5.2  Conformity Assessment Procedure  104
      3.5.3  Exemptions from the Conformity Assessment Procedure  107
      3.5.4  EU Declaration of Conformity  108
      3.5.5  Registration  109
   3.6  Transparency Obligations in the AI Act  110
      3.6.1  Guidelines for the Implementation of Transparency Obligations for Data and Data Management  112
      3.6.2  Guidelines for the Implementation of the Transparency Provisions Provided for in Art. 13 AI Act  115
      3.6.3  Guidelines on the Implementation of Transparency Obligations for Providers and Suppliers of Certain AI Systems and GPAI Models  116
   3.7  General-purpose Artificial Intelligence (GPAI)  118
      3.7.1  ChatGPT: the Start of an "AI revolution"? – Implications for the Legislative Process  118
      3.7.2  Inclusion of GPAI in the AI Act  120
      3.7.3  AI Models and AI Systems for General Use  121
         3.7.3.1  Classification Rules for GPAI Models  121
         3.7.3.2  Commitments  123
      3.7.4  GPAI Models with Systemic Risk  126
         3.7.4.1  Classification Rules for General-Purpose AI Models with Systemic Risk according to Art. 51 AI Act  126
         3.7.4.2  Obligations for GPAI Models with Systemic Risk under Article 55  127
      3.7.5  GPAI Models and High-risk Systems  128
      3.7.6  Implementation Period and Penalties  129
   3.8  AI Sandboxes  130
      3.8.1  Setup and Functionality  130
      3.8.2  Further Processing of Personal Data  133
      3.8.3  Tests Outside of AI Sandboxes  134
      3.8.4  Consent for Tests Outside Sandboxes  136
      3.8.5  Facilitation for SMEs  136
   3.9  Authorities  137
      3.9.1  Notifying Authority  137
      3.9.2  Conformity Assessment Bodies and Notified Bodies  138
   3.10  Governance in the AI Act  141
      3.10.1  AI Office  141
      3.10.2  AI Board  141
         3.10.2.1  Composition  141
         3.10.2.2  Tasks of the AI Board  142
      3.10.3  Advisory Forum  143
      3.10.4  Scientific Panel  144
      3.10.5  National Authorities  144
      3.10.6  EU Database for High-risk AI Systems  145
      3.10.7  Post-market Monitoring  145
      3.10.8  Sharing Information on Serious Incidents  146
      3.10.9  Law Enforcement  146
      3.10.10  Confidentiality of Procedures  147
      3.10.11  Procedures at National Level for Dealing with AI Systems Presenting a Risk  148
      3.10.12  Procedures for AI Systems Classified as Non-high-risk AI by the Provider  149
      3.10.13  Compliant AI Systems which Present a Risk  150
      3.10.14  Formal Non-conformity  150
      3.10.15  Legal Remedy  150
         3.10.15.1  Right to an Explanation of Decision-making  151
         3.10.15.2  Legal Remedies for GPAI  151
   3.11  Penalties and Sanctions  152
   3.12  SMEs and Start-ups in the AI Act  155
      3.12.1  Facilitations and Exemptions for SMEs and Start-ups  155
      3.12.2  Checklist: Launching a New AI System in Accordance with the AI Act  158

4  Data Protection  165
   Gabriele Bolek-Fügl
   4.1  General Requirements of the GDPR  167
      4.1.1  The Principles for Processing Personal Data  169
      4.1.2  Lawfulness of Processing  173
      4.1.3  Obligation to Provide Information where Personal Data is Collected  174
      4.1.4  Rights of the Data Subjects  176
   4.2  Privacy by Design  177
      4.2.1  Implementation  177
      4.2.2  Responsibility for Processing in Compliance with the Law  179
   4.3  Requirements for Testing Data  180
   4.4  Automated Decision Making  182
   4.5  Guidance and Recommendations on GDPR and AI from Data Protection Authorities  184
      4.5.1  Publications of the European Data Protection Board (Excerpt)  185
      4.5.2  DSK Recommendations  187
      4.5.3  The State Commissioner for Data Protection and Freedom of Information Baden-Württemberg  191
      4.5.4  Hamburg Commissioner for Data Protection on LLMs  191
      4.5.5  FAQ of the Austrian Data Protection Authority  194
   4.6  ChatGPT and the Data Protection Complaint from noyb  195

5  Intellectual Property  197
   Alexandra Ciarnau
   5.1  Protection of AI and its Components  198
      5.1.1  Copyrights and Ancillary Copyrights  200
         5.1.1.1  General Information  200
         5.1.1.2  Individually Developed AI Systems  200
         5.1.1.3  Individually Developed AI Models  200
         5.1.1.4  Input and Training Data Pool  201
         5.1.1.5  User Documentation and User Manual  201
         5.1.1.6  Rights and Claims of the Author  202
         5.1.1.7  Granting of Rights  203
         5.1.1.8  Open Source Software  203
         5.1.1.9  Patent and Utility Model Protection  204
      5.1.2  Trade Secret Protection  205
   5.2  Legal IP Compliance when Using AI  206
      5.2.1  AI Input  206
         5.2.1.1  IP-protected Input Data  206
         5.2.1.2  AI Act Requirements for AI Systems  208
      5.2.2  AI Output  209
   5.3  Checklist  211
   5.4  Reference Table Legislation  212

6  AI and IT Contract Law  215
   Alexandra Ciarnau, Merve Taner
   6.1  Licensing of Standard Software  217
   6.2  Software Development  220
   6.3  Software Maintenance  221
   6.4  Open Source Software  223
      6.4.1  Open Source AI – Paving the Way for the Future?  223
      6.4.2  Definition of Open Source and Legal Basis  225
      6.4.3  Legal Problem Areas in Connection with Open Source According to Existing Legal Bases  226
      6.4.4  Open Source Software Strategy of the European Commission  230
      6.4.5  Exceptions for Open Source in the AI Act  230
   6.5  Hardware Purchase and Maintenance  233
   6.6  General Information on Liability  233
   6.7  Reference Table Legislation  237

7  Private Sector  239
   Kristina Altrichter, Gabriele Bolek-Fügl, Karin Bruckmüller, Alexandra Ciarnau, Julia Eisner, Isabella Hinterleitner, Manuela Machner, Renate Rechinger, Carina Zehetmaier, Klaudia Zotzmann-Koch
   7.1  AI – from Prejudice to Discrimination  239
      7.1.1  Right to Equality and Non-discrimination  244
      7.1.2  How Prejudices Find their Way into AI  246
         7.1.2.1  How the AI Act Addresses Discrimination  249
         7.1.2.2  Can We Fix Bias in AI?  252
   7.2  AI in the Financial Sector  254
      7.2.1  Exceptions to the Scope of Application  255
      7.2.2  Prohibited AI Systems  256
      7.2.3  High-risk AI Systems  259
         7.2.3.1  Classification  259
         7.2.3.2  Refutation of the High-risk Property  262
         7.2.3.3  Interactions between Financial Regulations and the AI Act  263
      7.2.4  General Purpose AI Systems/Models  264
      7.2.5  Certain AI Systems  265
      7.2.6  Authority Competencies  265
   7.3  AI in the Insurance Industry  266
      7.3.1  Dynamic Underwriting and Risk Assessment in Health Insurance  268
   7.4  AI and Whistleblowing  270
      7.4.1  Whistleblower for the AI Category  274
      7.4.2  Areas of Application of AI in the Implementation of the Whistleblowing Directive  276
         7.4.2.1  Challenges in the Whistleblowing Process  276
         7.4.2.2  Procedure of the Whistleblowing Use Case  279
   7.5  Use of AI in Future and Existing Employment Relationships  282
      7.5.1  Writing Job Ads with AI  284
      7.5.2  AI Support for Applicant Selection by Means of Video Analysis [4]  285
   7.6  AI in Education  288
      7.6.1  Roles in the AI Act  289
      7.6.2  AI Literacy (Art. 4 AI Act)  290
      7.6.3  AI Systems with "Limited" Risk (Art. 50 AI Act) in Education  291
      7.6.4  High-risk AI Systems in Education  292
      7.6.5  Prohibited AI Systems in Education  295
   7.7  AI in Healthcare  296
      7.7.1  Example: AI Diagnosis of Skin Diseases  297
         7.7.1.1  High-risk AI Classification within the Meaning of the AI Act  298
         7.7.1.2  Requirements and Obligations of the Hospital Deployer According to the AI Act  298
   7.8  AI in Advertising  302
      7.8.1  Legal Requirements for AI in Advertising  302
         7.8.1.1  Prohibited AI Systems  302
         7.8.1.2  Overlaps with Other Laws  303
         7.8.1.3  Data Trading  304
         7.8.1.4  Personalization  305
      7.8.2  Energy Consumption and Sustainability  305
      7.8.3  Best Practice: Generative AI in Creation  306
   7.9  Tourism  307
      7.9.1  Use Case: Operational Efficiency  309
      7.9.2  Use Case: Guest Experience  315
      7.9.3  Use Case: Smart Companies  319
   7.10  AI in Autonomous Driving  323
      7.10.1  Austrian & International Legislation  324
      7.10.2  Development of Autonomous Driving Functions  326
      7.10.3  The AI Act and Autonomous Driving  327
8  Public Sector  329
   Kristina Altrichter, Karin Bruckmüller, Veronica Cretu, Theresa Tisch, Natascha Windholz
   8.1  "Public Decision Making" and AI  329
      8.1.1  Use Cases in Annex III AI Act  330
      8.1.2  Example: Allocation of Social Benefits  330
      8.1.3  Example: Allocation of Kindergarten Spots  332
   8.2  AI in Criminal Prosecution  333
      8.2.1  Use of Biometric Real-time Remote Identification Systems  334
      8.2.2  Implementation Obligations of the Member States  336
   8.3  AI in Elections and Democratic Processes  337
      8.3.1  Emerging Discussions about the Impact of AI on Democracy and Electoral Processes  337
      8.3.2  How Should AI be Defined in the Context of Elections?  340
      8.3.3  Exploiting Opportunities and Minimizing Risks through the Use of AI  340
      8.3.4  AI and Election Integrity: a Hypothetical Analysis of the Cambridge Analytica Scandal in the Context of the AI Act  347
   8.4  AI in the NIS Sector  351
      8.4.1  Introduction NIS and NIS 2  351
         8.4.1.1  NIS2  352
      8.4.2  Importance of NIS2 for the Supply Chain  353
      8.4.3  Use of AI in NIS Companies  354
         8.4.3.1  Annex I AI Act  355
         8.4.3.2  Annex III AI Act  356

9  Ethics  359
   Gabriele Bolek-Fügl, Valerie Hafez, Sabine Singer
   9.1  Ethical Guidelines for Trustworthy AI  359
      9.1.1  What is it About?  359
      9.1.2  Ethical Principles of the Guidelines  361
      9.1.3  Core Requirements  361
      9.1.4  Methods for Implementing the Core Requirements  365
      9.1.5  Tools for Implementation  366
   9.2  Relevant AI Guidelines & Policies  369
      9.2.1  OECD Council Recommendation on Artificial Intelligence  369
      9.2.2  The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence  372
      9.2.3  Compliance Tools for Many Occasions  375
      9.2.4  Artificial Intelligence Risk Management Framework  378
      9.2.5  Further Formative Ethical Guidelines  381
   9.3  EU and Global Bodies, Boards and Committees  383
   9.4  From Digital Humanism to a Value-based AI System  385
      9.4.1  Value-based Engineering  386
      9.4.2  Advantages and Strategic Importance of Value-based Engineering  389
      9.4.3  Conclusion  391

10  Governance in the Company  393
   Gabriele Bolek-Fügl, Karin Bruckmüller, Veronica Cretu, Valerie Hafez, Klaudia Zotzmann-Koch
   10.1  Practical Example: Assessment of a Use Case in Accordance with the AI Act  393
      10.1.1  Description of the Use Case: AI-supported Fire Detection and Alarm System  393
      10.1.2  How do You Start?  394
      10.1.3  Conclusion  402
   10.2  Risk Management, Human Supervision and Useful Tools  403
      10.2.1  Embedding Governance in the Life Cycle of an AI System  404
      10.2.2  Recognizing and Addressing Risks  405
         10.2.2.1  Approaches to Risks, Incidents, Accidents and Affected Parties  406
         10.2.2.2  Measuring Risks  408
         10.2.2.3  Responsibility in the Event of Incidents and Accidents  409
         10.2.2.4  Perceiving and Controlling the Unknown  409
      10.2.3  Human Supervision  410
         10.2.3.1  Break Down Supervision  412
         10.2.3.2  Develop and Maintain Supervisory Skills  413
         10.2.3.3  Making Supervision Context-sensitive  413
         10.2.3.4  Involving External Parties in Supervision  414
         10.2.3.5  Human Supervision: Pros and Cons  415
      10.2.4  Conclusion  416
   10.3  Data and Knowledge Management  418
      10.3.1  Pillars of the Data Governance Framework  424
   10.4  Audit of Artificial Intelligence  428
      10.4.1  Fundamentals of the Audit  429
      10.4.2  Audit Team  430
      10.4.3  Difference Between Risk Management and Audit  431
      10.4.4  Helpful Audit Checklists  432
      10.4.5  Example of a Simple AI Audit Checklist  434
   10.5  Code of Conduct  438
      10.5.1  Example of a Code of Conduct for the Use of Artificial Intelligence in the Organization  441
      10.5.2  Further Considerations on the AI Code of Conduct  445
   10.6  AI and Sustainability  446
      10.6.1  ESG – Environmental, Social and Corporate Governance  447
      10.6.2  Diversity, Inclusion, Justice  448
      10.6.3  Benefits for the Environment  449
      10.6.4  High-risk AI Systems  450
      10.6.5  Supply Chains  451
      10.6.6  Conclusion  451

11  The Authors  453

Index  459
Foreword

Actually, AI is not a new topic; the roots of this supposedly "innovative" technology go back to the 1950s. However, the general public only became aware of AI with the publication of various generative AI models for creating images, texts, music or programming code. The EU's AI Act has been a key milestone on the path to human rights-friendly and innovation-promoting compliance when using AI in companies, in administration and also in law enforcement. The EU Commission addressed AI back in 2018 and published a communication on AI in Europe. In 2021, three years later, the first draft of the AI Act was published. The emergence of "general-purpose AI" and generative AI in the public eye, and the possibility of AI being used by the general public, gave the negotiations on the AI Act another new twist and raised new questions as to whether and how such AI models should be regulated. Following a political agreement in December 2023, reached with the aim of finalizing the new AI Act before the EU elections in June 2024, the final texts were published in the Official Journal of the EU in July 2024.

This practical handbook on the AI Act was primarily developed within the Women in AI Austria network. The contributions of the members reflect their different professional backgrounds.

Women in AI Austria is an Austrian association whose goal is to increase the participation and representation of women and girls in the field of artificial intelligence. We are also committed to promoting gender diversity in AI based on a digital-humanistic approach. Members of the association are exclusively natural persons, regardless of their gender, education or profession. All members are volunteers. The association was founded in 2020 as part of the global network of Women in AI, and its members are involved in research, making statements in politics, distributing educational material, organizing events and representing the association at events. Women in AI Austria strives to provide an interdisciplinary setting where people can exchange ideas about AI.

Specialists, entrepreneurs and lawyers from Women in AI Austria and from a wide range of specialist fields have contributed their knowledge, expertise, time and effort to jointly shape and publish this handbook. The aim was not only to analyze the AI Act, but also to look at areas of law with which there are significant overlaps, to analyze use cases and to work out what it means to use AI in certain industries or sectors. After all, the AI Act was not and never will be a "standalone law". Rather, it has an impact on a wide range of product specifications and, above all, on data protection issues and the topic of intellectual property.

Without the network of Women in AI Austria (https://www.womeninai.at/) and the authors' diverse, wide-ranging knowledge of AI, this handbook would not have been possible. You can also follow us on LinkedIn (https://at.linkedin.com/company/women-in-ai-austria).

Many thanks to Women in AI Austria and the authors for their commitment!

Natascha Windholz
Vienna, August 2024
1 What is AI and How Do Data Science and Data Analytics Differ?

Gabriele Bolek-Fügl

Artificial intelligence is at the center of a revolution that is not only changing the way machines think and learn, but also challenging our understanding of intelligence itself. This revolution is driven by big data and the extraordinary computing power of modern computers, which can perform complex statistical calculations with a speed and precision that were long unimaginable.

AI has many faces, from simple programs that perform individual tasks with an accuracy that surpasses human ability, to complex systems that learn to adapt and make decisions in ways reminiscent of the human mind.

Increasing digitalization can free us from repetitive and tedious tasks and give us the freedom to act more creatively and strategically. If you have little talent or experience in one area, you can compensate for this with virtual assistants. AI offers opportunities to expand our skills and push our boundaries. However, to take full advantage of them, we need to learn how to work effectively with AI.

But what exactly makes a machine "intelligent"? Is it its ability to recognize and respond to human speech? Its ability to make decisions in milliseconds that a human could reach only with difficulty and after long deliberation? Or is it the ability to learn from experience and improve over time?

In fact, AI encompasses a spectrum of technologies that are as diverse as the definitions that seek to capture them. So let's begin our journey into the world of AI with a look at the basic components and the various techniques that are summarized under the term "artificial intelligence".
1.1 The Cornerstones of AI

In the field of AI, names and terms are often used without knowing their exact definitions and backgrounds. But in the world of the mathematicians, computer scientists and architects of AI, precision is essential. Clear, unambiguous definitions are needed to push the boundaries of what is possible and develop the next generation of intelligent systems.

When developing any AI system, there are components that are necessary regardless of the specific technology or use case. These cornerstones form the foundation on which more complex AI algorithms can be built. Let's therefore take a look at these components of AI systems and define the associated details:

Table 1.1 Overview of important AI Components

| Component | Description | Importance | Complexity | Adaptability |
|---|---|---|---|---|
| Data | Information for learning and decision-making | High | Variable | High |
| Algorithms | Procedures or methods for processing data | High | High | High |
| Computing power | For processing large amounts of data and complex calculations | High | Medium | Medium |
| Storage | Necessary for storing data, models and results | Medium | Low | Medium |
| Measurement and model optimization | Important for evaluating the effectiveness of AI models and their optimization | High | High | High |
| Interfaces for interaction | Enable interaction between humans and AI systems | Medium | Medium | High |
| Security and data protection | Protect data from unauthorized access and misuse, ensure compliance with legal standards | High | High | Medium |

Importance: a rating of "High" means that this component is very important for the correct and efficient operation of an AI system; "Medium" and "Low" indicate less critical components.
Complexity: the rating refers to the degree of difficulty involved in getting the AI to produce reliably correct or performant results.
Adaptability: the rating indicates how flexibly a company can adapt the component to its own needs and tasks.
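To make these cornerstones concrete, the following is a minimal, illustrative Python sketch of a toy AI workflow in which most components from Table 1.1 appear: data for learning, an algorithm that processes it, a training loop where computing power is spent, and a measurement step for model optimization, ending in a simple interface for interaction. This sketch is not taken from the book; all names (LinearModel, train, mean_squared_error) and numbers are hypothetical.

```python
# Illustrative toy workflow only; hypothetical names and numbers,
# not an implementation prescribed by the handbook.
import random
from dataclasses import dataclass

random.seed(0)

# Data: information for learning and decision-making.
# A synthetic dataset of (x, y) pairs where y is roughly 2 * x.
data = []
for _ in range(100):
    x = random.uniform(0.0, 1.0)
    data.append((x, 2.0 * x + random.gauss(0.0, 0.1)))

# Algorithm: a one-parameter linear model fitted by gradient descent.
@dataclass
class LinearModel:
    w: float = 0.0

    def predict(self, x: float) -> float:
        return self.w * x

# Computing power: the training loop is where compute is actually spent.
def train(model: LinearModel, samples, lr: float = 0.1, epochs: int = 50) -> None:
    for _ in range(epochs):
        for x, y in samples:
            grad = 2.0 * (model.predict(x) - y) * x  # d(squared error)/dw
            model.w -= lr * grad

# Measurement and model optimization: evaluate effectiveness with a metric.
def mean_squared_error(model: LinearModel, samples) -> float:
    return sum((model.predict(x) - y) ** 2 for x, y in samples) / len(samples)

# Interface for interaction: a minimal human-facing entry point.
# (Storage would come in here too, e.g. persisting model.w to disk.)
if __name__ == "__main__":
    model = LinearModel()
    train(model, data)
    print(f"learned weight: {model.w:.3f}")  # should end up near 2.0
    print(f"MSE on the data: {mean_squared_error(model, data):.4f}")
```

Even in this toy setting, the ratings from the table's legend are visible: swapping in different data or a different evaluation metric is straightforward (high adaptability), while changing the learning algorithm itself touches the core of the system (high complexity).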