Authors: Sunil Gregory, Anindya Sircar


Publisher: Springer
Publish Year: 2025
Language: English
Pages: 291
File Format: PDF
File Size: 3.1 MB

Professional Practice in Governance and Public Organizations

Sunil Gregory · Anindya Sircar

AI Governance Handbook: A Practical Guide for Enterprise AI Adoption

Foreword by "Kris" Gopalakrishnan
Professional Practice in Governance and Public Organizations
The Professional Practice in Governance and Public Organizations series features cutting-edge insights and practical guidance for professionals in the areas of economics, politics, public policy, public administration, and international organizations. With concise, accessible volumes, the series explores the latest developments in governance, public law-making, organizational and political strategies, institutional policies, policy instruments, public management, and finance. Core topics include decision-making, leadership, and the transformative impact of digitalization. Each book is authored by practitioners, experts, and leading authorities from public and international organizations, think tanks, and NGOs, ensuring a blend of theoretical depth and real-world applicability. While designed primarily for professionals in these fields, the series also serves as an invaluable resource for students of economics, political science, public policy, and public administration, equipping them with practical guides for their future careers.
ISSN 2731-9776    ISSN 2731-9784 (electronic)
Professional Practice in Governance and Public Organizations
ISBN 978-3-031-89265-3    ISBN 978-3-031-89266-0 (eBook)
https://doi.org/10.1007/978-3-031-89266-0

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2025

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

If disposing of this product, please recycle the paper.

Sunil Gregory, McKinney, TX, USA
Anindya Sircar, IPR Chair, NALSAR University of Law, Hyderabad, Telangana, India
Foreword

We are living in times when the pace of technological transformation is exponential, and artificial intelligence (AI) is at the core of this shift. From revolutionizing healthcare to redefining education, manufacturing, and finance, AI is beginning to underpin every layer of our social and economic fabric. The challenge before us is not just how we create intelligent systems, but also how we govern them.

I have always believed that data and AI, when harnessed responsibly, can be forces for inclusive growth. One of the central recommendations of the Committee on Non-Personal Data Governance (India), which I chaired, was the creation of high-value datasets, an idea born from the recognition that access to quality, representative, and diverse data is crucial for building robust AI systems. We envisioned these datasets as public goods, available for research, innovation, and AI model training in a way that respects privacy, ensures accountability, and serves the public interest.

Also, as someone deeply involved in nurturing India's innovation ecosystem, from university research through deep tech startups to digital public infrastructures, I see AI as a transformative lever. But to fully realize this potential, we must anchor our efforts in a strong governance foundation. AI should be developed not only to serve efficiency and profitability but also to advance inclusivity, accessibility, and social good.

AI governance, therefore, is not merely about mitigating risk; it is also about enabling responsible innovation. It is about putting frameworks in place that both enable and enforce a balance between promoting the free flow of ideas and innovations and protecting individuals, communities, and institutions from harm. Governance should not stifle progress; rather, it should ensure that progress is sustainable, equitable, and human-centric.
This book, AI Governance Handbook: A Practical Guide for Enterprise AI Adoption, by Sunil Gregory and Anindya Sircar, comes at a time when that balance is sorely needed. We are witnessing unprecedented leaps in AI capabilities: generative AI, multimodal systems, and robotic systems that learn and adapt in ways that were unthinkable just a few years ago. Yet we also see valid concerns about data misuse, algorithmic bias, explainability, legal compliance, and the ethical guardrails required to keep these AI systems aligned with societal values.
What I particularly appreciate about this book is its practical orientation. Governance, when discussed in the abstract, can often feel disconnected from operational realities. This handbook bridges that gap. It offers a structured yet scalable framework that enterprises, whether large corporations or startups, can adopt to scale their AI initiatives responsibly. The authors do not treat governance as a bottleneck but as an enabler.

Their four-dimensional guardrails – strategic, technical, ethical, and legal – provide an end-to-end roadmap. The emphasis on aligning AI efforts with corporate objectives, embedding ethical values, and ensuring compliance with evolving laws reflects the very essence of what responsible innovation must look like. Importantly, this book encourages organizations to be forward-looking: not just to react to regulations, but to proactively shape their AI strategies to build trust and transparency.

I believe that our future depends on how well we manage the duality of AI – its promise and its peril. To do this, we need frameworks that are dynamic, grounded in ethics, yet adaptable to change. We need governance models that work across sectors and jurisdictions while respecting local contexts. We need to invest in digital public goods, capacity building, and collaboration across government, industry, and civil society. And most of all, we need to embed a human-centered approach in every stage of the AI lifecycle.

This book offers a timely and essential guide to achieving these goals. It will serve as a cornerstone for enterprises navigating the complex AI landscape, helping them scale responsibly and ethically. I commend Sunil Gregory and Anindya Sircar for their thoughtful and thorough work.

Let us ensure that as we build intelligent machines, we remain intelligent stewards of the future.

Bengaluru, India
Senapathy "Kris" Gopalakrishnan
Preface

Panacea and Pandora are two interesting, almost antithetical characters in Greek mythology. Panacea (Πανάκεια), the daughter of Asclepius and Epione, was the goddess of universal remedy. Pandora (Πανδώρα), on the other hand, was the first human woman, created by the god Hephaestus on the instructions of Zeus. According to the myth, Pandora opened a "pithos" (a jar commonly referred to as "Pandora's box"), releasing all the evils of humanity.

Enterprises today treat AI as a Panacea for all their growth and efficiency needs. However, it can potentially be a "Pandora's Box" creating the "perfect storm": a disastrous situation created by a rare combination of multiple factors occurring simultaneously, where each element might not be severe, but together they produce a catastrophe. A telling example is the case of Boeing, the third of the eighteen visionary companies tracked by James Collins and Jerry Porras in their iconic book "Built to Last: Successful Habits of Visionary Companies1." Professors at Stanford University Graduate School of Business, Collins and Porras interviewed 1,000 CEOs over six years and identified eighteen truly exceptional, long-lasting companies with an average age of nearly one hundred years that had outperformed the general stock market by a factor of fifteen since 1926. Their research compared visionary companies against their direct top competitors and examined why exceptional companies differ from others.

In the last few years, Boeing has fallen from its heights of glory, even prompting Netflix to release a tell-all documentary, "Downfall: The Case Against Boeing2," examining the decline in Boeing's fortunes. Since the deadly crashes of its 737 MAX 8 jets in 2018 and 2019, Boeing has been facing a "perfect storm" of safety concerns, legal troubles, production issues, labor issues, and whistleblower deaths. Boeing has been losing money for more than five years.
The company has burned through $25 billion in cash and has net debts of $45 billion!

1. Collins, James C.; Porras, Jerry I. Built to Last: Successful Habits of Visionary Companies. Harper Business, 1994. ISBN 0-060-56610-8.
2. Downfall: The Case Against Boeing, accessed from https://www.netflix.com/title/81272421 on 15 June 2024.
Boeing's challenges, particularly with the 737 MAX crisis, underscore the potential pitfalls of poorly managed AI adoption. The 737 was designed in early 1964 as a short-to-medium-range, smaller-capacity member of Boeing's single-aisle jetliner family. Since it was initially designed to serve secondary airports with less-developed infrastructure, such as boarding stairs instead of jetways, it was given short landing gear to allow the fuselage to sit as low as possible. Since then, the 737 has become the most-produced airliner family in the world, with over 10,000 aircraft built in multiple versions (737-300, 737-400, and 737-500) with increased fuselage lengths. The third-generation 737 Next Generation (737NG) series, introduced in 1996, brought significant change with enlarged and redesigned wings, larger fuel tanks for more range, new cockpits, and uprated CFM56 engines. However, the FAA rated and certified all these generations under the original 737 type certificate.

The 737's most significant competitor is the Airbus A320 family (A320, A319, A321, and A318). Since the A320 was a newer design, unlike the 737 it did not carry technical debt, and it had a higher stance with plenty of ground clearance to accommodate large-diameter engines. When Airbus introduced the A320neo family with ultra-fuel-efficient, larger-diameter engines, Boeing started losing customers to the A320neo. Boeing responded with the 737 MAX series, its next-generation single-aisle jetliner with comparable ultra-fuel-efficient LEAP-1B engines. Since the engine was too large for the traditional 737 design, rather than redesigning the airframe, Boeing moved the engine nacelles forward and higher than the previous CFM56 engines, which changed the flight characteristics: the larger engine created lift and an upward pitching movement.
While a trained pilot could have easily handled the changed flight characteristics, that would have incurred additional training costs and would have resulted in the FAA changing the aircraft's type rating. Rather than doing an aerodynamic redesign of the airplane, Boeing went for the quickest and least expensive fix: an AI system called the "Maneuvering Characteristics Augmentation System (MCAS)," designed as an extremely limited AI previously used in the military tanker version of the 767. This system is supposed to trim the nose down, counterbalancing the lift created by the nacelles without the pilot noticing any impact. MCAS operates automatically without pilot input and uses sensors to detect when the aircraft's angle of attack exceeds a certain threshold, based on factors like airspeed and altitude, to push the aircraft's nose down automatically to prevent stalls.

A series of gaps resulted in the 737 MAX crashes. MCAS relies on data from a single angle of attack (AoA) sensor. On both Lion Air Flight 610 and Ethiopian Airlines Flight 302, the sensor provided faulty readings, and the system mistakenly activated, repeatedly forcing the nose down despite the pilots' attempts to counteract it. The existence and functionality of the MCAS system were not disclosed to pilots or airlines, and they were not trained adequately on the new system, which prevented pilots from understanding how MCAS could override manual controls, leaving them unable to respond effectively during emergencies. The absence of simulator training or clear guidance on MCAS deprived pilots of the knowledge and skills necessary to counteract its malfunctions. Lack of detailed impact analysis and
regulatory oversight and expedited certification led to inadequate testing and risk assessment. Flaws in MCAS design, such as its reliance on a single sensor and repeated overrides of pilot input, were not sufficiently scrutinized or mitigated. Integrating AI-driven systems like MCAS revealed issues in design transparency, pilot training, and safety testing. Inadequate communication about AI system functionality and insufficient safeguards to handle system failures contributed to the accidents and widespread erosion of trust. This highlights the critical need for robust AI governance, thorough testing, and a focus on safety and accountability in AI deployment within aerospace and other industries.

AI has moved from experimental applications to core business functions. For enterprises, this rapid transition has created an urgent need to address the risks and responsibilities associated with AI. Unchecked, AI can present numerous hazards: algorithms may reflect and amplify human biases, data security breaches can compromise sensitive information, and opaque decision-making processes can undermine stakeholder trust. If unmanaged, these risks harm individuals and society and erode the value and competitive advantage that AI offers.

AI governance addresses these challenges by establishing accountability structures, data and model management policies, and regulatory compliance measures. AI Governance Handbook: A Practical Guide for Enterprise AI Adoption has been crafted to serve as a comprehensive guide for managing these challenges. It offers a structured approach for enterprises aligning their AI efforts with strategic goals while ensuring fairness, transparency, accountability, privacy, and compliance with evolving legal standards.
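The single-sensor failure mode in the MCAS account above can be illustrated with a toy control-loop sketch. This is emphatically not Boeing's implementation: the function names, thresholds, and the cross-check rule are all hypothetical, and real flight-control logic is vastly more complex. The sketch only shows why acting on one unvalidated input is fragile, and why a simple redundancy check is a basic governance safeguard.

```python
# Toy sketch (hypothetical names and thresholds, not Boeing's logic):
# single-sensor automation acts on a faulty reading, while a
# cross-checked variant detects sensor disagreement and disengages.

AOA_THRESHOLD_DEG = 15.0    # illustrative stall-warning threshold
DISAGREE_LIMIT_DEG = 5.5    # illustrative cross-check tolerance

def trim_single_sensor(aoa_deg: float) -> str:
    """One sensor: a single faulty reading triggers nose-down trim."""
    return "TRIM_NOSE_DOWN" if aoa_deg > AOA_THRESHOLD_DEG else "NO_ACTION"

def trim_cross_checked(aoa_left: float, aoa_right: float) -> str:
    """Two sensors: disagreement disables the automation and alerts
    the pilots instead of acting on potentially bad data."""
    if abs(aoa_left - aoa_right) > DISAGREE_LIMIT_DEG:
        return "DISENGAGE_AND_ALERT"
    mean_aoa = (aoa_left + aoa_right) / 2
    return "TRIM_NOSE_DOWN" if mean_aoa > AOA_THRESHOLD_DEG else "NO_ACTION"

# A stuck sensor reads 22 degrees while the true angle of attack is 4:
print(trim_single_sensor(22.0))        # acts on the faulty reading
print(trim_cross_checked(22.0, 4.0))   # flags the disagreement instead
```

The design point is the governance one made throughout this preface: the mitigation is not smarter automation but an explicit, tested rule for when the automation must stand down.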
The governance approach outlined in this handbook advocates for a proactive stance, emphasizing the need to foresee potential issues and embed responsible practices into the organization's AI lifecycle from the outset. Through this lens, AI governance is not merely a set of controls but a value-driven approach to sustaining trust, minimizing risks, and maximizing the benefits of AI.

Navigating the Book's Structure

The AI Governance Handbook is organized into seven chapters, each addressing a critical area of AI governance. These chapters provide a cohesive guide to responsible AI adoption, from establishing foundational concepts to exploring legal considerations.

The opening chapter explores the development of AI and machine learning, detailing how these technologies have progressed from niche research topics to ubiquitous tools in enterprise settings. The chapter covers the inherent risks of AI, including those unique to enterprise settings, such as the impact of biased data and the dangers of poorly managed AI decision-making processes. The second chapter covers the rule of law regarding AI/ML, offering an overview of the global landscape of AI policies and regulations. It also compares AI governance frameworks across different jurisdictions, highlighting the complexities of operating in a
worldwide market with varying regulatory standards. The chapter introduces the concept of "guardrails" for AI governance, outlining emerging principles that guide responsible AI adoption. The following four chapters discuss the guardrails in detail.

The third chapter focuses on strategic guardrails, which institutionalize policies and frameworks that align AI with organizational goals. It examines how enterprises can develop AI systems that are not only effective but also aligned with strategic priorities and market commitments. The chapter provides a framework for making informed decisions about AI investments, ensuring that AI adoption is purposeful and contributes to a competitive advantage. The fourth chapter discusses the design and deployment of scalable AI systems, emphasizing data management, model monitoring, and AI observability. It covers best practices for handling common technical challenges, such as confabulations (hallucinations) and the ability to troubleshoot model performance. Additionally, it addresses the need for AI standards, certification processes, and protocols to deprecate outdated AI systems. The next chapter delves into the ethical dimensions of AI, examining principles such as anti-bias, trust, responsibility, and reliability. The chapter explores how organizations can integrate ethical considerations into their governance practices to foster "dependable AI" that respects social values and contributes to human well-being. It provides frameworks for ethical decision-making and discusses common challenges and debates in the field.

AI's legal and regulatory landscape is evolving, and enterprises must stay abreast of changes to avoid legal pitfalls. The sixth chapter outlines current and anticipated legal standards, including AI-specific regulations, intellectual property rights, and data protection laws.
The chapter discusses the implications of various regulatory regimes on enterprise AI, exploring how organizations can navigate these complexities to ensure compliance. The final chapter concludes with a look at the future, touching on the ethical and technical considerations that organizations must prepare for as they continue their AI journey.

The AI Governance Handbook is designed to be a practical resource for navigating this complex and dynamic field. Designed for a broad audience, including executives, managers, data scientists, engineers, and compliance professionals, this handbook bridges the gap between AI governance's technical and strategic dimensions. Each of these stakeholders has a vital role in building an AI ecosystem that drives business value responsibly. This book focuses on actionable frameworks, guidelines, and principles that serve as a foundation for developing, deploying, and governing AI technologies in ways that are ethical, legally sound, and strategically aligned with organizational objectives. We have kept this handbook a comprehensive, end-to-end guidebook for Enterprise AI adoption; however, each topic covered in this book deserves a detailed work of its own. AI governance is a fast-changing field: much of the underlying technology and legislative activity will undergo massive revision at breakneck speed. However, this book's underlying philosophy, structure, and frameworks will remain relevant.

This book would not have been possible without the guidance, support, and encouragement of experts from diverse domains. The detailed list of senior industry, academic, and regulatory experts who helped us write this book is given in the "Acknowledgments" section at the end of this book.
It is our privilege and an honor that Senapathy "Kris" Gopalakrishnan, Co-Founder of Infosys Limited and Chairman of the technology startup accelerator Axilor Ventures, has graciously agreed to write the foreword for our book. Kris has been a pioneering advocate for AI technologies globally. He is the principal donor behind the establishment of the Centre for Brain Research at the Indian Institute of Science (IISc), Bangalore. In addition, he has instituted distinguished visiting chairs in Neurocomputing and Data Science at both IISc Bengaluru and the Indian Institute of Technology Madras (IIT Madras), Chennai. Kris also serves as the Chair of the Governing Board of India's National Mission on Cyber-Physical Systems (CPS), which is driving the creation of 25 hubs focused on CPS and allied technologies across the country.

Kris's insights into the adoption of AI are particularly critical as enterprises navigate the challenges of integrating this transformative technology. His emphasis on ethical practices and long-term impact provides a vital lens for ensuring AI is deployed responsibly, with a focus on fairness, transparency, and societal benefit. In an era where AI systems increasingly influence critical decisions, Mr. Gopalakrishnan's guidance will serve as a compass for organizations aiming to leverage AI's potential while safeguarding trust and accountability.

McKinney, TX, USA
Sunil Gregory

Hyderabad, Telangana, India
Anindya Sircar
Acknowledgements

Our whole Universe was in a hot, dense state
Then, nearly fourteen billion years ago, expansion started, wait
The Earth began to cool
The autotrophs began to drool
Neanderthals developed tools
We built a wall (We built the pyramids)
Math, Science, history, unraveling the mystery
That all started with the Big Bang (Bang)

The "Big Bang Theory Theme," also known as "The History of Everything," the theme song of the hit American sitcom "The Big Bang Theory1," created by Chuck Lorre and Bill Prady, is one of the most hummable of TV themes. The song, by the Canadian band Barenaked Ladies and written by Ed Robertson, references the beginnings of the Universe, and its last line directly references one of the most famous concepts in science, the Big Bang theory.

The European Union Council adopting the "general approach"2 on the Artificial Intelligence Act (AI Act) on 06 December 2022 was the "Big Bang moment" of this book. In December 2022 I was discussing my PhD thesis with Dr. Sircar, and the topic veered toward the impending EU AI Act. ChatGPT had debuted a week earlier, world attention had started moving from the Metaverse toward AI, and the EU's adoption was about to open a floodgate of legal and regulatory activities worldwide. We realized the gap between the academic realm of AI, the legal and regulatory intent, and the enterprise's need to adopt AI. This guidebook is our attempt to plug that gap.

1. The Big Bang Theory, accessed from http://the-big-bang-theory.com/about/ on 01 December 2024.
2. Artificial Intelligence Act: Council calls for promoting safe AI that respects fundamental rights, accessed from https://www.consilium.europa.eu/en/press/press-releases/2022/12/06/artificial-intelligence-act-council-calls-for-promoting-safe-ai-that-respects-fundamental-rights/ on 01 December 2024.
Bernard of Chartres, the twelfth-century French Neo-Platonist philosopher, scholar, and administrator, is credited with the metaphor "nani gigantum humeris insidentes," or standing on the shoulders of giants (later popularized by Sir Isaac Newton). This book was made possible by collaborating with experts in the field, and we would like to acknowledge the support we received.

We extend our heartfelt gratitude to Senapathy "Kris" Gopalakrishnan for taking the time to read this book and providing us with an inspiring foreword. Kris Gopalakrishnan is a true futurist who has been articulating the transformative potential of artificial intelligence since long before it entered the mainstream lexicon. His commitment to shaping this future goes beyond thought leadership. Through the Pratiksha Trust, established with his wife Sudha Gopalakrishnan, he supports cutting-edge brain research at IISc Bangalore, particularly the Brain, Computation, and Data Science group. One of their most ambitious initiatives is the "Brain Co-processors" grand challenge3, which seeks to develop both invasive and non-invasive technologies to augment or restore brain functions like memory, attention, vision, and motor control. In combining philanthropy, science, and foresight, Kris is not only anticipating the future; he is actively building it. Our thanks also go to Ganesh KC, President, Pratithi Investments, who found the book valuable, brought it to Mr. Gopalakrishnan's attention, and facilitated this opportunity.

We sincerely appreciate Chetan Sharma, a veteran in the wireless industry who advises Fortune 50 wireless company boards, for his invaluable insights. Special thanks to senior executives from regulatory functions, including Shaji P.
Jacob, retired Principal Chief Commissioner of Income Tax, Government of India, and Krishna Kumar Edathil, Texas State IT Director and founder of the Texas State Artificial Intelligence Center of Excellence, for their perspectives on governance. We also thank senior strategy executives for sharing their expertise in strategy, risk, compliance, and governance: Anurag Vardhan Sinha, SVP and Head of Communication, Media & Technology, Cognizant Technology Solutions; Krishnan Narayanan, Co-Founder and President of itihaasa Research and Digital, a not-for-profit company that studies the evolution of technology and business domains in India; Anand Santhanam, SVP and Head of Strategic Large Deals, Infosys; Christina Shanks, Global Head of Strategy and Transformation, JP Morgan Chase; Venkata Ramana, Vice President, Head of Strategy and Operations, Infosys Consulting; Binoy Augustine, Head of Business Planning and Operations for Asia Pacific, Infosys Limited; and Nitesh Agarwal, Chief Strategy and Risk Officer, Tech Mahindra.

Academics have been instrumental in shaping our understanding of AI's complexities. We thank Prof. Dr. Deepak Nair of the Center for Neuroscience, IISc, for his pioneering research on synaptic signal processing, and Dr. Fr. Jaison Paul CMI, Principal at RSET, whose work in high-performance scientific computing and AI ethics has been invaluable.

3. Brain Co-processors: A Moonshot Neuroscience Project in India; itihaasa Research and Digital; accessed from https://brain-computation.iisc.ac.in/wp-content/uploads/2024/06/Brain-Co-processors-A-moonshot-neuroscience-project-in-India.pdf on 15 March 2025.

We are thankful to the honorable Vice Chancellor, the
Registrar, the other faculty members, and the dedicated staff of NALSAR University of Law for their unwavering support and encouragement throughout the process of creating this book. Furthermore, we extend our sincere appreciation to the team at the DPIIT IPR Chair for their cooperation and tireless efforts in facilitating the realization of this project.

Our gratitude also extends to industry practitioners who reviewed the technical aspects of this book. Special thanks to Debojyoti Bhattacharya, Senior Principal Cybersecurity Architect, Arm; Prasad Pyla, Principal AI Architect, Cognizant Technology Solutions; Praveen Prabha Ravindran, Senior Principal Engineer, AI, Dell Technologies; Ranjit Menon, AI Evangelist; Sameel Baker, Enterprise AI Architect, JDC/SAP; Prasad Pillai, Founder and CEO, Mindspace.ai; and Vinay Venu, Co-Founder, Samanvay Research and Development Foundation.

We are thankful for insights from the banking and finance industry provided by Dr. Samoj Panicker, Head of Compliance Surveillance Data and Analytics, Standard Chartered Bank; Wil Leblanc, Vice President, Federal Reserve; Jijo Peter, Vice President, Deutsche Bank; Bipin Gregory, Product Owner, Commonwealth Bank of Australia; Nijith Gangadharan, First Vice President, Pacific Premier Bank; and Praveen Bhaskaran, General Manager, Power Finance Corporation Ltd. In healthcare, we owe special thanks to Dr. Vinu Madhu, Medical Oncologist; Dr. Sandhya Nair, Radiologist, UT Health San Antonio; and Dr. Deepak Gregory, Cardiothoracic Anesthetist, NHS UK, for their perspectives on AI's ethical implications in medical diagnosis and treatment.

From the communication and technology sector, we thank Pranay Bajpai, Vice President Digital, Verizon; Ramesh Ramakrishnan, Vice President Platforms, Verizon; Richard J.
DeStefano, Vice President, Digital Engagement & Service Transformation, Charter Communications; Anand Raghavan Srinivasan, VP and Head of Communication, Cognizant; Ram Kulkarni, Associate Vice President, Engineering, Infosys; Sundaravadivelu Vajravelu, Director AI and Digital Experiences, Dish Network; Dani Parthan, Vice President, Product Engineering, Cognizant; Bavani Subramaniam, Associate Vice President, Cognizant Technology Solutions; Bala Shanmugakumar, Head of Industry Consulting, Communication, Media & Technology, Cognizant Technology Solutions; and Mohan Jacob, Sr. Offering Manager, Resilient Navigation, Pressure & Magnetic Sensors, Honeywell Aerospace, for their reviews, feedback, and constant encouragement.

We sincerely thank Dr. Prashanth Mahagaonkar, Executive Editor, Springer Heidelberg, for recognizing the potential of this book and connecting us with Felix Torres Serrano, Publishing Editor, Springer USA, and Anjana Bhargavan, Copy Editor, Springer USA. Our heartfelt gratitude goes to Felix Torres Serrano for his guidance, insightful feedback, and unwavering support throughout the publishing process. His insights and encouragement have been instrumental in bringing this book to fruition. A special mention to Amal Baker, a K-12 student, for his tireless efforts in proofreading multiple drafts of this book.

Finally, we sincerely thank our families for their unwavering support. Anitha Varghese and Dr. Rakhi Sircar, your encouragement has been a pillar of strength. To
xvi Katherine, Immanuel, Grace, Riddhi, and Aditya, your humor and boundless energy have reminded us why ethical debates in AI, such as those surrounding spatial AI and the omniverse, will shape the future. Thank you all for being part of this journey. Competing Interests The authors have no competing interests to declare that are relevant to the content of this manuscript. Acknowledgements
About the Book

Enterprise AI introduces a technology inflection point by enabling businesses to harness the power of data, automation, and advanced analytics to drive innovation, improve efficiency, and enhance competitiveness in the digital economy. We are living in a "Pre-regulatory Era" when it comes to AI Governance: legislators and regulators across jurisdictions are promulgating policies, frameworks, and rules to ensure a Desirable, Robust, Dependable, and Lawful AI ecosystem. Enterprise AI adoption requires a strategic approach, aligning AI initiatives with the organization's overall goals and objectives while managing the risks. AI Governance Handbook is a practical resource for stakeholders involved in AI initiatives, including executives, managers, data scientists, engineers, and compliance professionals, and aims to equip them with the knowledge, tools, and strategies they need to lead and navigate the complex landscape of AI adoption.
Contents

1 Artificial Intelligence: A Foundation  1
  1.1 From Biological Intelligence to Artificial Intelligence  5
  1.2 Language Processing: The Next Frontier in Artificial Intelligence  9
    1.2.1 Language and Intelligence  9
    1.2.2 Natural Language Processing (NLP)  11
    1.2.3 Language Models (LMs)  12
    1.2.4 Large Language Models (LLMs) and Future Possibilities  13
  1.3 Future of AI: Long March Toward Sentience  17
    1.3.1 Frontier AI  18
    1.3.2 Artificial General Intelligence  20
  1.4 Challenges Posed by AI Systems  22
  1.5 Risks Associated with Artificial Intelligence  25
    1.5.1 Existential Risks  25
    1.5.2 Data & Security Risks  26
  1.6 What is the Objective of this Book?  27
2 Artificial Intelligence: Governing Principles  29
  2.1 Age of AI Nationalism  30
  2.2 Artificial Intelligence and the Rule of Law  32
    2.2.1 The American "Market-Driven" Model  33
    2.2.2 The Chinese "State-Driven" Model  35
    2.2.3 The European "Rights-Driven" Model  38
  2.3 AI Regulations Across the World  41
    2.3.1 International and National Frameworks, Strategies, and Guidelines  43
    2.3.2 AI Legislations  43
    2.3.3 Regulatory Agency, Structure, and Charter  51
  2.4 Enterprise AI Adoption: Four-Dimensional Guardrails  54
    2.4.1 Strategic Guardrails for "Desirable Enterprise AI"  55
    2.4.2 Technical and Operational Guardrails for "Robust Enterprise AI"  56
    2.4.3 Ethical Guardrails for "Dependable Enterprise AI"  57
    2.4.4 Legal and Regulatory Guardrails for "Lawful AI"  58
3 Strategic Guardrails for "Desirable Enterprise AI"  59
  3.1 Enterprise AI Governance Framework  61
    3.1.1 Enterprise AI Policy Framework  61
    3.1.2 Enterprise AI Governance Organization: Roles and Responsibilities  63
  3.2 Enterprise AI Projects: Intake Process  64
  3.3 Enterprise AI Risk Management  68
    3.3.1 AI Red Teaming  74
  3.4 Stakeholder Engagement  76
  3.5 Enterprise AI Readiness Assessment  78
4 Technical & Operational Guardrails for "Robust Enterprise AI"  79
  4.1 AI Technology Stack  81
    4.1.1 Autonomous Generative AI or AI Agents and Agentic Applications  81
    4.1.2 From Agentic AI to Ambient Agents and Risk of Zero-Day Vulnerabilities  87
  4.2 AI Data Governance  90
  4.3 AI Model Management  95
    4.3.1 AI Model Selection  95
    4.3.2 AI Model Performance Evaluation  98
    4.3.3 AI Model Drift Management  98
  4.4 AI Hallucinations or Confabulations  98
  4.5 AI Observability  105
  4.6 Enterprise AI Standards and Certification  105
  4.7 Deprecation: AI Systems and AI Data  111
5 Ethical Guardrails for "Dependable Enterprise AI"  115
  5.1 Ethical Values and Principles  116
    5.1.1 Anti-Bias: Diversity, Non-discrimination, and Fairness  117
    5.1.2 Trust: Transparency, Explainability, and Human Oversight  120
    5.1.3 Responsibility: Privacy, Safety, and Security  126
    5.1.4 Reliability: Accuracy, Accountability, and Contestability  130
    5.1.5 Sustainability: Economic, Social, and Environmental Impact  132
  5.2 AI Ethics Assessment  138
6 Legal & Regulatory Guardrails for "Lawful Enterprise AI"  151
  6.1 AI Regulations and Enterprise Obligations  152
    6.1.1 The EU Artificial Intelligence Act (AI Act)  152
    6.1.2 US Federal and State AI Legislations and Enterprise Obligations  159
    6.1.3 AI Legislation Across the World and Enterprise Obligations  167
  6.2 Intellectual Property Regulations and Enterprise Obligations  190
    6.2.1 Intellectual Property Ownership of AI-Created Work  195
    6.2.2 Training Data and Intellectual Property  198
  6.3 Data Regulations and Enterprise Obligations  208
    6.3.1 Data Privacy and Enterprise Obligations  208
    6.3.2 Data Localization and Enterprise Obligations  212
    6.3.3 Data Portability and Enterprise Obligations  220
  6.4 AI Product Liability  226
7 Conclusion  231
  7.1 Quantum Leap in Training Data  233
    7.1.1 Behavioral Data as Training Data  234
    7.1.2 Neural Data as Training Data  235
  7.2 Quantum Leap in Computing Power  236
  7.3 Quantum Leap in Biological Computing and Bionic Computing  240
  7.4 Quantum Leap in Physical AI Capabilities  245
  7.5 Quantum Leap in Energy  247
8 Appendices  251
  8.1 Appendix 1: International and National Frameworks, Strategies, and Guidelines  251
  8.2 Appendix 2: Major AI Legislations Across the World  254
  8.3 Appendix 3: US Federal AI Laws (Enacted and Pending)  255
  8.4 Appendix 4: US State AI Laws (Enacted and Pending)  256
  8.5 Appendix 5: Data Protection Acts/Legislations  259
Glossary  261
Index  265