Uploader: 高宏飞
Shared on 2026-03-25

Author: Louise Macfadyen

Designing AI Interfaces is a practical, design-first guide for product teams building with large language models and autonomous systems. As artificial intelligence becomes central to modern product design, UX professionals must adapt their toolkits to meet new demands. In Designing AI Interfaces, senior product designer Louise Macfadyen offers a timely, practice-oriented guide for building intuitive, ethical, and effective user experiences with large language models (LLMs) and autonomous AI systems. From content moderation to interruptibility, this book presents actionable design patterns for today's most advanced AI interactions—with clear technical insights to help designers understand how AI systems process inputs, generate outputs, and make decisions on users' behalf. Written specifically for navigating the AI transition, this book provides concrete strategies for managing risk, enabling transparency, and fostering user trust in increasingly agentic systems. Readers will learn how to enable users to steer and shape AI responses in real time, incorporate ethical and UX principles into actionable design strategies, and navigate trade-offs in autonomy and control—all while gaining fluency in key AI concepts to collaborate more effectively with engineering teams.

- Gain an applicable mental model for how AI systems reason, process, and act, and how they're experienced by users
- Design effective and ethical interfaces for LLMs and AI agents
- Apply best-practice patterns for content warnings, permissions, and oversight
- Collaborate confidently with engineering and product teams
- Evaluate your org's AI maturity and advocate for responsible implementation

ISBN: 979-8-341-63982-9
Publisher: O'Reilly Media
Publish Year: 2026
Language: English
Pages: 209
File Format: PDF
File Size: 18.0 MB
Text Preview (First 20 pages)

Designing AI Interfaces
Design Principles for Creative and Autonomous AI
Louise Macfadyen
ISBN: 979-8-341-63982-9 · US $79.99 · CAN $99.99 · DESIGN

"Louise Macfadyen has written a practical guide for designers who will need to not just design but also co-engineer AI systems for the highest reliability. They will need to speak both human and machine. This book teaches them how."
— John Maeda, VP engineering, Microsoft AI, and author of How to Speak Machine
Louise Macfadyen is a product designer, writer, and creative technologist who specializes in AI interface patterns. Over the past decade, she has worked with organizations including Google, Microsoft, Pinterest, Nike, Gap Sustainability, and the Environmental Working Group and has spoken internationally at events and institutions including Google I/O, Women Who Code, Rhizome, and the Museum of Contemporary Digital Art. She lives in New York City.
979-8-341-63982-9 [LSI]

Designing AI Interfaces
by Louise Macfadyen

Copyright © 2026 Louise Macfadyen. All rights reserved.

Published by O'Reilly Media, Inc., 141 Stony Circle, Suite 195, Santa Rosa, CA 95401.

O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (https://oreilly.com). For more information, contact our corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com.

Acquisitions Editor: Nicole Butterfield
Development Editor: Corbin Collins
Production Editor: Aleeya Rahman
Copyeditor: J.M. Olejarz
Proofreader: Sonia Saruba
Indexer: BIM Creatives, LLC
Cover Designer: Susan Brown
Cover Illustrator: Monica Kamsvaag
Interior Designer: David Futato
Interior Illustrator: Kate Dullea

March 2026: First Edition

Revision History for the First Edition
2026-03-11: First Release

See https://oreilly.com/catalog/errata.csp?isbn=9798341639829 for release details.

The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. Designing AI Interfaces, the cover image, and related trade dress are trademarks of O'Reilly Media, Inc.

The views expressed in this work are those of the author and do not represent the publisher's views. While the publisher and the author have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the author disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.
Table of Contents

Foreword vii
Preface xiii

1. Understanding Large Language Models and Systems 1
   Early Predictive Language Models for Consumers 1
   A Short History of AI 3
   The Beginning: Symbols 5
   User Experience in the AI Era 10
   Know Your User 12
   Technical and Evaluatory Skills 16
   Ethical Considerations in AI Design 17
   Communication: Knowledge Sharing and Role Clarity 19
   Reading This Book: The Input-Computation-Output Structure 24
   Inputs 24
   Computation 25
   Outputs 25
   Summary 26

2. Capability, Discovery, and Orchestration 29
   Capabilities: What Can the Model Do? 31
   Training 32
   Evals 33
   Discovery 35
   Intent with AI 36
   Discoverability and Initialization 37
   Adoption and Momentum Behavior 42
   Orchestration 44
   Models, Agents, and Tool Selection 46
   Usage and Metering 52
   Permissions and Capabilities 54
   Library Management and Assets 55
   Summary 56

3. Designing for AI Inputs 57
   From Multics to Google Search: The Shifting Burden 60
   Three Channels of Intent 62
   Implicit Context 62
   Explicit Prompting 63
   The CARE Framework 64
   Prompt Intent 68
   Risks in Explicit Prompting 68
   Direct Manipulation 71
   Input Modalities 74
   Text 74
   Image 79
   Voice 80
   Guiding Users Toward Better Inputs 81
   Starter Prompts as Product Positioning 84
   Summary 85

4. Computation: Designing for the Processing and Generation Phase 87
   The Computation Pipeline: From Input to Output 89
   Step 1: Input Processing and Preparation 90
   Step 2: Routing 92
   Step 3: Generation and Inference 93
   Don't Make Me Wait: Managing Latency and Delays 94
   Visual Methods for Mitigating Latency 96
   Position: Where the Action Is At 98
   Speed: Difficult to Predict 100
   Notify Me: Notifications and Status Indicators 100
   The Value of Segmenting by User Need 104
   When the System Doesn't Deliver: Designing for Errors 107
   Errors in AI 109
   Error Messages 109
   Midstream Errors and Retrieval 110
   Maintaining Momentum 110
   Summary 112

5. Output: Designing the Delivery and Presentation of LLM Responses 113
   Designing Outputs: The Output Is Not the Answer 114
   Design Principles of Outputs 116
   Designing for Clarity 116
   Designing for Verifiability 120
   Designing Grounding 125
   Designing for Actionability 128
   Designing for Adjustability 132
   Designing for Multiturn Outputs 134
   Watermarking and Detection 135
   Understanding AI Misuse 135
   Risks of Generative AI Content 136
   How Watermarking Works 137
   Image and Video Watermarking 139
   Detection Systems and the Future of Watermarking 139
   Managing Problematic Outputs 141
   Summary 143

6. Agentic AI: Designing for Systems That Plan, Act, and Adapt 145
   Understanding Agentic Design Patterns 147
   The Reflection Pattern: Building in Self-Critique 148
   How Reflection Works in Practice 150
   Design Challenges with Reflection 150
   Reflection Across Different Domains 151
   The Tool Use Pattern: From Adviser to Operator 151
   The Shift from Static to Dynamic Assistance 152
   Design Implications of Tool Use 153
   The Planning Pattern: Breaking Down Complexity 154
   Designing for Visible Planning 156
   Conditional Planning and Branching 157
   The Multiagent Collaboration Pattern: Specialized Coordination 157
   Multiagent Interface Challenges 158
   Conflict Resolution in Multiagent Systems 159
   The ReAct Pattern: Adaptive Problem Solving 159
   Implementation and Use Cases 160
   Designing for Iterative Agents 161
   Moving from Engineering to Experience 162
   Where Users Experience Agentic Patterns 162
   Processing 163
   Clarification 165
   Progress Communication 167
   Alternative Progress Patterns: Browser and Computer Use Visualization 168
   Checkpoints 170
   New Principles for Agentic Interface Design 172
   Reveal the Plan 172
   Prioritize What Matters Most 173
   Design for Shared Control 174
   Summary 176

Index 179
Foreword

One of the most persistent myths in AI right now is that design doesn't matter when the AI is powerful enough. The argument goes something like this: if the model is smart enough, it doesn't need a good interface. Users will just talk to it, it'll understand them, and everyone goes home happy. You can tell it what you want, and it does it! UI is seen as old-fashioned, just like Windows 95.

I've been arguing against this for years, not because I'm a designer defending my turf (although let's be real, I am), but because the argument is empirically wrong. And now, thankfully, Louise Macfadyen has written the book that proves it.

The book you're holding makes a case I find deeply satisfying, not because it vindicates a position I already held (although, again, it does), but because it goes further than my ranting and arrives somewhere much more useful and practical. Macfadyen takes a deceptively simple framework (input-computation-output) and shows that different species of potential failure haunt each stage. Her argument isn't that design should be layered on top of AI (Lipstick on a Pig 3.0, anyone?); it's that design is the discipline that determines whether an AI system works at all, for actual people, in the actual world.

We've been in this situation before, and we've been wrong in the same ways. When ELIZA arrived in 1966, people knew it was a program. MIT professor Joseph Weizenbaum built it as a "trivial" demonstration of natural language processing. He didn't expect that users would develop emotional attachments to it. And yet patients confided in it. Secretaries asked him to leave the room so they could speak to ELIZA in private. People projected understanding, care, and intelligence onto a system that had none of those things. By his own account, Weizenbaum was disturbed, not by ELIZA, but by us.

Macfadyen opens with this history and the ELIZA effect: the natural human impulse to assume systems producing humanlike language have human qualities.
But you can't write this off as 1960s computer naïveté. It's the constant condition of designing for language-based interfaces. Every chatbot, assistant, and copilot is prone to this kind of projection. And the chat interface, the standard text box we use for AI interaction now? Not helping. If anything, its design tends to make this worse.

The idea that the "UI is going away" often ends with people pointing to the text box as the ultimate interface. The final step after the command line (CLI) and the graphical user interface (GUI) is conversational AI. The argument is that we can ditch the learning curves, confusing buttons, and old design problems; we'll just chat with the computer. Have these people never watched Star Trek? Assuming AI just knows what the user means is a risky strategy. It relies on the idea that the model will perfectly understand intent, the answer it spits out will be exactly what's expected, and if it messes up, the user will immediately know. Decades of research in human-computer interaction (HCI) would like a word.

What Macfadyen uncovers is that what users type is often the least significant input the model receives. By the time your words reach the model, the system has already assembled a context window: your conversation history, your document, your preferences, environmental signals, and, possibly, the outputs of previous tool calls. The model sees all of that before it sees your prompt. Most users have no idea this is happening. Most interfaces give no indication that it's happening. The blank text box implies that the conversation starts fresh each time: a polite fiction that can produce baffling, untrustworthy results. This is a design problem. Not a model problem. Not an engineering problem. A design problem.

Another thing Macfadyen addresses that I've been hollering about: the 25 years of Google-trained behavior that users bring to every AI interaction. People have become remarkably sophisticated searchers. They've learned to add "Reddit" to queries for real opinions, for example. They know how to filter by file type, date, and domain.
They've developed a finely tuned understanding of how to get what they want from a search box. Then we hand them an AI assistant, which uses a box that looks almost identical, and we act surprised when they treat it like a search engine. They're not wrong to think this; they're applying the best literacy they have to a new system that didn't bother to explain itself. The onboarding problem for AI, Macfadyen argues, isn't teaching people something new. It's untraining habits they've spent two decades reinforcing. That's a harder problem, which can only be solved through UI design.

That clean, approachable, conversational minimalist chat interface may be the same aesthetic that makes AI dangerous. It hides probabilistic complexity. It reinforces the illusion of intelligence. It makes errors invisible. The design that lowers the barrier to
entry is also the design that lowers the barrier to misplaced trust. There's no clean version of this.

A text box isn't the pinnacle of user experience: no one interface paradigm is. The future is a proliferation of modalities: direct manipulation for spatial or visual or complex tasks, conversational input when ambiguity or exploration is the point, and even no interface when it makes sense for AI to work behind the scenes. What Macfadyen gives us are principles for navigating that proliferation rather than advocating for one interface to replace them all. Principles age better than patterns.

Macfadyen maps user intent across three simultaneous channels: implicit context (the document open, the time of day, what the user did previously), explicit prompting (what they prompt), and direct manipulation (what they click, select, and highlight). This isn't a neat taxonomy for its own sake: it's a diagnostic tool. When an AI feature fails (and they do, regularly), it's almost always traceable to a breakdown in one of these channels. Either the system is ignoring the context it should be reading, or it's misinterpreting the prompt because the user doesn't know how to prompt effectively, or it's waiting or trying to parse typed input when a direct manipulation affordance would have been clearer, faster, and more accessible.

The computation phase (you know, that essential part in between when the AI is doing its thing) has always been designed as a black box. A spinner. A progress bar. Sometimes a percentage that lies to you. Don't worry about that. Watzlawick's axiom says, "It's impossible not to communicate." Silence communicates. Delay communicates. A spinner that runs for eight seconds communicates something different than one that runs for 800 milliseconds, even if the underlying operation is identical. This is not a small thing (although it's designed as one).
Perceived reliability and actual reliability can diverge entirely, and the gap is filled by whatever signal the interface sends during computation. That means the waiting state is a design surface, not an unavoidable tax.

Confession: I hadn't really thought about this much. I've spent a lot of time on inputs and outputs, but I haven't focused enough on the actual UX in between. Macfadyen's breakdown of latency is solid. It sorts wait times by user activity: things like grabbing information, real-time control, buying something, chatting, creating content, or hands-off operation. The key takeaway is that waiting isn't just about time; it's about the type of wait. Different tasks make the delay feel completely different. That's why managing the wait isn't just about appearance; it's essential for user trust. UX design is about look and feel, after all.

Chapter 5, covering the design of outputs, contains the line that I think should be printed above every product team's whiteboard: "The output is not the answer." While the dominant AI UI (the text box) aims for an appearance of authority and objectivity, AI outputs are not objective truths from infallible machines. Adding
confidence indicators, as Macfadyen notes, worsens this problem. A model's "93% confidence" is not a measurable accuracy like a spell-checker's. Instead, it represents the probability of the next token in an unexplainable, high-dimensional space. Presenting this as a percentage is misleading; it's a false precision that creates manufactured trust rather than true transparency.

Most people treat sycophancy as a model problem. Macfadyen frames it as a UX problem. AI models trained on how people react are basically built to mirror vibes, picking up and reflecting enthusiasm or confidence, even if you're off-base. This isn't a bug: it's what happens when you optimize for user happiness and engagement. Because the interface never pushes back or offers a different perspective, it just boosts whatever you put in, creating an echo chamber that ends up sounding authoritative.

I have a lot of sympathy for the instinct to reduce friction. Nobody wants to use a software tool that argues with them. But there's a meaningful difference between friction that wastes time and friction that prevents bad decisions. Macfadyen's implication that interfaces should be designed to create space for disagreement is right. This might mean something as simple as a "Consider an alternative view" button. It might mean options. It might mean surfacing the model's uncertainty where it actually exists. What it can't mean is nothing.

Chapter 6, which covers agentic AI, is where I think the book is most forward-looking, and where the design challenges are most acute. Remember Microsoft's Clippy? Yes, the one everyone made fun of. Clippy actually had the right idea: proactive, in-context help. The problem wasn't the concept; it was how they did it. Clippy was a total failure because it had no real context awareness, ignored what users wanted, and couldn't even finish the tasks it suggested. Basically, it was annoying because it couldn't live up to the hype.
Agentic systems change the design problem in a way that feels radical. In traditional systems, even AI systems, computation is hidden. In agentic AI, it explodes into a sequence of visible operations: decomposition, delegation, tool calls, clarifications, and checkpoints. The transparency is welcome, even necessary. But the cognitive load is real. Macfadyen's three design principles for this new territory (Reveal the Plan, Prioritize What Matters Most, and Design for Shared Control) are a clear articulation of the agentic design challenge.

I'm a big fan of "Reveal the Plan." Before an agent acts, it should show its interpretation of what it thinks it's being asked to do—not just for transparency's sake, but for scope. When a user approves a plan, they're agreeing to a contract establishing what's expected. If the results don't align with the plan, the plan should serve as a reference point for review. This is how you build trust in a system that operates invisibly: not by making all operations visible (that way lies madness), but by making the plan visible,
so users know what they agreed to. Trust isn't built solely by good results: it's built on predictable behavior.

The hardest problems in product design are often organizational, not technical. AI doesn't change that. In fact, it can make it worse. While the three-level maturity model presented in Chapter 1 can function as a diagnostic tool for AI teams, it also reveals an underlying cause of poor AI UX. Some AI products feel disjointed: features lack cohesion, capabilities can unexpectedly confuse users, and model updates can subtly introduce errors. Macfadyen rightly identifies these issues as possibly symptomatic of damaged communication channels between engineering, design, and product management. The incoherence stems not from the model itself, but from organizational dysfunction.

Model limitations are somewhat legible. You can point to a benchmark, run an eval, file a bug. Organizational dysfunction, on the other hand, is harder to diagnose and harder to fix. But three decades of doing this work leads me to believe it's a more common issue than people realize. I've seen (and unfortunately used) products with impressive models but horrible interfaces, and you can probably trace those horrible interfaces directly back to the organizational culture that made them.

There's a book that needed to be written about this moment in AI design. Not an enthusiast's book; we have plenty of those. Not a skeptic's book; we have plenty of those, too. It should be a practitioner's book, written by someone who understands both the technology and the people who use it, for teams who are trying to build things that actually work. This is that book.

Use this book with colleagues from engineering and product, not just with other designers. The frameworks are interdisciplinary by design. They describe problems that no single role can solve. AI design is a team sport.
Read the chapters on computation (Chapter 4) and agentic AI (Chapter 6) even if your current work doesn't touch them, because it will. Read the section in Chapter 1 on organizational maturity with whomever is responsible for your team's structure, because it's one of the most important in the book and the one least likely to be considered when your product has issues.

The gap between what users believe they're talking to and what the system actually does remains one of the central unsolved problems of our field. It was unsolved in 1966. It's unsolved now. The technology has changed almost beyond recognition. People haven't.

— Dan Saffer
Human-Computer Interaction Institute
Carnegie Mellon University
Preface

In his book Thinkertoys (Ten Speed Press, 2006), Michael Michalko offers innovators a number of different frameworks for working through tricky problems. Using exercises and analogies, he invites readers to apply a variety of methodologies that promote abstract, creative thinking. In one part, he describes the "Phoenix Checklist," a rich set of questions developed by, of all people, the CIA, designed to help agents move forward when a task seems uncertain—or impossible.

Many of the questions focus on exploring and defining the problem and the surrounding context. For example:

• What information do you have?
• What are the unknown factors?
• Why is it necessary to solve the problem?

But the heart of the process involves exploring other solutions and evaluating their relevance to the problem at hand:

• Have you seen this problem in a slightly different form?
• Suppose you find a problem similar to yours that has already been resolved. Can you use the same method?
• What have others done?

It's this focus on foundational problems that piqued my interest. When I first came across this methodology, I had just begun working with AI at Google on its TensorFlow project and regularly felt baffled and hopeless about how to proceed. In the intervening years, I've returned to these questions as an establishing process for designing for AI, with a particular focus on how the early work on graphical user interfaces (GUIs) and human-computer interaction (HCI) laid a useful foundation for thinking about designing for the new technology.
The field of software design emerged as computing expanded beyond its original specialist audience. Suddenly, the people building systems and the people using them were no longer the same, which meant a voice was needed to advocate for the user's experience. Better user experiences opened up broader adoption and new markets, allowing technology companies to compete not just on capability but on accessibility. This transformed computers from tools for experts into tools for everyone. But in the AI era, where products compete primarily on model performance and technical benchmarks, the designer's strategic and visual skill can feel devalued relative to the need for faster inference, larger context windows, and better accuracy scores.

As product decisions around AI are increasingly made by technical teams and influenced by competitive pressures, designers are at risk of losing their ability to advocate for the user. This isn't to say engineers aren't capable of making good user experience decisions, but that appointing a voice to challenge assumptions and embed good practices is how we arrived at usable software in the first place.

It would be easy to say in response that design isn't the arbiter of this technical territory; that, as Henry Ford said, we're in danger of inventing a faster horse when we should be inventing the car. But in this book I emphasize that designers are vitally positioned to understand users' needs when working with AI: they should help everyone focus on the principles of the task users are seeking to accomplish. The user isn't thinking about horses or cars—they just seek to go from point A to point B as quickly and efficiently as possible, and to do that they need solutions they couldn't have imagined themselves.

Who Should Read This Book

This book is for product designers.
Whether you're a few years into your career or have been practicing for decades, if you've found yourself beginning an AI project and wondering where to start, this book will give you the grounding you need. You might be a designer who has spent years refining your craft around deterministic systems, where buttons do what they say and outputs are predictable, and you're now being asked to design for something that feels fundamentally slippery. You might be earlier in your career, entering the field at a moment when AI capabilities are already woven into the products you're expected to ship, and you have little guidance on how to think about them differently from traditional features. Either way, my goal is for this book to meet you where you are.

I've written with the assumption that you have limited technical knowledge of how AI systems work. You don't need to understand transformers, training data, or model architectures to design excellent AI interfaces, but you do need a working mental model of what these systems can and cannot do, and how their capabilities translate into user-facing experiences. We'll build that understanding together, drawing on the computing patterns you already know well.
If you're a researcher, engineer, or product manager curious about how design thinking applies to AI, you're welcome here too. The frameworks and patterns in this book translate across disciplines, and you may find that a designer's lens offers a useful complement to your existing expertise.

Conventions Used in This Book

The following typographical conventions are used in this book:

Italic
Indicates new terms, URLs, email addresses, filenames, and file extensions.

Constant width
Used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.

Constant width bold
Shows commands or other text that should be typed literally by the user.

Constant width italic
Shows text that should be replaced with user-supplied values or by values determined by context.

This element signifies a tip or suggestion.

This element signifies a general note.

This element indicates a warning or caution.
O'Reilly Online Learning

For more than 40 years, O'Reilly Media has provided technology and business training, knowledge, and insight to help companies succeed.

Our unique network of experts and innovators share their knowledge and expertise through books, articles, and our online learning platform. O'Reilly's online learning platform gives you on-demand access to live training courses, in-depth learning paths, interactive coding environments, and a vast collection of text and video from O'Reilly and 200+ other publishers. For more information, visit https://oreilly.com.

How to Contact Us

Please address comments and questions concerning this book to the publisher:

O'Reilly Media, Inc.
141 Stony Circle, Suite 195
Santa Rosa, CA 95401
800-889-8969 (in the United States or Canada)
707-827-7019 (international or local)
707-829-0104 (fax)
support@oreilly.com
https://oreilly.com/about/contact.html

We have a web page for this book, where we list errata and any additional information. You can access this page at https://oreil.ly/designing-AI-interfaces.

For news and information about our books and courses, visit https://oreilly.com.

Find us on LinkedIn: https://linkedin.com/company/oreilly-media.

Watch us on YouTube: https://youtube.com/oreillymedia.

Acknowledgements

This curious, complicated book owes an enormous debt of gratitude to the many supporters who helped in its creation.

Firstly, to my wonderful partner and fiancé, Fred Benenson, who instilled in me the confidence to pursue this work, thank you. Your technical skill, deep intelligence, and constant encouragement helped this project feel not only doable but worth the squeeze, as you might say—thank you for your love and support.
To the team who made this book possible, I am so grateful for your support and help along the way. To Mike Loukides, thank you for your early support and interest in the original idea. To Nicole Butterfield, for your persistent and dogged belief in this title, especially at moments when it was still finding its shape. To Corbin Collins, for your quick-eyed editing, unending patience, and steady reassurance throughout the process. To Aleeya Rahman and Josh Olejarz for their support bringing the book over the final production hurdles.

To my team of technical editors, Hannah Gutkauf, David Evans, Rajeshwari Ganesan, and Dan Saffer, who kept this book from slipping into pure hallucination, thank you. It is deeply improved by the care with which you shared your knowledge and time.

To my parents, Alison and Alan, who learned far more about AI than they might ever have hoped to know, thank you for your curiosity, your good humor, and your willingness to listen as I talked through half-formed ideas aloud.

To my colleagues at Instrument who worked alongside me on that first AI project, thank you. The atmosphere of learning you created framed LLMs as a creative, exploratory technology and sparked a curiosity that stayed with me long after the project ended. In particular, I want to thank Nishat Akhtar, David Brewer, Meaza Abate, and Arthur Lender.

Finally, to my dear friend Erin Allen, your technical mind and kind heart carried me through many of the most challenging parts of this book. I am endlessly grateful for your steadiness, your clarity, and your friendship.