Generative AI with LangChain, Second Edition
Authors: Ben Auffarth, Leonid Kuligin
Gain a solid foundation in LangChain, agentic AI, and LangGraph, and learn to build production-ready systems with multi-agent architectures, advanced RAG pipelines, Tree of Thought reasoning, agent handoffs, and fine-grained error handling.
Generative AI with LangChain
Second Edition

Build production-ready LLM applications and advanced agents using Python, LangChain, and LangGraph

Ben Auffarth
Leonid Kuligin
Copyright © 2025 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Portfolio Director: Gebin George
Relationship Lead: Ali Abidi
Project Manager: Prajakta Naik
Content Engineer: Tanya D’cruz
Technical Editor: Irfa Ansari
Copy Editor: Safis Editing
Indexer: Manju Arasan
Proofreader: Tanya D’cruz
Production Designer: Ajay Patule
Growth Lead: Nimisha Dua

First published: December 2023
Second edition: May 2025
Production reference: 1190525

Published by Packt Publishing Ltd.
Grosvenor House, 11 St Paul’s Square, Birmingham, B3 1RB, UK.

ISBN 978-1-83702-201-4
www.packtpub.com
To the mentors who guided me throughout my life—especially Tony Lindeberg, whose personal integrity and perseverance are a tremendous source of inspiration—and to my son, Nicholas, and my partner, Diane.

—Ben Auffarth

To my wife, Ksenia, whose unwavering love and optimism have been my constant support over all these years; to my mother-in-law, Tatyana, whose belief in me—even in my craziest endeavors—has been an incredible source of strength; and to my kids, Matvey and Milena: I hope you’ll read it one day.

—Leonid Kuligin

Contributors

About the authors

Dr. Ben Auffarth, PhD, is an AI implementation expert with more than 15 years of work experience. As the founder of Chelsea AI Ventures, he specializes in helping small and medium enterprises implement enterprise-grade AI solutions that deliver tangible ROI. His systems have prevented millions in fraud losses and process transactions at sub-300ms latency. With a background in computational neuroscience, Ben brings rare depth to practical AI
applications—from supercomputing brain models to production systems that combine technical excellence with business strategy.

First and foremost, I want to thank my co-author, Leo—a superstar coder—who’s been patient throughout and always ready when advice was needed. This book also wouldn’t be what it is without the people at Packt, especially Tanya, our editor, who offered sparks of insight and encouraging words whenever needed. Finally, the reviewers were very helpful and generous with their critiques, making sure we didn’t miss anything. Any errors or oversights that remain are entirely mine.

Leonid Kuligin is a staff AI engineer at Google Cloud, working on generative AI and classical machine learning solutions, such as demand forecasting and optimization problems. Leonid is one of the key maintainers of Google Cloud integrations on LangChain and a visiting lecturer at CDTM (a joint institution of TUM and LMU). Prior to Google, Leonid gained more than 20 years of experience building B2C and B2B applications based on complex machine learning and data processing solutions—such as search, maps, and investment management—in German, Russian, and U.S. technology, financial, and retail companies.
I want to express my sincere gratitude to all my colleagues at Google with whom I had the pleasure and joy of working, and who supported me during the creation of this book and many other endeavors. Special thanks go to Max Tschochohei, Lucio Floretta, and Thomas Cliett. My appreciation also goes to the entire LangChain community, especially Harrison Chase, whose continuous development of the LangChain framework made my work as an engineer significantly easier.

About the reviewers

Max Tschochohei advises enterprise customers on how to realize their AI and ML ambitions on Google Cloud. As an engineering manager in Google Cloud Consulting, he leads teams of AI engineers on mission-critical customer projects. While his work spans the full range of AI products and solutions in the Google Cloud portfolio, he is particularly interested in agentic systems, machine learning operations, and healthcare applications of AI. Before joining Google in Munich, Max spent several years as a consultant, first with KPMG and later with the Boston Consulting Group. He also led the digital transformation of NTUC Enterprise, a Singapore government organization. Max holds a PhD in Economics from Coventry University.

Rany ElHousieny is an AI Solutions Architect and AI Engineering Manager with over two decades of experience
in AI, NLP, and ML. Throughout his career, he has focused on the development and deployment of AI models, authoring multiple articles on AI systems architecture and ethical AI deployment. He has led groundbreaking projects at companies like Microsoft, where he spearheaded advancements in NLP and the Language Understanding Intelligent Service (LUIS). Currently, he plays a pivotal role at Clearwater Analytics, driving innovation in generative AI and AI-driven financial and investment management solutions.

Nicolas Bievre is a Machine Learning Engineer at Meta with extensive experience in AI, recommender systems, LLMs, and generative AI, applied to advertising and healthcare. He has held key AI leadership roles at Meta and PayPal, designing and implementing large-scale recommender systems used to personalize content for hundreds of millions of users. He graduated from Stanford University, where he published peer-reviewed research in leading AI and bioinformatics journals. Internationally recognized for his contributions, Nicolas has received awards such as the “Core Ads Growth Privacy” Award and the “Outre-Mer Outstanding Talent” Award. He also serves as an AI consultant to the French government and as a reviewer for top AI organizations.

Join our communities on Discord and Reddit
Have questions about the book or want to contribute to discussions on Generative AI and LLMs? Join our Discord server at https://packt.link/4Bbd9 and our Reddit channel at https://packt.link/wcYOQ to connect, share, and collaborate with like-minded AI professionals.
Preface

With Large Language Models (LLMs) now powering everything from customer service chatbots to sophisticated code generation systems, generative AI has rapidly transformed from a research lab curiosity to a production workhorse. Yet a significant gap exists between experimental prototypes and production-ready AI applications. According to industry research, while enthusiasm for generative AI is high, over 30% of projects fail to move beyond proof of concept due to reliability issues, evaluation complexity, and integration challenges. The LangChain framework has emerged as an essential bridge across this divide, providing developers with the tools to build robust, scalable, and practical LLM applications.

This book is designed to help you close that gap. It’s your practical guide to building LLM applications that actually work in production environments. We focus on real-world problems that derail most generative AI projects: inconsistent outputs, difficult debugging, fragile tool integrations, and scaling bottlenecks. Through hands-on examples and tested patterns using LangChain, LangGraph, and other tools in the growing generative AI ecosystem, you’ll learn to build systems that your
organization can confidently deploy and maintain to solve real problems.
Who this book is for

This book is primarily written for software developers with basic Python knowledge who want to build production-ready applications using LLMs. You don’t need extensive machine learning expertise, but some familiarity with AI concepts will help you move more quickly through the material. By the end of the book, you’ll be confidently implementing advanced LLM architectures that would otherwise require specialized AI knowledge.

If you’re a data scientist transitioning into LLM application development, you’ll find the practical implementation patterns especially valuable, as they bridge the gap between experimental notebooks and deployable systems. The book’s structured approach to RAG implementation, evaluation frameworks, and observability practices addresses the common frustrations you’ve likely encountered when trying to scale promising prototypes into reliable services.

For technical decision-makers evaluating LLM technologies within their organizations, this book offers strategic insight into successful LLM project implementations. You’ll understand the architectural patterns that differentiate experimental systems from production-ready ones, learn to identify high-value use cases, and discover how to avoid the integration and scaling issues that cause most projects to fail. The book provides clear criteria for evaluating
implementation approaches and making informed technology decisions.
What this book covers

Chapter 1, The Rise of Generative AI: From Language Models to Agents, introduces the modern LLM landscape and positions LangChain as the framework for building production-ready AI applications. You’ll learn about the practical limitations of basic LLMs and how frameworks like LangChain help with standardization and overcoming these challenges. This foundation will help you make informed decisions about which agent technologies to implement for your specific use cases.

Chapter 2, First Steps with LangChain, gets you building immediately with practical, hands-on examples. You’ll set up a proper development environment, understand LangChain’s core components (model interfaces, prompts, templates, and LCEL), and create simple chains. The chapter shows you how to run both cloud-based and local models, giving you options to balance cost, privacy, and performance based on your project needs. You’ll also explore simple multimodal applications that combine text with visual understanding. These fundamentals provide the building blocks for increasingly sophisticated AI applications.

Chapter 3, Building Workflows with LangGraph, dives into creating complex workflows with LangChain and LangGraph. You’ll learn to build workflows with nodes and edges, including conditional edges for branching based on
state. The chapter covers output parsing, error handling, prompt engineering techniques (zero-shot and dynamic few-shot prompting), and working with long contexts using Map-Reduce patterns. You’ll also implement memory mechanisms for managing chat history. These skills address why many LLM applications fail in real-world conditions and give you the tools to build systems that perform reliably.

Chapter 4, Building Intelligent RAG Systems, addresses the “hallucination problem” by grounding LLMs in reliable external knowledge. You’ll master vector stores, document processing, and retrieval strategies that improve response accuracy. The chapter’s corporate documentation chatbot project demonstrates how to implement enterprise-grade RAG pipelines that maintain consistency and compliance—a capability that directly addresses data quality concerns cited in industry surveys. The troubleshooting section covers seven common RAG failure points and provides practical solutions for each.

Chapter 5, Building Intelligent Agents, tackles tool use fragility—identified as a core bottleneck in agent autonomy. You’ll implement the ReACT pattern to improve agent reasoning and decision-making, develop robust custom tools, and build error-resilient tool calling processes. Through practical examples like generating structured outputs and building a research agent, you’ll understand what agents are and implement your first plan-and-solve
agent with LangGraph, setting the stage for more advanced agent architectures.

Chapter 6, Advanced Applications and Multi-Agent Systems, covers architectural patterns for agentic AI applications. You’ll explore multi-agent architectures and ways to organize communication between agents, implementing an advanced agent with self-reflection that uses tools to answer complex questions. The chapter also covers LangGraph streaming, advanced control flows, adaptive systems with humans in the loop, and the Tree-of-Thoughts pattern. You’ll learn about memory mechanisms in LangChain and LangGraph, including caches and stores, equipping you to create systems capable of tackling problems too complex for single-agent approaches—a key capability of production-ready systems.

Chapter 7, Software Development and Data Analysis Agents, demonstrates how natural language has become a powerful interface for programming and data analysis. You’ll implement LLM-based solutions for code generation, code retrieval with RAG, and documentation search. These examples show how to integrate LLM agents into existing development and data workflows, illustrating how they complement rather than replace traditional programming skills.

Chapter 8, Evaluation and Testing, outlines methodologies for assessing LLM applications before production deployment. You’ll learn about system-level evaluation,
evaluation-driven design, and both offline and online methods. The chapter provides practical examples for implementing correctness evaluation using exact matches and LLM-as-a-judge approaches and demonstrates tools like LangSmith for comprehensive testing and monitoring. These techniques directly increase reliability and help justify the business value of your LLM applications.

Chapter 9, Observability and Production Deployment, provides guidelines for deploying LLM applications into production, focusing on system design, scaling strategies, monitoring, and ensuring high availability. The chapter covers logging, API design, cost optimization, and redundancy strategies specific to LLMs. You’ll explore the Model Context Protocol (MCP) and learn how to implement observability practices that address the unique challenges of deploying generative AI systems. The practical deployment patterns in this chapter help you avoid common pitfalls that prevent many LLM projects from reaching production.

Chapter 10, The Future of LLM Applications, looks ahead to emerging trends, evolving architectures, and ethical considerations in generative AI. The chapter explores new technologies, market developments, potential societal impacts, and guidelines for responsible development. You’ll gain insight into how the field is likely to evolve and how to position your skills and applications for future advancements, completing your journey from basic LLM
understanding to building and deploying production-ready, future-proof AI systems.

To get the most out of this book

Before diving in, it’s helpful to ensure you have a few things in place to make the most of your learning experience. This book is designed to be hands-on and practical, so having the right environment, tools, and mindset will help you follow along smoothly and get the full value from each chapter. Here’s what we recommend:

Environment requirements: Set up a development environment with Python 3.10+ on any major operating system (Windows, macOS, or Linux). All code examples are cross-platform compatible and thoroughly tested.

API access (optional but recommended): While we demonstrate using open-source models that can run locally, having access to commercial API providers like OpenAI, Anthropic, or other LLM providers will allow you to work with more powerful models. Many examples include both local and API-based approaches, so you can choose based on your budget and performance needs.

Learning approach: We recommend typing the code yourself rather than copying and pasting. This hands-on practice reinforces learning and encourages
experimentation. Each chapter builds on concepts introduced earlier, so working through them sequentially will give you the strongest foundation.

Background knowledge: Basic Python proficiency is required, but no prior experience with machine learning or LLMs is necessary. We explain key concepts as they arise. If you’re already familiar with LLMs, you can focus on the implementation patterns and production-readiness aspects that distinguish this book.

Software/Hardware covered in the book

Python 3.10+
LangChain 0.3.1+
LangGraph 0.2.10+
Various LLM providers (Anthropic, Google, OpenAI, local models)

You’ll find detailed guidance on environment setup in Chapter 1, along with clear explanations and step-by-step instructions to help you get started. We strongly recommend following these setup steps as outlined—given the fast-moving nature of LangChain, LangGraph, and the broader ecosystem, skipping them might lead to avoidable issues down the line.

Download the example code files
The code bundle for the book is hosted on GitHub at https://github.com/benman1/generative_ai_with_langchain. We recommend typing the code yourself or using the repository as you progress through the chapters. If there’s an update to the code, it will be updated in the GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://packt.link/gbp/9781837022014.

Conventions used

There are a number of text conventions used throughout this book.

CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. For example: “Let’s also restore from the initial checkpoint for thread-a. We’ll see that we start with an empty history:”

A block of code is set as follows: