MANNING
Chi Wang
Donald Szeto
Code lab by Yan Xue
Foreword by Silvio Savarese and Caiming Xiong
A guide for software engineers
[Figure: The reference deep learning system architecture. (A) Deep learning system public API, (B) dataset manager, (C) model trainer, (D) model serving, (E) metadata and artifacts store, (F) workflow orchestration, (G) interactive data science environment (Jupyter Notebook). Surrounding actors and sources include AI applications, a web UI, data collectors, other data warehouses, data scientists, data engineers, researchers, product managers, and MLOps engineers.]
Designing Deep Learning Systems
A GUIDE FOR SOFTWARE ENGINEERS
CHI WANG AND DONALD SZETO
CODE LAB BY YAN XUE
FOREWORD BY SILVIO SAVARESE AND CAIMING XIONG
MANNING
SHELTER ISLAND
For online information and ordering of this and other Manning books, please visit www.manning.com. The publisher offers discounts on this book when ordered in quantity. For more information, please contact

Special Sales Department
Manning Publications Co.
20 Baldwin Road
PO Box 761
Shelter Island, NY 11964
Email: orders@manning.com

©2023 by Manning Publications Co. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by means electronic, mechanical, photocopying, or otherwise, without prior written permission of the publisher.

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in the book, and Manning Publications was aware of a trademark claim, the designations have been printed in initial caps or all caps.

Recognizing the importance of preserving what has been written, it is Manning’s policy to have the books we publish printed on acid-free paper, and we exert our best efforts to that end. Recognizing also our responsibility to conserve the resources of our planet, Manning books are printed on paper that is at least 15 percent recycled and processed without the use of elemental chlorine.

The author and publisher have made every effort to ensure that the information in this book was correct at press time. The author and publisher do not assume and hereby disclaim any liability to any party for any loss, damage, or disruption caused by errors or omissions, whether such errors or omissions result from negligence, accident, or any other cause, or from any usage of the information herein.

Manning Publications Co.
20 Baldwin Road
PO Box 761
Shelter Island, NY 11964

Development editor: Frances Lefkowitz
Technical development editor: Ninoslav Čerkez
Review editor: Adriana Sabo
Production editor: Andy Marinkovich
Copy editor: Kristen Bettcher
Proofreader: Alisa Larson
Technical proofreader: Al Krinker
Typesetter: Dennis Dalinnik
Cover designer: Marija Tudor

ISBN: 9781633439863
Printed in the United States of America
brief contents
1 ■ An introduction to deep learning systems 1
2 ■ Dataset management service 28
3 ■ Model training service 72
4 ■ Distributed training 103
5 ■ Hyperparameter optimization service 134
6 ■ Model serving design 159
7 ■ Model serving in practice 179
8 ■ Metadata and artifact store 228
9 ■ Workflow orchestration 246
10 ■ Path to production 271
contents
foreword ix
preface xi
acknowledgments xiii
about this book xv
about the authors xix
about the cover illustration xx

1 An introduction to deep learning systems 1
1.1 The deep learning development cycle 3
    Phases in the deep learning product development cycle 5 ■ Roles in the development cycle 10 ■ Deep learning development cycle walk-through 12 ■ Scaling project development 13
1.2 Deep learning system design overview 13
    Reference system architecture 14 ■ Key components 16 ■ Key user scenarios 20 ■ Derive your own design 22 ■ Building components on top of Kubernetes 24
1.3 Building a deep learning system vs. developing a model 26
2 Dataset management service 28
2.1 Understanding dataset management service 30
    Why deep learning systems need dataset management 30 ■ Dataset management design principles 34 ■ The paradoxical character of datasets 35
2.2 Touring a sample dataset management service 37
    Playing with the sample service 37 ■ Users, user scenarios, and the big picture 42 ■ Data ingestion API 44 ■ Training dataset fetching API 48 ■ Internal dataset storage 54 ■ Data schemas 56 ■ Adding new dataset type (IMAGE_CLASS) 59 ■ Service design recap 60
2.3 Open source approaches 61
    Delta Lake and Petastorm with Apache Spark family 61 ■ Pachyderm with cloud object storage 67

3 Model training service 72
3.1 Model training service: Design overview 73
    Why use a service for model training? 74 ■ Training service design principles 76
3.2 Deep learning training code pattern 77
    Model training workflow 77 ■ Dockerize model training code as a black box 79
3.3 A sample model training service 79
    Play with the service 80 ■ Service design overview 81 ■ Training service API 83 ■ Launching a new training job 84 ■ Updating and fetching job status 88 ■ The intent classification model training code 89 ■ Training job management 90 ■ Troubleshooting metrics 92 ■ Supporting new algorithm or new version 92
3.4 Kubeflow training operators: An open source approach 93
    Kubeflow training operators 94 ■ Kubernetes operator/controller pattern 94 ■ Kubeflow training operator design 96 ■ How to use Kubeflow training operators 97 ■ How to integrate these operators into an existing system 98
3.5 When to use the public cloud 99
    When to use a public cloud solution 100 ■ When to build your own training service 100
4 Distributed training 103
4.1 Types of distributed training methods 104
4.2 Data parallelism 105
    Understanding data parallelism 105 ■ Multiworker training challenges 107 ■ Writing distributed training (data parallelism) code for different training frameworks 109 ■ Engineering effort in data parallel–distributed training 113
4.3 A sample service supporting data parallel–distributed training 115
    Service overview 115 ■ Playing with the service 117 ■ Launching training jobs 118 ■ Updating and fetching the job status 122 ■ Converting the training code to run distributedly 123 ■ Improvements 124
4.4 Training large models that can’t load on one GPU 124
    Traditional methods: Memory saving 124 ■ Pipeline model parallelism 126 ■ How software engineers can support pipeline parallelism 131

5 Hyperparameter optimization service 134
5.1 Understanding hyperparameters 135
    What is a hyperparameter? 135 ■ Why are hyperparameters important? 136
5.2 Understanding hyperparameter optimization 137
    What is HPO? 137 ■ Popular HPO algorithms 140 ■ Common automatic HPO approaches 145
5.3 Designing an HPO service 147
    HPO design principles 147 ■ A general HPO service design 148
5.4 Open source HPO libraries 150
    Hyperopt 151 ■ Optuna 153 ■ Ray Tune 155 ■ Next steps 158

6 Model serving design 159
6.1 Explaining model serving 160
    What is a machine learning model? 161 ■ Model prediction and inference 162 ■ What is model serving? 163 ■ Model serving challenges 164 ■ Model serving terminology 165
6.2 Common model serving strategies 166
    Direct model embedding 166 ■ Model service 166 ■ Model server 167
6.3 Designing a prediction service 168
    Single model application 169 ■ Multitenant application 172 ■ Supporting multiple applications in one system 174 ■ Common prediction service requirements 177

7 Model serving in practice 179
7.1 A model service sample 180
    Play with the service 180 ■ Service design 181 ■ The frontend service 183 ■ Intent classification predictor 188 ■ Model eviction 194
7.2 TorchServe model server sample 194
    Playing with the service 194 ■ Service design 195 ■ The frontend service 196 ■ TorchServe backend 197 ■ TorchServe API 197 ■ TorchServe model files 199 ■ Scaling up in Kubernetes 203
7.3 Model server vs. model service 204
7.4 Touring open source model serving tools 205
    TensorFlow Serving 206 ■ TorchServe 208 ■ Triton Inference Server 211 ■ KServe and other tools 215 ■ Integrating a serving tool into an existing serving system 217
7.5 Releasing models 218
    Registering a model 220 ■ Loading an arbitrary version of a model in real time with a prediction service 221 ■ Releasing the model by updating the default model version 222
7.6 Postproduction model monitoring 224
    Metric collection and quality gate 225 ■ Metrics to collect 225

8 Metadata and artifact store 228
8.1 Introducing artifacts 229
8.2 Metadata in a deep learning context 229
    Common metadata categories 230 ■ Why manage metadata? 232
8.3 Designing a metadata and artifacts store 235
    Design principles 235 ■ A general metadata and artifact store design proposal 236
8.4 Open source solutions 239
    ML Metadata 239 ■ MLflow 242 ■ MLflow vs. MLMD 245
9 Workflow orchestration 246
9.1 Introducing workflow orchestration 247
    What is workflow? 247 ■ What is workflow orchestration? 248 ■ The challenges for using workflow orchestration in deep learning 250
9.2 Designing a workflow orchestration system 252
    User scenarios 252 ■ A general orchestration system design 254 ■ Workflow orchestration design principles 256
9.3 Touring open source workflow orchestration systems 257
    Airflow 258 ■ Argo Workflows 260 ■ Metaflow 265 ■ When to use 269

10 Path to production 271
10.1 Preparing for productionization 274
    Research 276 ■ Prototyping 277 ■ Key takeaways 278
10.2 Model productionization 278
    Code componentization 280 ■ Code packaging 281 ■ Code registration 281 ■ Training workflow setup 281 ■ Model inferences 283 ■ Product integration 284
10.3 Model deployment strategies 285
    Canary deployment 285 ■ Blue-green deployment 285 ■ Multi-armed bandit deployment 286

appendix A A “hello world” deep learning system 288
appendix B Survey of existing solutions 298
appendix C Creating an HPO service with Kubeflow Katib 309
index 329
foreword

A deep learning system can be considered efficient only if it bridges two different worlds: research and prototyping on one side, and production operations on the other. Teams that design such systems must be able to communicate with practitioners across these two worlds and work with the different sets of requirements and constraints that come from each. This requires a principled understanding of how the components in deep learning systems are designed and how they are expected to work in tandem. Very little of the existing literature covers this aspect of deep learning engineering.

This information gap becomes an issue when junior software engineers are onboarded and expected to become effective deep learning engineers. Over the years, engineering teams have filled this void by using their acquired experience and ferreting out what they need to know from the literature. Their work has helped traditional software engineers build, design, and extend deep learning systems in a relatively short amount of time.

So it was with great excitement that I learned that Chi and Donald, both of whom have led deep learning engineering teams, have taken the very important initiative of consolidating this knowledge and sharing it in the form of a book. We are long overdue for a comprehensive book on building systems that support bringing deep learning from research and prototyping to production. Designing Deep Learning Systems finally fills this need.

The book starts with a high-level introduction describing what a deep learning system is and does. Subsequent chapters discuss each system component in detail and provide motivation and insights about the pros and cons of various design choices.
Each chapter ends with an analysis that helps readers assess the most appropriate and relevant options for their own use cases. The authors conclude with an in-depth discussion, pulling from all previous chapters, on the challenging path of going from research and prototyping to production. And to help engineers put all these ideas into practice, they have created a sample deep learning system, with fully functional code, to illustrate core concepts and offer a taste to those who are just entering the field.

Overall, readers will find this book easy to read and navigate while bringing their understanding of how to orchestrate, design, and implement deep learning systems to a whole new level. Practitioners at all levels of expertise who are interested in designing effective deep learning systems will appreciate this book as an invaluable resource and reference. They will read it once to get the big picture and then return to it again and again when building their systems, designing their components, and making crucial choices to satisfy all the teams that use the systems.

—SILVIO SAVARESE, EVP, Chief Scientist, Salesforce
—CAIMING XIONG, VP, Salesforce
preface

A little more than a decade ago, we had the privilege of building some early end user–facing product features that were powered by artificial intelligence. It was a huge undertaking. Collecting and organizing data that would be fit for model training was not a usual practice at that time. Few machine learning algorithms were packaged as ready-to-use libraries. Performing experiments required managing runs manually and building out custom workflows and visualizations. Custom servers were made to serve each type of model. Outside of resource-intensive tech companies, almost every single new AI-powered product feature was built from scratch. It was a far-reaching dream that intelligent applications would one day become a commodity.

After working with a few AI applications, we realized that we had been repeating a similar ritual each time, and it seemed to us that it made more sense to design a systematic way, starting with prototyping, for delivering AI product features to production. The fruit of this effort was PredictionIO, an open source suite of framework software that put together state-of-the-art software components for data collection and retrieval, model training, and model serving. Fully customizable through its APIs and deployable as services with just a few commands, it helped shorten the time required at every stage, from running data science experiments to training and deploying production-ready models. We were thrilled to learn that developers around the world were able to use PredictionIO to make their own AI-powered applications, resulting in some amazing boosts to their businesses. PredictionIO was later acquired by Salesforce to tackle a similar problem on an even larger scale.
By the time we decided to write this book, the industry was thriving with a healthy AI software ecosystem. Many algorithms and tools have become available to tackle different use cases. Some cloud vendors such as Amazon, Google, and Microsoft even provide complete, hosted systems that make it possible for teams to collaborate on experimentation, prototyping, and production deployments at one centralized location. No matter what your goal is, you now have many choices and numerous ways to put them together.

Still, as we work with teams to deliver deep learning–powered product features, there have been some recurring questions. Why is our deep learning system designed the way it is? Is this the best design for other specific use cases? We noticed that junior software engineers were the ones most often asking these questions, and we interviewed a few of them to find out why. They revealed that their conventional software engineering training did not prepare them to work effectively with deep learning systems. And when they looked for learning resources, they found only scant and scattered information on specific system components, with hardly any resources discussing the fundamentals of the software components, why they were put together the way they were, and how they worked together to form a complete system.

To address this problem, we started building a knowledge base, which eventually evolved into manual-like learning material explaining the design principles of each system component, the pros and cons of the design decisions, and the rationale from both technical and product perspectives. We were told that our material helped to quickly ramp up new teammates and allowed traditional software engineers with no prior experience in building deep learning systems to get up to speed. We decided to share this learning material with a much larger audience, in the form of a book. We contacted Manning, and the rest was history.
acknowledgments

Writing a book indeed takes a lot of solitary effort, but this book would not have been possible without the help of the following individuals.

Working with different teams at the Salesforce Einstein groups (Einstein platform, E.ai, Hawking) formed the basis of a large part of this book. These brilliant and influential teammates include (in alphabetical order) Sara Asher, Jimmy Au, John Ball, Anya Bida, Gene Becker, Yateesh Bhagavan, Jackson Chung, Himay Desai, Mehmet Ezbiderli, Vitaly Gordon, Indira Iyer, Arpeet Kale, Sriram Krishnan, Annie Lange, Chan Lee, Eli Levine, Daphne Liu, Leah McGuire, Ivaylo Mihov, Richard Pack, Henry Saputra, Raghu Setty, Shaun Senecal, Karl Skucha, Magnus Thorne, Ted Tuttle, Ian Varley, Yan Yang, Marcin Zieminski, and Leo Zhu.

We also want to take this opportunity to thank our development editor, Frances Lefkowitz. She is not only an excellent editor who provides great writing guidance and inline editing but also a great mentor who guided us throughout the entire book-writing process. This book wouldn’t be of its current quality or completed as planned without her.

Our thanks go out to the Manning team for their guidance throughout the book’s writing process. We really appreciate the opportunity to have readers’ opinions in the early stages of the book’s writing through the Manning Early Access Program (MEAP). To all the reviewers—Alex Blanc, Amit Kumar, Ayush Tomar, Bhagvan Kommadi, Dinkar Juyal, Esref Durna, Gaurav Sood, Guillaume Alleon, Hammad Arshad, Jamie Shaffer, Japneet Singh, Jeremy Chen, João Dinis Ferreira, Katia Patkin, Keith Kim, Larry Cai, Maria Ana, Mikael Dautrey, Nick Decroos, Nicole Königstein, Noah Flynn,
Oliver Korten, Omar El Malak, Pranjal Ranjan, Ravi Suresh Mashru, Said Ech-Chadi, Sandeep D., Sanket Sharma, Satej Kumar Sahu, Sayak Paul, Shweta Joshi, Simone Sguazza, Sriram Macharla, Sumit Bhattacharyya, Ursin Stauss, Vidhya Vinay, and Wei Luo—your suggestions helped make this a better book.

I would like to thank my wife Pei Wu for her unconditional love and tremendous support throughout the process of writing this book. During the tough times of the Covid pandemic, Pei remained a peaceful and quiet corner that allowed the book to be composed amid a busy family with two lovely young children—Catherine and Tiancheng.

Also, I would like to extend my gratitude to Yan Xue, a talented 10X developer who wrote nearly the entire code lab. His help makes the code lab not only high quality but also easy to learn. Yan’s wife, Dong, supported him wholeheartedly so Yan could concentrate on the book lab.

The other person I want to thank is Dianne Siebold, a talented and experienced tech writer at Salesforce. Dianne inspired me with her own writing experiences and encouraged me to begin writing.

— Chi Wang

Co-founding PredictionIO (later acquired by Salesforce) has taught me invaluable lessons about building open source machine learning developer products. This adventurous and rewarding journey would not be possible without courageous souls who placed immense trust in one another. They are (in alphabetical order) Kenneth Chan, Tom Chan, Pat Ferrel, Isabelle Lee, Paul Li, Alex Merritt, Thomas Stone, Marco Vivero, and Justin Yip.

Simon Chan deserves a special mention. Chan co-founded PredictionIO, and I also had the honor to work with and learn from him in his previous entrepreneurial endeavors. He was the first person who officially introduced programming to me when we were both attending the same secondary school (Wah Yan College, Kowloon) in Hong Kong. Other inspiring figures from the school include (in alphabetical order) Donald Chan, Jason Chan, Hamlet Chu, Kah Kuen Fu, Jeffrey Hau, Francis Kong, Eric Lau, Kam Lau, Raylex Lee, Kevin Lei, Danny Shing, Norman So, Steven Tung, and Lobo Wong.

I am extremely grateful to my parents and my brother Ronald. They provided me with early exposure to computers. Their perpetual support played a vital role in my formative years as I aspired to become a computer engineer.

My son, Spencer, is the walking proof of why biological deep neural networks are the most amazing things in the world. He is a wonderful gift who shows me every day that I can always grow and become better.

Words cannot express how much my wife, Vicky, means to me. She can always bring out the best in me so that I can keep moving forward during difficult moments. She is the best companion that I could ever ask for.

— Donald Szeto
about this book

This book aims to equip engineers to design, build, or set up effective machine learning systems and to tailor those systems to whatever needs and situations they may encounter. The systems they develop will facilitate, automate, and expedite the development of machine learning (deep learning, in particular) projects across a variety of domains.

In the deep learning field, it is the models that get all the attention. Perhaps rightly so, when you consider that new applications developed from those models are coming onto the market regularly—applications that make consumers excited, such as human-detecting security cameras, virtual characters in internet video games who behave like real humans, a program that can write code to solve arbitrary problems posed to it, and advanced driver assistance systems that can one day lead to fully autonomous and self-driving cars. Within a very short period of time, the deep learning field has become filled with immense excitement and promising potential waiting to be fully realized.

But the model does not act alone. To bring a product or service to fruition, a model needs to be situated within a system or platform (we use these terms interchangeably) that supports the model with various services and stores. It needs, for instance, an API, a dataset manager, and storage for artifacts and metadata, among others. So behind every team of deep learning model developers is a team of non–deep learning developers creating the infrastructure that holds the model and all the other components.

The problem we have observed in the industry is that often the developers tasked with designing the deep learning system and components have only a cursory knowledge
of deep learning. They do not understand the specific requirements that deep learning places on system engineering, so they tend to follow generic approaches when building the system. For example, they might choose to abstract out all work related to deep learning model development to the data scientist and focus only on automation. So the system they build relies on a traditional job scheduling system or business intelligence data analysis system, which is optimized neither for how deep learning training jobs are run nor for deep learning–specific data access patterns. As a result, the system is hard to use for model development, and model shipping velocity is slow. Essentially, engineers who lack a profound understanding of deep learning are being asked to build systems to support deep learning models. As a consequence, the systems they build are inefficient and ill suited to deep learning work.

Much has been written about deep learning model development from the data scientist’s point of view, covering data collection and dataset augmentation, writing training algorithms, and the like. But very few books, or even blogs, deal with the system and services that support all these deep learning activities.

In this book, we discuss building and designing deep learning systems from a software developer’s perspective. The approach is to first describe a typical deep learning system as a whole, including its major components and how they are connected; then we dive deep into each of the main components in a separate chapter. We begin every component chapter by discussing requirements. We then introduce design principles and sample services/code and, finally, evaluate open source solutions. Because we cannot cover every existing deep learning system (vendor or open source), we focus on discussing requirements and design principles (with examples) in the book. After learning these principles, trying the book’s sample services, and reading our discussion of open source options, we hope readers can conduct their own research to find what suits them best.

Who should read this book?

The primary audience of this book is software engineers (including recently graduated CS students) who want to quickly transition into deep learning system engineering, such as those who want to work on deep learning platforms or integrate some AI functionality—for example, model serving—into their products.

Data scientists, researchers, managers, and anyone else who uses machine learning to solve real-world problems will also find this book useful. Upon understanding the underlying infrastructure (or system), they will be equipped to provide precise feedback to the engineering team for improving the efficiency of the model development process.

This is an engineering book, and you don’t need a background in machine learning, but you should be familiar with basic computer science concepts and coding tools, such as microservices, gRPC, and Docker, to run the lab and understand the technical material. No matter your background, you can still benefit from the book’s nontechnical material to help you better understand how machine learning and deep learning systems work to bring products and services from ideas into production.
By reading this book, you will be able to understand how deep learning systems work and how to develop each component. You will also understand when to gather requirements from users, how to translate requirements into system component design choices, and how to integrate components to create a cohesive system that helps your users quickly develop and deliver deep learning features.

How this book is organized: A roadmap

There are 10 chapters and three appendixes (including one lab appendix) in this book. The first chapter explains what a deep learning project development cycle is and what a basic deep learning system looks like. The next chapters dive into each functional component of the reference deep learning system. Finally, the last chapter discusses how models are shipped to production. Appendix A contains a lab session that allows readers to try out the sample deep learning system.

Chapter 1 describes what a deep learning system is, the different stakeholders of the system, and how they interact with it to deliver deep learning features. We call this interaction the deep learning development cycle. Additionally, you will conceptualize a deep learning system, called a reference architecture, that contains all essential elements and can be adapted based on your requirements.

Chapters 2 to 9 cover each core component of the reference deep learning system architecture, such as the dataset management service, model training service, automatic hyperparameter optimization service, and workflow orchestration service. Chapter 10 describes how to take a final product from the research or prototyping stage and make it ready to be released to the public. Appendix A introduces the sample deep learning system and demonstrates the lab exercise, appendix B surveys existing solutions, and appendix C discusses Kubeflow Katib.

About the code

We believe the best way to learn is by doing, practicing, and experimenting. To demonstrate the design principles explained in this book and provide hands-on experience, we created a sample deep learning system and code lab. All the source code, setup instructions, and lab scripts of the sample deep learning system are available on GitHub (https://github.com/orca3/MiniAutoML). You can also obtain executable snippets of code from the liveBook (online) version of this book at https://livebook.manning.com/book/designing-deep-learning-systems and from the Manning website (www.manning.com).

The “hello world” lab (in appendix A) contains a complete, though simplified, mini deep learning system with the most essential components (dataset management, model training, and model serving). We suggest you try the “hello world” lab after reading the first chapter of the book, or at least before trying the sample services in later chapters. The lab also provides shell scripts and links to all the resources you need to get started.
Besides the code lab, this book contains many examples of source code in numbered listings and in line with normal text. In both cases, the source code is formatted in a fixed-width font like this to separate it from ordinary text. Sometimes code is also in bold to highlight code that has changed from previous steps in the chapter, such as when a new feature adds to an existing line of code.

In many cases, the original source code has been reformatted; we’ve added line breaks and reworked indentation to accommodate the available page space in the book. In rare cases, even this was not enough, and listings include line-continuation markers (➥). Additionally, comments in the source code have often been removed from the listings when the code is described in the text. Code annotations accompany many of the listings, highlighting important concepts.

liveBook discussion forum

Purchase of Designing Deep Learning Systems includes free access to liveBook, Manning’s online reading platform. Using liveBook’s exclusive discussion features, you can attach comments to the book globally or to specific sections or paragraphs. It’s a snap to make notes for yourself, ask and answer technical questions, and receive help from the author and other users. To access the forum, go to https://livebook.manning.com/book/designing-deep-learning-systems/discussion. You can also learn more about Manning’s forums and the rules of conduct at https://livebook.manning.com/discussion.

Manning’s commitment to our readers is to provide a venue where a meaningful dialogue between individual readers and between readers and the author can take place. It is not a commitment to any specific amount of participation on the part of the author, whose contribution to the forum remains voluntary (and unpaid). We suggest you try asking the authors some challenging questions lest their interest stray! The forum and the archives of previous discussions will be accessible from the publisher’s website as long as the book is in print.