Scaling Python with Ray: Adventures in Cloud and Serverless Patterns (Holden Karau, Boris Lublinsky)

Author: Holden Karau, Boris Lublinsky

Science

Serverless computing enables developers to concentrate solely on their applications rather than worry about where they've been deployed. With the Ray general-purpose serverless implementation in Python, programmers and data scientists can hide servers, implement stateful applications, support direct communication between tasks, and access hardware accelerators. In this book, experienced software architecture practitioners Holden Karau and Boris Lublinsky show you how to scale existing Python applications and pipelines, allowing you to stay in the Python ecosystem while reducing single points of failure and manual scheduling.

Scaling Python with Ray is ideal for software architects and developers eager to explore successful case studies and learn more about decision and measurement effectiveness. If your data processing or server application has grown beyond what a single computer can handle, this book is for you. You'll explore distributed processing (the pure Python implementation of serverless) and learn how to:

• Implement stateful applications with Ray actors
• Build workflow management in Ray
• Use Ray as a unified system for batch and stream processing
• Apply advanced data processing with Ray
• Build microservices with Ray
• Implement reliable Ray applications

📄 File Format: PDF
💾 File Size: 3.5 MB

📄 Text Preview (First 20 pages)


📄 Page 1
Karau & Lublinsky
Scaling Python with Ray
Adventures in Cloud and Serverless Patterns
Holden Karau & Boris Lublinsky
Foreword by Robert Nishihara
📄 Page 2
DATA

"Scaling Python with Ray is a concise and practical guide to adopting Ray and using it effectively. Informed by years of industry experience in data systems and distributed computing, Holden and Boris deliver the indispensable guide that users of Ray need."
—Dean Wampler, PhD, Engineering Director, Accelerated Discovery Platform, IBM Research

Scaling Python with Ray
US $59.99  CAN $74.99
ISBN: 978-1-098-11880-8
Twitter: @oreillymedia
linkedin.com/company/oreilly-media
youtube.com/oreillymedia

Serverless computing enables developers to concentrate solely on their applications rather than worry about where they've been deployed. With the Ray general-purpose serverless implementation in Python, programmers and data scientists can hide servers, implement stateful applications, support direct communication between tasks, and access hardware accelerators. In this book, experienced software architecture practitioners Holden Karau and Boris Lublinsky show you how to scale existing Python applications and pipelines, allowing you to stay in the Python ecosystem while reducing single points of failure and manual scheduling.

Scaling Python with Ray is ideal for software architects and developers eager to explore successful case studies and learn more about decision and measurement effectiveness. If your data processing or server application has grown beyond what a single computer can handle, this book is for you. You'll explore distributed processing (the pure Python implementation of serverless) and learn how to:

• Implement stateful applications with Ray actors
• Build workflow management in Ray
• Use Ray as a unified system for batch and stream processing
• Apply advanced data processing with Ray
• Build microservices with Ray
• Implement reliable Ray applications

Holden Karau is a queer transgender Canadian, Apache Spark committer, Apache Software Foundation member, and an active open source contributor.

Boris Lublinsky is a chief architect for IBM's Discovery Accelerator Platform, where he specializes in Kubernetes, serverless, workflows, and complex systems design.
📄 Page 3
Holden Karau and Boris Lublinsky
Foreword by Robert Nishihara

Scaling Python with Ray
Adventures in Cloud and Serverless Patterns

Beijing • Boston • Farnham • Sebastopol • Tokyo
📄 Page 4
Scaling Python with Ray
by Holden Karau and Boris Lublinsky

Copyright © 2023 Holden Karau and Boris Lublinsky. All rights reserved. Printed in the United States of America. Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.

O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://oreilly.com). For more information, contact our corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com.

Acquisitions Editors: Aaron Black and Jessica Haberman
Development Editor: Virginia Wilson
Production Editor: Gregory Hyman
Copyeditor: Sharon Wilkey
Proofreader: Justin Billing
Indexer: nSight, Inc.
Interior Designer: David Futato
Cover Designer: Karen Montgomery
Illustrator: Kate Dullea

December 2022: First Edition

Revision History for the First Edition
2022-11-29: First Release

See http://oreilly.com/catalog/errata.csp?isbn=9781098118808 for release details.

The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. Scaling Python with Ray, the cover image, and related trade dress are trademarks of O'Reilly Media, Inc.

The views expressed in this work are those of the authors, and do not represent the publisher's views. While the publisher and the authors have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the authors disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.

ISBN: 978-1-098-11880-8 [LSI]
📄 Page 5
Table of Contents

Foreword  ix
Preface  xi

1. What Is Ray, and Where Does It Fit?  1
   Why Do You Need Ray?  2
   Where Can You Run Ray?  3
   Running Your Code with Ray  5
   Where Does It Fit in the Ecosystem?  5
   Big Data / Scalable DataFrames  7
   Machine Learning  8
   Workflow Scheduling  8
   Streaming  9
   Interactive  9
   What Ray Is Not  9
   Conclusion  10

2. Getting Started with Ray (Locally)  11
   Installation  11
   Installing for x86 and M1 ARM  12
   Installing (from Source) for ARM  12
   Hello Worlds  13
   Ray Remote (Task/Futures) Hello World  13
   Data Hello World  16
   Actor Hello World  17
   Conclusion  19
📄 Page 6
3. Remote Functions  21
   Essentials of Ray Remote Functions  22
   Composition of Remote Ray Functions  27
   Ray Remote Best Practices  30
   Bringing It Together with an Example  31
   Conclusion  33

4. Remote Actors  35
   Understanding the Actor Model  35
   Creating a Basic Ray Remote Actor  36
   Implementing the Actor's Persistence  41
   Scaling Ray Remote Actors  45
   Ray Remote Actors Best Practices  50
   Conclusion  51

5. Ray Design Details  53
   Fault Tolerance  53
   Ray Objects  56
   Serialization/Pickling  59
   cloudpickle  59
   Apache Arrow  61
   Resources / Vertical Scaling  62
   Autoscaler  64
   Placement Groups: Organizing Your Tasks and Actors  65
   Namespaces  70
   Managing Dependencies with Runtime Environments  70
   Deploying Ray Applications with the Ray Job API  71
   Conclusion  74

6. Implementing Streaming Applications  75
   Apache Kafka  76
   Basic Kafka Concepts  77
   Kafka APIs  80
   Using Kafka with Ray  81
   Scaling Our Implementation  87
   Building Stream-Processing Applications with Ray  88
   Key-Based Approach  89
   Key-Independent Approach  95
   Going Beyond Kafka  95
   Conclusion  96
📄 Page 7
7. Implementing Microservices  97
   Understanding Microservice Architecture in Ray  97
   Deployment  98
   Additional Deployment Capabilities  101
   Deployment Composition  104
   Using Ray Serve for Model Serving  105
   Simple Model Service Example  106
   Considerations for Model-Serving Implementations  107
   Speculative Model Serving Using the Ray Microservice Framework  109
   Conclusion  111

8. Ray Workflows  113
   What Is Ray Workflows?  113
   How Is It Different from Other Solutions?  114
   Ray Workflows Features  114
   What Are the Main Features?  114
   Workflow Primitives  115
   Working with Basic Workflow Concepts  116
   Workflows, Steps, and Objects  116
   Dynamic Workflows  117
   Virtual Actors  118
   Workflows in Real Life  118
   Building Workflows  118
   Managing Workflows  119
   Building a Dynamic Workflow  121
   Building Workflows with Conditional Steps  121
   Handling Exceptions  122
   Handling Durability Guarantees  123
   Extending Dynamic Workflows with Virtual Actors  124
   Integrating Workflows with Other Ray Primitives  129
   Triggering Workflows (Connecting to Events)  130
   Working with Workflow Metadata  131
   Conclusion  133

9. Advanced Data with Ray  135
   Creating and Saving Ray Datasets  136
   Using Ray Datasets with Different Tools  138
   Using Tools on Ray Datasets  139
   pandas-like DataFrames with Dask  140
   Indexing  141
📄 Page 8
   Shuffles  142
   Embarrassingly Parallel Operations  148
   Working with Multiple DataFrames  149
   What Does Not Work  151
   What's Slower  152
   Handling Recursive Algorithms  152
   What Other Functions Are Different  153
   pandas-like DataFrames with Modin  153
   Big Data with Spark  154
   Working with Local Tools  154
   Using Built-in Ray Dataset Operations  155
   Implementing Ray Datasets  158
   Conclusion  159

10. How Ray Powers Machine Learning  161
   Using scikit-learn with Ray  161
   Using Boosting Algorithms with Ray  165
   Using XGBoost  166
   Using LightGBM  167
   Using PyTorch with Ray  169
   Reinforcement Learning with Ray  174
   Hyperparameter Tuning with Ray  180
   Conclusion  186

11. Using GPUs and Accelerators with Ray  187
   What Are GPUs Good At?  187
   The Building Blocks  188
   Higher-Level Libraries  188
   Acquiring and Releasing GPU and Accelerator Resources  189
   Ray's ML Libraries  190
   Autoscaler with GPUs and Accelerators  190
   CPU Fallback as a Design Pattern  191
   Other (Non-GPU) Accelerators  192
   Conclusion  192

12. Ray in the Enterprise  193
   Ray Dependency Security Issues  193
   Interacting with the Existing Tools  193
   Using Ray with CI/CD Tools  194
   Authentication with Ray  194
   Multitenancy on Ray  195
📄 Page 9
   Credentials for Data Sources  196
   Permanent Versus Ephemeral Clusters  196
   Ephemeral Clusters  197
   Permanent Clusters  197
   Monitoring  198
   Instrumenting Your Code with Ray Metrics  201
   Wrapping Custom Programs with Ray  203
   Conclusion  204

A. Space Beaver Case Study: Actors, Kubernetes, and More  205

B. Installing and Deploying Ray  221

C. Debugging with Ray  233

Index  241
📄 Page 10
(This page has no text content)
📄 Page 11
Foreword

In this book, Holden Karau and Boris Lublinsky touch on the biggest trend in computing today: the growing need for scalable computing. This trend is being driven, in large part, by the proliferation of machine learning (ML) throughout many industries and the growing amount of computational resources needed to do ML in practice.

The last decade has seen significant shifts in the nature of computing. In 2012, when I first began working in ML, much of it was managed on a single laptop or server, and many practitioners were using Matlab. That year was something of an inflection point as deep learning made a splash by winning the ImageNet competition by an astounding margin. That led to a sustained trend over many years in which more and more computation on more and more data has led to better results. This trend has yet to show signs of slowing down and, if anything, has accelerated in recent years with the advent of large language models.

This shift—from small models on small data to large models on large data—has changed the practice of ML. Software engineering now plays a central role in ML, and teams and organizations that successfully leverage ML often build large in-house infrastructure teams to support the distributed systems necessary for scaling ML applications across hundreds or thousands of machines.

So at the same time that ML is growing in its capabilities and becoming more relevant for a variety of businesses, it is also becoming increasingly difficult to do because of the significant infrastructure investment required to do it. To get to a state where every business can leverage and get value out of ML, we will have to make it far easier to apply in practice. This will mean eliminating the need for developers to become experts in distributed systems and infrastructure.

This goal, making scalable computing and scalable ML easy to do, is the purpose of Ray and the reason that we created Ray in the first place. This is a natural continuation of a progression in computing. Going back a handful of decades, there was a time when developers had to program in Assembly Language and other low-level
📄 Page 12
machine languages in order to build applications, and so the best developers were the people who could perform low-level memory optimizations and other manipulations. That made software development difficult to do and limited the number of people who could build applications. Today, very few developers think about Assembly. It is no longer on the critical path for application development, and as a result, far more people can develop applications and build great products today.

The same thing will happen with infrastructure. Today, building and managing infrastructure for scaling Python applications and scaling ML applications is on the critical path for doing ML and for building scalable applications and products. However, infrastructure will go the way of Assembly Language. When that happens, it will open up the door and far more people will build these kinds of applications.

Scaling Python with Ray can serve as an entry point for anyone looking to do ML in practice or looking to build the next generation of scalable products and applications. It touches on a wide variety of topics, ranging from scaling a variety of important ML patterns, from deep learning to hyperparameter tuning to reinforcement learning. It touches on the best practices for scaling data ingest and preprocessing. It covers the fundamentals of building scalable applications. Importantly, it touches on how Ray fits into the broader ML and computing ecosystem.

I hope you enjoy reading this book! It will equip you to understand the biggest trend in computing and can equip you with the tools to navigate and leverage that trend as you look to apply ML to your business or build the next great product and application.

— Robert Nishihara
Cocreator of Ray; cofounder and CEO of Anyscale
San Francisco, November 2022
📄 Page 13
Preface

We wrote this book for developers and data scientists looking to build and scale applications in Python without becoming systems administrators. We expect this book to be most beneficial for individuals and teams dealing with the growing complexity and scale of problems moving from single-threaded solutions to multithreaded, all the way to distributed computing.

While you can use Ray from Java, this book is in Python, and we assume a general familiarity with the Python ecosystem. If you are not familiar with Python, excellent O'Reilly titles include Learning Python by Mark Lutz and Python for Data Analysis by Wes McKinney.

Serverless is a bit of a buzzword, and despite its name, the serverless model does involve rather a lot of servers, but the idea is you don't have to manage them explicitly. For many developers and data scientists, the promise of having things magically scale without worrying about the servers' details is quite appealing. On the other hand, if you enjoy getting into the nitty-gritty of your servers, deployment mechanisms, and load balancers, this is probably not the book for you—but hopefully, you will recommend this to your colleagues.

What You Will Learn

In reading this book, you will learn how to use your existing Python skills to enable programs to scale beyond a single machine. You will learn about techniques for distributed computing, from remote procedure calls to actors, and all the way to distributed datasets and machine learning. We wrap up this book with a "real-ish" example in Appendix A that uses many of these techniques to build a scalable backend, while integrating with a Python-based web-application and deploying on Kubernetes.
📄 Page 14
A Note on Responsibility

As the saying goes, with great power comes great responsibility. Ray, and tools like it, enable you to build more complex systems handling more data and users. It's important not to get too excited and carried away solving problems because they are fun, and stop to ask yourself about the impact of your decisions. You don't have to search very hard to find stories of well-meaning engineers and data scientists accidentally building models or tools that caused devastating impacts, such as breaking the new United States Department of Veteran Affairs payment system, or hiring algorithms that discriminate on the basis of gender. We ask that you keep this in mind when using your newfound powers, for one never wants to end up in a textbook for the wrong reasons.

Conventions Used in This Book

The following typographical conventions are used in this book:

Italic
    Indicates new terms, URLs, email addresses, filenames, and file extensions.

Constant width
    Used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.

Constant width italic
    Shows text that should be replaced with user-supplied values or by values determined by context.

This element signifies a tip or suggestion.

This element signifies a general note.

This element indicates a warning or caution.
📄 Page 15
License

Once published in print and excluding O'Reilly's distinctive design elements (i.e., cover art, design format, "look and feel") or O'Reilly's trademarks, service marks, and trade names, this book is available under a Creative Commons Attribution-Noncommercial-NoDerivatives 4.0 International Public License. We thank O'Reilly for allowing us to make this book available under a Creative Commons license. We hope that you will choose to support this book (and the authors) by purchasing several copies with your corporate expense account (it makes an excellent gift for whichever holiday season is coming up next).

Using Code Examples

The Scaling Python Machine Learning GitHub repository contains most of the examples for this book. Most examples in this book are in the ray_examples directory. Examples related to Dask on Ray are found in the dask directory, and those using Spark on Ray are in the spark directory.

If you have a technical question or a problem using the code examples, please send email to bookquestions@oreilly.com.

This book is here to help you get your job done. In general, if example code is offered with this book, you may use it in your programs and documentation. You do not need to contact us for permission unless you're reproducing a significant portion of the code. For example, writing a program that uses several chunks of code from this book does not require permission. Selling or distributing examples from O'Reilly books does require permission. Answering a question by citing this book and quoting example code does not require permission. Incorporating a significant amount of example code from this book into your product's documentation does require permission.

We appreciate, but generally do not require, attribution. An attribution usually includes the title, author, publisher, and ISBN. For example: "Scaling Python with Ray by Holden Karau and Boris Lublinsky (O'Reilly). Copyright 2023 Holden Karau and Boris Lublinsky, 978-1-098-11880-8."

If you feel your use of code examples falls outside fair use or the permission given above, feel free to contact us at permissions@oreilly.com.

O'Reilly Online Learning

For more than 40 years, O'Reilly Media has provided technology and business training, knowledge, and insight to help companies succeed.
📄 Page 16
Our unique network of experts and innovators share their knowledge and expertise through books, articles, and our online learning platform. O'Reilly's online learning platform gives you on-demand access to live training courses, in-depth learning paths, interactive coding environments, and a vast collection of text and video from O'Reilly and 200+ other publishers. For more information, visit https://oreilly.com.

How to Contact Us

Please address comments and questions concerning this book to the publisher:

O'Reilly Media, Inc.
1005 Gravenstein Highway North
Sebastopol, CA 95472
800-998-9938 (in the United States or Canada)
707-829-0515 (international or local)
707-829-0104 (fax)

We have a web page for this book, where we list errata, examples, and any additional information. You can access this page at https://oreil.ly/scaling-python-ray.

Email bookquestions@oreilly.com to comment or ask technical questions about this book.

For news and information about our books and courses, visit https://oreilly.com.

Find us on LinkedIn: https://linkedin.com/company/oreilly-media.
Follow us on Twitter: https://twitter.com/oreillymedia.
Watch us on YouTube: https://youtube.com/oreillymedia.

Acknowledgments

We would like to acknowledge the contribution of Carlos Andrade Costa, who wrote Chapter 8 with us. This book would not exist if not for the communities it is built on. Thank you to the Ray/Berkeley community and the PyData community. Thank you to all the early readers and reviewers for your contributions and guidance. These reviewers include Dean Wampler, Jonathan Dinu, Adam Breindel, Bill Chambers, Trevor Grant, Ruben Berenguel, Michael Behrendt, and many more. A special thanks to Ann Spencer for reviewing the early proposals of what eventually became this and Scaling Python with Dask (O'Reilly), which Holden coauthored with Mika Kimmins.

Huge thanks to the O'Reilly editorial and production teams, especially Virginia Wilson and Gregory Hyman, for helping us get our writing into shape and tirelessly working with us to minimize errors, typos, etc. Any remaining mistakes are the authors' fault, sometimes against the advice of our reviewers and editors.
📄 Page 17
From Holden

I would also like to thank my wife and partners for putting up with my long in-the-bathtub writing sessions. A special thank you to Timbit for guarding the house and generally giving me a reason to get out of bed (albeit often a bit too early for my taste).

From Boris

I would also like to thank my wife, Marina, for putting up with long writing sessions and sometimes neglecting her for hours, and my colleagues at IBM for many fruitful discussions that helped me better understand the power of Ray.
📄 Page 18
(This page has no text content)
📄 Page 19
CHAPTER 1
What Is Ray, and Where Does It Fit?

Ray is primarily a Python tool for fast and simple distributed computing. Ray was created by the RISELab at the University of California, Berkeley. An earlier iteration of this lab created the initial software that eventually became Apache Spark. Researchers from the RISELab started the company Anyscale to continue developing and to offer products and services around Ray.

You can also use Ray from Java. Like many Python applications, under the hood Ray uses a lot of C++ and some Fortran. Ray streaming also has some Java components.

The goal of Ray is to solve a wider variety of problems than its predecessors, supporting various scalable programming models that range from actors to machine learning (ML) to data parallelism. Its remote function and actor models make it a truly general-purpose development environment instead of big data only.

Ray automatically scales compute resources as needed, allowing you to focus on your code instead of managing servers. In addition to traditional horizontal scaling (e.g., adding more machines), Ray can schedule tasks to take advantage of different machine sizes and accelerators like graphics processing units (GPUs).

Since the introduction of Amazon Web Services (AWS) Lambda, interest in serverless computing has exploded. In this cloud computing model, the cloud provider allocates machine resources on demand, taking care of the servers on behalf of its customers. Ray provides a great foundation for general-purpose serverless platforms by providing the following features:
📄 Page 20
• It hides servers. Ray autoscaling transparently manages servers based on the application requirements.
• By supporting actors, Ray implements not only a stateless programming model (typical for the majority of serverless implementations) but also a stateful one.
• It allows you to specify resources, including hardware accelerators required for the execution of your serverless functions.
• It supports direct communications between your tasks, thus providing support for not only simple functions but also complex distributed applications.

Ray provides a wealth of libraries that simplify the creation of applications that can fully take advantage of Ray's serverless capabilities. Normally, you would need different tools for everything, from data processing to workflow management. By using a single tool for a larger portion of your application, you simplify not only development but also your operation management.

In this chapter, we'll look at where Ray fits in the ecosystem and help you decide whether it's a good fit for your project.

Why Do You Need Ray?

We often need something like Ray when our problems get too big to handle in a single process. Depending on how large our problems get, this can mean scaling from multicore all the way through multicomputer, all of which Ray supports. If you find yourself wondering how you can handle next month's growth in users, data, or complexity, our hope is you will take a look at Ray. Ray exists because scaling software is hard, and it tends to be the kind of problem that gets harder rather than simpler with time. Ray can scale not only to multiple computers but also without you having to directly manage servers.

Computer scientist Leslie Lamport has said, "A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable." While this kind of failure is still possible, Ray is able to automatically recover from many types of failures.

Ray runs cleanly on your laptop as well as at scale with the same APIs. This provides a simple starting option for using Ray that does not require you to go to the cloud to start experimenting. Once you feel comfortable with the APIs and application structure, you can simply move your code to the cloud for better scalability without needing to modify your code. This fills the needs that exist between a distributed system and a single-threaded application. Ray is able to manage multiple threads and GPUs with the same abstractions it uses for distributed computing.
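To make the remote-function and actor models described above concrete, the following is a minimal sketch of the core Ray primitives (ray.init, @ray.remote, .remote(), ray.get) that the book introduces in Chapters 2 through 4. It is not an excerpt from the book: the names double and Counter and the resource numbers are illustrative assumptions, and it assumes a local installation via pip install ray.

# A minimal sketch of the ideas above, assuming a local `pip install ray`.
# The names (double, Counter) are illustrative, not examples from the book.
import ray

ray.init()  # starts a local Ray runtime; the same code can later target a cluster

# A remote function (task): stateless, scheduled wherever resources are free.
# Resource requests are declared per task, e.g. @ray.remote(num_gpus=1) for a GPU.
@ray.remote(num_cpus=1)
def double(x):
    return 2 * x

# Futures (ObjectRefs) come back immediately; ray.get blocks for the results.
futures = [double.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 2, 4, 6]

# Passing an ObjectRef straight into another task lets tasks communicate
# directly, without routing the data back through the driver.
print(ray.get(double.remote(futures[3])))  # 12

# A remote actor: the stateful counterpart, holding state between calls.
@ray.remote
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

counter = Counter.remote()
print(ray.get([counter.increment.remote() for _ in range(3)]))  # [1, 2, 3]

The same script runs unchanged on a laptop or, by pointing ray.init at a cluster address, across many machines, which is the laptop-to-cloud portability the chapter emphasizes.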