Software Architecture Metrics
Case Studies to Improve the Quality of Your Architecture
Christian Ciceri, Dave Farley, Neal Ford, Andrew Harmel-Law, Michael Keeling, Carola Lilienthal, João Rosa, Alexander von Zitzewitz, Rene Weiss & Eoin Woods
SOFTWARE ARCHITECTURE

Software Architecture Metrics

Software architecture metrics play a key role in keeping software projects maintainable and ensuring high-quality architecture, as well as warning of dangerous accumulations of architectural and technical debt. In this practical book, leading hands-on software architects Christian Ciceri, Dave Farley, Neal Ford, Andrew Harmel-Law, Michael Keeling, Carola Lilienthal, João Rosa, Alexander von Zitzewitz, Rene Weiss, and Eoin Woods share case studies to introduce metrics every software architect should know.

This isn't a book about theory. It's more about practice and implementation, based on real-world experience and written for software architects and developers. This book shares key software architecture metrics to help you set the right KPIs and measure the results. You'll learn more about decision and measurement effectiveness.

Learn how to:
• Measure how well your software architecture is meeting your goals
• Choose the right metrics to track (and skip the ones you don't need)
• Improve observability, testability, and deployability
• Prioritize software architecture projects
• Build insightful and relevant dashboards

Christian Ciceri is a software architect and cofounder at Apiumhub. Dave Farley is a thought-leader in the field of continuous delivery, DevOps, and software development. Neal Ford is a director, software architect, and meme wrangler at Thoughtworks. Andrew Harmel-Law is a tech principal at Thoughtworks. Dr. Carola Lilienthal is managing director of Workplace Solutions GmbH. Michael Keeling is an experienced software architect, agile practitioner, and programmer. João Rosa is a principal consultant at Xebia. Alexander von Zitzewitz is a founder of hello2morrow. Rene Weiss is a CTO at Finabro. Eoin Woods is CTO at Endava.

US $59.99  CAN $74.99
ISBN: 978-1-098-11223-3
Twitter: @oreillymedia
linkedin.com/company/oreilly-media
youtube.com/oreillymedia
Christian Ciceri, Dave Farley, Neal Ford, Andrew Harmel-Law, Michael Keeling, Carola Lilienthal, João Rosa, Alexander von Zitzewitz, Rene Weiss, and Eoin Woods

Software Architecture Metrics
Case Studies to Improve the Quality of Your Architecture

Boston • Farnham • Sebastopol • Tokyo • Beijing
Software Architecture Metrics
by Christian Ciceri, Dave Farley, Neal Ford, Andrew Harmel-Law, Michael Keeling, Carola Lilienthal, João Rosa, Alexander von Zitzewitz, Rene Weiss, and Eoin Woods

Copyright © 2022 Apiumhub S.L. All rights reserved.
Printed in the United States of America.
Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.

O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://oreilly.com). For more information, contact our corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com.

Acquisitions Editor: Melissa Duffield
Development Editor: Sarah Grey
Production Editor: Katherine Tozer
Copyeditor: nSight, Inc.
Proofreader: Sonia Saruba
Indexer: Sue Klefstad
Interior Designer: David Futato
Cover Designer: Karen Montgomery
Illustrator: Kate Dullea

May 2022: First Edition

Revision History for the First Edition
2022-05-18: First Release

See http://oreilly.com/catalog/errata.csp?isbn=9781098112233 for release details.

The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. Software Architecture Metrics, the cover image, and related trade dress are trademarks of O'Reilly Media, Inc.

The views expressed in this work are those of the authors and do not represent the publisher's views. While the publisher and the authors have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the authors disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.

ISBN: 978-1-098-11223-3
[LSI]
Table of Contents

Preface  vii

1. Four Key Metrics Unleashed  1
    Definition and Instrumentation  2
    Refactoring Your Mental Model  3
    Pipelines as Your First Port of Call  4
    Locating Your Instrumentation Points  6
    Capture and Calculation  10
    Display and Understanding  12
    Target Audience  13
    Visualization  13
    Front Page  17
    Discussions and Understanding  18
    Ownership and Improvement  18
    Conclusion  19

2. The Fitness Function Testing Pyramid: An Analogy for Architectural Tests and Metrics  21
    Fitness Functions and Metrics  21
    Fitness Functions: Test Coverage  23
    Fitness Functions: Integration Tests with Network Latency  24
    Introduction to Fitness Function Categories  25
    Mandatory Fitness Function Categories  25
    Optional Fitness Function Categories  28
    Fitness Function Categories: Catalog Overview  30
    The Testing Pyramid  30
    The Fitness Function Testing Pyramid  32
    The Top Layer  33
    The Middle Layer  34
    The Bottom Layer  34
    Examples and Their Full Categorization  35
    Fully Categorizing Top-Layer Examples  37
    Developing Your Fitness Functions and Metrics  39
    Conclusion  41

3. Evolutionary Architecture: Guiding Architecture with Testability and Deployability  43
    The Importance of Learning and Discovery  44
    The Tools of Sustainable Change  44
    Testability: Creating High-Quality Systems  45
    Deployability: Scaling Development of Our Systems  47
    Conclusion  47

4. Improve Your Architecture with the Modularity Maturity Index  49
    Technical Debt  49
    Origination of Technical Debt  50
    Assessment with the MMI  52
    Modularity  53
    Hierarchy  54
    Pattern Consistency  56
    Calculating the MMI  57
    Architecture Review to Determine the MMI  61
    Conclusion  63

5. Private Builds and Metrics: Tools for Surviving DevOps Transitions  65
    Key Terms  66
    CI/CD  66
    DevOps  67
    The "Ownership Shift"  68
    Empowering the Local Environment Again  69
    The Private Build  70
    Case Study: The Unstable Trunk  72
    Bug A1  72
    Bug A2  73
    Bug A3  73
    Bug A4  73
    Case Study: The Blocked Consultant  74
    Metrics  75
    Time to Feedback  76
    Evitable Integration Issues in the Deployed Application per Iteration  76
    Time Spent Restoring Trunk Stability per Iteration  77
    The Cost of Private Builds  78
    Metrics in Practice  78
    High Time to Feedback, High Evitable Integration Issues, Low Time to Trunk Stability  78
    Low Time to Feedback, High Evitable Integration Issues, Low Time to Trunk Stability  79
    High Time to Feedback, Low Evitable Integration Issues, Low Time to Trunk Stability  79
    Low Evitable Integration Issues and High Time to Trunk Stability  79
    Conclusion  80

6. Scaling an Organization: The Central Role of Software Architecture  81
    YourFinFreedom Breaks the Monolith  83
    Implementing a Distributed Big Ball of Mud  85
    Seeking Direction  87
    From Best Effort to Intentional Effort  88
    Increasing Software Architecture Intentionality, Guided by Metrics  91
    Managing Expectations with Communication  99
    Learning and Evolving the Architecture  102
    And What About Anna?  104
    Conclusion  104

7. The Role of Measurement in Software Architecture  105
    Adding Measurement to Software Architecture  106
    Measurement Approaches  108
    Runtime Measurement of Applications and Infrastructure  108
    Software Analysis  109
    Design Analysis  109
    Estimates and Models  109
    Fitness Functions  110
    Measuring System Qualities  110
    Performance  111
    Scalability  113
    Availability  114
    Security  116
    Getting Started  118
    Hypothetical Case Study  119
    Pitfalls  121
    Conclusion  123

8. Progressing from Metrics to Engineering  125
    The Path to Fitness Functions  125
    From Metrics to Engineering  127
    Automation Operationalizes Metrics  130
    Case Study: Coupling  132
    Case Study: Zero-Day Security Check  136
    Case Study: Fidelity Fitness Functions  138
    Conclusion  141

9. Using Software Metrics to Ensure Maintainability  143
    The Case for Using Metrics  143
    Entropy Kills Software  144
    The Toxicity of Cyclic Dependencies  146
    How Metrics Can Help  147
    Why Are Metrics Not More Widely Used?  148
    Tools to Gather Metrics  149
    Useful Metrics  150
    Metrics to Measure Coupling and Structural Erosion  150
    Metrics to Measure Size and Complexity  160
    Change History Metrics  162
    Other Useful Metrics  163
    Architectural Fitness Functions  165
    How to Track Metrics over Time  167
    A Few Golden Rules for Better Software  168
    Conclusion  169

10. Measure the Unknown with the Goal-Question-Metric Approach  171
    The Goal-Question-Metric Approach  172
    Create a GQM Tree  172
    Prioritize Metrics and Devise a Data Collection Strategy  174
    Case Study: The Team That Learned to See the Future  177
    System Context  177
    Incident #1: Too Many Requests to the Foo Service  179
    Incident #2: Seeing the Future  181
    Reflection  182
    Run a GQM Workshop  182
    Workshop Summary  182
    Workshop Steps  184
    Facilitation Guidelines and Hints  185
    Conclusion  186

Index  189
Preface

Software architecture metrics are used to measure the maintainability and architectural quality of a software project, and to provide warnings early in the process about any dangerous accumulations of architectural or technical debt. In this book, 10 leading hands-on practitioners (Christian Ciceri, David Farley, Neal Ford, Andrew Harmel-Law, Michael Keeling, Carola Lilienthal, João Rosa, Alexander von Zitzewitz, Rene Weiss, and Eoin Woods) introduce key software architecture metrics that every software architect should know. The architects in this group have all published renowned software architecture articles and books, regularly participate in international events, and give practical workshops. We all strive to balance theory and practice.

This book, however, is not about theory; it's about practice and implementation, about what has already been tried and has worked, with valuable experiences and case studies. We focus not only on improving the quality of architecture but on associating objective metrics with business outcomes in ways that account for your own situation and the trade-offs involved.

We conducted a survey and found that there is strong demand for software architecture metrics resources, yet very few are available. We hope this contribution will make a difference and help you set the right KPIs and measure the results accurately and insightfully.

We are grateful to the Global Software Architecture Summit, which reunited us and gave us the idea of writing a software architecture metrics book together. All of the book's chapters and case studies are as different as the authors themselves: we made a point of using examples from different industries and challenges so that every reader can find a solution or an inspiration.
What Will You Learn?

By the end of this book you'll understand how to:

• Measure how well your software architecture is meeting goals
• Guide your architecture toward testability and deployability
• Prioritize software architecture work
• Create predictability from observability
• Identify key KPIs for your software project
• Build and automate a metrics dashboard
• Analyze and measure the success of your project or process
• Build goal-driven software architecture

Who This Book Is For

This book is written by and for software architects. If you're eager to explore successful case studies and learn more about decision and measurement effectiveness, whether you work in-house for a software development company or as an independent consultant, this book is for you.

The 10 authors, all experienced practitioners, share their advice and wisdom, presenting diverse viewpoints and ideas. As you work on different projects, you might find some chapters more relevant to your work than others. You might use this book on a regular basis, or you might use it once to set the KPIs and then come back to it later to teach and inspire new team members.

Having the right software architecture metrics and tools can make architecture checking much faster and less costly. It can allow you to run checks throughout the life of a software project, starting right at the beginning. Metrics also help you evaluate your software architecture at each sprint to make sure it's not drifting toward becoming impossible to maintain. They can also help you compare architectures to pick the one that best fits your project's requirements.

Conventions Used in This Book

The following typographical conventions are used in this book:

Italic
    Indicates new terms, URLs, email addresses, filenames, and file extensions.
Constant width
    Used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.

O'Reilly Online Learning

For more than 40 years, O'Reilly Media has provided technology and business training, knowledge, and insight to help companies succeed.

Our unique network of experts and innovators share their knowledge and expertise through books, articles, and our online learning platform. O'Reilly's online learning platform gives you on-demand access to live training courses, in-depth learning paths, interactive coding environments, and a vast collection of text and video from O'Reilly and 200+ other publishers. For more information, visit https://oreilly.com.

How to Contact Us

Please address comments and questions concerning this book to the publisher:

O'Reilly Media, Inc.
1005 Gravenstein Highway North
Sebastopol, CA 95472
800-998-9938 (in the United States or Canada)
707-829-0515 (international or local)
707-829-0104 (fax)

We have a web page for this book, where we list errata, examples, and any additional information. You can access this page at https://oreil.ly/software-architecture-metrics.

Email bookquestions@oreilly.com to comment or ask technical questions about this book.

For news and information about our books and courses, visit https://oreilly.com.

Find us on LinkedIn: https://linkedin.com/company/oreilly-media.
Follow us on Twitter: https://twitter.com/oreillymedia.
Watch us on YouTube: https://www.youtube.com/oreillymedia.
Acknowledgments

This book wouldn't be possible without the contribution of the authors, O'Reilly editors, and Apiumhub, who gathered all of us together. We would like to say an additional thank you to:

• Apiumhub CMO Ekaterina Novoseltseva, who managed the process of writing this book and publishing it with O'Reilly and also wrote the introduction
• O'Reilly Senior Acquisitions Editor Melissa Duffield, who took care of us and made our experience with O'Reilly smooth and pleasant
• O'Reilly Developmental Editor Sarah Grey, who structured our content and made it easily readable
• O'Reilly Production team: Katherine Tozer, Adam Lawrence, Steve Fenton, Gregory Hyman, and Kristen Brown, who copyedited and distributed the book

Christian Ciceri

I would like to say thank you to Ekaterina Novoseltseva and Apiumhub for giving me the chance to write this book, which was always in my dreams. Global Software Architecture Summit for meeting all these software architects, who push me forward and who generate interesting discussions. VYou app, for making me innovate and implement new software architecture metrics. And additional thanks goes to my cat, who is always there for me, supporting me in any situation.

Dave Farley

Thanks to the folks at Apiumhub and O'Reilly for herding the cats and organizing me, and everyone else, to make this book possible.

Neal Ford

Thanks to Ekaterina and the others at Apiumhub for doing the cat-herding required to make this a reality. Thanks to my employer Thoughtworks and all its employees, who always surprise me with their level of passion and engagement in the technology world. And last and always, thanks to my wife Candy for putting up with all this writing, which takes me away from her and our kittens-becoming-cats.

Andrew Harmel-Law

Thank you to my wife and children for putting up with me and to my coworkers at Thoughtworks for inspiring me and letting me take this approach to its logical conclusion.
The claims made in this chapter might have been half theoretical had I not had the chance to put this all into practice at an organization that really got and trusted me. Open GI, a specialized SaaS provider for the insurance industry in the UK and Ireland, was that client, and thank you to everyone whom I worked with there. My codeveloper/coconspirator Pete Hunter deserves a special mention. He grokked what we were doing immediately, championed it unrelentingly, improved it relentlessly (as we paired on this every step of the way), and taught me so much about how we could make it work. Thanks finally to Ekaterina and Apiumhub for asking me to be involved, chasing me up, and answering all my stupid questions.

Michael Keeling

My sincere thanks go out to Anastas Stoyanovsky, Colin Dean, George Fairbanks, Joe Runde, and Ricky Kotermanski, who all helped review early chapter drafts. Additionally, thank you to all my colleagues, both current and former, with whom I've had the privilege to work. Experience reports like the ones in this book can only be written by teams that take risks and try out new ideas. Never stop seeking out ways to become even more awesome than you already are! Marie, my queen, thank you for helping me find the time to work on writing projects like this one. To Owen, thank you. To Finn, saaanoot!

Carola Lilienthal

My thanks go to all the many great scientists and computer scientists I have had the privilege to work with in my professional life. Many are my colleagues in my company, WPS (Workplace Solutions), or those I meet at conferences and get to learn from in lectures and discussions. I also thank my family, who always encourage and support me when a book or an article needs to be written.

João Rosa

None of my projects would be possible without the support of my wife, Kary. You and our beautiful little one are the center of my life. Thanks! A special thanks to Xebia for supporting me in this journey. Sharing knowledge is in our DNA. I also would like to acknowledge our technical reviewers, namely Ruth Malan, Anna Shipman, Steve Pereira, and Nick Tune; Apiumhub, for challenging me to write a chapter; and Fai Fung, Thijs Wesselink, and Kenny Baas-Schwegler for reviewing an early version of the chapter draft. Last note, for all of you who are not mentioned here: my memory is terrible, and I can't recall all your names. Somehow you have influenced my career, and I'm grateful for it.
Alexander von Zitzewitz

I want to thank my wife Charmaine, my sons, and the great team at hello2morrow for always having my back and supporting my projects with wisdom, good advice, and a lot of patience. Without their continuous support, my work on this book and other achievements in life would not have been possible.

Rene Weiss

This is a very special happening for me, as this is my first contribution to a book. I had the chance to work with many people who inspired me along my career. I want to introduce two of them here, as they had a major impact on how I actually think about software architecture. These two are Stefan Toth and Stefan Zörner from embarc (Germany), who are great software architects, trainers, and coaches. While I worked with them I was introduced to the idea of evolutionary architectures, and this "seed" finally led to the ideas shared in this book chapter. If you have the chance to meet them at a conference or get your hands on one of their books (at the moment only in German), I would highly recommend that. Finally, I want to thank my girlfriend and partner Anna. She always supported my shifts and ideas in my professional career, and I wouldn't be where I am today without her. Thank you.

Eoin Woods

I'd like to thank my family for their continual support of all of my time-consuming professional projects. I also want to thank Chris Cooper-Bland and Nick Rozanski for their extensive and insightful review of early versions of my chapter, which allowed me to improve it significantly. Our technical reviewers and the excellent team at O'Reilly have made a huge contribution to the quality of the book, so thank you to all of you too. Finally, thank you to my colleagues at Endava who create such a collegiate place to work and yet continually challenge me to be "the best that I can be."
CHAPTER 1
Four Key Metrics Unleashed
Andrew Harmel-Law

You'd be forgiven for thinking that Dr. Nicole Forsgren, Jez Humble, and Gene Kim's groundbreaking book Accelerate (IT Revolution Press, 2018) is both the first and last word on how to transform your software delivery performance, all measured by the simple yet powerful four key metrics.

Having based my transformation work around many of their book's recommendations, I certainly have no issue with any of its content. But rather than removing the need for anything in even greater detail, I think the book should be discussed and analyzed further to enable the sharing of experiences and the gathering of a community of people practicing architecture who want to improve. I hope that this chapter will contribute to such a discussion.

I have seen, when used in the way described later in the chapter, that the four key metrics—deployment frequency, lead time for changes, change failure rate, and time to restore service—lead to a flowering of learning and allow teams to understand the need for a high-quality, loosely coupled, deliverable, testable, observable, and maintainable architecture.

Deployed effectively, the four key metrics can allow you as an architect to loosen your grip on the tiller. Instead of dictating and controlling, you can use the four key metrics to generate conversations with team members and stimulate desire to improve overall software architecture beyond yourself. You can gradually move toward a more testable, coherent and cohesive, modular, fault-tolerant and cloud native, runnable, and observable architecture.

In the sections that follow, I'll show how to get your four key metrics up and running, as well as (more importantly) how you and your software teams can best use the metrics to focus your continuous improvement efforts and track progress.
My focus is on the practical aspects of visualizing the mental model of the four key metrics, sourcing the required three raw data points, then calculating and displaying the four metrics. But don't worry: I'll also discuss the benefits of architecture that runs in production.

Definition and Instrumentation

Paradigms are the sources of systems. From them, from shared social agreements about the nature of reality, come system goals and information flows, feedbacks, stocks, flows and everything else about systems.
—Donella Meadows, Thinking in Systems: A Primer¹

1 Donella Meadows, Thinking in Systems: A Primer, ed. Diana Wright (Chelsea Green Publishing, 2008), p. 162.

The mental model that underpins Accelerate gives rise to the four key metrics. I begin here because this mental model is essential to keep in mind as you read this chapter. In its simplest form, the model is a pipeline (or "flow") of activities that starts whenever a developer pushes their code changes to version control, and ends when these changes are absorbed into the running system that the teams are working on, delivering a running service to its users. You can see this mental model in Figure 1-1.

Figure 1-1. The fundamental mental model behind the four key metrics

For clarity, let's visualize what the four key metrics measure within this model:

Deployment frequency
    The number of individual changes that make their way out of the end of the pipe over time. These changes might consist of "deployment units": code, config, or a combination of both, including, for example, a new feature or a bug fix.

Lead time for changes
    The time a developer's completed code/config changes take to make their way through the pipeline and out the other end.

Taken together, this first pair measures development throughput.
This should not be confused with lean cycle time or lead time, which includes time to write the code, sometimes the clock even starting when the product manager first comes up with the idea for their new feature.

Change failure rate
    The proportion of changes coming out the pipe that cause a failure in our running service. (The specifics of what defines a "failure" will be covered shortly. For now, just think of failure as something that stops users of your service from getting their tasks done.)

Time to restore service
    How long it takes, after the service experiences a failure, to become aware of it and deliver the fix that restores the service to users.²

2 This need not be a code fix. We're thinking about service restoration here, so something like an automatic failover is perfectly fine to stop the clock ticking.

Taken together, this second pair gives an indication of service stability.

The power of these four key metrics is in their combination. If you improve an element of development throughput but degrade service stability in the process, then you're improving in an unbalanced way and will fail to realize long-term sustainable benefits. The fundamental point is that you keep an eye on all of the four key metrics. Transformations that realize predictable, long-term value are ones that deliver positive impact across the board.

Now that we are clear on where our metrics come from, we can complicate matters by mapping the generic mental model onto your actual delivery process. I'll spend the next section showing how to perform this "mental refactoring."

Refactoring Your Mental Model

Defining each metric for your circumstances is essential. As you have most likely guessed, the first two metrics are underpinned by what happens in your CI pipelines, and the second pair require tracking service outages and restoration.

Consider scope carefully as you perform this mental refactoring. Are you looking at all changes for all pieces of software across your organization? Or are you considering those in your program of work alone? Are you including infrastructure changes or just observing those for software and services? All these possibilities are fine, but remember: the scope you consider must be the same for each of the four metrics. If you include infrastructure changes in your lead time and deployment frequency, include outages induced by infrastructure changes, too.
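However you draw that scope, it is worth noticing how little raw data the four key metrics actually require. The sketch below is a minimal illustration in Python; the record shapes, field names, and the choice of median as the aggregate are my assumptions for the example, not something prescribed by Accelerate. It reduces all four metrics to simple arithmetic over commit, deployment, and incident timestamps:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median


@dataclass
class Deployment:
    committed_at: datetime    # developer's change-set pushed to version control
    deployed_at: datetime     # change absorbed into the running production service
    caused_failure: bool      # did this change stop users getting their tasks done?


@dataclass
class Incident:
    detected_at: datetime     # service degradation noticed
    restored_at: datetime     # service restored to users (not necessarily a code fix)


def four_key_metrics(deployments: list[Deployment],
                     incidents: list[Incident],
                     period: timedelta) -> dict:
    """Summarize one reporting period (e.g., a week or a sprint)."""
    days = max(period.days, 1)
    lead_times = [d.deployed_at - d.committed_at for d in deployments]
    restore_times = [i.restored_at - i.detected_at for i in incidents]
    return {
        # Throughput pair
        "deployment_frequency_per_day": len(deployments) / days,
        "lead_time_for_changes": median(lead_times) if lead_times else None,
        # Stability pair
        "change_failure_rate": (sum(d.caused_failure for d in deployments)
                                / len(deployments)) if deployments else None,
        "time_to_restore_service": median(restore_times) if restore_times else None,
    }
```

The calculations themselves are trivial; the sections that follow are largely about where those raw timestamps come from.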
Pipelines as Your First Port of Call

Which pipelines should you be considering? The ones you need are those that listen for code and config changes in a source repository within your target scope, perform various actions as a consequence (such as compilation, automated testing, and packaging), and deploy the results into your production environment. You don't want to include CI-implemented tasks for things like database backups.

If you only have one code repository served by one end-to-end pipeline (e.g., a monolith stored in a monorepo and deployed directly, and in a single set of activities, to production), then your job here is easy. The model for this is shown in Figure 1-2.

Figure 1-2. The simplest source-control/pipeline/deployment model you'll find

Unfortunately, while this is exactly the same as our fundamental mental model, I've rarely seen this in reality. We'll most likely have to perform a much broader refactoring of the mental model to reach one that represents your circumstances.

The next easiest to measure and our first significant mental refactor is a collection of these end-to-end pipelines, one per artifact or repository (for example, one per microservice), each of which does all its own work and, again, ends in production (Figure 1-3). If you're using Azure DevOps, for example, it's simple to create these.³

3 In fact, it's the model Microsoft wants you to adopt.
Figure 1-3. The "multiple end-to-end pipelines model" is ideal for microservices

These first two pipeline shapes are most likely similar to what you have, but I'm going to guess that your version of this picture will be slightly more complicated and require one more refactor to be split into a series of subpipelines (Figure 1-4). Let's consider an example that shows three of these subpipelines, which fit end-to-end to deliver a change to production.

Perhaps the first subpipeline listens for pushes to the repo and undertakes compilation, packaging, and unit and component testing, then publishes to a binary artifact repository. Maybe this is followed by a second, independent subpipeline that deploys this newly published artifact to one or more environments for testing. Possibly a third subpipeline, triggered by something like a CAB process,⁴ finally deploys the change to production.

4 CAB stands for "change-advisory board." The most famous example is the group that meets regularly to approve releases of code and config in the classic book The Phoenix Project (IT Revolution Press, 2018), by Gene Kim, Kevin Behr, and George Spafford.

Figure 1-4. The "pipeline made of multiple subpipelines" model, which I encounter frequently
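With a chain of subpipelines like this, no single subpipeline run sees the whole journey from push to production, so the throughput metrics have to be stitched together from events recorded by different subpipelines. Here is a minimal sketch of one way to do that, assuming you can export timestamped events keyed by commit SHA; the event shape and field names are hypothetical, not tied to any particular CI product:

```python
from datetime import datetime


def lead_times_by_commit(events: list[dict]) -> dict[str, float]:
    """Correlate events from independent subpipelines by commit SHA and
    return lead time in hours: first push -> final production deployment."""
    pushed_at: dict[str, datetime] = {}
    deployed_at: dict[str, datetime] = {}

    for event in events:
        sha = event["commit_sha"]
        if event["type"] == "push":                  # start of the first subpipeline
            pushed_at.setdefault(sha, event["timestamp"])
        elif event["type"] == "production_deploy":   # end of the final subpipeline
            deployed_at[sha] = event["timestamp"]

    return {
        sha: (deployed_at[sha] - pushed_at[sha]).total_seconds() / 3600
        for sha in deployed_at
        if sha in pushed_at
    }
```

Deployment frequency falls out of the same data: it is simply the count of production deployment events over the period.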
Hopefully you've identified your circumstances. But if not, there is a fourth major variety of pipeline, which our final mental-refactoring step will get us to: the multistage fan-in, shown in Figure 1-5. Here we typically find individual subpipelines for the first stage, one per repository, which then "fan in" to a shared subpipeline or set of subpipelines that take the change the rest of the way to production.

Figure 1-5. The multistage "fan-in pipeline" model

Locating Your Instrumentation Points

As well as having four metrics, we have four instrumentation points. Let's now move to locating them in our mental model, whatever form yours takes. We've focused on pipelines so far because they typically provide two of those points: a commit timestamp and a deployment timestamp. The third and fourth instrumentation points come from the timestamps created when a service degradation is detected and when it is marked as "resolved." We can now discuss each in detail.

Commit timestamp

Subtleties inevitably arise here when you consider teams' work practices. Are they branching by feature? Are they doing pull requests? Do they have a mix of different practices? Ideally (as the authors of Accelerate suggest), your clock starts ticking whenever any developer change-set is considered complete and is committed,