Author: Wilson, Glenn

Publisher: Rethink Press
Publish Year: 2020
Language: English
Pages: 171
File Format: PDF
File Size: 3.1 MB
Text Preview (First 20 pages)

First published in Great Britain in 2020 by Rethink Press (www.rethinkpress.com)

© Copyright Glenn Wilson

All rights reserved. No part of this publication may be reproduced, stored in or introduced into a retrieval system, or transmitted, in any form, or by any means (electronic, mechanical, photocopying, recording or otherwise) without the prior written permission of the publisher.

The right of Glenn Wilson to be identified as the author of this work has been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

This book is sold subject to the condition that it shall not, by way of trade or otherwise, be lent, resold, hired out, or otherwise circulated without the publisher’s prior consent in any form of binding or cover other than that in which it is published and without a similar condition including this condition being imposed on the subsequent purchaser.

Cover image © Adobe Stock | Oleksandr
Contents

Foreword
Introduction
1   DevOps Explained
    The three ways
    The five ideals
    Conclusion
2   Security Explained
    Types of attacks
    Adversaries and their weapons
    Conclusion
3   DevSecOps
    Security implied in DevOps
    Points of contention between DevOps and security teams
    A layered approach to effective DevSecOps
    Three layers overview
    Conclusion
4   Layer 1: Security Education
    Importance of security education
    Security champions
    Gamified learning
    Instructor-led training
    Self-paced learning
    Pair programming and peer reviews
    Informal security knowledge sharing
    Experimentation
    Certification
    Avoiding entropy
    Conclusion
5   Layer 2: Secure By Design
    The importance of good design principles
    Threat modelling
    Clean code
    Naming conventions and formatting
    Common weakness lists
    Core application security design principles
    Microservices
    Container technologies
    Securing the pipeline
    Conclusion
6   Layer 3: Security Automation
    The importance of security automation
    Application security testing
    Mobile security testing
    Runtime application self-protection
    Software composition analysis
    Unit testing
    Infrastructure as code testing
    Container image scanning
    Dynamic threat analysis
    Network scanning
    Some testing cannot be automated
    Monitoring and alerting
    Vulnerability management
    Conclusion
7   Laying The Foundation
    Increase DevSecOps maturity
    Start reducing technical debt
    Introduce an education programme
    Implement security design principles
    Implement security test automation
    Measure and adjust
    DevSecOps starts with people
    Conclusion
8   Summary
References
Further Reading
Acknowledgements
The Author
This book is dedicated to Caz, Abs and Emily
Foreword

I have observed the dramatic changes in the software development field through a security lens during the 2010s. The pace at which the business demands rapid releases of new features to capture and accelerate business growth has become the norm. This explosion is obvious when you look at the extraordinary rise of the likes of Etsy, Amazon, Facebook, Netflix, Taobao and WeChat. Each platform caters for the growth and spikes in traffic, and the constant releases of new capabilities and updates to its web and mobile interfaces.

This fast pace of multiple releases (per day in most cases) has led the industry to accept that a security breach is going to happen to any business, and so organisations must be prepared to handle such scenarios. As a counter to this inevitable event, organisations are now driving their delivery teams even harder to not just deliver on time but to provide a quality that exceeds the standard of their competitors – and this includes security. Reputation damage, and staying out of the press after a security breach, are now top agenda items for executives. Security has finally become a first-class citizen. Being in the right place at the right time (through my commercial engagements) has allowed me to experience the evolution first-hand.
Watching how software development practices rapidly adapt to business needs through the creation of new vocabulary and technology, turning software engineering into a sexy field with more and more people aspiring to join, has been an exciting time in my career. During the same period the security industry has achieved different successes; for example, the role of a security representative – the Chief Information Security Officer (CISO) – on the board of companies, the acceptance of different shades of penetration testing (red, blue, purple and bug bounties) as an important assurance stage, and the formation of different teams covering the various security domains and disciplines required to serve the business.

In this book Glenn highlights the core principles in modern software development and security. He draws upon both his experience as a security practitioner and the wisdom of other leading figures in the area to merge the two practices, explaining the need and demand for what we now call DevSecOps. Glenn proposes a three-layer approach for tackling DevSecOps.

Staying up to date with current practices and emerging technology, and learning from other people’s successes and mistakes, is a never-ending journey. Each individual’s experience and skillset are different and so, in the first DevSecOps layer, educational programmes must be devised and adaptable to allow each resource to understand the core principles of DevSecOps. This is to ensure the security fundamentals are well understood by, and ingrained into, the delivery team. Glenn discusses different learning methods to offer teams, ranging from gamifying online training to running tournaments and establishing security champion roles that allow people to put theory into practice – reinforcing the lessons taught. He notes that teaching materials are also available outside of the company.
The ‘teaching programme’ adopted for each individual should reflect how they learn and their willingness to expand their security knowledge.

The second layer focuses on the design aspect of the solution. Glenn covers good and secure coding practices, such as threat modelling and pair programming. In addition, he shares respected industry references from the OWASP and SANS establishments. The details from these will complement the lessons learned from the training programme and, hopefully, start to bake security into the core practices of the delivery team. The adopted software architecture for the solution must also be supportive of modern development practices, and so the world of containers and microservices technology is also discussed. The reliability of the delivery pipeline and its security become more important in the world of DevSecOps. This layer concludes by highlighting good practices for securing the delivery pipeline.

Completing the trio, the third layer discusses the different security testing techniques which can be deployed as part of your DevSecOps practice. Automating the deployment and testing is key but, in reality, not all testing can be automated due to certain industry regulations. Most teams will challenge the need for traditional penetration testing, but the use and purpose of this activity is not dismissed from Glenn’s three-layer framework.

To bring everything together, Glenn wraps up the book by structuring a programme of activities for teams to explore and adopt to start their journey into the world of DevSecOps. This book will no doubt become a mandatory reference in the DevSecOps culture – I hope you will enjoy it as much as I have.

Michael Man
DevSecOps London Gathering
September 2020
Introduction

Designing applications, whether they are simple APIs (application programming interfaces), microservices or large monoliths, is a complex activity involving resources from many disciplines. In traditional software delivery practices involving a slow linear lifecycle (such as ‘waterfall’), a product starts as an idea to meet a perceived requirement of the end user. This idea is handed over to a team of architects who design the application according to a set of criteria that meet the business objectives. The architecture determines the key features of the software and infrastructure, as well as non-functional requirements such as performance. When the architecture is complete, software engineers write the code that makes these pre-designed features, while operations engineers build the underlying infrastructure and services needed to host the applications. Finally, teams of testers validate the application against the functional and non-functional requirements and provide the quality assurances that meet user expectations.

These traditional methodologies have given ground to newer ways of working that merge the roles of product owners, architects, developers, testers and representatives of end users into integrated product teams, providing faster delivery and greater agility and responsiveness to changes in requirements. Furthermore, cloud-hosted applications have led to the inclusion of operations resources within multi-disciplined teams to write infrastructure as code, which enables products to be developed, tested and deployed as complete packages. These teams and their methodologies have come to be known as DevOps (from the combination of development and operations).

The necessity of delivering new products and features to customers as quickly as possible has driven the journey from traditional waterfall to DevOps. Organisations that struggle to adapt to new ways of working have learned that the lengthy waterfall processes are unable to compete against the more agile and responsive organisations.

A major challenge facing the industry is understanding how security practices fit into the newer models. In traditional ways of working, project artefacts are handed from one team to the next when specific milestones have been met. Security integrates into this process with several distinct steps, each requiring the project to validate security controls and document open risks. Thus, the conceptual design is assessed through threat modelling activities for potential security risks. Once the architecture has been fully assessed and open risks have been fully documented, the project advances to the development stage. Developers implement the relevant security controls based on the initial design document and pass on the application to a team of testers. The testing phase may include a dedicated security tester, although this activity normally takes place after functional defects have been identified and fixed and the application is ready for deployment to a live system. Security testing may even be carried out by an external team. Ultimately, the output is a report of vulnerabilities, normally classified by criticality.
The project delivery manager is cornered into making a decision on whether to delay the project while remediation takes place or to accept the risks and deliver the project without remediation. Finally, the security operations centre (SOC) continues to monitor the application for security incidents using the project’s artefacts, such as the original design documents, vulnerability reports and user guides, as reference points for the application.

Although this methodology has its faults, it is a familiar story for many delivery managers working on software projects. Moreover, security departments have invested in this approach by creating teams of specialists covering a number of security disciplines needed to support project delivery. These specialists can include threat modellers, security testers, firewall administrators, key infrastructure and certificate management teams, and identity and access control teams.

The Agile framework (allowing teams to deliver working software in short delivery cycles) and DevOps offer the opportunity to integrate security as part of the ongoing feedback loop. However, for this to happen, the traditional central cybersecurity roles need to change in order to continue supporting the delivery of secure software solutions. As Eliza-May Austin, of th4ts3cur1ty.company and co-founder of the Ladies of London Hacking Society, points out: ‘Developers have done phenomenal work in the DevOps space, whereas Security has not kept up with them. It’s not feasible for DevOps to slow down, so security needs to step up. This is easier said than done since it requires a cultural shift in the way the software delivery teams and security teams work together.’

The lack of security engineers with experience in Agile and DevOps within an organisation means that they are not being integrated into the DevOps teams. Furthermore, DevOps teams fall short in the level of knowledge required to integrate security into their ways of working. The result: security teams are treated like external resources, and DevOps and Agile teams are forced to reach out to security teams to configure firewalls, manage certificates and keys, and put in place access controls to support the running of the products. Compounding matters, the delivery team is still likely to hand the product over to a SOC team for ongoing support, creating even more distance between security and DevOps, which ultimately creates a slow feedback loop. The constraints forced upon DevOps teams by security departments provide the catalyst for managers to look for quick wins, often at the expense of security.
A potential solution is to integrate security engineers into the DevOps teams, bolstering the knowledge and skills within the team to address this challenge. Unfortunately, that solution does not scale. Typically, the ratio of security engineers to software engineers is very small; estimates range from one security engineer per 100 developers to one per 400. Overworked security engineers struggle to meet the ever-increasing demands of the software delivery teams, who are delivering products and product features at an accelerating pace. Therefore, we cannot simply add a security engineer to a DevOps team and call it ‘DevSecOps’. A different approach is required – one that is scalable and effective.

Some may argue that if DevOps is done correctly, security is implicitly applied. There are two problems with this concept. Firstly, DevOps has different meanings to different people: some engineers define DevOps by the automation of operations, while others define it by cloud services. Secondly, security is more complex than just doing DevOps right. There are many factors influencing security best practices which need to be explicitly defined.

The rest of this book proposes a solution to this problem: I introduce a three-layered approach to embed a culture of security and security practices within the organisation, putting security at the forefront of DevOps to create DevSecOps.
ONE   DevOps Explained

‘DevOps’ – the word – is simply the merging of the first letters of two distinct functions within IT: the developers, who are responsible for writing software, and the operations staff, who are responsible for maintaining the infrastructure on which the software is developed and deployed. Of course, ‘DevOps’ – the function – is far more complex than a merging of two roles into a single team. To understand what DevOps is, you need to delve into its history.

Traditional software development is based on the waterfall methodology, framed within a project that starts with ideation and ends with the delivery of a working application. There are many problems with this delivery approach, the main one being the lack of flexibility. Often, the requirements change over the course of the project, leading to increased scope, missed milestones and higher costs. To keep projects on track, it is not uncommon to see a lack of quality in the final project deliverable. The constraints associated with scope, time and cost, and their effect on quality, are represented below.
Constraint triangle: the effect of increased scope and reduced cost and time on quality

Furthermore, customer requirements are based on vague assumptions made by various stakeholders during the project lifecycle, which are only validated once the project has ended and the users are finally able to interact with the project deliverable. By this stage, the project team has already been disbanded and the deliverable handed over (with supporting documentation) to a support team, which restricts product changes to fixing bugs or providing workarounds for users to negotiate poor functionality. These changes often bloat the software with awkward fixes, making the code more difficult to support and ultimately increasing technical debt.

The industry’s solution to the problem of slow project delivery was twofold. The first change was taken from the car manufacturing industry, which streamlines its production lines by using lean practices. Lean manufacturing uses small work packages, just-in-time supply chain management and automation of repeatable processes to reduce waste, such as extended lead times and ineffective work. Keeping the processes lean and product-focused promoted team structures and workflows based on the end-to-end tasks required to deliver value to the customer for a specific product.

The second part of the solution came from a group of software engineering experts who produced a manifesto with a set of software development rules that focuses on the values of individuals and interactions, working software, customer collaboration and responding to change. This Agile Manifesto defines a set of twelve principles that promote quality, continuous delivery (CD), regular feedback loops and collaboration between individuals and teams. Various Agile methodologies evolved from this movement, including test automation and test-driven development (TDD), pair programming and continuous integration (CI).

As these ideas developed further, and software development became more streamlined and product-focused, bottlenecks shifted from product development to infrastructure support. Once software was ready to be delivered to an appropriate environment, such as a test environment, a pre-production environment or a full production environment, the packages were handed over to a team that managed deployments. To remove this bottleneck, infrastructure was automated using code and the operations tasks needed to build the infrastructure were integrated into the Agile development processes. CI extended to CD to create a pipeline that built, tested and delivered the whole package: the application, the services in which the application runs and the infrastructure on which they are all hosted. The integration of development and operations into lean and agile practices is known as DevOps.

DevOps teams are made up of a number of other disciplines, including testers, architects and product owners as well as the developers and operations staff. Each DevOps team is able to work as a single unit with minimal dependencies on other teams. There may be some interaction between DevOps teams, but ideally each DevOps unit works in isolation. There is no concept of handing off to another department to perform a function. This self-sufficient team can design a feature, write and test code, generate the environment on which the software runs and deploy the whole package into a production environment, all while looking for ways to continuously improve this process.

In the following sections, we will briefly explore the three ways and the five ideals that frame the DevOps movement.
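The test-driven development practice mentioned above can be sketched as a short red–green–refactor cycle: the test is written first and fails, then just enough code is written to make it pass. This is a minimal illustration only; the `slugify` function and its test cases are hypothetical examples, not taken from the book.

```python
import re
import unittest


# Step 1 (red): write the tests first, before any implementation exists,
# so they fail and describe the behaviour we want.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_unsafe_characters(self):
        self.assertEqual(slugify("DevOps: Explained!"), "devops-explained")


# Step 2 (green): write the minimal code that makes the tests pass.
def slugify(title: str) -> str:
    """Turn a title into a URL-safe slug."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Step 3 (refactor): improve the implementation freely, because the
# tests will catch any regression in behaviour.
```

Run with `python -m unittest` from the file's directory; in a CI pipeline the same command gates every commit, which is what makes TDD and CI complementary.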
The three ways

In the seminal work The DevOps Handbook, co-authors Gene Kim, Jez Humble, Patrick Debois and John Willis describe three principles underpinning DevOps. They built these principles for the software engineering industry by examining successful production lines within the manufacturing industry and evolving best practices for developing software.

The first of their principles is the left-to-right process flow. In this principle, the focus is on delivering features in low-risk releases by incorporating automated testing and CI into the deployment pipeline. The second principle is based on using right-to-left feedback mechanisms that allow engineers to anticipate and resolve problems rather than waiting for problems to occur in a production environment. The third principle provides an environment for continuous learning and experimentation, allowing engineers to continuously improve development and operations as an embedded part of the process. These three principles (or ways) are the foundations of DevOps.

Three ways of DevOps: left-to-right process flow, right-to-left feedback and continuous improvement
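The first two ways can be made concrete with a toy sketch of a pipeline: work flows left to right through small automated stages, and any failure immediately feeds back to the team instead of surfacing later in production. The `run_pipeline` helper and the stage names are illustrative assumptions, not a real CI system.

```python
from typing import Callable, List, Tuple

# A stage is a name plus a check that returns True on success.
Stage = Tuple[str, Callable[[], bool]]


def run_pipeline(stages: List[Stage]) -> List[str]:
    """Run stages left to right; halt at the first failure and report it."""
    log = []
    for name, step in stages:
        if step():
            log.append(f"{name}: ok")
        else:
            # Right-to-left feedback: the failure is surfaced at the stage
            # where it happened, stopping the flow before production.
            log.append(f"{name}: FAILED - feedback sent to team")
            break
    return log


# Hypothetical stages for a small release.
pipeline = [
    ("build", lambda: True),
    ("unit tests", lambda: True),
    ("integration tests", lambda: False),  # a defect is caught here...
    ("deploy", lambda: True),              # ...so deployment never runs
]

print(run_pipeline(pipeline))
# -> ['build: ok', 'unit tests: ok', 'integration tests: FAILED - feedback sent to team']
```

The third way is what happens next: the team treats the failure as a learning opportunity, fixes the defect and, where possible, adds a check that catches the whole class of problem earlier in the flow.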
The five ideals

Following the release of The DevOps Handbook, Gene Kim’s The Unicorn Project extends the principles of DevOps into five ideals that collectively define the core values of DevOps.

The first of these ideals is locality and simplicity. In order for a team to independently build, test and deploy value to customers, it needs to avoid having dependencies on a large number of other teams, people and processes. Each DevOps team can make its own decisions without needing a raft of approvals from others; the ideal promotes the decoupling of components to simplify development and testing, and it recommends making data available in real time to those who need it to complete their tasks efficiently.

The second ideal states that teams must have focus, flow and joy, meaning they must be free from constraints that hinder their ability to complete their tasks. Individuals who have to work on multiple activities at the same time, or who face multiple disruptions while working on an activity, are less likely to work to a high standard. If teams are able to focus on individual actions without interruptions, they gain a sense of joy from being able to complete their work. This helps the team deliver value to customers.

Improvement of daily work is the third ideal; it focuses on the reduction of technical debt, including security weaknesses. Technical debt, if not addressed, will grow to such an extent that most or all daily work will need to work around it to deliver features, fix defects and mitigate risks. Therefore, a significant proportion of the team’s daily work must involve investing in developer productivity and reducing technical debt. Within The DevOps Handbook, Kim identifies four types of work: business related (such as creating new features), IT, infrastructure improvement and unplanned work. Unplanned work becomes a distraction from delivering value and is symptomatic of large amounts of technical debt.
DevOps teams should not be fearful of being open and honest, which is the essence of the fourth ideal, psychological safety . Rather than evolving a ‘blame, name, shame’ culture, individuals must be able to speak up without fear of repercussions. It is important to reinforce a culture of openness within a DevOps team so that if a problem is identified, it is exposed and fixed.