Shift Left: Secure SDLC Explained
On April 2nd, 2020 we hosted our "Shift Left: Secure SDLC Explained" webinar, where our software security expert Tim Hemel discussed the benefits of 'shifting left': dealing with security earlier in the development process. We received quite a few questions during the session, which Tim has gladly answered below. Should you still have any remaining questions, please contact email@example.com.
Is applying a customized security defense for each application necessary?
Every application is different, so the answer would seem to be 'yes'. On the other hand, many applications are similar, and we can re-use many of the same defense strategies. Applications within an organization differ mostly in their functionality, which means they will have different security requirements. If teams specialize in certain technologies, their applications will look similar from a technological point of view, and on the architectural level security will look very similar too. In the beginning, most threats and defenses will look new, but as you grow into a secure development process, you will start to see similarities and turn them into standardized, re-usable solutions for your environment. That allows you to do security faster and gives you room to do more of it.
Do we need customized third-party tools for our security testing? For example, for a particular application we started our code analysis with Checkmarx; the next time, should we do it with an open-source tool?
That is a difficult question to answer; it depends on the circumstances. If you want to know what the impact is of switching tools instead of using the same tool all the time, the first problem I see is that it becomes harder to compare results from different tests. Then again, I think you should try multiple tools and see which one works best for you. Of course, you can run multiple tools on the same code and compare them.

Evaluating a tool is a complex story. It is not just about the findings of a tool, but also about costs, integration possibilities and ease of use. Even if we focus only on the tool's findings, evaluation is difficult. You want the tool to give you as many findings as possible, but they should be useful findings as well. Without additional expertise and manual testing, it is hard to say whether the tool reported all weaknesses and vulnerabilities. You can improve coverage somewhat through customization, for example of the scan rules, but for the rest you will have to trust the tool.

Missed security findings are invisible; false positives, on the other hand, are very visible. If we try to reduce the false positives, we risk missing some actual findings, but having to process many false positives is a lot of work that we don't really want to do. One way out of this dilemma is to run the tool with fewer rules, so that you get fewer, but more useful, findings. Then, once in a while, run the tool with all rules enabled and process all findings. Use that feedback to tune the rules for your regular tests.
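This tiered approach, a focused rule set for regular scans and the full rule set once in a while, can be supported with a small comparison script. The sketch below is an illustration and not tied to any specific tool: it assumes both scans produce SARIF output (a common static-analysis report format, parsed here with `json.load` or similar) and lists the rules that only fired in the full run, so you can review them as candidates for your regular rule set.

```python
from collections import Counter

def rules_fired(sarif: dict) -> Counter:
    """Count findings per rule id in a parsed SARIF report."""
    counts = Counter()
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            counts[result.get("ruleId", "unknown")] += 1
    return counts

def tuning_candidates(regular_scan: dict, full_scan: dict) -> dict:
    """Rules that fired only in the occasional full-rule scan.
    Review these findings; promote genuinely useful rules to the
    regular rule set."""
    regular = rules_fired(regular_scan)
    full = rules_fired(full_scan)
    return {rule: n for rule, n in full.items() if rule not in regular}
```

Running this after each full scan keeps the tuning loop from the answer above explicit: false positives stay out of the regular scans, while rules that uncover real issues graduate into them.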
Should we have the same security practices for all of our applications, or can we have different ones for each application, for example to reduce the financial impact of these practices?
The advantage of a standard set of security practices is that it is clear what to do, and easy to evaluate. At the same time, for some applications, executing all these practices may be more expensive than necessary. Therefore, we need to find a middle way here. Try to standardize as many of the practices as possible, but see where you need to focus. Some applications benefit more from early threat modeling, while others, because they have a stable design, have more need for secure code reviews and security testing. You can also vary the intensity of the practices. For example, for web applications with a lower risk profile, you could test against OWASP ASVS level 1, while higher-risk applications will need a level 2 test. We want to get the best security value within our budget limits, and we need to spend our resources wisely. Therefore, look at the application's risk and its development status, and estimate which practices bring the most benefit.
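As a sketch of that kind of risk-based selection, the hypothetical helper below maps an application's risk profile and design stability to an ASVS verification level and a set of practices to emphasize. The categories, thresholds and practice names are assumptions for illustration; calibrate them to your own risk model.

```python
def plan_security_practices(risk: str, stable_design: bool) -> dict:
    """Suggest an ASVS level and focus practices for an application.

    `risk` is one of "low", "medium", "high" (illustrative categories);
    `stable_design` indicates whether the architecture is settled.
    """
    asvs_level = {"low": 1, "medium": 2, "high": 3}[risk]
    practices = ["security requirements"]  # always worthwhile
    if stable_design:
        # Design is settled: invest in implementation-level checks.
        practices += ["secure code review", "security testing"]
    else:
        # Design still moving: catch flaws before they are built.
        practices += ["threat modeling"]
    return {"asvs_level": asvs_level, "focus": practices}
```

A table or spreadsheet works just as well; the point is that the selection criteria are written down and applied consistently rather than decided ad hoc per project.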
You haven't talked about secure deployment in the webinar; how is this handled in a company?
OWASP SAMM mentions secure deployment, so that is definitely something to pay attention to. Unfortunately we had to be selective in which practices to discuss in the webinar. SAMM advises automating the deployment process to eliminate its biggest cause of security problems: human mistakes. These practices are similar to what we do in a DevOps way of working, and we can learn a lot from DevOps here. Of course you will still need to make sure that what you deploy automatically is secure: a hardened environment with a secure installation of your software.
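To make "what you deploy automatically is secure" concrete, an automated pipeline can run hardening checks and refuse to proceed when they fail. The sketch below is a minimal illustration with assumed configuration keys (`debug`, `tls`, `admin_password`); a real pipeline would check many more properties of the environment and the installation.

```python
def predeploy_checks(config: dict) -> list[str]:
    """Return hardening problems found in a deployment configuration.
    An empty list means the automated gate may let the deploy proceed."""
    problems = []
    if config.get("debug", False):
        problems.append("debug mode enabled in production")
    if not config.get("tls", False):
        problems.append("TLS not enabled")
    if config.get("admin_password") in (None, "", "admin", "changeme"):
        problems.append("default or missing admin password")
    return problems
```

Because the checks run on every deployment, a forgotten hardening step is caught by the pipeline instead of depending on an operator remembering it.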
Should you then also test the automated tools for deployment?
With automated deployment, you have delegated your process to software. Where previously operators with malicious intentions had all the power to abuse the system, now such attacks would require them to abuse the deployment software. It is an extra barrier, but if malicious insiders are a realistic threat, you should consider this and subject your deployment software to the same scrutiny as the software you are deploying.
Is this the first step towards DevSecOps if an organization already works with DevOps? And which steps are needed to get to DevSecOps?
If you read the DevOps literature, such as The Phoenix Project, you will discover that "building security in" was part of DevOps all along. SecDevOps and DevSecOps are basically the same thing as DevOps, but with more emphasis on security. In fact, many of the techniques from DevOps really apply to security practices too. You can eliminate bottlenecks and improve communication between development, operations and security in the same way you improve it between development and operations. You can apply techniques like 'value stream mapping' to identify flow problems and change security practices accordingly, for example by shifting them further to the left. Creating a culture of working together is even more important when security comes into play, as security findings are often seen as criticism. Many people in the security community unfortunately still have a mindset that is not very cooperative, but I hope this will change.
Why is training people on secure coding not enough?
Not all security issues in applications relate to insecure code. As Gary McGraw describes in his book "Software Security: Building Security In", Microsoft reported that more than 50% of the security problems found during its famous security push (https://www.wired.com/2002/01/bill-gates-trustworthy-computing/) were the result of architectural problems, not code. Therefore it is not only about secure coding, but also about secure design and requirements.
Do you need a hacker in your team?
Of course this depends on how you define 'hacker'. You need to be able to identify and prevent security bugs, but not necessarily exploit them with the latest and greatest techniques. In some cases you will need such knowledge, but most of the time it is more effective to detect a potential problem and fix it than to establish exactly how exploitable it is. For that, you do not need to know how to write zero-day exploits or other fancy hacker tricks.

So the knowledge we need in our team is not necessarily hacker knowledge. Programming knowledge is important to understand how to fix problems, and many people with the hacking expertise described above do not have it. It is easier to teach a developer how to detect and prevent security problems than to teach a hacker how to program. Therefore I would say that having a hacker on your team is not necessary.

However, knowing what a hacker can do and understanding how attacks basically work is a very useful awareness exercise. It gives you an idea of how easy certain problems are to exploit, and therefore what the risk could be. So getting a bit of the 'hacker mindset' can be useful. Still, the advice to "think like a hacker" sounds nice but does not work well in practice. Adam Shostack compares it to thinking like a professional chef (https://adam.shostack.org/blog/2008/09/think-like-an-attacker/): you need experience and knowledge before you can cook a nice meal consistently. As developers, we don't have the time or interest for that. That does not mean we cannot learn about secure development; we just need to learn it in a way that fits our way of working.
Can the Scrum master perform the role of a security champion?
What is important in the security champion role is to ensure the availability of security knowledge and to have someone responsible for keeping the process going. A Scrum master's task is to facilitate the development process and to eliminate any roadblocks. So for that part of the security champion's tasks, a Scrum master would be a good choice. It is not necessary to give all of the security champion's tasks to one person. You could assign them to multiple people or even rotate the role: whoever wears the security hat this sprint has to keep security going. That also makes security a shared responsibility. As you can see, there are many ways to deal with this; as long as you make sure the champion's tasks are done, choose whatever works in your organization.
Can a security champion work on a different team than the software development team?
Ideally you want the champion to work as closely with the team as possible, and if you can do that from a different team, I don't see a problem there. Sometimes you see a dedicated software security team that helps multiple development teams simultaneously. It is a way to make optimal use of scarce security expertise. Such a team can hold 'office hours' to help teams and assist them with security activities such as threat modeling.
If the company is big enough, is it suitable to have a “Department of the Security Champions” (some kind of Department of Excellence)?
In fact, in the book 'Software Security: Building Security In', Gary McGraw suggests setting up such a department, called a 'software security group'. For small organizations a separate department can be costly, which is why the concept of a security champion was introduced later. Now we are discovering that the demand for security specialists has outgrown the supply, and we need to find ways to scale the availability of this expertise. One way is to have a software security group. Another is to have team members perform security activities at a lower level of expertise, while the experts work on the more challenging issues. In other words, you create a hierarchy of expertise.
Standards and Frameworks
Why did you choose OWASP SAMM?
I like the structure, because it maps well to the phases of software development. In each of the development phases we can make different mistakes and SAMM suggests matching practices to prevent them. Since version 2, it has matured a lot and I think it provides a workable structure. Note that SAMM only talks about your development process, i.e. what kind of activities you could do, but it does not tell you in detail how to actually do them.
Suppose we follow a mature secure development process. How can we measure or demonstrate the security level of the resulting software?
Since OWASP SAMM only talks about the process, we need something different to measure the security of the software itself. In 2014, I made an attempt to do exactly that. We released the "Framework Secure Software", a document that helped software developers make their software more secure and demonstrate how secure it was. Central to this were practices like security requirements, threat modeling, secure coding and testing. The core idea is to keep track of the threats you can expect in your software, how you mitigate them, and how you will verify that your mitigations work. By making security visible in that way, you know the security status at any point in your development process, and you also know when it is good enough. That makes security more manageable. You will still need to customize the practices, depending on the type of application, the technologies, the environment and the risk profile. For web applications, for example, we like to follow OWASP's ASVS because it covers many potential security mistakes. For other types of applications, such as mobile or desktop applications, you will need different standards or have to create your own lists.
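The core bookkeeping of tracking threats, mitigations and verification can be captured in a very simple register. The sketch below is an illustration of that idea, not part of the Framework Secure Software itself; the field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    """One expected threat, with its mitigation and verification state."""
    description: str
    mitigation: str = ""    # how we defend against it
    verified: bool = False  # has the mitigation been tested?

def open_threats(register: list[Threat]) -> list[Threat]:
    """Threats that are not yet both mitigated and verified; the
    security status is 'good enough' when this list is empty."""
    return [t for t in register if not (t.mitigation and t.verified)]
```

Whether kept in code, a wiki or a spreadsheet, such a register makes the security status visible at any point in development: each entry moves from identified, to mitigated, to verified.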
If you have any remaining questions, please contact us at firstname.lastname@example.org