Paul Farrington, EMEA CTO, Veracode, discusses the practical steps organisations can take to make sure their development and security teams are working better together to implement a successful ‘shift left’ process.
Shifting security ‘left’ is about more than simply changing the timing of testing. When security shifts to earlier phases of the development lifecycle, it also changes who’s responsible for conducting the testing and addressing the results. Until recently, security testing would take place late in the software development process, and the results were then passed back ‘over the wall’ to developers. But with the rise of DevSecOps, finding and fixing security-related defects is a shared responsibility between security and development teams.
Security testing has shifted further left into the realm of the developer. The development team now has both the ability and the responsibility to embed security in the development phase, while the security team contributes earlier in the lifecycle, focusing on goals and policy.
This is a significant change that requires entirely new tasks, skills, priorities and a new mindset. But there is a big obstacle: most developers don’t have secure coding skills. The reality is that most developers aren’t formally taught secure coding practices, and most organisations do not offer this training to their development teams.
If you shift security left into developer workflows without training and guidance, it’s likely to introduce delays in developer timelines and still produce vulnerable code. Shifting left only works when developers get the tools and assistance they need to succeed. The speed at which you receive security-testing results is meaningless without the guidance needed to address those results.
The following five steps can offer that guidance:
1. Autonomous security from day one
Automation in security is critical. Repeatable, automated processes reduce the number of manual steps that slow things down. But the most important reason security needs to be automated is that this is how development teams already work. By automating security, companies don’t need a separate security team to step into the development team’s workflow: security becomes part of the developers’ process, and this is becoming the standard way of building software.
The first step in automating security is to look at what tools you already have in place, then identify the best points in your pipeline at which to automate the security testing you want to do.
Once the tests are running automatically, the next step is to capture the results in an automated way as well, so that this feedback can be fed into defect-tracking systems and used to further improve the process.
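As a rough illustration of what this might look like, the sketch below runs a scanner in a build step and files its findings with a defect tracker. The scanner command, the JSON fields and the tracker endpoint are all placeholders rather than references to any specific product; the point is simply that both the test run and the result capture happen without a human in the loop.

```python
import json
import subprocess

import requests  # assumes the 'requests' package is available in the build environment

TRACKER_URL = "https://tracker.example.com/api/issues"  # hypothetical defect-tracking endpoint


def run_scan(target_dir: str) -> list[dict]:
    """Run a (hypothetical) security scanner CLI that emits JSON findings."""
    result = subprocess.run(
        ["security-scanner", "--format", "json", target_dir],  # placeholder tool name
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout).get("findings", [])


def file_defects(findings: list[dict]) -> None:
    """Push each finding into the defect tracker so it enters the normal development workflow."""
    for finding in findings:
        requests.post(TRACKER_URL, json={
            "title": f"[security] {finding['rule']} in {finding['file']}",
            "severity": finding["severity"],
            "description": finding.get("message", ""),
        }, timeout=10)


if __name__ == "__main__":
    findings = run_scan("./src")
    file_defects(findings)
    print(f"Filed {len(findings)} security findings.")
```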
2. Integrate as you code
When working with organisations that want to integrate security into the continuous delivery process, the overriding principle is to put this process in place as early in the development lifecycle as possible. This is essential for creating a tight feedback loop between discovering a problem and enabling the developer to fix it quickly. It’s much faster and cheaper to ask a developer to fix something they’ve just coded than something from six months down the line.
Performing security testing just as the developer is writing the code allows for instant feedback. We want to enable developers to do application security testing themselves, but most don’t understand how to fix flaws the way application security professionals do. On top of this, developers work to strict timelines for delivering code. Ideally, application security should be viewed as an ongoing partnership between development and security, where security defines what’s acceptable from a quality level, and the developers implement the testing and address issues as they come up.
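One lightweight way to get that instant feedback is to run a scan on just the files a developer is about to commit. The sketch below is a minimal pre-commit hook, assuming a hypothetical ‘security-scanner’ command-line tool; any analyser your team already uses could sit in its place.

```python
#!/usr/bin/env python3
"""Pre-commit hook sketch: scan only the files staged in this commit."""
import subprocess
import sys


def staged_files() -> list[str]:
    """List added/copied/modified files staged for commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith((".py", ".js", ".java"))]


def main() -> int:
    files = staged_files()
    if not files:
        return 0
    # Placeholder scanner invocation: swap in whatever analyser your team uses.
    scan = subprocess.run(["security-scanner", "--fail-on", "high", *files])
    if scan.returncode != 0:
        print("Security issues found in staged files; fix them before committing.")
    return scan.returncode


if __name__ == "__main__":
    sys.exit(main())
```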
3. Avoid false alarms
Like any form of testing, application security testing can suffer from false alarms, where the test returns a result indicating a problem when one doesn’t exist. Earlier generations of application security tools commonly returned false alarms because they were designed to show every possible issue in a piece of software, false positives included. The problem with this approach is that it is very difficult to integrate such tools into a shift-left workflow, because the results have to be reviewed before the developer can act on them.
Modern tools need to take this into account and tune for both maximum coverage, making sure all critical security issues are found, and low noise, so developers aren’t troubled by lots of false positives and don’t end up removing testing from their workflows. The entire goal of shifting left and bringing application security closer to the developer is to automate the process, making it quicker with minimal disruption to the development team. If you automate testing that produces lots of false positives, you’ll be unnecessarily stopping the delivery process.
By adopting a solution that has a lower false positive rate, businesses can get the benefit of application security testing without unnecessarily disrupting the developer workflow.
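In practice, teams often express this as a gating rule: only findings that are both severe and high-confidence should block a build, while everything else is recorded for later triage. The sketch below shows one such rule, assuming generic ‘severity’ and ‘confidence’ fields on each finding; real tools name and score these differently.

```python
"""Sketch: fail a build only on high-confidence, high-severity findings.

Assumes scan results are available as a list of dicts with hypothetical
'severity' and 'confidence' fields; field names will vary by tool.
"""

BLOCKING_SEVERITIES = {"critical", "high"}
MIN_CONFIDENCE = 0.8  # tune per team to balance coverage against noise


def blocking_findings(findings: list[dict]) -> list[dict]:
    """Return only the findings that should stop the pipeline."""
    return [
        f for f in findings
        if f["severity"] in BLOCKING_SEVERITIES and f.get("confidence", 1.0) >= MIN_CONFIDENCE
    ]


def should_fail_build(findings: list[dict]) -> bool:
    """Print the blocking findings and report whether the build should fail."""
    blockers = blocking_findings(findings)
    for f in blockers:
        print(f"BLOCKING: {f['rule']} ({f['severity']}) in {f['file']}")
    return bool(blockers)
```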
4. Create security champions
Having a Security Champion on every development team helps ensure that security knowledge is part of every decision made when building software. The Security Champion is a member of the development team who has been trained by, and works in close concert with, the security team, and acts as an adviser and expert who can intervene when design or implementation problems arise during development. The individual in this role can help reduce the complexity of secure coding for developers by collaborating on immediate remediation. Security Champions also help to reduce culture conflict between development and security by amplifying the security message at a peer-to-peer level. They don’t need to be experts; they act more as the ‘security consciousness’ of the group.
The CISO is often seen as the one who’s responsible for making sure a company is secure. This is outdated and unrealistic thinking – the CISO can’t be everywhere. In reality, developers need to be responsible for building secure software, while the CISO is responsible for providing the tools, processes and governance that make that possible.
5. Develop a culture of visibility
Contrary to how many developers think of application delivery, the responsibility doesn’t stop once the product is in production. One of the major innovations of DevOps is that it makes teams responsible for the product in production and for dealing with any issues that come up there.
There are several considerations for maintaining visibility of security incidents in live, running applications. One responsibility is to monitor applications to understand whether they are under attack, so that corrective action can be taken. This can be done with a variety of tools and practices, many of which teams may already be using, though some tuning may be needed to isolate attacks. This could include correlating product logs into security intelligence, or writing rules that fire when certain conditions occur, so that unusual or suspicious activity is surfaced.
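As an illustration of the kind of rule involved, the sketch below flags a source address that generates many failed logins within a short window. The log fields, threshold and window are assumptions chosen for the example; in practice this logic would live in whatever log-aggregation or SIEM tooling the team already runs.

```python
"""Sketch of a simple detection rule: flag an IP with many failed logins in a short window."""
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 10  # failed attempts within WINDOW that trigger an alert

# Per-source-IP record of recent failure timestamps.
failures: dict[str, deque] = defaultdict(deque)


def process_event(timestamp: datetime, event: str, source_ip: str) -> None:
    """Feed one log event into the rule; print an alert if the threshold is crossed."""
    if event != "login_failed":
        return
    window = failures[source_ip]
    window.append(timestamp)
    # Drop attempts that have fallen outside the time window.
    while window and timestamp - window[0] > WINDOW:
        window.popleft()
    if len(window) >= THRESHOLD:
        print(f"ALERT: {len(window)} failed logins from {source_ip} in the last {WINDOW}")
```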
Another responsibility teams have is to understand the entire attack perimeter of their organisation and to ensure applications have been subjected to the right degree of security rigour. The organisation as a whole needs to know what’s out there, understand which applications may put the enterprise at risk, and be prepared to act quickly when a highly vulnerable application is found.
The bottom line is that application security success is about more than finding security flaws; it’s about fixing them. In the DevOps era, security and development have to work together to ensure that flaws are identified, prioritised and fixed. If developers are provided with modern tools that let them accomplish their goals on schedule while also producing secure code, they will make progress on reducing the security debt in their software.