
Why Security Needs DevOps: OpenSSL and Beyond | @DevOpsSummit #DevOps

New methods of infrastructure management are the only way to keep pace with current security threats

By Greg Pollock

On March 18, 2015, system administrators and developers received ominous news: two high-severity vulnerabilities in OpenSSL would be announced the next day. Since Heartbleed, OpenSSL had been on a bad streak, and it looked like things were only going to get worse. Operations, development, and security teams braced for impact, and then... it wasn't really that bad.

One issue was a downgrade-to-export-grade-RSA vulnerability; the other made servers susceptible to denial-of-service (DoS) attacks. The export-grade RSA issue was a serious problem, but it had been known for months and was only being officially reclassified as high severity. DoS attacks were bad but wouldn't result in data compromise. The "next Heartbleed" turned out to be more like the boy who cried wolf.

Since Heartbleed, vulnerability disclosures have gotten more attention. Sometimes it is well deserved; other times it feels more like media hype. For a larger perspective on the post-Heartbleed security landscape, ScriptRock talked to Jonathan Cran and Scott Petry. Jonathan Cran is the VP of Operations at Bugcrowd, a platform for crowd-sourced security. Scott Petry is the founder of Postini and Authentic8, which makes a disposable browser to insulate businesses from their employees' surfing.

Jonathan Cran pointed to some relatively good news: greater investment in the Linux Foundation's Core Infrastructure Initiative will start paying down some of the operating system's technical debt. Additionally, the Linux Foundation is paying for a professional audit for the first time and has hired more full-time developers to work solely on the kernel.

At the same time, the overall volume of vulnerabilities is only increasing, as Bugcrowd's success suggests. Given the decline of signature-based defenses and the unpredictability of new vulnerabilities, Cran's recommendations require infrastructure teams to be flexible. "You need to be able to know what you have, know what vulnerabilities are out there, and quickly make changes," he said. "You need to have systems in place so that once you know what a vulnerability is, you can fix it as soon as possible. That's more about your processes and how you build your infrastructure than about this vulnerability or that one in particular."
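Cran's first step, "know what you have," can start as small as checking which OpenSSL your runtime is actually linked against. The sketch below uses Python's standard `ssl` module; the 1.0.2 threshold is illustrative only, not a statement of which releases are safe, and a real inventory tool would match specific advisories rather than a bare version prefix.

```python
import ssl

def openssl_older_than(required):
    """True if the OpenSSL this interpreter links against predates `required`.

    `required` is a (major, minor, fix) tuple. The patch-letter and status
    fields of ssl.OPENSSL_VERSION_INFO are deliberately ignored here, so a
    real tool would need finer-grained matching against each advisory.
    """
    return ssl.OPENSSL_VERSION_INFO[:3] < tuple(required)

if __name__ == "__main__":
    print("Linked against:", ssl.OPENSSL_VERSION)
    if openssl_older_than((1, 0, 2)):
        print("Predates the 1.0.2 line; review current advisories.")
```

Run on each host (or baked into a monitoring check), even a check this crude answers the first question an advisory raises: does this affect us at all?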

Scott Petry painted a less rosy picture of the security landscape. "Will there be more? Oh yeah. Is it going to be OpenSSL? I don't know. Just think about other core utilities (image processing, for example) and you have a list of potential vulnerabilities with the same reach as Heartbleed." (N.B.: this interview was conducted before VENOM, a vulnerability affecting many virtualization platforms, was disclosed.) Open source software's pace of development and natural supply-and-demand dynamics have produced immensely valuable libraries but have also made security testing optional. Like Cran, Petry recommends that infrastructure owners prepare for the unknown unknowns by building flexible, transparent, high-quality systems. "Give me visibility, the ability to quickly make changes, and the ability to check that those changes worked."
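Petry's third ask, the ability to check that changes worked, can be sketched as a tiny post-change harness. Everything here is hypothetical: the check names and the always-passing lambdas stand in for real probes (a TLS protocol scan, a package-version query, and so on).

```python
def verify(checks):
    """Run each named check; return the names of the checks that failed.

    `checks` is a list of (name, zero-argument callable) pairs. A check
    passes when it returns a truthy value; an exception counts as failure,
    since an unreachable probe proves nothing about the change.
    """
    failures = []
    for name, check in checks:
        try:
            ok = bool(check())
        except Exception:
            ok = False
        if not ok:
            failures.append(name)
    return failures

# Hypothetical post-change checks; real ones would probe live systems.
checks = [
    ("sslv3 disabled", lambda: True),
    ("vulnerable package upgraded", lambda: True),
]
print(verify(checks))  # an empty list means every check passed
```

The design choice worth copying is that checks are data, not code paths: the same harness runs before a change (to confirm the problem exists) and after it (to confirm the fix landed everywhere).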

Black hats and infrastructure owners remain locked in a Red Queen's race: hackers continue to find novel ways to exploit the weak spots in complex systems, while operations teams get better at monitoring and automating their infrastructure. Sadly, the bad guys are currently winning: the Verizon Data Breach Investigations Report finds that the gap between time to infection and time to discovery increased every year from 2004 to 2014. But perhaps an approach to security that takes the complete development and deployment cycle into account, a DevOps approach, can turn the tide.

The lesson to be learned from Heartbleed and its offspring is not that OpenSSL or open source software is too dangerous to use. Rather, the conclusion is that new methods of infrastructure management - better visibility into system state, more efficient means of detecting and eliminating vulnerabilities, faster ways to apply uniform changes - are the only way to keep pace with current security threats.
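The "better visibility into system state" piece of that conclusion can be illustrated with a baseline-and-diff sketch: hash the files you care about, then compare a later snapshot against the baseline. This is a toy version of what configuration-monitoring tools do, not ScriptRock's actual implementation; the file names in the demo are invented.

```python
import hashlib
from pathlib import Path

def snapshot(paths):
    """Map each path to a SHA-256 digest of its contents (the baseline)."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def drift(baseline, current):
    """Paths whose contents changed (or vanished) since the baseline."""
    return sorted(p for p in baseline if current.get(p) != baseline[p])

if __name__ == "__main__":
    import os, tempfile
    # Demonstrate on two throwaway files instead of real config files.
    d = tempfile.mkdtemp()
    a, b = os.path.join(d, "a.conf"), os.path.join(d, "b.conf")
    for f in (a, b):
        Path(f).write_text("setting = 1\n")
    before = snapshot([a, b])
    Path(b).write_text("setting = 2\n")      # simulate an unreviewed change
    print(drift(before, snapshot([a, b])))   # only b.conf has drifted
```

With a baseline in hand, the day an advisory lands you can answer "which hosts still carry the old configuration?" mechanically instead of by spelunking.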


More Stories By ScriptRock Blog

ScriptRock makes GuardRail, a DevOps-ready platform for configuration monitoring.

Realizing we were spending way too much time digging up, cataloguing, and tracking machine configurations, we began writing our own scripts and tools to handle what is normally an enormous chore. Then we took the concept a step further, giving it a beautiful interface and making it simple enough for our bosses to understand. We named it GuardRail after its function — to allow businesses to move fast and stay safe.

GuardRail scans and tracks much more than just servers in a datacenter. It works with network hardware, cloud service providers, CloudFlare, Android devices, infrastructure, and more.
