Beyond DevOps: Security vs. Speed? | @DevOpsSummit #APM #DevOps

Several problems arise when the harm of software failure cannot be treated as an unbound variable

Fail fast, fail often. Yeah, but the first failure blew up the satellite. Well, this is just a photo-sharing app... not rocket science. Okay, but your photos are accessed by users who have passwords that they probably reuse elsewhere... and aren't some photos as important as satellites?

Several problems arise when the harm of software failure cannot be treated as an unbound variable. Here are some thoughts on two of them; I'll write about two more (one cognitive, one computational) later.

Problem 1: Identity Persists Across Non-Obviously Coupled Systems (So the Stakes Are Higher Than Your Application)
A compromised identity doesn't stay inside the application that leaked it. Worse: security failures cascade well beyond physically contiguous realms (if root, then everything) into physically decoupled systems, via channels that are informational (shared passwords, mailboxes) or physical-but-accidental (power cut, then reboot). The brilliant and terrifying Have I been pwned? tool -- to say nothing of the astonishing, air-gap-annihilating Stuxnet [pdf] -- surfaces two obvious but easy-to-forget truisms: simply keeping data that should not be accessed by X off the same disk as data that can be accessed by X is not good enough; and the danger posed by access to one application may be slim compared with the danger posed by access to something more serious via the identity compromised by an in-itself non-dangerous breach.
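
The Have I been pwned? check is easy enough to automate that there's little excuse not to. Here's a minimal sketch against the public Pwned Passwords range API, which uses k-anonymity: only the first five characters of the password's SHA-1 hash ever leave your machine, and the match is finished locally.

```python
# Sketch: check a password against the Pwned Passwords range API
# (the k-anonymity endpoint behind Have I been pwned?).
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many known breaches this password appears in (0 if none)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "pwned-check-sketch"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "HASH_SUFFIX:COUNT" for every breached hash
    # sharing our five-character prefix; we finish the match locally.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

print(pwned_count("password1"))  # a depressingly large number
```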

So even if 'fail fast' is okay for your application, it may not be okay for your users. The result: natural tension between the ideal of continuous delivery -- or even Agile more broadly, or even heavily iterative development in general -- and security.

And while one of the major insights of Agile is that the best refiner is the real world (as opposed to the limited imagination of the planners), one of the major embarrassments of InfoSec is that 95% of security breaches involve human error. For Agile, failure is falling until you can walk. For InfoSec, failure is letting the terrifying cat out of the poorly designed bag. Post-breach, maybe you've started to salt your hashes (congrats, you're more cryptographically sophisticated than Julius Caesar), but your users' passwords are already in the wild.
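
For the record, "salting" just means storing a unique random value alongside each user's password hash, so identical passwords don't produce identical hashes and one precomputed table can't crack the whole database. A minimal sketch using only Python's standard library (a real deployment would more likely reach for bcrypt or Argon2):

```python
# Sketch: per-user salts with a memory-hard KDF (scrypt, from the
# standard library), so brute-forcing one hash doesn't unlock the rest.
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)  # unique random salt per user
    digest = hashlib.scrypt(password.encode("utf-8"), salt=salt,
                            n=2**14, r=8, p=1)  # work factors
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode("utf-8"), salt=salt,
                               n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
```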

Problem 2: You Have Actual Human Enemies (So Something Smarter Than Chance Is Trying to Outsmart You)
On sheer randomness alone, the Internet is getting more dangerous (Akamai recorded huge DDoS increases over the past year -- 122% for application-layer (OSI Layer 7) attacks alone). But the really scary problem is that real, smart, often well-funded humans are trying to make your software do what you didn't design it to do. For most failures, the enemy is "imprecise requirements" or "poor algorithm design" or "inadequately scalable environment" (or even just blundering users); for security failures, the enemy is malicious engineers.

This is the meatiest bit of the (otherwise slightly theatrical) Rugged Manifesto:

I recognize that my code will be attacked by talented and persistent adversaries who threaten our physical, economic and national security.

Yeah. So engineer.add(<malice, talent, persistence>) returns ???? -- and multiply(????, world.get(amountEatenBySoftware)) = ????!!!!!

If DevOps is a management practice, then a risk of ????!!!!! is pretty much unacceptable.
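
Less facetiously: the standard way to put numbers on that ????!!!!! is expected-loss risk quantification -- breach likelihood times breach impact -- where adversary talent and persistence push likelihood up, and software eating the world pushes impact up. A toy sketch, every number invented:

```python
# Toy expected-loss calculation (all numbers invented for illustration).
base_breach_rate = 0.05        # chance per year, absent motivated attackers
adversary_multiplier = 3.0     # talent + persistence raise the likelihood
direct_impact_usd = 2_000_000  # harm contained within the application
coupling_multiplier = 4.0      # identity reuse spreads harm to other systems

expected_annual_loss = (base_breach_rate * adversary_multiplier) \
    * (direct_impact_usd * coupling_multiplier)
print(f"${expected_annual_loss:,.0f} per year")  # the ????!!!!! in dollars
```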


None of this, of course, means that Agile isn't an awesome idea. Nor am I suggesting that security can't be baked into an iterative, continuously improving process -- certainly it can, but on the face of it this seems to require a bit of finagling. And of course the proper way to address security will always be risk analysis, with a good lump of threat analysis included in any measure of technical debt.

I'd love to take some taxonomy of software errors (security errors in particular, maybe) and cross-tab cost per error type against cycle time -- that is, for each error that cost d dollars, plot d against the length of the cycle during which the error was introduced -- normalizing by the technical debt estimated for each cycle (assuming somebody measured that at the time, which probably didn't happen). Maybe someone has already done this (I've seen plenty of cost-per-error-type data, but never correlated with cycle time); and since technical debt is kind of a guess anyway, maybe anecdotes are a better gauge of the security cost of "shift left" than any such cross-tab would be.
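
For concreteness, here's roughly the cross-tab I have in mind, sketched in pandas over a made-up error log; every column name (error_type, cycle_days, cost_usd, tech_debt_est) is invented, since no tracker exports exactly this.

```python
# Sketch of the proposed cross-tab, over a hypothetical error log.
import pandas as pd

errors = pd.DataFrame({
    "error_type":    ["injection", "logic", "config", "injection", "logic"],
    "cycle_days":    [7, 7, 14, 30, 30],   # length of the cycle that introduced it
    "cost_usd":      [50_000, 4_000, 12_000, 90_000, 6_000],
    "tech_debt_est": [3.0, 3.0, 5.0, 9.0, 9.0],  # debt guessed for that cycle
})

# Normalize cost by the (admittedly guessed) technical debt of the cycle,
# then cross-tab mean normalized cost: error type x cycle length.
errors["cost_per_debt"] = errors["cost_usd"] / errors["tech_debt_est"]
table = errors.pivot_table(values="cost_per_debt",
                           index="error_type",
                           columns="cycle_days",
                           aggfunc="mean")
print(table)
```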

Anyone have any experiences they'd like to share?

More Stories By John Esposito

John Esposito is Editor-in-Chief at DZone, having recently finished a doctoral program in Classics at the University of North Carolina. In a previous life he was a VBA and Force.com developer, DBA, and network administrator. John enjoys playing piano and looking at diagrams, and raises two cats with his wife, Sarah.
