Beyond DevOps: Security vs. Speed? | @DevOpsSummit #APM #DevOps

Several problems arise when the harm of software failure cannot be treated as an unbound variable

Fail fast, fail often. Yeah, but the first failure blew up the satellite. Well, this is just a photo-sharing app... not rocket science. Okay, but your photos are accessed by users who have passwords that they probably use for other things... and aren't some photos as important as satellites?

Several problems arise when the harm of software failure cannot be treated as an unbound variable. Here are some thoughts on two of them; I'll write about two more (one cognitive, one computational) later.

Problem 1: Identity Persists Across Non-Obviously Coupled Systems (So the Stakes Are Higher Than Your Application)
Worse: security failures cascade well beyond physically contiguous realms (if root, then everything) into physically decoupled systems, via channels that are informational (shared passwords, mailboxes) or physical-but-accidental (power cut, then reboot). The brilliant and terrifying Have I been pwned? tool -- to say nothing of the astonishing, air-gap-annihilating Stuxnet [pdf] -- surfaces two obvious but easy-to-forget truisms: first, simply keeping data that should not be accessed by X off the same disk as data that can be accessed by X is not good enough; second, the danger posed by access to one application may be slim compared with the danger posed by access to something more serious through the identity compromised in that in-itself non-dangerous breach.
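To make the shared-password channel concrete, here is a minimal sketch of checking a candidate password against the Have I been pwned? Pwned Passwords range API, which uses k-anonymity: only the first five characters of the SHA-1 hash ever leave your machine. The endpoint and response format are the public API's; the function name and surrounding structure are mine.

```python
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    """Return how many known breaches contain this password, per Pwned Passwords.

    Only the first 5 hex chars of the SHA-1 hash are sent (k-anonymity);
    the full password and full hash never leave this machine.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    # Response is one "HASH_SUFFIX:COUNT" line per known hash sharing the prefix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(times_pwned("password123"))  # a depressingly large number
```

If the count is nonzero, that password is circulating in breach dumps -- and, per the truism above, so is every other account where the user reused it.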

So even if 'fail fast' is okay for your application, it may not be okay for your users. The result: natural tension between the ideal of continuous delivery -- or even Agile more broadly, or even heavily iterative development in general -- and security.

And while one of the major insights of Agile is that the best refiner is the real world (as opposed to the limited imagination of the planners), one of the major embarrassments of InfoSec is that an oft-cited 95% of security breaches involve human error. For Agile, failure is falling until you can walk. For InfoSec, failure is letting the terrifying cat out of the poorly designed bag. Post-breach, maybe you've started to salt your hashes (congrats, you're more cryptographically sophisticated than Julius Caesar), but your users' passwords are in the wild.
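For anyone who wants to stay ahead of Caesar: a minimal sketch of salted password hashing using only Python's standard library. The key-derivation parameters here are illustrative, not a recommendation -- check current guidance (e.g., OWASP's password storage cheat sheet) before picking real values.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative; tune to current guidance

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived_key). Store both; the salt is not secret."""
    salt = os.urandom(16)  # unique per user, so identical passwords hash differently
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, key: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, key)  # constant-time comparison

salt, key = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, key)
assert not verify_password("password123", salt, key)
```

The per-user salt is what defeats precomputed rainbow tables; the iteration count is what makes brute-forcing each individual hash expensive. Neither helps the users whose plaintext passwords already leaked.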

Problem 2: You Have Actual Human Enemies (So Something Smarter Than Chance Is Trying to Outsmart You)
On sheer randomness alone, the Internet is getting more dangerous: Akamai records crazy DDoS increases over the past year -- 122% for application-layer (OSI Layer 7) attacks alone. But the really scary problem is that real, smart, often well-funded humans are trying to make your software do what you didn't design it to do. For most failures, the enemy is "imprecise requirements" or "poor algorithm design" or "inadequately scalable environment" (or even just blundering users); for security failures, the enemy is malicious engineers.

This is the meatiest bit of the (otherwise slightly theatrical) Rugged Manifesto:

I recognize that my code will be attacked by talented and persistent adversaries who threaten our physical, economic and national security.

Yeah. So engineer.add(<malice, talent, persistence>) returns ???? -- and multiply(????, world.get(amountEatenBySoftware)) = ????!!!!!

If DevOps is a management practice, then a risk of ????!!!!! is pretty much unacceptable.


None of this, of course, means that Agile isn't an awesome idea. Nor am I suggesting that security can't be baked into an iterative, continuously improving process -- certainly it can, but on the face of it this seems to require a bit of finagling. And of course the proper way to address security will always be risk analysis, with a good lump of threat analysis included in any measure of technical debt.
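As a back-of-the-envelope illustration of what "a good lump of threat analysis" might look like in numbers, here is a sketch using the classic annualized loss expectancy formula (ALE = single loss expectancy x annualized rate of occurrence). The threat names and all dollar figures below are invented for illustration.

```python
# Classic quantitative risk analysis: ALE = SLE * ARO.
# All threats and numbers below are invented for illustration.

threats = [
    # (name, single loss expectancy in $, expected occurrences per year)
    ("credential stuffing via reused passwords", 250_000, 0.50),
    ("unsalted password table exfiltration",     900_000, 0.10),
    ("layer-7 DDoS downtime",                     40_000, 2.00),
]

for name, sle, aro in threats:
    ale = sle * aro  # annualized loss expectancy for this threat
    print(f"{name}: ${ale:,.0f}/year")

# A crude "security debt" figure to sit alongside technical debt:
total_exposure = sum(sle * aro for _, sle, aro in threats)
print(f"total annualized exposure: ${total_exposure:,.0f}")
```

Even this crude arithmetic makes the point: a low-probability breach of a high-value identity store can dwarf the expected cost of the failures "fail fast" is designed to surface.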

I'd love to take some taxonomy of software errors (maybe security errors in particular) and cross-tabulate cost per error type with cycle time (i.e., for each error that cost d dollars, plot the length of the cycle during which it was introduced against d), normalizing by estimated technical debt accrued during each cycle (assuming somebody measured that at the time, which probably didn't happen). But maybe someone has already done that (I've definitely seen lots of cost-by-error-type numbers, just not correlated with cycle time), and, since technical debt is kind of a guess anyway, maybe anecdotes are a better gauge of the security cost of "shift left".
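If anyone does have such data, the analysis itself is trivial. A sketch, assuming a hypothetical dataset with one row per defect recording its type, its dollar cost, the length of the cycle that introduced it, and an estimate of the technical debt accrued that cycle (all column names and values are mine):

```python
import pandas as pd

# Hypothetical schema: one row per defect.
defects = pd.DataFrame({
    "error_type":   ["injection", "auth", "logic", "injection", "auth"],
    "cost_dollars": [120_000, 450_000, 8_000, 95_000, 30_000],
    "cycle_days":   [3, 14, 3, 7, 14],          # length of the introducing cycle
    "cycle_debt":   [5.0, 2.0, 5.0, 3.5, 2.0],  # est. tech debt accrued that cycle
})

# Normalize cost by technical debt accrued in the introducing cycle,
# then cross-tab mean normalized cost by error type and cycle length.
defects["cost_per_debt"] = defects["cost_dollars"] / defects["cycle_debt"]
table = defects.pivot_table(
    values="cost_per_debt",
    index="error_type",
    columns="cycle_days",
    aggfunc="mean",
)
print(table)
```

The hard part, of course, is not the pivot table; it's that almost nobody records technical debt per cycle at the time, which is exactly why anecdotes may be the best data we get.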

Anyone have any experiences they'd like to share?

More Stories By John Esposito

John Esposito is Editor-in-Chief at DZone, having recently finished a doctoral program in Classics at the University of North Carolina. In a previous life he was a VBA and Force.com developer, DBA, and network administrator. John enjoys playing piano and looking at diagrams, and raises two cats with his wife, Sarah.
