The DevOps Database | Part 3

Applying Feedback Loops to Database Change Management

In the third post in this series, I’d like to talk about the Second Way of DevOps: Amplifying Feedback Loops.  Here’s a refresher on The Second Way from my introductory post in this series:

The Second Way: Amplify Feedback Loops – This Way deals primarily with facilitating easier and faster communication between all individuals in a DevOps organization.  The goals of this step are to foster better understanding of all internal and external customers in the process and to develop an accessible body of knowledge to replace the dependence on expertise scattered across individuals.

I’ve stated before in this series that Database Change Management poses a unique challenge when your organization is shifting to an agile development methodology and implementing DevOps patterns. Unlike other areas of your application stack, responsibility for managing application schema straddles two groups operating under somewhat opposed expectations. The development group is on the hook for producing more and more business-critical features and releases at an ever-increasing rate. DBAs are tasked with providing a secure, highly available data platform and protecting the integrity of the organization’s priceless data. The rate of schema change required by development to satisfy those expectations can run headlong into a database change process that is deliberate and metered by necessity to avoid downtime and data loss. In organizations where these two groups are isolated from each other, you have the makings of a bottleneck in your release process.

The solution to this problem is embodied by The Second Way of DevOps: communicate early, communicate often, communicate broadly, and prepare for what’s ahead. The tricky part is implementing the solution in a way that’s meaningful to every stakeholder in an organization’s application group. At Datical, we’ve spent just as much time on how we organize and present the data associated with application schema changes as we have on automating the deployment of those changes. We’ve rallied around the following key concepts to bring The Second Way of DevOps to Database Change Management.

Proactive, Predictive Change Analysis
In an organization where development works independently of the database group, truly understanding the impact a stack of SQL scripts will have on downstream environments is a tedious and time-consuming task. Before these changes can be promoted, target environments must be meticulously evaluated for conflicts and dependencies that will affect the deployment process. This often involves manual reviews and comparisons of diagrams and database dumps of complex environments. Achieving a high degree of confidence in the success of the proposed updates is difficult because it is so easy to overlook something. Datical has developed a patent-pending simulation feature called Forecast that automates this process. The Forecast feature builds an in-memory model of the target environment, simulates proposed changes on top of that model, and warns of potential error conditions, data loss, and performance issues without touching the target database. Because there is no impact to target environments, database administrators can Forecast changes several times during the development cycle to get ahead of issues that would normally be discovered much later in a pre-release review. Development gets regular feedback on the changes they are proposing and can address issues during the initial development phase, when it is easier and safer to resolve them. The two teams work in unison to ensure a safe database deployment that works the first time, without surprises.
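To make the idea concrete, here is a minimal sketch in Python of what forecasting against an in-memory model might look like. The data structures, operation names, and warning rules are hypothetical stand-ins for illustration, not Datical's actual engine:

```python
# Hypothetical sketch of a "forecast" step: simulate proposed schema
# changes against an in-memory model instead of the live database.
# All names and rules here are illustrative, not Datical's engine.

# In-memory model of the target environment: table -> set of columns.
schema = {
    "customers": {"id", "name", "email"},
    "orders": {"id", "customer_id", "total"},
}

# Proposed changes, as (operation, table, column) tuples.
proposed_changes = [
    ("add_column", "orders", "shipped_at"),
    ("drop_column", "customers", "email"),
    ("add_column", "invoices", "amount"),
]

def forecast(model, changes):
    """Apply changes to a copy of the model, collecting issues."""
    simulated = {t: set(cols) for t, cols in model.items()}
    issues = []
    for op, table, column in changes:
        if op == "add_column":
            if table not in simulated:
                issues.append(f"ERROR: table '{table}' does not exist")
            elif column in simulated[table]:
                issues.append(f"ERROR: column '{table}.{column}' already exists")
            else:
                simulated[table].add(column)
        elif op == "drop_column":
            if column not in simulated.get(table, set()):
                issues.append(f"ERROR: cannot drop missing column '{table}.{column}'")
            else:
                issues.append(f"WARNING: dropping '{table}.{column}' may lose data")
                simulated[table].discard(column)
    return simulated, issues

_, issues = forecast(schema, proposed_changes)
for issue in issues:
    print(issue)  # surfaced to both dev and DBAs, long before deployment
```

Because the simulation never touches a live instance, it can run on every check-in, which is what turns a late pre-release review into a continuous feedback loop.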

Always Remember Where You Came From
Database changes are usually designed to address the immediate goals of an organization. Once one set of requirements has been satisfied by a release, the motivations for the design decisions made for that release generally fade away as new requirements come along and new business initiatives take center stage. Comments in SQL scripts and on the database objects themselves can be helpful in determining why things are the way they are, but these traces of the past are scattered everywhere. Making sense of the whole is an exercise in archaeology. This was one of the driving forces behind our model-based approach to database change management. Our model is architected to provide a living history of your application schema. Individual changes are tied to the specific requirement and release that necessitated them. This data lives in the model, so the information you need to make intelligent design decisions is right in front of you when you need it.
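As an illustration of what a "living history" might mean in practice, here is a hypothetical sketch of a change record that carries its requirement, release, and rationale with it. The field names and ticket identifiers are invented for this example, not Datical's actual model:

```python
# Hypothetical sketch of a change record that keeps the "why" next to
# the "what". Field names are illustrative, not Datical's actual model.
from dataclasses import dataclass

@dataclass
class SchemaChange:
    change_id: str      # unique identifier for the change
    sql: str            # the DDL itself
    requirement: str    # ticket that necessitated the change
    release: str        # release the change shipped in
    author: str
    rationale: str      # why the design decision was made

history = [
    SchemaChange(
        change_id="chg-0042",
        sql="ALTER TABLE orders ADD COLUMN shipped_at TIMESTAMP",
        requirement="JIRA-1234",
        release="2.3.0",
        author="pdeveloper",
        rationale="Fulfillment needs shipment timestamps for SLA reports",
    ),
]

# Years later, the motivation is one lookup away instead of archaeology:
for change in history:
    if "shipped_at" in change.sql:
        print(change.requirement, change.rationale)
```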

Know Where You Are
Because the business reasons behind each schema change are tied to the model, this information can be tracked in each database instance as it’s updated and included in Forecast, Deploy, and historical reports. Tracking the changes in each instance and providing detailed reports allows you to easily disseminate information, effectively gate deployment steps, and quickly satisfy audit requirements. When everyone in your organization has access to a thorough account of the Who, What, Where, When, and Why of any single database change in any environment, everyone is operating on the same level and can work more effectively toward a common goal.
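One common way to realize this kind of per-instance tracking is an audit table in the target database itself. The sketch below uses an in-memory SQLite database as a stand-in for a real target; the table layout and values are hypothetical, not Datical's actual schema:

```python
# Hypothetical sketch of per-instance change tracking: a small audit
# table recording who/what/where/when/why for each deployed change.
import sqlite3

conn = sqlite3.connect(":memory:")  # stands in for a target database
conn.execute("""
    CREATE TABLE change_log (
        change_id   TEXT,
        requirement TEXT,     -- why: the driving ticket
        deployed_by TEXT,     -- who
        deployed_at TEXT,     -- when
        environment TEXT,     -- where
        description TEXT      -- what
    )
""")
conn.execute(
    "INSERT INTO change_log VALUES (?, ?, ?, ?, ?, ?)",
    ("chg-0042", "JIRA-1234", "dba_jane", "2017-03-01T10:15:00",
     "staging", "ALTER TABLE orders ADD COLUMN shipped_at"),
)

# An auditor, developer, or DBA can all answer "what changed here, and why?"
query = "SELECT change_id, requirement, deployed_by, deployed_at FROM change_log"
for row in conn.execute(query):
    print(row)
```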

Know Where You’re Headed
The model also facilitates concurrent development across multiple releases of a project. By tracking changes made for several different releases in a single model, the development teams working on those releases can collaborate and stay ahead of changes made by other teams that may impact future releases. Developers can unify redundant changes and eliminate conflicting changes as they implement them, instead of spending time on redesign later in the process, when time is scarce and the cost of change is high.
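A simple way to picture this is conflict detection over the union of every release's proposed changes. The sketch below is a hypothetical illustration of the idea, with invented release numbers and change shapes, not Datical's implementation:

```python
# Hypothetical sketch of cross-release conflict detection: with every
# release's changes in one model, overlaps surface during development
# rather than in a late redesign. Names are illustrative.

# Per-release proposed changes: release -> {(table, column): operation}
release_changes = {
    "2.4.0": {("orders", "discount"): "add_column"},
    "3.0.0": {("orders", "discount"): "drop_column",
              ("customers", "tier"): "add_column"},
}

def find_conflicts(changes_by_release):
    """Flag (table, column) pairs touched by more than one release."""
    touched = {}  # (table, column) -> [(release, operation), ...]
    for release, changes in changes_by_release.items():
        for target, op in changes.items():
            touched.setdefault(target, []).append((release, op))
    return {t: ops for t, ops in touched.items() if len(ops) > 1}

for (table, column), ops in find_conflicts(release_changes).items():
    print(f"Conflict on {table}.{column}: {ops}")
# -> Conflict on orders.discount: [('2.4.0', 'add_column'), ('3.0.0', 'drop_column')]
```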

More Stories By Pete Pickerill

Pete Pickerill is Vice President of Products and Co-founder of Datical. Pete is a software industry veteran who has built his career in Austin’s technology sector. Prior to co-founding Datical, he was employee number one at Phurnace Software and helped lead the company to a high-profile acquisition by BMC Software, Inc. Pete has spent the majority of his career in successful startups and the companies that acquired them, including Loop One (acquired by NeoPost Solutions), WholeSecurity (acquired by Symantec, Inc.) and Phurnace Software.
