Tech Primer: When and What to Move to Flash Storage

Flash storage continues to evolve. Early adopters need to keep up on new developments and flash-related schools of thought

Flash storage has become a mainstream technology, with 451 Research expecting the market to reach $9.6 billion by 2020. As the technology becomes less cost-prohibitive, and benefits such as substantially higher performance and simpler provisioning and optimization of systems become more sought after, it's clear that the future of storage is flash. But while some organizations took advantage of the burgeoning technology's benefits early on, a significant number of companies have yet to make the transition. Your organization may very well fall into this category.

For you late adopters who have yet to integrate flash storage but are preparing for a move, here are a few key guidelines to follow when considering when and what to move to flash.

When to move (and how to do it)
If your organization hasn't moved to flash yet, you need to at least be considering it. The technology has become so prevalent in the industry that the cost-to-performance ratio, once a significant hurdle for many businesses, is now favorable. That ratio continues to improve each day, meaning the longer you delay a move to flash, the more infrastructure budget you'll waste on disk storage.

Of course, like any other technology change or upgrade, there are several challenges that can complicate adoption. To ensure the integration of flash storage is as beneficial and seamless as possible, you must prepare appropriately before the purchase. That preparation should revolve around a proof-of-concept exercise in which your organization clearly details the performance needs of any applications you will move to flash, establishing a baseline for what performance should look like down the road.

A proof of concept is especially important if your organization will need to implement an entirely new infrastructure to support the addition of flash storage. If that's the case, the cost will extend beyond simply purchasing a flash device, so it's critical to have an understanding of how your business will benefit from flash to appropriately make the business case and see a worthwhile ROI.

Here are a few key questions you should ask yourself when considering a move to flash:

  • What are the application performance requirements?
  • Am I moving everything to flash, or something specific?
  • Are the applications, their code and the infrastructure optimized for flash?
  • If I'm purchasing a solution from a new vendor, how will that work with what I already have?
  • Do I have the right management and monitoring tools?
  • Do I have or need data reduction technology? (A quick way to gauge this on sample data is sketched just below.)
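
On the data reduction question, a quick sanity check is to measure how compressible a representative sample of your data actually is before assuming an array's inline compression and deduplication will stretch your flash capacity. The following is a minimal sketch; the sample file path is an assumption supplied on the command line, and array-side data reduction works at the block level with its own algorithms, so this only gives a rough, conservative indication.

```python
import sys
import zlib

# Rough compressibility check for a representative sample file (the path is
# an assumption supplied on the command line). Reads the file in 64 KiB
# chunks, compresses each with zlib and reports the overall reduction ratio.
# Array-side compression and dedup behave differently; this is only a
# ballpark indicator of whether data reduction is worth counting on.
CHUNK = 64 * 1024
raw_bytes = 0
compressed_bytes = 0

with open(sys.argv[1], "rb") as f:
    while True:
        chunk = f.read(CHUNK)
        if not chunk:
            break
        raw_bytes += len(chunk)
        compressed_bytes += len(zlib.compress(chunk, 6))

ratio = raw_bytes / max(compressed_bytes, 1)
print(f"sampled {raw_bytes / 1e6:.1f} MB, compressed to {compressed_bytes / 1e6:.1f} MB")
print(f"approximate reduction ratio: {ratio:.2f}:1")
```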

Given that nearly all hardware vendors offer some kind of flash solution, many organizations struggle to decide where to look when purchasing a device - a legacy vendor or a startup? There are certainly pros and cons to each, and this is another facet of a flash migration that will be made smoother by a proof-of-concept exercise.

For example, if you're looking for a cutting-edge solution, startups like Nimble and Pure aren't burdened by legacy infrastructure and have a singular focus that lends itself to product innovation you may not see from some of the mainstay players. On the other hand, if you need a globalized solution with mature storage monitoring and management capabilities, legacy vendors are often the best choice: they have been driving flash technology forward for several years, whereas startups are still playing catch-up on some of what might be considered more traditional features.

What to move (and why)
Once your organization has done its due diligence in terms of understanding performance needs, how they can be met by a flash implementation and which vendor and device to work with, it's time to consider what data should be migrated.

Generally speaking, flash capacity should be allotted to any applications and associated data that have unpredictable, high I/O requirements (perhaps an employee-facing web application that could be accessed by hundreds of people at different times of day). Flash is especially good for workloads like VDI, which require high IOPS but little actual storage space. Without flash, you would historically buy a storage array with hundreds of hard drives, much of that space ultimately going unused, just to provide the performance needed to handle boot storms. Flash delivers that performance and more, without forcing the business to over-purchase storage capacity.
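
To make the VDI example concrete, here is a back-of-the-envelope sizing sketch. Every figure in it (desktop count, boot-storm IOPS per desktop, per-drive IOPS and capacities) is an illustrative assumption rather than a vendor specification; the point is how spindle counts balloon when hard drives are sized for IOPS instead of capacity.

```python
# Back-of-the-envelope VDI sizing sketch. All figures are illustrative
# assumptions, not vendor specifications.
desktops = 500
boot_iops_per_desktop = 50                 # assumed boot-storm demand
required_iops = desktops * boot_iops_per_desktop

hdd_iops, hdd_capacity_gb = 180, 600       # assumed 15K-RPM hard drive
ssd_iops, ssd_capacity_gb = 50_000, 1_920  # assumed enterprise flash drive

hdds_needed = -(-required_iops // hdd_iops)   # ceiling division
ssds_needed = -(-required_iops // ssd_iops)

print(f"boot-storm requirement: {required_iops:,} IOPS")
print(f"hard drives needed to hit that: {hdds_needed} "
      f"(~{hdds_needed * hdd_capacity_gb / 1000:.0f} TB of largely unused capacity)")
print(f"flash drives needed: {ssds_needed} "
      f"(~{ssds_needed * ssd_capacity_gb / 1000:.1f} TB)")
```

With these assumed numbers, roughly 139 hard drives (tens of terabytes of capacity you don't need) would be required to match the IOPS a single flash device can deliver, which is exactly the over-purchasing problem described above.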

Flash should also be used to support things like virtualized server solutions. As applications are consolidated to lower costs and maximize the investment your IT department is making in its servers, all of that data still needs to sit on a shared storage device. The performance capabilities of flash help ensure that the shared device doesn't become a bottleneck or add latency. Similarly, business-critical applications and services such as SQL Server, SharePoint, Oracle and, as mentioned, VDI should also sit on a flash storage device to maintain productivity and efficiency.

Best practices
The above guidelines for preparing for and implementing a flash solution are a good starting point on your organization's journey to flash. To help ensure your business is able to realize the full benefits of a flash integration, you should look to leverage several key best practices:

  • Understand your performance needs. Surprisingly, all too often, storage administrators and other IT professionals rush to implement flash devices without considering what level of performance they actually need. The answer should go beyond, "My current storage solution is slow and I need something faster." Think about the block size requirements for each application, the associated latency requirements and the performance ranges for each (e.g., do you anticipate daily, weekly or monthly spikes?). This is why a proof-of-concept exercise is valuable: it shows what performance quality will look like over a period of time and helps inform configuration and provisioning decisions.
  • Prepare for growth. As the capacity and functionality of flash technology continue to grow, so too will the amount of data your organization needs to store. Data reduction technologies will certainly help, but your organization should plan to prioritize what data gets stored on flash and what can remain on existing storage solutions. At the end of the day, you need to balance the ROI of a flash investment against everything else in the data center.
  • Know your bottlenecks. With flash storage delivering far better performance, storage will no longer be the first piece of infrastructure to shoulder the bottleneck blame. However, you will now need to find the new cause of a bottleneck, and to do so you must have a fundamental understanding of your organization's infrastructure. Is it the network or the server? Is the application optimized? Could it stem from database code? A comprehensive, full application stack monitoring system should be implemented to provide deep visibility into the health of physical infrastructure and across applications and environments (on-premises and in the cloud), enabling you to more easily find the source of a new bottleneck. A minimal example of the kind of measurement such tools perform is sketched after this list.
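
As a minimal illustration of the "know your bottlenecks" point, the sketch below samples Linux's /proc/diskstats twice and estimates average I/O completion latency per block device over the interval, which is roughly the "await" figure a monitoring tool would report. It is Linux-only, the sampling interval and warning threshold are arbitrary assumptions, and it says nothing about the network, application or database layers; a full-stack monitoring product correlates all of those layers, which is the real recommendation here.

```python
import time

# Minimal Linux-only sketch: sample /proc/diskstats twice and estimate the
# average I/O completion latency per device over the interval. Field
# positions follow the documented /proc/diskstats layout; the interval and
# warning threshold below are arbitrary, illustrative assumptions.
INTERVAL_S = 5
LATENCY_WARN_MS = 20.0

def snapshot():
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            name = parts[2]
            ios = int(parts[3]) + int(parts[7])       # reads + writes completed
            busy_ms = int(parts[6]) + int(parts[10])  # ms spent reading + writing
            stats[name] = (ios, busy_ms)
    return stats

before = snapshot()
time.sleep(INTERVAL_S)
after = snapshot()

for dev, (ios_after, ms_after) in sorted(after.items()):
    ios_before, ms_before = before.get(dev, (0, 0))
    delta_ios = ios_after - ios_before
    if delta_ios == 0:
        continue  # device was idle during the interval
    avg_ms = (ms_after - ms_before) / delta_ios
    flag = "  <-- possible storage bottleneck" if avg_ms > LATENCY_WARN_MS else ""
    print(f"{dev:12s} {delta_ios:8d} I/Os, avg latency {avg_ms:6.2f} ms{flag}")
```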

Conclusion
Flash storage continues to evolve. Early adopters need to keep up on new technology developments and flash-related schools of thought. In addition to larger-capacity flash drives, there are still newer technologies to consider, such as NVMe and 3D XPoint, that can potentially be implemented if and when performance or business requirements change. However, for late adopters preparing for a move, these guidelines around when, what and how to integrate flash, along with the accompanying best practices for ongoing management, will position you to ramp up quickly and benefit from a flash storage adoption.

More Stories By James Honey

James Honey is a senior product marketing manager for SolarWinds. He has more than 15 years of experience in the IT industry, focused specifically on storage technologies and virtualization solutions for SMBs to enterprise environments. His current role includes responsibility for all storage monitoring and management-related product marketing initiatives, including SolarWinds Storage Resource Monitor.
