Bare Metal Blog: Mean Time Between Failures

MTBF has meaning well beyond storage

If you are new to the Bare Metal Blog series, you can find them all here.

When assembling a model – any model, from a highly detailed functional replica of an engine to a mass-produced plastic model of an airplane – there are several places where things can go wrong. The final product is only as good as the model kit, the glue, the tools, and the skill of the craftsman. I’ve seen the exact same kit assembled and painted by two different people come out looking completely different, simply because of the array of variables and how they interact.

This is true of high-tech equipment too, and as with modeling, it is often overlooked. Interestingly, in my entire IT career, MTBF has only carried real weight in two circumstances: when designing hardware and scoping the parts to go into it, and when talking about storage. In all other endeavors, MTBF, if mentioned at all, was a side note.

And yet it matters. It can matter a lot. Like most hardware companies (because we spec our own parts and monitor our own quality), we track MTBF two ways: computed from the sum of the parts under average environmental conditions, and actual, based on support cases involving hardware and on RMAs. For us, knowing helps us improve quality. For customers, knowing helps gauge the bounds of useful life for the equipment being purchased. Of course, MTBF is a mean, not a guarantee, and it is entirely possible for a device to last much longer than its MTBF. It is tempting to read the figure as implying that roughly half of the devices out there will last longer, but that would be true of the median; MTBF is the mean, and most IT shops do not want to plan as if a device will last well beyond its MTBF value. Fairly calculated, MTBF offers a bit of guidance, and another tool in the evaluation toolbox never hurt an IT shop.
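
To put the mean-versus-median point in concrete terms: under the common simplifying assumption that failures arrive at a constant rate (the exponential lifetime model, which real hardware only approximates), only about 37% of units actually outlive their MTBF, and half of them fail by roughly 0.69 times the MTBF. A minimal sketch, using an illustrative three-year MTBF rather than any real product's figure:

```python
import math

def survival_probability(hours: float, mtbf_hours: float) -> float:
    """P(unit still running after `hours`), exponential lifetime model."""
    return math.exp(-hours / mtbf_hours)

def median_life(mtbf_hours: float) -> float:
    """Half the fleet fails before this point: MTBF * ln(2)."""
    return mtbf_hours * math.log(2)

mtbf = 3 * 8766  # hypothetical 3-year MTBF, in hours (8766 h/year on average)
print(f"Fraction outliving MTBF: {survival_probability(mtbf, mtbf):.0%}")  # ~37%
print(f"Median life: {median_life(mtbf) / 8766:.2f} years")                # ~2.08
```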

As mentioned earlier in this series, F5 sets quality standards that suppliers must meet if they wish to continue supplying. This allows a bit better control over MTBF than “lowest bidder” or similar procurement, simply because the standards cover the quality of the parts used, which all rolls up into the MTBF calculations – and more importantly for most IT shops, the MTBF reality. While MTBF comes from a complex set of equations, you can generalize to “the MTBF of a device is as low as or lower than the MTBF of its weakest part.” That means supplier quality standards matter in a very real way. I had a RAID array fail on me once – several drives down all at the same time. The array vendor had to count that as a failure, since RAID no longer worked (thank heavens for backups!), but the fault lay with one of their suppliers. That’s how it is in the manufacturing world: whoever’s name is on the box gets the bad rep for quality, regardless of whose handiwork was slipshod. That is why F5’s non-stop quality monitoring program (devices are tested from before release until EOL is announced) matters a lot. It’s also why quality standards for parts suppliers matter more than getting the absolute cheapest part, as some manufacturers are wont to do.
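
That “weakest part” rule of thumb falls straight out of the arithmetic for a series system: if every part must work for the device to work, the component failure rates (1/MTBF) add, so the system MTBF is the reciprocal of the summed rates and always sits below the worst component’s MTBF. A sketch with invented component numbers, not anyone’s actual bill of materials:

```python
# Series-system MTBF: every part must work, so failure rates add.
# All component MTBFs below are invented for illustration.
component_mtbf_hours = {
    "power_supply": 200_000,
    "fan": 150_000,
    "mainboard": 500_000,
    "ssd": 1_000_000,
}

system_failure_rate = sum(1 / m for m in component_mtbf_hours.values())
system_mtbf = 1 / system_failure_rate

print(f"System MTBF: {system_mtbf:,.0f} hours")  # ~68,000: well below the fan's 150,000
assert system_mtbf <= min(component_mtbf_hours.values())
```

One cheap part with a poor MTBF drags the whole device down, which is exactly why supplier quality standards show up in the finished product’s number.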

I will not replicate our entire knowledge base article here; if you have an ask.f5.com account, you can click here to read it. I’ll just summarize and pull bits out for the readers’ enjoyment.

F5 gear runs the gamut from entry-level devices to massive blade systems. As such, MTBF varies from device to device. The worst calculated MTBF for an F5 device is over three years, and our quality team tells me that the calculated value is far lower than the real-life value they see from watching returns and the like. The best calculated MTBF is over 21 years. It’s a rare piece of computer gear that is used that long, but Lori and I have some pretty old F5 gear that’s still clipping along as if it were new, so no surprises there. Most F5 devices fall somewhere in between.
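
One way to make multi-year MTBF figures tangible is to convert them to an annualized failure rate: the fraction of a large fleet you would expect to fail in a given year. A sketch using the two calculated figures quoted above and, again, the constant-failure-rate assumption:

```python
import math

def annualized_failure_rate(mtbf_years: float) -> float:
    """Expected fraction of a fleet failing per year, exponential model."""
    return 1 - math.exp(-1 / mtbf_years)

for mtbf_years in (3, 21):  # the worst and best calculated MTBFs quoted above
    afr = annualized_failure_rate(mtbf_years)
    print(f"MTBF {mtbf_years:>2} years -> ~{afr:.1%} of the fleet fails per year")
# MTBF  3 years -> ~28.3% of the fleet fails per year
# MTBF 21 years -> ~4.7% of the fleet fails per year
```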

Why the large variance in MTBF if we control for quality? A valid question. The fact is that it is not all about the quality of parts. Airflow inside the device, the number of redundant parts, the number of removable parts… a zillion other things go into MTBF, and they all tend to get better as the device gets physically larger. Entry-level devices are small, which restricts airflow and cuts down on the space available for redundant power supplies and the like. The top-end blade systems have room for all of that, and since blades are replaceable, they tend to suffer fewer failures. You will find a similar spread with any other vendor that covers such a wide range of hardware. And all of those numbers are likely to beat a COTS server running a software product.
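
Redundancy is the biggest lever in that list. For a pair of identical, independent power supplies where either one alone can carry the load, the exponential model puts the pair’s mean time to failure at 1.5 times a single supply’s (with no repair in between). A sketch with an invented supply MTBF:

```python
# 1-out-of-2 redundant pair, no repair, exponential model.
# First failure is expected at 1/(2*lam) (minimum of two exponentials);
# the memoryless survivor then lasts another 1/lam on average.
single_mtbf = 200_000        # hours for one supply (hypothetical figure)
lam = 1 / single_mtbf        # constant failure rate per unit

pair_mtbf = 1 / (2 * lam) + 1 / lam
print(f"Single supply: {single_mtbf:,} hours")
print(f"Redundant pair: {pair_mtbf:,.0f} hours")  # 300,000 = 1.5x
```

In practice a hot-swappable supply gets replaced long before its partner dies, which is why field-replaceable, redundant parts push real-world numbers far beyond even that.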

So when looking at any electronic gear, ask about MTBF. Alone, it simply gives you insight into the priorities behind the device you’re looking at; combined with the MTBF numbers from several other devices (from the same manufacturer or from several), it gives you an idea of what you are buying in terms of quality. Of course, with a large chunk of any given appliance handled in software, MTBF is not as meaningful as it once was, but it is still the bedrock that software runs on.


More Stories By Don MacVittie

Don MacVittie is a Technical Marketing Manager at F5 Networks. In this role, he supports outbound marketing, education, and evangelism efforts around development, storage, and IT management topics related to F5 solutions. His role includes authoring technical materials, participating in social and community-based forums, and providing guidance for the development of marketing resources. As an industry veteran, MacVittie has extensive programming experience along with project management, IT management, and systems/network administration expertise.

Prior to joining F5, MacVittie was a Senior Technology Editor at Network Computing, where he conducted product research and evaluated storage and server systems, as well as development and outsourcing solutions. He has authored numerous articles on a variety of topics aimed at IT professionals. MacVittie holds a B.S. in Computer Science from Northern Michigan University, and an M.S. in Computer Science from Nova Southeastern University.
