The Evolution of the Cloud Server Market

What makes a server suitable for a big cloud data center?

Bob Warfield's Blog

What features does a server absolutely positively have to have to be a candidate for a big cloud data center?  What features would put it ahead of other servers in the eyes of the manager writing the checks for that cloud computing data center?

There’s an interesting article out about how Rackable Systems (and presumably others) are building machines inspired by Google that answer those questions better than ever before.  We’re talking about features like heat-resistant processors, motherboards that carry two servers, and boards that need only one power supply voltage instead of two or three.

One thing the cloud will do is force standardization and penny shaving at the hardware (and software) end.  When Amazon, Google, or one of the others builds a big cloud data center, they want utility-grade computing.  It has to be dense on MIPS per dollar, meaning really compact and cheap for the amount of CPU power delivered.  Designs that add 25% to the cost to deliver an extra 10% in power won’t cut it.  The Cloud will be too concerned with simply delivering more cores and enough memory, disk, and network speed to keep them busy.  Closing a deal to build standard hardware for a big cloud vendor will be hugely valuable, and in fact, Rackable started out life building systems for Google.
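To make that trade-off concrete, here is a minimal sketch of the performance-per-dollar arithmetic behind the 25%-cost-for-10%-power example; the figures are illustrative assumptions, not numbers from the article or from any vendor.

```python
# Illustrative sketch of performance-per-dollar arithmetic; the numbers below
# are assumptions for this example, not figures from the article.

def perf_per_dollar(relative_perf: float, relative_cost: float) -> float:
    """Performance delivered per unit of cost, relative to a baseline box."""
    return relative_perf / relative_cost

baseline = perf_per_dollar(relative_perf=1.00, relative_cost=1.00)
premium = perf_per_dollar(relative_perf=1.10, relative_cost=1.25)  # +10% power for +25% cost

print(f"baseline: {baseline:.2f} perf per dollar")  # 1.00
print(f"premium:  {premium:.2f} perf per dollar")   # 0.88 -- a worse deal at utility scale
```

At utility scale the buyer multiplies that 12% shortfall across tens of thousands of boxes, which is why the premium design loses even though it is the "faster" machine.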

It’s going to be interesting to watch the Cloud Server market evolve.  Reading these articles reminds me of Southwest Airlines, which dramatically cut its costs by standardizing on just one kind of airplane, the 737.  Not only did that one size fit all for Southwest, it made their maintenance costs dramatically lower because they could standardize on spare parts for one aircraft, train mechanics on one, and so on.

Cool beans!

More Stories By Bob Warfield

Bob Warfield is a successful repeat entrepreneur who has founded four startups and participated in all stages from founding through IPO and on to a $500 million industry leader. Currently he is Principal at SmoothSpan, but it's only a consultancy. He's on the hunt for his next real gig.
