What Makes a Cloud Server?

The hardware required for a ‘cloud server’ is very basic

With the massive push toward cloud computing in the enterprise, there are considerations that hardware vendors will have to come to terms with in the long run: chiefly, what a server actually needs to be once fault tolerance moves out of the hardware.

Unlike the old infrastructure model, in which hardware bore the brunt of fault tolerance, the new model places fault tolerance entirely within the software layer. I won't claim this is a new concept; Google has been doing exactly this for a very long time (in IT time, at least). It accomplishes many things, but two benefits stand out: load balancing can be more intelligent overall, and hardware can be reduced to the most commodity parts available to cut cost.
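
To make that concrete, here is a minimal sketch of software-level fault tolerance. It is illustrative only: the node names and health flags are made up, and no real Cloud OS or load balancer API is being shown. The point is simply that the routing logic itself skips nodes it considers unhealthy, rather than relying on redundant hardware.

```python
from itertools import cycle

# Hypothetical resource pool: node name -> healthy? (True/False).
# These names and flags are illustrative, not from any real system.
nodes = {"node-a": True, "node-b": False, "node-c": True}
rotation = cycle(nodes)


def pick_node():
    """Round-robin over the pool, skipping any node the software has marked unhealthy."""
    for _ in range(len(nodes)):
        candidate = next(rotation)
        if nodes[candidate]:
            return candidate
    raise RuntimeError("no healthy nodes available")


# Route a few requests: node-b is down, so traffic flows around it.
for request_id in range(4):
    print(f"request {request_id} -> {pick_node()}")
```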

Because the cloud does not need hardware to provide fault tolerance, the hardware required for a 'cloud server' is very basic. I like to think of these servers as netbook equivalents. Bargain-bin motherboards, processors, RAM and hard drives can be thrown together to make a low-cost commodity cloud server. A 'Cloud OS' and a 'Cloud FS' provide the underpinnings: the operating system and the distributed file system. Combined in the right fashion, the Cloud Software Layer, along with the underlying Cloud OS and Cloud FS, can literally allow one of these cloud servers to be plugged in and auto-provision itself into the resource pool. When a component or an entire cloud server fails, the Cloud Software Layer can notify system administrators. Replacement is as simple as unplugging the bad server and plugging in a new one; the new server auto-provisions itself into the resource pool and is ready to go. Management and maintenance are greatly simplified.
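
As a rough illustration of that lifecycle, the sketch below models a hypothetical controller and resource pool. None of the names (CloudController, Node, notify_admin) correspond to a real Cloud OS or Cloud FS; they only trace the flow described above: plug in, auto-provision, detect a failure, notify an administrator, swap the unit.

```python
# Hypothetical sketch of the plug-in / auto-provision / replace cycle.
# All classes and functions here are illustrative stand-ins, not a real API.

import time
from dataclasses import dataclass, field


@dataclass
class Node:
    node_id: str
    healthy: bool = True


@dataclass
class CloudController:
    pool: dict = field(default_factory=dict)

    def provision(self, node: Node) -> None:
        """A freshly plugged-in server announces itself and joins the pool."""
        self.pool[node.node_id] = node
        print(f"{node.node_id} auto-provisioned into the resource pool")

    def decommission(self, node_id: str) -> None:
        """Unplugging a bad server simply drops it from the pool."""
        self.pool.pop(node_id, None)
        print(f"{node_id} removed from the resource pool")

    def health_check(self) -> None:
        """The software layer, not the hardware, handles fault detection."""
        for node in self.pool.values():
            if not node.healthy:
                notify_admin(f"{node.node_id} failed at {time.ctime()}; replace the unit")


def notify_admin(message: str) -> None:
    # Placeholder for an email/pager/ticketing hook.
    print(f"[ADMIN ALERT] {message}")


if __name__ == "__main__":
    controller = CloudController()
    controller.provision(Node("node-001"))
    controller.provision(Node("node-002"))

    # Simulate a component failure on node-001.
    controller.pool["node-001"].healthy = False
    controller.health_check()

    # Replacement: unplug the bad server, plug in a new one.
    controller.decommission("node-001")
    controller.provision(Node("node-003"))
```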

Looking back at the hardware that will be used to make these cloud servers, last-generation surplus parts are perfect for this type of implementation. Each individual server (or node, in grid terms) has modest requirements, similar to those of a netbook. The tasks these servers perform are well defined, and it is the combination of hundreds or thousands of them that provides the real horsepower behind the cloud.

Netbooks can cost as little as $200, and I see no reason why these small cloud servers cannot hit the $100 mark: they need no LCD display or peripheral ports, they can use cheaper standard 3.5″ hard drives, and they need no real casing to speak of (depending on the deployment method). These units can even be racked on shelves of 4, with direct DC power to each board; a single AC-to-DC converter per rack, backed by a UPS, would ensure power to the rack as a whole. The amount of heat generated will be far less than with a typical server, and it may even be possible to get thermal thresholds down to the point where a bare heat sink (without fans) can be used on the processors. This would also drastically reduce the cooling needed in the data center. The possibilities are literally endless, and it gets me excited just to think about this type of stuff.
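
For a rough sense of the numbers, here is a back-of-the-envelope sketch. The $200 netbook price and the $100 target come from the text above; every other figure (component savings, rack layout, watts per node) is an illustrative assumption, not measured data.

```python
# Back-of-the-envelope sketch of the cost and power argument above.
# Only the $200 netbook reference comes from the article; all other
# numbers are illustrative assumptions.

NETBOOK_COST = 200          # article's commodity-parts reference point
REMOVED_PARTS = {           # components a headless cloud node does not need
    "LCD panel": 60,        # assumed
    "battery": 25,          # assumed
    "case/keyboard": 15,    # assumed
}

node_cost = NETBOOK_COST - sum(REMOVED_PARTS.values())
print(f"Estimated per-node cost: ${node_cost}")   # lands near the $100 target

# Rack power: one shared AC-to-DC converter feeding DC directly to each board.
NODES_PER_SHELF = 4         # from the shelf-of-4 layout above
SHELVES_PER_RACK = 10       # assumed
WATTS_PER_NODE = 20         # assumed draw for a netbook-class board

rack_watts = NODES_PER_SHELF * SHELVES_PER_RACK * WATTS_PER_NODE
print(f"Estimated rack draw: {rack_watts} W for "
      f"{NODES_PER_SHELF * SHELVES_PER_RACK} nodes")
```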

Of course, all of this depends on the intelligence and robust fault tolerance built into the cloud software layer. As I said before, Google has already done this and has been running a similar infrastructure for a long time, so it is not a pipe dream. It is up to individual hardware vendors such as Sun, HP and Dell to design and deliver a cloud server that will meet the needs of future cloud computing infrastructures. They will also need to deliver it at a cost that reflects the commodity role the server now plays in the data center.

Oh, one more thing: it is not written anywhere that x86 has to be the processor architecture standard for this new breed of cloud servers. I can easily see a custom-designed ARM processor fitting the bill.


More Stories By Ernest de Leon

Ernest is a technologist, a futurist and serial entrepreneur who aims to help those making IT-related business decisions, from Administrators through Architects to CIOs. Having held just about every title in the IT field all the way up through CTO, he lends his industry experience and multi-platform thinking to all who need it. Creating a vision and executing it are two different things, and he is here to help with both. Seeing the forest and the trees at the same time is a special skill which takes years of experience to develop.
