Cloud Computing: How It Affects Enterprise and Performance Monitoring

New Technology, Same Old Problems

In recent times, cloud computing has come to play a dominant role in the industry. Whether you feel positively or negatively about it, it is undeniable that the cloud, like any other component in your network, needs to be monitored – perhaps more than any other. For more traditional monitoring solutions, the cloud creates a number of obstacles: you do not own its hardware, it does not run on your network, and when problems or glitches occur you have no direct control over them.

Today there is a wide range of utilities to help you manage your cloud infrastructure, and the majority of these can respond to the disappearance of instances by starting up new ones with only a little direction from you. But how can this be integrated into your existing monitoring?

More traditional tools often need lengthy and difficult changes to their settings, as well as a total restart of the service before those changes are acknowledged. It is not uncommon for an organization to start up and shut down upward of 100 instances per day, which means the monitored environment changes every few minutes. Constantly rebooting the monitoring tools at that pace is unsustainable and means the system may never run long enough to complete its necessary tests, making the monitoring data less valuable. This wipes out much of the benefit that cloud computing brings to a company whose business processes depend on the health of its IT infrastructure.
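As a sketch of the alternative, here is a minimal Python polling loop that re-reads its target list whenever the configuration file changes, with no service restart required. The file name, its JSON format, and the `poll` check are hypothetical placeholders for illustration, not any particular product's convention:

```python
import json
import os
import time

CONFIG_PATH = "monitor_targets.json"  # hypothetical JSON file listing instances


def load_targets(path):
    """Read the current list of instances to monitor."""
    with open(path) as f:
        return json.load(f)


def poll(target):
    """Placeholder for a real health check against one instance."""
    print(f"checking {target} ...")


def main():
    last_mtime = 0.0
    targets = []
    while True:
        # Reload the config only when the file changes -- no restart needed.
        mtime = os.path.getmtime(CONFIG_PATH)
        if mtime != last_mtime:
            targets = load_targets(CONFIG_PATH)
            last_mtime = mtime
        for target in targets:
            poll(target)
        time.sleep(60)


if __name__ == "__main__":
    main()
```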

The tools you employ to keep track of cloud instances must be as adaptable and customizable as the instances themselves. Your systems should be able to absorb minute-by-minute changes without rebooting. Additionally, you should remove the need for manual reconfiguration: making changes by hand at that frequency is error-prone and does not scale. Ideally, your system should offer a capable application programming interface, with monitoring configuration built into its central management.
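For illustration, a monitoring API of that kind might be driven as in the sketch below, registering each instance at launch and removing it at termination. The endpoint, key, and payload fields are invented for the example – consult your own platform's API documentation for the real calls:

```python
import requests

# Hypothetical endpoint and key -- substitute whatever your monitoring
# platform's API actually provides.
MONITOR_API = "https://monitoring.example.com/api/v1/hosts"
API_KEY = "your-api-key"


def register_instance(instance_id, address):
    """Tell the monitoring system about a freshly launched instance."""
    resp = requests.post(
        MONITOR_API,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"id": instance_id, "address": address,
              "checks": ["cpu", "disk", "http"]},
        timeout=10,
    )
    resp.raise_for_status()


def deregister_instance(instance_id):
    """Remove an instance that has been terminated."""
    resp = requests.delete(
        f"{MONITOR_API}/{instance_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
```

Hooking these calls into your orchestration's launch and terminate events keeps the monitoring configuration current without anyone editing settings by hand.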

As for moving data from the cloud into your monitoring system, remember that the benefit of the cloud is additional capacity without additional hardware to run. A heavyweight agent will slow your applications or raise your cost per instance just to collect data. A lighter system, with an agent that can be tailored to your needs, is better.
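A deliberately lightweight agent can be little more than a timed loop that gathers a few cheap metrics and ships them as JSON. This sketch uses only the Python standard library; the collector URL is hypothetical, and `os.getloadavg` is POSIX-only:

```python
import json
import os
import time
import urllib.request

COLLECTOR_URL = "https://metrics.example.com/ingest"  # hypothetical collector


def collect():
    """Gather a small, cheap set of metrics; add only what you actually need."""
    load1, load5, _ = os.getloadavg()  # POSIX-only system load averages
    return {"ts": time.time(), "load1": load1, "load5": load5}


def ship(payload):
    """POST one JSON payload to the collector."""
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)


if __name__ == "__main__":
    while True:
        ship(collect())
        time.sleep(30)
```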

Data Exposure
Many service providers and applications expose data so that it can be extracted remotely without an agent, typically through a user-friendly REST application programming interface that serves the data as JSON or XML. This means that pulling your data has a minimal effect on your systems. When an agent is required to examine data that is not exposed this way, you will need the option either to run an agent or to create a script for data exposure – whichever suits your needs. What is sometimes overlooked is that data mined from the cloud should be handled in the same manner as any other data point. The metrics offered by cloud providers may not, on their own, provide useful insight into the performance of your infrastructure.
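An agentless pull might look like the following sketch, where a hypothetical provider endpoint serves instance metrics as JSON and the response is normalized like any other data point before storage:

```python
import requests

# Hypothetical provider endpoint that exposes instance metrics as JSON.
METRICS_URL = "https://api.provider.example.com/v1/instances/{id}/metrics"


def fetch_metrics(instance_id, token):
    """Pull metrics remotely -- no agent on the instance, minimal load on it."""
    resp = requests.get(
        METRICS_URL.format(id=instance_id),
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # Treat provider metrics like any other data point: normalize the fields
    # before storing them alongside the rest of your monitoring data.
    return {"cpu": data.get("cpu_percent"),
            "net_in": data.get("network_in_bytes")}
```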

Regardless of whether you operate in the cloud or not, if your systems monitoring does not mine data according to your needs, you need to change it. Data enables better feedback and better performance, providing context for decisions and ensuring that goals are being achieved. It allows IT systems to be optimized for the business.

For some organizations, cloud computing has disrupted business practice, with cloud utilities paving the way for the future. They force a rethink of administrative processes: you can examine your existing tools and discard or refine them for new tasks.

Cloud computing has prompted a great business evolution. However, merely slapping a SaaS interface onto legacy code and declaring it 'the cloud' is not enough to achieve your goals. Infrastructure is vital – utilities and processes built specifically for the cloud platform bring flexibility and the ability to adapt to changes within and without. It is survival of the fittest. Minute-by-minute monitoring of systems brings up-to-the-moment responsiveness, so that your company can thrive in the brave new world of cloud computing.

More Stories By Anne Lee

Anne Lee is a freelance technology journalist, a wife and a mother of two.


