Converged application and network monitoring tools greatly enhance cross-team communication and collaboration.

Four Ways to Boost IT Performance with Application-Aware Network Performance Monitoring
By Amrutha Aprameya, Evangelist, ManageEngine

In an era of unified IT, you can no longer afford to take a silo-based approach to monitoring and troubleshooting IT problems. It's time for network engineers, server admins and application engineers to expand beyond their particular domains and department-specific tools. It's time to embrace a new, integrated approach to network and application monitoring that lets you view your entire IT infrastructure from a single console and resolve issues before they affect end users. It's time for application-aware network performance monitoring (AANPM).

AANPM tools are network- and application-level data collectors that monitor network devices as well as business-critical applications to establish cross-platform visibility. With that visibility, engineers can make better-informed decisions about the applications and networks they monitor, which in turn helps maintain high performance for critical business applications. AANPM must be used judiciously, though; otherwise, you can easily drown in meaningless data and miss key factors.

So what's the best way to use AANPM to drive down your mean time to repair (MTTR), the metric most organizations use to measure the performance of IT service providers? By measuring the following four IT performance metrics (a quick MTTR calculation is sketched just after the list):

  • Bandwidth utilization rate
  • End-user page response time
  • Network latency or round trip time (RTT)
  • Volume of transactions processed
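For reference, MTTR itself is simply the total time spent repairing incidents divided by the number of incidents. Here is a minimal sketch in Python, assuming incident records with detection and resolution timestamps (the records shown are purely illustrative):

    from datetime import datetime, timedelta

    # Hypothetical incident records: (detected, resolved) timestamps.
    incidents = [
        (datetime(2017, 5, 1, 9, 0), datetime(2017, 5, 1, 10, 30)),
        (datetime(2017, 5, 3, 14, 0), datetime(2017, 5, 3, 14, 45)),
        (datetime(2017, 5, 7, 22, 15), datetime(2017, 5, 8, 0, 15)),
    ]

    def mean_time_to_repair(records):
        """MTTR = total repair time / number of incidents."""
        total = sum((resolved - detected for detected, resolved in records), timedelta())
        return total / len(records)

    print(mean_time_to_repair(incidents))  # 1:25:00 for the sample data above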

Bandwidth utilization rate
This metric measures the total amount of traffic over a given period of time. Typically, bandwidth utilization is measured by analyzing packet flows, and it can be tracked for an entire organization, business unit or data center. In the most basic sense, this metric expresses the traffic on a link as a percentage of the link's capacity. It's measured using SNMP polling and flow exports (NetFlow, J-Flow, sFlow, etc.).
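As a rough illustration of the arithmetic, here is a minimal sketch in Python that turns two SNMP octet-counter samples (for example, ifInOctets readings taken a polling interval apart) into a utilization percentage; how the counters are retrieved, and counter-wrap handling, are left out:

    def link_utilization_percent(octets_t1, octets_t2, interval_seconds, link_speed_bps):
        """Utilization % from two SNMP octet-counter samples taken interval_seconds apart.

        octets_t1, octets_t2: ifInOctets (or ifOutOctets) readings at the two sample times.
        link_speed_bps: interface speed (ifSpeed) in bits per second.
        """
        bits_transferred = (octets_t2 - octets_t1) * 8
        return (bits_transferred / interval_seconds) / link_speed_bps * 100

    # Example: 375 MB transferred in 5 minutes over a 100 Mbps link.
    print(round(link_utilization_percent(0, 375_000_000, 300, 100_000_000), 1))  # 10.0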

Problems with bandwidth utilization often occur when personal, non-business activities consume excessive bandwidth, leaving very little for business-critical applications. This can significantly lower the performance of those applications and may even lead to network outages.

Benefits: AANPM tools give engineers capabilities such as real-time network visibility, bandwidth monitoring and traffic shaping. By collecting data on both the network and the applications, organizations gain real-time visibility into where and when the network is busy. That visibility helps network engineers prioritize which applications need additional bandwidth, and the tools can shape traffic at the network interface level with fine granularity. As a result, network engineers can improve bandwidth utilization for business-critical applications.

Data based on bandwidth and application usage patterns also helps managers plan and control their IT budgets.

End-user page response time
This metric measures the time it takes for the client system to receive and process the response to its original page request. It's captured by placing probes near the client system; the probes monitor turnaround time and verify whether the page request was processed in a timely manner.
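To make that concrete, here is a minimal sketch of what such a probe might do, using Python and the third-party requests library (commercial probes measure far more, such as DNS, connect and render times; the URL and threshold below are illustrative):

    import time

    import requests  # third-party: pip install requests

    def page_response_time(url, timeout=10):
        """Return (elapsed_seconds, status_code) for a single page request."""
        start = time.monotonic()
        response = requests.get(url, timeout=timeout)
        return time.monotonic() - start, response.status_code

    elapsed, status = page_response_time("https://example.com/")
    threshold = 2.0  # seconds; an illustrative response-time target
    print(f"{elapsed:.3f}s (HTTP {status}) - {'OK' if elapsed <= threshold else 'SLOW'}")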

Benefits: AANPM tools help you evaluate the likely experience of users in multiple locations by identifying potential resource bottlenecks. This way, the end-user experience of business-critical network services such as DNS, LDAP, DHCP and mail can be monitored easily. Using the tools' data, engineers can reconstruct events, analyze flow forensics to identify the traffic on key links, and replay VoIP calls. These statistics are particularly useful for analyzing and solving historical application performance problems.
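For one of those services, a simple active check is easy to sketch. The snippet below times a DNS lookup with Python's standard-library resolver; a real AANPM probe would track many services and record the results over time:

    import socket
    import time

    def dns_lookup_time(hostname):
        """Time a single name resolution; raises socket.gaierror on failure."""
        start = time.monotonic()
        socket.getaddrinfo(hostname, None)
        return time.monotonic() - start

    print(f"DNS lookup for example.com took {dns_lookup_time('example.com') * 1000:.1f} ms")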

Using the end-user page response time statistics collected by these AANPM tools, system engineers can also track service providers' compliance with their SLAs. The same response-time patterns can further help system engineers plan for, and head off, application outages that might otherwise occur in the future.

Network latency or round trip time (RTT)
Network latency is the time it takes a packet of data to travel from the host system to the destination system, or vice versa. Typically, it's measured as round-trip time (RTT), the time it takes a packet to travel from source to destination and back again. ICMP ping and Cisco's IP SLA come in handy for RTT measurements.
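As a minimal sketch of an ICMP-based measurement, the snippet below shells out to the system ping command and parses its summary line (raw ICMP sockets normally require elevated privileges; the output format assumed here is the Linux/macOS one, and the target address is just an example):

    import re
    import subprocess

    def average_rtt_ms(host, count=4):
        """Average RTT in milliseconds reported by the system ping command."""
        output = subprocess.run(
            ["ping", "-c", str(count), host],
            capture_output=True, text=True, check=True,
        ).stdout
        # Summary line looks like: rtt min/avg/max/mdev = 10.1/12.3/15.0/1.2 ms
        match = re.search(r"= [\d.]+/([\d.]+)/", output)
        return float(match.group(1)) if match else None

    print(average_rtt_ms("8.8.8.8"))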

Ideally, RTT should be as close to zero as possible. Excessive network latency creates bottlenecks, reducing the effective throughput available to critical applications, and it can have a major impact on the end-user experience. Latency can be intermittent (lasting a few seconds) or constant, depending on the source of the delays.

Benefits: AANPM provides performance data from both network and application perspectives, including application response time analysis and SNMP, ICMP or CLI polling data. Network engineers can assess this data to identify and resolve problem areas, which can drastically reduce RTT.

AANPM tools also help network engineers establish a baseline for network flows and classify them into three categories: stable, degrading or unacceptable flow rate. These details can support capacity planning as well, because they help network engineers determine where bottlenecks occur and which applications require more bandwidth.
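Here is a minimal sketch of that kind of classification in Python; the ratio thresholds and category names are illustrative assumptions, not a vendor standard:

    def classify_flow(current_rate_bps, baseline_rate_bps,
                      degrading_ratio=0.8, unacceptable_ratio=0.5):
        """Bucket a flow by how its current rate compares to its baseline.

        Below 80% of baseline counts as degrading, below 50% as unacceptable;
        the cut-offs are illustrative only.
        """
        ratio = current_rate_bps / baseline_rate_bps
        if ratio >= degrading_ratio:
            return "stable"
        if ratio >= unacceptable_ratio:
            return "degrading"
        return "unacceptable"

    print(classify_flow(9_000_000, 10_000_000))  # stable
    print(classify_flow(6_000_000, 10_000_000))  # degrading
    print(classify_flow(3_000_000, 10_000_000))  # unacceptable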

Volume of transactions processed
This metric refers to the total number of Web transactions processed during a specific period of time. If the volume of transactions is too high, transactions can sit in the queue for too long, which can cause client systems to resubmit their requests and may ultimately result in application outages.
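A minimal sketch of tracking this metric is a sliding-window counter that flags when the volume exceeds a threshold; the class name, window and threshold below are illustrative:

    import time
    from collections import deque

    class TransactionRateMonitor:
        """Count transactions in a sliding time window and flag volume spikes."""

        def __init__(self, window_seconds=60, alert_threshold=1000):
            self.window = window_seconds
            self.threshold = alert_threshold
            self.timestamps = deque()

        def record(self, now=None):
            """Register one transaction and return the count in the current window."""
            now = time.monotonic() if now is None else now
            self.timestamps.append(now)
            # Drop transactions that have fallen out of the window.
            while self.timestamps and self.timestamps[0] < now - self.window:
                self.timestamps.popleft()
            return len(self.timestamps)

        def overloaded(self):
            return len(self.timestamps) > self.threshold

    monitor = TransactionRateMonitor(window_seconds=60, alert_threshold=1000)
    monitor.record()
    print(monitor.overloaded())  # False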

Benefits: AANPM helps system engineers gain an application-centric view of events happening across the network by outlining the interdependencies between an application and the network. It also enables engineers to identify Web transaction data by providing insights into average end-user response times, throughput, and Apdex (Application Performance Index) scores so that the most critical paths can be prioritized over less critical ones. System engineers can then monitor and optimize the end-user experience by assessing applications in terms of how they are deployed and how they perform. This technique can directly improve network uptime and the availability of critical business applications.
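The Apdex score mentioned above follows a simple published formula: responses at or below a target time T count as satisfied, responses between T and 4T count half as tolerating, and anything slower counts as frustrated. A minimal sketch, with an illustrative target of one second:

    def apdex(response_times, target_seconds):
        """Apdex = (satisfied + tolerating / 2) / total samples.

        satisfied: response <= T; tolerating: T < response <= 4T; frustrated: > 4T.
        """
        satisfied = sum(1 for t in response_times if t <= target_seconds)
        tolerating = sum(1 for t in response_times
                         if target_seconds < t <= 4 * target_seconds)
        return (satisfied + tolerating / 2) / len(response_times)

    samples = [0.3, 0.8, 1.2, 2.5, 6.0]  # response times in seconds
    print(apdex(samples, target_seconds=1.0))  # (2 + 2/2) / 5 = 0.6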

In Closing
Converged application and network monitoring tools greatly enhance cross-team communication and collaboration. AANPM tools provide a single performance management interface that offers a holistic view of both network and application performance. Using this interface, network engineers gain the deeper knowledge they need to tune their organizations' networks, servers and applications. Ultimately, that knowledge can help IT departments effectively plan and monitor network traffic, server loads, and transaction volumes as well as dramatically reduce MTTR.


Amrutha Aprameya is an IT management/marketing evangelist at ManageEngine by profession and a passionate blogger by choice. She writes extensively about technology, management consulting trends, and social causes.
