Understanding APM on the Network

TCP Window Size

In Part 6, we dove into the Nagle algorithm - perhaps (or hopefully) something you'll never see. In Part 7, we get back to "pure" network and TCP roots as we examine how the TCP receive window interacts with WAN links.

TCP Window Size
Each node participating in a TCP connection advertises its available buffer space using the TCP window size field. This value identifies the maximum amount of data a sender can transmit without receiving a window update via a TCP acknowledgement; in other words, this is the maximum number of "bytes in flight" - bytes that have been sent and are traversing the network but remain unacknowledged. Once the sender has exhausted the receive window, it must stop and wait for a window update.

The sender transmits a full window's worth of data, then waits for window updates before continuing. As these updates arrive, the sender advances the window and may transmit more data.
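
To see where this window comes from on the receiving end, here is a minimal Python sketch (an illustration, not something from the article): the operating system derives the window it advertises from the socket's receive buffer, so an application can influence it via SO_RCVBUF. Exact behavior is OS-specific; Linux, for example, doubles the requested value for bookkeeping overhead and otherwise auto-tunes buffers.

  import socket

  # Sketch: inspect and request a larger receive buffer, from which the OS
  # derives the TCP receive window it advertises to the sender.
  sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  print("Default receive buffer:",
        sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF), "bytes")

  # Request a larger buffer (256KB here is an arbitrary example value).
  sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)
  print("Receive buffer after request:",
        sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF), "bytes")
  sock.close()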

Long Fat Networks
High-speed, high-latency networks, sometimes referred to as Long Fat Networks (LFNs), can carry a lot of data. On these networks, small receive window sizes can limit throughput to a fraction of the available bandwidth. These two factors - bandwidth and latency - combine to determine the potential impact of a given TCP window size. On an LFN, it is possible - common, even - for a sender to transmit an entire TCP window's worth of data very quickly (high bandwidth) and then have to wait while the packets travel to the distant remote site (high latency) before acknowledgements return, informing the sender of successful data delivery and available receive buffer space.

The math (and physics) concepts are straightforward. As the network speed increases, data can be clocked out onto the network medium more quickly; the bits are literally closer together. As latency increases, these bits take longer to traverse the network from sender to receiver. As a result, more bits can fit on the wire. As LFNs become more common, exhausting a receiver's TCP window becomes increasingly problematic for some types of applications.

Bandwidth Delay Product
The Bandwidth Delay Product (BDP) is a simple formula used to calculate the maximum amount of data that can exist on the network (referred to as bits or bytes in flight) based on a link's characteristics:

  • Bandwidth (bps) x RTT (seconds) = bits in flight
  • Divide the result by 8 for bytes in flight

If the BDP (in bytes) for a given network link exceeds the value of a session's TCP window, then the TCP session will not be able to use all of the available bandwidth; instead, throughput will be limited by the receive window (assuming no other constraints, of course).
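
A quick sketch of the calculation in Python; the function name and the 100Mbps / 40ms figures are illustrative examples, not values from the article:

  # Bandwidth Delay Product: the maximum amount of data that can be in flight.
  def bdp_bytes(bandwidth_bps, rtt_seconds):
      return bandwidth_bps * rtt_seconds / 8

  # Example: a 100Mbps link with 40 milliseconds of round-trip delay needs
  # roughly a 500KB window for a single connection to keep the link full.
  print(bdp_bytes(100_000_000, 0.040))  # 500000.0 bytes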

The BDP can also be used to calculate the maximum throughput ("bandwidth") of a TCP connection given a fixed receive window size:

  • Bandwidth (bps) = (window size in bytes x 8) / RTT (seconds)
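
The inverse calculation in the same sketch form, again with assumed example values:

  # Upper bound on a single connection's throughput for a fixed receive window.
  def max_throughput_bps(window_bytes, rtt_seconds):
      return window_bytes * 8 / rtt_seconds

  # A 65535-byte window over 40ms of RTT caps out near 13Mbps,
  # no matter how fast the underlying link is.
  print(max_throughput_bps(65535, 0.040))  # ~13.1 million bits per second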

In the not-too-distant past, the TCP window had a maximum value of 65535 bytes. While today's TCP implementations generally include a TCP window scaling option that allows negotiated window sizes to reach 1GB, many factors limit its practical utility. For example, firewalls, load balancers and server configurations may purposely disable the feature. The reality is that we often still need to pay attention to the TCP window size when considering the performance of applications that transfer large amounts of data, particularly on enterprise LFNs.
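
As a quick check, a Linux-specific sketch (the sysctl path below exists only on Linux) shows whether the local kernel will even offer window scaling; keep in mind that devices in the path may still strip or ignore the option:

  # Read the Linux kernel's window scaling setting (1 = enabled, 0 = disabled).
  with open("/proc/sys/net/ipv4/tcp_window_scaling") as f:
      print("tcp_window_scaling =", f.read().strip())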

As an example, consider a company with offices in New York and San Francisco; they need to replicate a large database each night, and have secured a 20Mbps network connection with 85 milliseconds of round-trip delay. Our BDP calculation tells us that the BDP is 212,500 bytes (20,000,000 x .085 / 8); in other words, a single TCP connection would require a 212KB window in order to take advantage of all of the bandwidth. The BDP calculation also tells us that the configured TCP window size of 65535 will permit approximately 6Mbps of throughput (65535 x 8 / .085), less than 1/3 of the link's capacity.
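
The same arithmetic in a few lines of Python, reproducing the example's numbers:

  bandwidth_bps = 20_000_000   # 20Mbps link
  rtt = 0.085                  # 85 milliseconds of round-trip delay
  window = 65535               # classic maximum window, no scaling

  print(bandwidth_bps * rtt / 8)   # 212500.0 bytes needed to fill the link
  print(window * 8 / rtt)          # ~6.2Mbps achievable with a 64KB window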

A link's BDP and a receiver's TCP window size are two factors that help us to identify the potential throughput of an operation. The remaining factor is the operation itself, specifically the size of individual request or reply flows. Only flows that exceed the receiver's TCP window size will be limited by it (or benefit from increasing it). Two common scenarios help illustrate this. Let's say a user needs to transfer a 1GB file:

  • Using FTP (in stream mode) will cause the entire file to be sent in a single flow; this operation could be severely limited by the receive window.
  • Using SMB (at least older versions of the protocol) will cause the file to be sent in many smaller write commands, as SMB used to limit write messages to under 64KB; this operation would not be able to take advantage of a TCP receive window of greater than 64K. (Instead, the operation would more likely be limited by application turns and link latency; we discuss chattiness in Part 8.)
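
To make the contrast concrete, here is a rough, illustrative Python model; it reuses the earlier 20Mbps / 85ms example figures, and treating each older-SMB write as one synchronous application turn is a simplifying assumption rather than a protocol fact:

  file_bytes = 1_000_000_000   # 1GB file
  rtt = 0.085                  # 85 milliseconds of round-trip delay
  window = 65535               # receive window without scaling
  link_bps = 20_000_000        # 20Mbps link

  # Single flow (FTP stream mode): throughput is capped by the lesser of the
  # link rate and what the receive window allows.
  flow_bps = min(link_bps, window * 8 / rtt)
  print("Single flow:", round(file_bytes * 8 / flow_bps), "seconds")

  # Chatty transfer (older SMB): ~64KB per write, one application turn each,
  # so latency dominates even though the window is never exhausted.
  writes = file_bytes / 65536
  print("Chatty writes:", round(writes * rtt + file_bytes * 8 / link_bps), "seconds")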


More Stories By Gary Kaiser

Gary Kaiser is a Subject Matter Expert in Network Performance Analytics at Dynatrace, responsible for DC RUM’s technical marketing programs. He is a co-inventor of multiple performance analysis features, and continues to champion the value of network performance analytics. He is the author of Network Application Performance Analysis (WalrusInk, 2014).
