Dear Slashdot: You Get What You Pay For

Open Source SSL Accelerator solution not as cost effective or well-performing as you think

o3 Magazine has a write-up on building an SSL accelerator out of Open Source components. It's a compelling piece, to be sure, and one that was picked up by Slashdot and discussed extensively.

If o3 had stuck to its original goal - building an SSL accelerator on the cheap - it might have had better luck making its arguments. But it wanted to compare an Open Source solution to a commercial solution. That makes sense: the author was trying to show the value of Open Source and that you don't need to shell out big bucks to achieve similar functionality. The problem is that there are very few - if any - standalone commercial SSL accelerators on the market today. SSL acceleration has long since been subsumed by load balancers/application delivery controllers, so a direct comparison between o3's Open Source solution and any commercially available solution would have been irrelevant; comparing apples to chicken is a pretty useless thing to do.

To the author's credit, he recognized this and therefore offered a complete Open Source solution that could more fairly be compared to existing commercial load balancers/application delivery controllers; specifically, he chose the BIG-IP 6900. The hardware platform was chosen, I assume, based on its SSL TPS rates to ensure a fairer comparison. Here's the author's description of the "full" Open Source solution:

The Open Source SSL Accelerator requires a dedicated server running Linux. Which Linux distribution does not matter, Ubuntu Server works just as well as CentOS or Fedora Core. A multi-core or multi-processor system is highly recommended, with an emphasis on processing power and to a lesser degree RAM. This would be a good opportunity to leverage new hardware options such as Solid State Drives for added performance. The only software requirement is Nginx (Engine-X) which is an Open Source web server project. Nginx is designed to handle a large number of transactions per second, and has very well designed I/O subsystem code, which is what gives it a serious advantage over other options such as Lighttpd and Apache. The solution can be extended by combining a balancer such as HAproxy and a cache solution such as Varnish. These could be placed on the Accelerator in the path between the Nginx and the back-end web servers.

o3 specs out this solution at around $5,000, which is less than 10% of the listed cost of a BIG-IP 6900. On the surface, this seems to be quite the deal. Why would you ever purchase a BIG-IP, or any other commercial load balancer/application delivery controller, based on the features/price comparison offered?

Turns out there are quite a few reasons; reasons that were completely ignored by the author.

CHAINING PROXIES vs INTEGRATED SOLUTIONS
While the moving parts cited by the author (Nginx, Apache, HAProxy, Varnish) are all fine solutions individually, he suggests combining them to assemble a more complete application delivery solution that provides caching, Layer 7 inspection and transformation, and other advanced functionality. Indeed, combining these solutions does produce a deployment that is closer to the feature set offered by a commercial application delivery controller such as BIG-IP.

Unfortunately, none of these Open Source components are integrated with one another. This necessitates an architecture based on chaining proxies, whether they are deployed on the same hardware (as suggested by the author) or on separate devices; either way, each proxy remains a distinct hop in the data path.

Chaining proxies incurs latency at every point in the process, in the form of:

  • TCP connection setup and teardown processing
  • Inspection of application data (layer 7 inspection is rarely computationally inexpensive)
  • Execution of functionality (caching, security, acceleration, etc...)
  • Transfer of data between proxies (when deployed on the same device this is minimized)
  • Multiple log files

This network sprawl degrades response time by adding latency at every hop and actually defeats the purposes for which the proxies were deployed. The performance gains achieved by offloading SSL to Nginx are almost immediately lost when multiple proxies are chained in order to provide the functionality required to match a commercial application delivery controller.
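To put some rough numbers on it, here is a back-of-the-envelope sketch in Python. The per-hop millisecond costs are purely illustrative assumptions - they are not measurements of Nginx, Varnish, HAProxy, or BIG-IP - but they show how the overhead multiplies with each hop in the chain.

```python
# Rough, illustrative model of per-hop overhead in a chained-proxy deployment.
# All millisecond values are assumptions for illustration only, not measurements.

PER_HOP_COSTS_MS = {
    "tcp_setup_teardown": 0.5,   # connection handling at each proxy
    "layer7_inspection": 1.0,    # parsing/inspecting the request and response
    "feature_processing": 1.5,   # caching, security, acceleration, etc.
    "inter_proxy_transfer": 0.2, # handing data to the next hop (small if co-located)
}

def added_latency_ms(num_proxies: int) -> float:
    """Total overhead when each function runs in its own chained proxy."""
    return num_proxies * sum(PER_HOP_COSTS_MS.values())

# Chained: SSL offload (Nginx) -> cache (Varnish) -> load balancer (HAProxy)
chained = added_latency_ms(3)

# Integrated: one device terminates the connection once and applies all
# functions in-process, so connection and transfer costs are paid a single time.
integrated = (PER_HOP_COSTS_MS["tcp_setup_teardown"]
              + PER_HOP_COSTS_MS["layer7_inspection"]
              + PER_HOP_COSTS_MS["feature_processing"])

print(f"Chained proxies add ~{chained:.1f} ms per request")
print(f"Integrated device adds ~{integrated:.1f} ms per request")
```

Even with generous assumptions, the chained path pays the connection, inspection, and transfer costs at every hop, while an integrated device pays them once.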

A chained proxy solution adds complexity, obscures visibility (which impairs the ability to troubleshoot), and makes audit paths more difficult to follow. Aggregated logging is never mentioned, but it is a serious consideration, especially where regulatory compliance enters the picture. The issue of multiple log files is one that has long plagued IT departments everywhere, as they often require manual aggregation and correlation - which incurs time and cost. A third-party solution is often required to support troubleshooting and transactional monitoring, which incurs additional costs in the form of acquisition, maintenance, and management not considered by the author.

Soft costs, too, are ignored by the author. The multiple Open Source intermediaries required to match a commercial solution must each be configured individually, often by manually editing configuration files. Commercial solutions - and specifically BIG-IP - reduce the time and effort required by offering myriad management options: a standards-based API, scripting, command line, GUI, application templates and wizards, a central management system, and integration with other standard data center management systems.

COMPRESSION SHOULD NEVER BE A BINARY CONFIGURATION
The author correctly identifies that offloading compression duties from back-end servers to an intermediary can improve application performance and server efficiency. Nginx supports industry-standard gzip compression.

The problem with this - and there is a problem - is that it is not always beneficial to apply compression. Years of extensive experience and testing show that compression can actually degrade performance. Factors such as the size of the application payload, the type of content, and the speed of the network over which the application data will be transferred should all be considered when deciding whether or not to compress.

This intelligence, this context-awareness, is not offered by the Open Source solution. o3's solution is on or off, with nothing in between. Where images are being delivered over a LAN, for example, compression will not provide any significant performance benefit and in fact will likely degrade performance. Certainly Nginx could be configured to ignore images, but that does not solve the underlying problem: it is pointless to compress content that is traversing a LAN and/or is under a certain length.
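As a sketch of what context-awareness means in practice, here is a minimal, hypothetical decision function in Python. The content types, size threshold, and round-trip-time cutoff are assumptions chosen for illustration; they are not values from Nginx, o3's build, or BIG-IP.

```python
# Minimal sketch of a context-aware compression decision.
# Thresholds are illustrative assumptions, not product defaults.

ALREADY_COMPRESSED = {"image/jpeg", "image/png", "image/gif",
                      "video/mp4", "application/zip"}

def should_compress(content_type: str, content_length: int,
                    client_rtt_ms: float) -> bool:
    """Decide per-response whether gzip is worth the CPU cost."""
    if content_type in ALREADY_COMPRESSED:
        return False                 # recompressing compressed formats wastes cycles
    if content_length < 1400:        # payload already fits in roughly one packet
        return False
    if client_rtt_ms < 1.0:          # LAN-speed client: bandwidth isn't the bottleneck
        return False
    return True                      # slow/remote client, compressible text: compress

# Examples
print(should_compress("text/html", 48_000, 80.0))   # True  - large page, WAN client
print(should_compress("text/html", 48_000, 0.3))    # False - same page over the LAN
print(should_compress("image/png", 250_000, 80.0))  # False - already compressed
```

The point isn't the particular thresholds; it's that the decision has to be made per response, using context a static on/off gzip switch simply doesn't have.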

SECURITY
Another overlooked item is security. Not just application security, but full TCP/IP stack security. The Open Source solution could easily add mod_security to the list to achieve parity with the application security features available in commercial solutions. That does not address the underlying stack security. The author suggests running on any standard Linux platform. To be sure, anyone building such a solution for deployment in a production environment will harden the base OS; potentially using SELinux to further lock down the system. No need to argue about this; it's assumed good administrators will harden such solutions.

But what will not be done - and can't be done - is securing the system against network and application attacks: simple DoS, ARP poisoning, SYN floods, cookie tampering. The list of potential attacks against a system designed to sit in front of web and application servers is far longer than this, but even these commonly seen attacks will not be addressed by o3's Open Source solution. By comparison, defending against these types of attacks is part and parcel of BIG-IP; no additional modules or functionality necessary.

Furthermore, the performance numbers provided by o3 for the solution seem to indicate that testing was accomplished using 512-bit key certificates. A single Opteron core can only process around 1,500 1024-bit RSA operations per second, which means an 8-core system could only perform approximately 12,000 1024-bit RSA ops per second - assuming that's all it was doing. 512-bit keys run around five times faster than 1024-bit keys. The author states: "The system had no problems handling over 26,590 TPS", which, given the processors' capacity for RSA operations, seems to indicate the test was not using the industry-standard 1024-bit key. In fact, 512-bit key certificates are no longer supported by most CAs due to their weak key strength.
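Working the arithmetic through - a quick sketch using the rough per-core estimate cited above, not a benchmark:

```python
# Back-of-the-envelope check on the reported 26,590 TPS figure.
RSA_1024_OPS_PER_CORE = 1_500      # rough estimate for a single Opteron core
CORES = 8
SPEEDUP_512_VS_1024 = 5            # 512-bit RSA runs roughly 5x faster than 1024-bit

max_tps_1024 = RSA_1024_OPS_PER_CORE * CORES          # ~12,000 TPS ceiling at 1024-bit
max_tps_512 = max_tps_1024 * SPEEDUP_512_VS_1024      # ~60,000 TPS ceiling at 512-bit

reported_tps = 26_590
print(f"1024-bit ceiling: ~{max_tps_1024:,} TPS")
print(f" 512-bit ceiling: ~{max_tps_512:,} TPS")
print(f"Reported figure fits within the 1024-bit ceiling? {reported_tps <= max_tps_1024}")
```

At 1024 bits the arithmetic tops out well short of the reported figure; at 512 bits it fits comfortably.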

Needless to say, if the testing used to determine the SSL TPS for BIG-IP were to use 512-bit keys, you'd see a marked increase in the number of SSL TPS in the data sheet.

YOU GET WHAT YOU PAY FOR
Look, o3 has put together a fairly cool and cheap solution that accomplishes many of the same tasks as a commercial application delivery controller. That's not the point. The point is that trying to compare a robust, integrated application delivery solution with a cobbled-together set of components designed to mimic similar functionality is silly.

Not only that, but the logic that claims it is more cost-efficient is flawed.

Is the o3 solution cheaper? Sure - as long as we look only at acquisition. If we look at the cost to application performance, the cost to maintain the solution, to troubleshoot it, and to manage it, then no, no it isn't. You're trading immediate CAPEX savings for long-term OPEX outlays.

And as is always the case, in every market, you get what you pay for. A $5,000 car isn't going to last as long or perform as well as a $50,000 car, and it isn't going to come with warranties and support, either. It will do what you want, at least for a while, but you're on your own when you take the cheap route.

That said, you are welcome to do so. It is your data center, after all. Just be aware of what you're sacrificing and the potential issues with choosing the road less expensive.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
