5 reasons your website might slow down this holiday season (or anytime)

The National Retail Federation (NRF) predicts that this year's holiday sales will increase 4.1 percent to $586.1 billion. But here's a wrinkle the data doesn't capture: the companies making money are the ones with fast, responsive websites. Companies with slow websites won't be cashing in this season.

In fact, a Kissmetrics report on shopping cart abandonment found that 40 percent of people abandon a website that takes more than three seconds to load, and a less forgiving group, almost 50 percent of users, expects a website to load in two seconds or less. This is just the latest in a slew of similar studies, produced since the dawn of the e-commerce era, concluding that website performance correlates directly with revenue.

So what can you do to ensure your web pages load in two seconds or less? Avoid the following faux pas. These are the most common problems we see that slow e-commerce sites down to the point of depressing sales.

1. Unforeseen traffic spikes. Heavy traffic is one of the most obvious reasons a website slows down, and most IT departments provision for it. But what if IT doesn't know what's coming, or when? A surge of users arriving for a reason IT doesn't know about is a big risk, and an easily preventable one.

Historically, there has always been a divide between IT and marketing. To help bridge this gap, many organizations have hired a chief web officer (CWO), who oversees the organization's web presence, including all Internet and intranet traffic. The CWO communicates marketing's website performance needs to the IT department with enough lead time to prepare for any big promotional events.

As soon as marketing suspects that the website might receive heavier-than-normal traffic, IT and marketing should start working together on a schedule that avoids any last-minute problems. The most important things for marketing to communicate are how many users it expects and how long core pages can take to load.

Not every situation is the same, but don't despair if your website goes down just when you expected to rake in huge online sales. There are a few things you can do to remedy the situation. A common strategy is to throw more bandwidth or more CPU at the site to resolve the issue, but that costs money, so organizations should first conduct a quick cost-benefit analysis.

A business with an overloaded site needs to decide whether the revenue from keeping the site up will break even with, or surpass, what it spends on extra bandwidth or CPU.
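To make that concrete, here is a minimal break-even sketch in Python. Every number in it is an illustrative assumption, not data from this article; plug in your own traffic forecast, conversion rate, and capacity quote.

    # Rough break-even check for emergency capacity during a traffic spike.
    # All numbers below are illustrative assumptions; substitute your own.
    expected_visitors = 50_000       # projected visitors during the spike
    conversion_rate = 0.02           # fraction of visitors who complete a purchase
    average_order_value = 80.00      # dollars per completed order
    extra_capacity_cost = 5_000.00   # quoted cost of extra bandwidth/CPU

    revenue_at_risk = expected_visitors * conversion_rate * average_order_value
    print(f"Revenue at risk if the site goes down: ${revenue_at_risk:,.2f}")
    print(f"Cost of extra capacity:                ${extra_capacity_cost:,.2f}")

    if revenue_at_risk > extra_capacity_cost:
        print("The extra capacity pays for itself.")
    else:
        print("The extra capacity costs more than the revenue it protects.")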

2. Inadequate infrastructure and code base measurement and testing. This problem can be avoided during the software development lifecycle by using tools that realistically measure your website's performance from an external perspective, and by setting benchmarks to test against. The following factors affect your site's speed:

  • Where the infrastructure is located geographically. If you're selling to the Asian market but planning to host your infrastructure in Amazon's US East region, you're going to experience latency delays right off the bat.
  • Whether to cache or use CDNs. There is a subtle difference between the two, but front-end caching will help you avoid taxing your web servers, which is what slows your website down. Front-end caching puts a cached version of the data right in front of the web server and can be done relatively inexpensively with free software. CDNs come at a more significant cost, but they ensure localized delivery of content, sparing you the latency that long network paths introduce. (A quick way to verify caching behavior is sketched after this list.)
  • Image size. If the graphics on your site are not optimized, pages take longer to download. You need a way to audit the images across your site, find the oversized ones, and redeploy optimized versions. (A simple audit script is sketched after this list.)
  • Whether you are using standalone or shared hosting environments. A standalone environment gives you better control of, and insight into, your environment and its performance. A shared environment is like an apartment complex: you don't know much about your neighbors or how their applications could be affecting your performance. While shared environments might be cheaper in the short term, they could very well cost more over time.
  • Whether you are using virtualized instances or traditional servers. Depending on the application requirements, virtualized instances could be more convenient for deployment and backup purposes. However, they could cause performance issues. As a result, evaluate the overhead associated with your application on a virtualized instance versus a non-virtualized environment.
  • What type of database you chose. Whether it's MySQL or Cassandra, SQL or NoSQL, we repeatedly see underutilized or misconfigured setups that cause significant performance issues. We also see organizations select databases without weighing the actual benefits: often a database is chosen based solely on the available in-house or outsourced expertise rather than the needs of the application. (A miniature example of what one misconfiguration costs is sketched after this list.)
  • What type of OS you chose. Cost and technical expertise are the two most common drivers behind the choice and design of an operating system. But the success of the OS ultimately comes down to optimization. Fine-tuning can follow best practices; however, running a load test against your environment will let you truly optimize it.
  • Whether the site will be hosted in your own data center, co-located, or in a cloud hosting environment. Many organizations start by hosting their application in the cloud for rapid deployment, short-term wins, and proof of concept to investors. As the application grows or the user base increases, they often migrate to their own data center, or at least out of the cloud. There are appealing solutions today that let applications keep scaling in ways that mimic popular cloud environments. Regardless of the environment, it's imperative to learn your performance numbers and ensure that you meet or exceed them as you migrate.
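On the caching point above: a quick, low-effort way to see whether your front-end cache or CDN is actually serving content is to request a page twice and inspect the response headers. This sketch uses Python's requests library; the URLs are placeholders, and the exact hit/miss headers (X-Cache, Age) vary by cache and CDN vendor.

    # Fetch each URL twice and inspect the headers a front-end cache or CDN
    # typically sets. URLs are placeholders; header names vary by vendor.
    import requests

    URLS = ["https://www.example.com/", "https://www.example.com/products"]

    for url in URLS:
        for attempt in (1, 2):
            resp = requests.get(url, timeout=10)
            cache_control = resp.headers.get("Cache-Control", "<none>")
            x_cache = resp.headers.get("X-Cache", "<none>")  # HIT/MISS on many caches
            age = resp.headers.get("Age", "0")               # seconds spent in cache
            print(f"{url} (request {attempt}): Cache-Control={cache_control} "
                  f"X-Cache={x_cache} Age={age}")

A second request that comes back with a HIT or a nonzero Age is being served from cache; two misses in a row mean your web servers are absorbing every request.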
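On image size: the audit can be as simple as walking your asset directory and flagging files heavier or larger than your page budget allows. This sketch uses the Pillow library; the directory path and thresholds are assumptions to adapt.

    # Walk a directory of site assets and flag images that look oversized.
    # The path and thresholds are illustrative assumptions.
    import os
    from PIL import Image

    ASSET_DIR = "static/img"       # assumed asset directory
    MAX_BYTES = 200 * 1024         # flag files heavier than ~200 KB
    MAX_DIMENSION = 2000           # flag images wider or taller than 2000 px

    for root, _dirs, files in os.walk(ASSET_DIR):
        for name in files:
            if not name.lower().endswith((".png", ".jpg", ".jpeg", ".gif")):
                continue
            path = os.path.join(root, name)
            size = os.path.getsize(path)
            with Image.open(path) as img:
                width, height = img.size
            if size > MAX_BYTES or max(width, height) > MAX_DIMENSION:
                print(f"OPTIMIZE: {path} ({size // 1024} KB, {width}x{height}px)")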
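And on database configuration: a single missing index is the classic misconfiguration. The following self-contained sketch (sqlite3 is in Python's standard library, so it runs as-is) times the same query before and after adding an index; the table and row counts are made up for illustration.

    # The same query with and without an index, timed.
    import sqlite3
    import time

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?, ?)",
        ((i, i % 10_000, 19.99) for i in range(500_000)),
    )

    def timed_lookup():
        start = time.perf_counter()
        conn.execute("SELECT SUM(total) FROM orders WHERE customer_id = 42").fetchone()
        return time.perf_counter() - start

    print(f"Without index: {timed_lookup():.4f}s")   # full table scan
    conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
    print(f"With index:    {timed_lookup():.4f}s")   # index lookup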

3. Lack of maintenance. Conducting incremental performance tests with each new update or change to your environment might sound like a lot of extra work for your IT department. But there are several small optimizations that solve multiple problems at once. Spriting, for example, combines many small images into a single file, cutting the number of HTTP requests per page (the same idea applies to concatenating CSS files); a minimal sprite builder is sketched below.
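As a sketch of the idea, here is a minimal sprite builder in Python using the Pillow library. The icon paths are assumptions; in production the CSS then selects each icon from the sheet with background-position offsets.

    # Paste a row of small icons into one sprite sheet so the browser makes
    # one HTTP request instead of many. Paths are illustrative assumptions.
    from PIL import Image

    ICON_PATHS = ["icons/cart.png", "icons/search.png", "icons/user.png"]

    icons = [Image.open(p) for p in ICON_PATHS]
    sheet_width = sum(icon.width for icon in icons)
    sheet_height = max(icon.height for icon in icons)

    sprite = Image.new("RGBA", (sheet_width, sheet_height))
    x_offset = 0
    for icon in icons:
        sprite.paste(icon, (x_offset, 0))
        x_offset += icon.width   # CSS background-position picks icons by offset
    sprite.save("icons/sprite.png")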

You can continue to tweak your environment by optimizing your code with each site update. Implementing cache management regulates which objects, and how many, to keep in memory; a toy version is sketched below. Regular patch maintenance can prevent the memory leaks in the code base that cause slowness.
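For a feel of what cache management means in code, here is a toy size-capped LRU cache in Python; the maxsize bound is the policy knob that decides how many objects stay in memory. A real site would use a production cache, but the eviction logic is the same idea.

    # A minimal size-capped LRU cache. OrderedDict remembers access order;
    # when the cache is full, the least recently used entry is evicted.
    from collections import OrderedDict

    class LRUCache:
        def __init__(self, maxsize=1024):
            self.maxsize = maxsize
            self._data = OrderedDict()

        def get(self, key):
            if key not in self._data:
                return None
            self._data.move_to_end(key)         # mark as most recently used
            return self._data[key]

        def put(self, key, value):
            self._data[key] = value
            self._data.move_to_end(key)
            if len(self._data) > self.maxsize:
                self._data.popitem(last=False)  # evict least recently used

    cache = LRUCache(maxsize=2)
    cache.put("home", "<html>home</html>")
    cache.put("cart", "<html>cart</html>")
    cache.put("faq", "<html>faq</html>")        # evicts "home"
    print(cache.get("home"))                    # -> None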

Most organizations find that maintenance works best on a regular schedule, performed whether the environment has changed or not. Microsoft, for example, has Patch Tuesday: the second Tuesday of every month is dedicated to shipping the latest patches, as well as reviewing the code base to figure out how best to optimize as environments change.

4. Inability to scale. A lot of organizations develop sites that are not built to scale to the level they need, even though scalability is a fundamental part of the software development lifecycle. We talk to a lot of web developers whose strategy is simply to buy more resources than they need (hardware, software, bandwidth, CPU, memory, servers) and assume the surplus will absorb whatever heavy traffic comes down the pike.

A more practical strategy, and one that will save you money, is to take the time to develop an adaptive environment that you know can scale. Again, the sure-fire way to get there is to test, and test often, so you know every part of the stack can scale. And I mean test everything: the front- and back-end web servers, the databases, and the application servers. A bare-bones starting point is sketched below.
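Here is a bare-bones load-test sketch in Python using requests and a thread pool. The target URL and worker counts are placeholders; a real test would exercise every tier, run from outside your network, and ramp load gradually.

    # N concurrent workers hit a URL and report latency percentiles.
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "https://www.example.com/"   # placeholder target
    WORKERS = 20
    REQUEST_COUNT = 200

    def fetch(_):
        start = time.perf_counter()
        requests.get(URL, timeout=10)
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        latencies = sorted(pool.map(fetch, range(REQUEST_COUNT)))

    print(f"median: {statistics.median(latencies):.3f}s")
    print(f"p95:    {latencies[int(len(latencies) * 0.95)]:.3f}s")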

5. Fear of quality measurement. Some IT teams are afraid to shine a light on their own work for fear of exposing errors they made during development. This is a common internal political problem in most organizations.

The bottom line is that a slow website loses revenue, so the problem has to be confronted. When an IT team finds errors in its website after it goes live, it is often hesitant to draw attention to them right away, or even at all.

To be clear, I'm not saying internal IT teams can't detect errors or are incapable of fixing them. I'm saying that our customers are often relieved to get help from a third party that will identify errors objectively and is guaranteed to have the time and resources to fix them.

How much of this holiday season’s expected $586 billion will you be generating? Hopefully, a lot. Especially if you take the time now to pay attention to your website’s performance and do what it takes to make sure your customers get the best experience. Yes, the competition for customers will be fierce, but sticking to these five simple tips will keep your website up and running through January.


More Stories By Sven Hammar

Sven Hammar is Co-Founder and CEO of Apica. In 2005, he had the vision of starting a new SaaS company focused on application testing and performance. Today, that concept is Apica, the third IT company he has helped found in his career.

Before Apica, he co-founded and launched Celo Communications, a security company built around PKI (e-ID) solutions. He served as CEO for three years and helped grow the company from five people to 85 in two years. Right before co-founding Apica, he served as Vice President of Marketing, Bank and Finance, at the security company Gemplus (GEMP).

Sven received his Master of Science in industrial economics from the Institute of Technology (LitH) at Linköping University. When not working, you can find Sven golfing, working out, or spending time with family and friends.
