
Cloud-Based Load Simulation Platform

Gain cost and competitive advantages by leveraging cloud infrastructure

Harness the power of the cloud to gain cost and competitive advantages: leverage cloud infrastructure to simulate high volumes of user load and the complex processing needed for near real-time user simulation.

It has become the norm for enterprises to roll out systems that cater to a global market of users. While building these highly scalable, high-performing systems is a complex exercise, it is equally challenging to test them effectively by simulating near real-time user load and volume. This simulation has to address aspects such as geographic distribution, network bandwidth, transaction volume, and combinations of different personas and use cases.

This article provides an approach and a reference framework for building near real-time, high-volume load simulation to test large-scale, high-performance services and solutions. The suggested framework can leverage any public cloud infrastructure and any robust load testing tool of an enterprise's choice to meet the specific needs of the target application or service. The approach can be applied to performance testing and capacity planning of both on-premise and cloud-deployed services, cutting testing cost and beta test duration while achieving more accurate capacity planning and better performance.

Testing an enterprise application is an important part of the software development life cycle, and so is testing its deployment environment. An application has to be tested for correctness, completeness, security and quality. It also has to be tested for performance under varied load, including the point at which it fails, so that it can be continually improved. In software engineering terms, performance testing determines system behaviour with respect to responsiveness, stability, scalability, reliability and resource usage under a particular workload [1].

While developing and testing these complex systems poses enormous challenges in itself, it is also important to economize on capital costs, operational costs and time to market in order to beat the competition. Beyond cost, key service criteria such as network, compute and storage resource usage, and security, should be considered with equal priority. Cloud computing, with its near-unlimited compute and storage resources, elasticity, and data centers spread across the globe, provides an ideal infrastructure for enterprise applications that need additional resources on the service hosting side as well as just-in-time access to large amounts of infrastructure on the testing side [6].

Performance failures result in damaged customer relations, poor user productivity, lost revenue, cost overruns due to rework and redesign/tuning, and missed market windows [2]. For example, Aberdeen Research conducted a poll in which nearly 60% of organizations said they were not happy with the performance of business-critical applications, and respondents reported that application performance issues were costing them 9% of corporate revenue [3]. To address this, it is important to ensure that systems being rolled out are ready to handle real-time loads and scale to the proportions expected of them. This scale cannot be simulated accurately with existing methodologies using one's own data center or lab, or rented data centers/labs. By leveraging the capabilities of a cloud service provider together with a cloud-based testing framework, applications can be tested quickly and efficiently by deploying load injection test servers in the cloud, which simulate near-production scenarios much more closely.

Apart from quality of testing and meeting performance requirements, enterprises are looking to optimize testing cost [7]. Infrastructure cost is one of the major components of testing cost, and it can be reduced and optimized by leveraging cloud infrastructure instead of a legacy data center or test lab. As Jeffrey Rayport and Andrew Heyward point out, cloud computing has the potential to produce "an explosion in creativity, diversity, and democratization predicated on creating ubiquitous access to high-powered computing resources." [5][7][8][10]

The subsequent sections describe each element of the cloud-based load simulation framework, including its function and role. Architectural details on the implementation of the framework and its interaction with core web services components are also described. Finally, details are provided on what users new to cloud-based load simulation need in order to gain real benefit from this solution.

Current Solution and its Limitations

High-load simulation testing demands a large amount of infrastructure to simulate the load. This infrastructure is needed for a prolonged duration, throughout the performance testing phase and for each release cycle. [5][8][9]

Currently this infrastructure is provisioned within an enterprise's own data centers/labs or from partners' data centers/labs, and considerable effort and time are spent on procurement, setup and configuration. Testers use suitable open-source or commercial load testing tools to simulate the load.

There are also options to partner with testing service providers who offer high-volume load testing (load testing services in a SaaS model). But this option is neither cost effective (compared to cloud providers' base infrastructure cost) nor flexible enough for the customization enterprises need, as these services primarily target only web applications.

  • Data centers that currently host dedicated servers or shared hosting for test applications lack a governance mechanism for the infrastructure resources used or the services opted for. Most of the time, the complete set of resources requested by an enterprise customer for deploying their test applications is allocated, and the service provider charges for all allocated resources whether or not they are fully utilized. [9]
  • Testing an application is a time-bound activity. Once started, a series of tests is run and test reports are generated to understand the health and performance of the application. The infrastructure needs of performance testing fluctuate, and current solutions lack the elasticity and flexibility to set up infrastructure quickly and optimally according to usage. [8][18] This limits the ability of current options to simulate truly high load, which is what identifies real product issues in advance.
  • Load testing servers are usually in the same location as the test application. This does not simulate the real-world scenario in which user requests originate from different geographic locations/data centers and user behaviour has to be analysed from those locations by capturing the application's client-side performance metrics.

Existing load testing tools on the market are complex and not flexible enough to adopt cloud infrastructure easily. Integrated real-time testing of applications in the cloud as a unified capability is missing from many service providers, as most commercial products target large enterprises with limited functionality. What is needed is an integrated, cloud-based, real-time testing solution with an automated management framework offering a high level of granularity, customization, near real-time testing, and the flexibility to support complex scenarios. [9][10][11][13][14][17]

Solution Approach - Platform for Real time testing
A cloud-based load simulation platform enables simulation of near real-time loads in a manner that is economical, quick to start and deploy, and able to satisfy the diverse load and stress testing needs of small, medium or very large user-base services and applications. The platform can be leveraged irrespective of the hosting model of the Application Under Test (AUT), i.e., whether the AUT is hosted on-premise or in the cloud. The high-level features of the proposed cloud-based load simulation platform are:

  • Real-time testing of applications using cloud infrastructure, configurable to meet the needs of a specific application
  • Setting up and configuring the load test infrastructure in data centers across multiple geographies (public or private clouds) and measuring performance metrics (e.g., response time, failure rate) while simulating close-to-production scenarios
  • A unified management platform to provision, configure, execute, monitor and operate multiple cloud service providers
  • Simulating load generation for near real-time production scenarios across various platforms, technologies, geographic locations, network bandwidths and Internet backbones
  • Simulating high user volumes with configurable usage patterns and use case combinations
  • Optimizing load test infrastructure cost by leveraging usage-based/on-demand cloud infrastructure
  • Built-in components and reports for analyzing and monitoring server-side and client-side performance parameters

Note: The diagram depicts a sample setup. Actual infrastructure can be much more granular depending on the number of cloud providers chosen.

Core Components of the Platform
The implementation of a cloud-based testing solution involves four components: a compute service, a queuing service, a storage system, and a comprehensive framework to interconnect these components and ensure proper message flow. To demonstrate this platform, the capabilities of cloud service providers' services are used; any public, private or hybrid cloud can easily be plugged into the platform. The platform uses a queuing service (e.g., SQS in Amazon, Azure Queues in Azure), a storage service (e.g., S3 in Amazon, Azure Blobs in Azure) and a compute service (e.g., EC2 in Amazon, Azure Compute in Azure) to provide the components required for a successful implementation of the cloud-based testing solution.
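To make the provider-neutral design concrete, here is a minimal in-memory sketch of how the three services might be abstracted behind common interfaces so that any cloud's queue, storage and compute offerings can be plugged in. The class and method names are illustrative assumptions, not the platform's actual API, and the in-memory bodies stand in for real provider SDK calls.

```python
from collections import deque

class QueueService:
    """Provider-neutral queue facade (in-memory stand-in for SQS / Azure Queues)."""
    def __init__(self):
        self._q = deque()
    def send(self, message: dict) -> None:
        self._q.append(message)
    def receive(self):
        # Returns None when the queue is empty, mirroring a polling receive.
        return self._q.popleft() if self._q else None

class StorageService:
    """Provider-neutral object store facade (stand-in for S3 / Azure Blobs)."""
    def __init__(self):
        self._objects = {}
    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data
    def get(self, key: str) -> bytes:
        return self._objects[key]

class ComputeService:
    """Provider-neutral compute facade (stand-in for EC2 / Azure Compute)."""
    def __init__(self):
        self._instances = []
    def launch(self, image: str) -> int:
        self._instances.append(image)
        return len(self._instances) - 1   # instance id
    def terminate(self, instance_id: int) -> None:
        self._instances[instance_id] = None
```

A real implementation would back each facade with the chosen provider's SDK; the platform's orchestration code only ever talks to these interfaces.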

A brief overview of each component used by the platform follows; the interrelations of these components and of the cloud service providers in implementing cloud-based testing are detailed in the next section.

Compute Service: An elastic compute service provides compute capacity in the cloud, giving users the flexibility to easily process computationally intensive applications. The elasticity of this service is a great benefit in implementing scalable test servers, which expand and contract based on dynamic traffic patterns. [15][16][19][20][21]

Platform: The platform coordinates test jobs and test results as they are orchestrated through the compute, storage and messaging services. It interlinks all the core components and provides a mechanism to exploit the elasticity of the cloud depending on application needs. The platform's input queue is continually monitored, and additional test instances are launched to handle increased load. When the number of test jobs in the input queue decreases, these test instances are terminated, taking full advantage of "utility computing": pay only for the resources used. [12]

Queuing Service: A queue service offers reliable, scalable and easy retrieval of test jobs as they are passed from one test instance to another for processing. There can be specific limits on message size and storage duration depending on the cloud provider. Messages are queued and dequeued via simple API calls.

Storage Service: A storage service provides a storage mechanism for the test-server template, image and application configuration data. Individual storage objects are limited to a few GB in size, but there is no upper limit on the total volume of data that can be stored. Depending on the cloud service provider there may be a practical limit, but the repository can effectively be treated as a limitless storage bucket. [4][20]
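The per-object size limit combined with unbounded total volume means large test artifacts simply get split across multiple objects. A toy sketch of that splitting logic (real providers offer multipart upload APIs that handle this; the function here is purely illustrative):

```python
def split_into_objects(data: bytes, max_object_size: int) -> list:
    """Split a large test artifact into chunks that each fit within the
    provider's per-object size limit; total stored volume stays unbounded."""
    return [data[i:i + max_object_size]
            for i in range(0, len(data), max_object_size)]
```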

Anatomy of Testing an Application from the Cloud
This section describes the end-to-end flow of testing an application through the platform, detailing the management and orchestration components. The cloud-based testing platform manages the cloud infrastructure components, the test manager processes, and the test jobs. The target test application (the customer application) and its components, along with the test scripts, are shown in the figure below. The application can be hosted in the customer's enterprise data centers or provisioned in the cloud.

Prior to creating a test environment, the two queues indicated in Figure 4 (input and output) need to be created. This is done by running a test macro within the cloud management dashboard, which creates the queues automatically. The platform also provides predefined macros to assist with various system configuration tasks.

The test admin submits the test script details to the input job queue, along with the input files needed to test the target application. The test manager executes the script as a job via test server node(s) dedicated to the application, performs any post-processing operations required for the given test job, and submits the results (errors/reports) to the output queue for the test team's consumption. The steps below depict the chain of events performed by the platform.

  1. A test job message is sent to the cloud test platform with the test script and application details. All required test server and script details are uploaded to the appropriate location in the cloud repository.
  2. After the message is retrieved from the queue, the test manager checks its validity.
  3. The test manager creates the necessary test server environment and passes the required information to the setup manager.
  4. The setup manager processes the message and moves the required files from the repository to an input directory on the local file system.
  5. The setup manager creates the compute instance(s) of the test node(s) for the test environment with the application configuration details.
  6. The test server runs the test script on the application running inside the enterprise/cloud and places the results, or error files, in an output directory on the file system.
  7. The test manager moves the test results to the appropriate location in the repository for the test admin to download.
  8. On successful completion of the job, a test result is sent to the output queue along with the status.
  9. The test admin views the status and the test result.
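The chain of events above can be sketched as a single orchestration function. All names, message fields and data structures here are illustrative assumptions; lists and a dict stand in for the real queues and repository.

```python
def run_test_job(job: dict, input_queue: list, output_queue: list, repository: dict):
    """Sketch of steps 1-9 of the platform flow (structure is illustrative)."""
    # 1. Job message arrives with script and application details;
    #    scripts are assumed to be in the repository already.
    input_queue.append(job)

    # 2. Retrieve the message and validate it.
    message = input_queue.pop(0)
    if "script" not in message or "app" not in message:
        output_queue.append({"job": message, "status": "invalid"})
        return

    # 3-5. Test manager hands off to the setup manager, which stages the
    #      script files locally and launches the test node(s).
    script = repository[message["script"]]

    # 6. The test node runs the script against the application under test.
    result = f"ran {script} against {message['app']}"

    # 7-8. Results go to the repository; a status message to the output queue.
    repository[message["script"] + ".result"] = result
    output_queue.append({"job": message["script"], "status": "done"})
```

In the real platform these steps span several processes and machines; collapsing them into one function just makes the message flow easy to follow.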

Figure 4: Overall cloud based load simulation platform architecture in detail

Deploying the Test Servers in the Cloud

Additional details of the testing process and the flow of events between the enterprise, the cloud platform and the cloud fabric are given below.

This flow of events is repeated for every new test message that the platform has to process. The test script can be created for any load testing tool, such as JMeter, Grinder or LoadRunner. The platform can run single or multiple scripts in parallel to test the application. In addition, the platform provides a mechanism to run test scripts at pre-scheduled intervals as batch jobs.
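Running multiple scripts in parallel on a node can be sketched with a thread pool. The `run_script` body is a placeholder assumption; a real test node would shell out to the chosen tool (JMeter, Grinder, LoadRunner) and collect its output.

```python
from concurrent.futures import ThreadPoolExecutor

def run_script(script: str) -> str:
    # Placeholder: a real node would invoke the load testing tool here,
    # e.g. a JMeter command line, and capture its results.
    return f"{script}: completed"

def run_parallel(scripts: list) -> list:
    """Run several load-test scripts concurrently, preserving input order."""
    with ThreadPoolExecutor(max_workers=max(1, len(scripts))) as pool:
        return list(pool.map(run_script, scripts))
```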

The platform contains a scheduler whose function is to monitor the input queue and launch test manager instances to process jobs from the queue. Different scaling metrics determine the number of test manager instances to launch: if the number of test jobs in the queue grows, additional instances are launched to handle them. Within the cloud dashboard, the admin can specify that a new test manager instance should be launched for every N jobs, or the number can be determined by the complexity of the particular test and application scenario.
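The "one instance per N jobs" rule reduces to a simple ceiling division; the function name and signature below are illustrative, not the platform's API.

```python
import math

def managers_needed(jobs_in_queue: int, jobs_per_manager: int) -> int:
    """'Launch one new test manager instance for every N jobs' scaling rule,
    where N (jobs_per_manager) is the value the admin sets in the dashboard."""
    return math.ceil(jobs_in_queue / jobs_per_manager)
```

With N = 10, a queue of 25 jobs yields 3 test manager instances; an empty queue yields 0, so idle instances can be terminated.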

Once the scheduler has launched the required number of test manager instances, a call is made within the platform to initiate the allocation of server resources for testing. The setup manager sets up the required test servers in the compute environment and executes the test server installation scripts defined for the test environment. Prior to launching a test server, a server template is created for the setup manager indicating the test server details, such as the characteristics of the test server instance, the test image to use with the base operating system, the region to deploy to, and the number of test instances per availability zone to launch, along with application configuration information.

For every test application, the test server template can be configured manually or by selecting a pre-built template macro. Another key aspect of the server template is that it can drive the installation of the test server and any additional components needed to run the test. Any test-specific configuration code can be downloaded from a secure file share repository and installed on the instance, as specified in the test installation script, at the end of the instance's boot cycle.
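The fields a server template captures might look like the sketch below. The field names are assumptions based on the description above, not any provider's schema.

```python
from dataclasses import dataclass, field

@dataclass
class TestServerTemplate:
    """Illustrative server template for the setup manager."""
    instance_type: str          # characteristics of the test server instance
    image: str                  # test image with the base operating system
    region: str                 # region to deploy into
    instances_per_zone: int     # test instances per availability zone
    install_script: str = ""    # runs at the end of the instance's boot cycle
    app_config: dict = field(default_factory=dict)  # application configuration
```

A pre-built template macro would simply return a populated instance of such a structure for a known application type.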

Once the test server(s) are ready, the configuration scripts are run to build the required test environment. If the application under test (AUT) is in the cloud, its instances are created in the same way as the test server instances. If the AUT is in a remote/enterprise data center, the test server node(s) are configured with the application details. Test scripts are placed in the appropriate location so that each node can identify and run them. The test manager triggers the scripts on the test node(s) to perform the required test operations. Each test node is responsible for managing and executing a set of test scripts against the application and uploading the test results or error details to the repository.

The test server environment is deployed in the cloud service provider's Infrastructure-as-a-Service environment. While deploying, the platform specifies the number of servers needed, the application configuration information (start-up, test script, post-script, clean-up operations, etc.) and the connection details of the application under test. In this model IT administrators have the greatest degree of control and a familiar operating network topology. The platform handles elasticity by ensuring that the test servers and network elements are adequately provisioned, configured and connected in the specified topology; on-demand resource addition and removal are also provided. The IT administrator retains complete control over security, application usage and management.

Figure 5: Flow of events for deploying the test server(s) on the cloud infrastructure by the platform

End users, enterprise IT administrators, test admins, project team members and client teams interact with the platform through browser-based unified dashboards.

Testing discrete applications using the cloud platform not only hosts and runs the test infrastructure and test applications, but also provides a test bed that helps projects and products reduce the overhead of setting up a world-class test facility. Cloud infrastructure together with the cloud test platform reduces the time, effort and cost of setting up the various test environments. Automated provisioning of test infrastructure in the cloud makes it possible to achieve the instantaneous high scale needed for the target application, helping with more accurate capacity planning and improved user experience.

A cloud-based load simulation platform offers enterprises a full service catalogue for testing a range of real-time production scenarios. By provisioning and de-provisioning cloud infrastructure on demand, it shrinks test cycles from months to weeks, drastically cutting procurement and infrastructure setup time. Tools configurable via macros simplify the testing process, enable early-stage analysis of weak links and help ensure business continuity. The platform also reduces the complexity of using cloud infrastructure for developers and testers by providing these capabilities as features of the platform itself.


  1. Connie U Smith and Lloyd G Williams, Software Performance Engineering, http://www.springerlink.com/content/g311888355nh7120
  2. Connie U. Smith and Lloyd G. Williams, An SPE Approach, http://www.perfeng.com/papers/pdcp.pdf
  3. Aberdeen Research on performance of business critical applications http://www.aberdeen.com/Aberdeen-Library/5807/RA-application-performance-management.aspx
  4. Ryan Roop, Deliver cloud network control to the user, http://www.ibm.com/developerworks/cloud/library/cl-cloudvirtualnetwork/
  5. Qiyang Chen and Rubin Xin, Montclair State University, Montclair, NJ, USA, Optimizing Enterprise IT Infrastructure through Virtual Server Consolidation, http://informingscience.org/proceedings/InSITE2005/P07f19Chen.pdf
  6. Ljubomir Lazić and Nikos Mastorakis, Technical Faculty, University of Novi Pazar, Novi Pazar, Serbia, http://www.jameslewiscoleman.info/jlc_stuff/project_research/CostEffectiveSoftwareTestMetrics_2008.pdf
  7. Darrell M. West, Saving Money Through Cloud Computing
  8. Filippos I. Vokolos, Elaine J. Weyuker, AT&T Labs, Performance Testing of Software Systems, http://dl.acm.org/citation.cfm?id=287337
  9. Scott Tilley Florida Institute of Technology, 3rd International Workshop Software Testing in the Cloud (STITC 2011)
  10. Sidharth Subhash Ghag, Divya Sharma, Trupti Sarang, Infosys Limited, Software Validation of Applications Deployed on Windows Azure, http://www.infosys.com/cloud/resource-center/Documents/software-validation-applications.pdf
  11. Shyam Kumar Doddavula, Raghuvan Subramanian, Brijesh Deb, Infosys Limited, Cloud Computing: What Beyond Operational Efficiency?, http://www.infosys.com/cloud/resource-center/Documents/beyond-operational-efficiency.pdf
  12. Sumit Bose, Anjaneyulu Pasala, Dheepak RA, Sridhar Murthy, Ganesan Malaiyandisamy, Infosys Limited, SLA Management in Cloud Computing, Cloud Computing Principles and Paradigms 2011, pp. 413-436
  13. S. Bose and S. Sudarrajan, Optimizing migration of virtual machines across datacenters, in Proceeding of the 38th International Conference on Parallel Processing (ICPP) Workshops, Vienna, Austria, September 22-25 2009, pp. 306-313
  14. B. Van Halle, Business Rules Applied: Building Better Systems Using Business Rules Approach, John Wiley & Sons, Hoboken, NJ, 2002.
  15. Open Virtualization Format Specification, DMTF standard version 1.0.0, Doc. no.DSP0243, February 2009, http://www.dmtf.org/standards/published_documents/DSP0243_1.0.0.pdf, accessed on April 16, 2010.
  16. D. Mensce and V. Almeida, Capacity Planning for Web Performance: Metrics, Models and Methods, Prentice-Hall, Englewood Cliffs, NJ, 1998.
  17. E. de Souza E. Silva and M. Gerla, Load balancing in distributed systems with multiple classes and site constraints, in Proceedings of the 10th International Symposium on Computer Performance Modeling, Measurement and Evaluation, Paris, France, December 19-21 1984, pp. 17-33.
  18. J. Carlstrom and R. Rom, Application-aware admission control and scheduling in web servers, in Proceedings of the 21st IEEE Infocom, New York, June 23-27 2002, pp. 824-831.
  19. S. Bose, N. Tiwari, A. Pasala, and S. Padmanabhuni, SLA Aware "on-boarding" of applications on the cloud, Infosys Lab briefings, 7(7):27-32, 2009.
  20. Amazon, Amazon Elastic Compute Cloud. http://aws.amazon.com/ec2
  21. Microsoft, Azure, http://www.windowsazure.com/en-us/

More Stories By Sridhar Murthy

Krishna Markande is a Principal Architect with Infosys, working in the Engineering Services unit. He has around 14 years of software industry experience and has been involved in architecting and designing software solutions for customers of varying scale and size. He can be contacted at [email protected] Sridhar Murthy is a Senior Architect with Infosys, also in the Engineering Services unit. He has around 13 years of software industry experience and has been involved in enterprise architecture for customers in cloud computing and virtualization. He can be contacted at [email protected]
