By Peter Silva | July 28, 2014 02:30 PM EDT
Application delivery is always evolving. Initially, applications were delivered out of a physical data center: dedicated raised floor at the corporate headquarters, leased space rented from a web hosting vendor during the late 1990s to early 2000s, or some combination of both. Soon, global organizations and ecommerce sites alike began distributing their applications across multiple physical data centers to address geo-location, redundancy and disaster recovery challenges. This was an expensive endeavor back then, even before adding the networking, bandwidth and leased line costs.
When server virtualization emerged and organizations gained the ability to divide resources among different applications, content delivery was no longer tethered 1:1 to a physical device. It could live anywhere. With virtualization technology as the driving force, the cloud computing industry formed and offered yet another avenue for delivering applications.
Application delivery evolved again.
As cloud adoption grew, along with the software, platform and infrastructure services (SaaS, PaaS, IaaS) enabling it, organizations were able to quickly, easily and cost-effectively distribute their resources around the globe. This allows organizations to place content closer to the user depending on location, and provides some fault tolerance in case of a data center outage.
Today, a mixture of options is available for delivering critical applications. Many organizations have on-premises data center facilities they own, some leased resources at dedicated locations, and perhaps some cloud services as well. To achieve or even maintain continuous application availability and keep up with the pace of new application rollouts, many organizations are looking to expand their data center options, including cloud. This is important since, according to IDC, 84% of data centers had issues with power, space and cooling capacity, assets, and uptime that negatively impacted business operations. Such issues lead to delays in application rollouts, disrupted customer service, or even unplanned expenses to remedy the situation.
Operating in multiple data centers is no easy task, however, and new data center deployments or even integrating existing data centers can cause havoc for visitors, employees and IT staff alike. Critical areas of attention include public web properties, employee access to corporate resources and communication tools like email along with the security and required back end data replication for content consistency. On top of that, maintaining control over critical systems spread around the globe is always a major concern.
A combination of BIG-IP technologies provides organizations with global application services for DNS, federated identity, security, SSL offload, optimization, and application health/availability, creating an intelligent, cost-effective, resilient global application delivery infrastructure across a hybrid mix of data centers. Organizations can minimize downtime, ensure continuous availability, and scale on demand when needed.
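The core of the DNS and health/availability services described above is global server load balancing (GSLB): answer each DNS query with the address of a healthy, preferably nearby, data center, and fail over to another site when a health monitor reports an outage. The following is a minimal, hypothetical sketch of that decision logic; the names, regions and addresses are illustrative assumptions, not BIG-IP APIs or configuration.

```python
# Hypothetical sketch of GSLB-style DNS resolution: prefer a healthy
# data center in the client's region, fail over to any healthy site,
# and answer nothing if every site is down. Illustrative only.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class DataCenter:
    name: str        # site identifier, e.g. "nyc"
    region: str      # coarse client-proximity bucket, e.g. "us", "eu"
    vip: str         # virtual IP handed back to clients in the DNS answer
    healthy: bool    # latest result from the application health monitor


def resolve(client_region: str, sites: List[DataCenter]) -> Optional[str]:
    """Return the VIP a GSLB-style DNS responder would answer with."""
    healthy = [dc for dc in sites if dc.healthy]
    if not healthy:
        return None  # total outage: no sensible answer to give
    # Prefer a healthy data center in the client's own region...
    local = [dc for dc in healthy if dc.region == client_region]
    # ...otherwise fail over to any healthy site elsewhere.
    return (local or healthy)[0].vip
```

For example, a European client whose local site's monitor has failed would transparently receive the VIP of a healthy site in another region; real products layer persistence, weighted distribution and topology records on top of this basic decision.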
Simplify, secure and consolidate across multiple data centers while mitigating impact to users or applications.