
Cloud and Things and Big Operational Data

Software-defined architectures are critical for achieving the right mix of efficiency and scale needed to meet the challenges that will come with the Internet of Things

If you've been living under a rock (or a rack in the data center), you might not have noticed the explosive growth of technologies and architectures designed to address emerging challenges with scaling data centers. Whether considering the operational aspects (DevOps) or the technical components (SDN, SDDC, cloud), software-defined architectures are the future enabler of business, fueled by the increasing demand for applications.

The Internet of Things is only going to make that even more challenging as businesses turn to new business models and services fueled by a converging digital-physical world. Applications, whether focused on licensing, provisioning, managing or storing data for these "things," will increase the already significant burden on IT as a whole. The inability to scale from an operational perspective is really what software-defined architectures are attempting to solve by operationalizing the network and shifting the burden of provisioning and management from people to technology.

But it's more than just API-enabling switches, routers, ADCs and other infrastructure components. While this is a necessary capability to ensure the operational scalability of modern data centers, what's really necessary to achieve the next "level" is collaboration.
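
To make that concrete, here is a minimal sketch of what "API-enabling" an infrastructure component looks like from an automation system's point of view. The endpoint, resource path, payload and token below are hypothetical stand-ins for whatever management API a given ADC, switch or controller actually exposes.

```python
# A minimal sketch of programmatic infrastructure provisioning. The endpoint,
# resource path, payload and token are hypothetical; they stand in for whatever
# management API your ADC, switch or controller actually exposes.
import requests

ADC_API = "https://adc.example.com/api/v1"   # hypothetical management endpoint
TOKEN = "operator-issued-api-token"          # placeholder credential

def add_pool_member(pool: str, member_ip: str, port: int) -> None:
    """Programmatically add a newly provisioned instance to a load-balancing pool."""
    resp = requests.post(
        f"{ADC_API}/pools/{pool}/members",
        json={"address": member_ip, "port": port},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()

# An automated provisioning workflow would call this the moment a new workload
# comes online, e.g. add_pool_member("web-pool", "10.0.4.17", 8080), rather
# than waiting for a person to update the device by hand.
```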

That means infrastructure integration.

It is one thing to be able to automatically provision the network, compute and storage resources necessary to scale to meet the availability and performance expectations of users and businesses alike. But that's the last step in the process. Actually performing the provisioning is the action taken only after it's been determined not only that it's necessary, but where it's necessary.

Workloads (and I hate that term, but it's at least somewhat universally understood, so I'll acquiesce to using it for now) have varying characteristics with respect to the compute, network and storage they require to perform optimally. That means provisioning a "workload" in a VM with characteristics that do not match those requirements is necessarily going to impact its performance or load capacity. If one is making assumptions regarding the number of users a given application can support, and it's provisioned with a resource profile that undermines that support, the result can be degraded performance or availability.

What that means is the systems responsible for provisioning "workloads" must be able to match resource requirements with the workload, as well as understand current (and predicted) demand in terms of users, connections and network consumption rates.
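
As a rough illustration of that matching step, the sketch below picks a resource profile that satisfies both the workload's stated requirements and the demand the monitoring systems currently predict. The profile names and numbers are illustrative, not any provider's actual instance types.

```python
# A sketch of the matching step: choose a resource profile whose characteristics
# satisfy both the workload's requirements and the demand currently predicted by
# monitoring. Profile names and numbers are illustrative, not real instance types.
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    vcpus: int
    mem_gb: int
    net_mbps: int

# Ordered smallest to largest so the first match is the cheapest fit.
PROFILES = [
    Profile("small", 2, 4, 500),
    Profile("medium", 4, 8, 1000),
    Profile("large", 8, 16, 2000),
]

def pick_profile(required_vcpus: int, required_mem_gb: int,
                 predicted_users: int, mbps_per_user: float = 2.0) -> Profile:
    needed_net = predicted_users * mbps_per_user
    for p in PROFILES:
        if (p.vcpus >= required_vcpus and p.mem_gb >= required_mem_gb
                and p.net_mbps >= needed_net):
            return p
    raise RuntimeError("no single profile fits; scale out rather than up")

print(pick_profile(required_vcpus=4, required_mem_gb=8, predicted_users=400))
```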

Data is the key. Measurements of performance, rates of queries, number of users, and the resulting impact on the workload must be captured. But more than that, the data must be shared with the systems responsible for provisioning and scaling the workloads.
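
In practice that feedback loop starts with something as simple as the snapshot below: capture the operational measurements named above and hand them to whatever system makes the scaling decisions. The publish target is a stand-in; in a real deployment it might be a message queue, a time-series database or the orchestrator's own API.

```python
# A sketch of the feedback loop: capture the operational measurements named above
# and hand them to whatever system makes scaling decisions. The publish target is
# a stand-in; in practice it might be a message queue, a time-series database, or
# the orchestrator's own API.
import json
import time

def snapshot(workload_id: str, active_users: int, queries_per_sec: float,
             p95_latency_ms: float, net_mbps: float) -> dict:
    return {
        "workload": workload_id,
        "ts": time.time(),
        "active_users": active_users,
        "queries_per_sec": queries_per_sec,
        "p95_latency_ms": p95_latency_ms,
        "net_mbps": net_mbps,
    }

def publish(metric: dict) -> None:
    # Placeholder transport: a real system would POST this to the orchestrator
    # or push it onto a queue the provisioning system consumes.
    print(json.dumps(metric))

publish(snapshot("orders-api", active_users=1200, queries_per_sec=340,
                 p95_latency_ms=180, net_mbps=220))
```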

Location Matters

The idea that we should be able to share data across systems and services to ensure the best fit for provisioning and the seamless scale demanded of modern architectures is not a new one. A 2007 SIGMOD paper, "Automated and On-Demand Provisioning of Virtual Machines for Database Applications," as well as a 2010 IEEE paper, "Dynamic Provisioning Modeling for Virtualized Multi-tier Applications in Cloud Data Center," discuss the need for such provisioning models, and the resulting architectures rely heavily on the collaboration, through integration, of the data center components responsible for measuring, managing and provisioning workloads in cloud computing environments.

The location of a workload, you see, matters. Not location as in "on-premise" or "off-premise," though that certainly has an impact, but location within the data center matters to the overall performance and scale of the applications composed from those workloads. The location of a specific workload relative to other components impacts availability and traffic patterns, and can result in a higher incidence of north-south or east-west congestion in the network. The location of application workloads can cause hairpinning (or tromboning, if you prefer) of traffic that may degrade performance or introduce variable latency that degrades the quality of video or audio content.
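
One way to reason about that is a simple topology-aware placement score, sketched below: prefer locations "close" to the components a workload talks to most, so heavy flows stay within a rack instead of tromboning across the fabric. The hop counts and traffic figures are assumptions for illustration; a real scheduler would learn them from the network itself.

```python
# A sketch of topology-aware placement: prefer locations "close" to the
# components a workload talks to most, so heavy flows stay local instead of
# tromboning across the data center. Hop counts and traffic volumes are
# assumptions for illustration; a real scheduler would read them from the fabric.
HOPS = {  # network hops between racks (illustrative, symmetric)
    ("rack1", "rack1"): 0, ("rack1", "rack2"): 2, ("rack1", "rack3"): 4,
    ("rack2", "rack2"): 0, ("rack2", "rack3"): 2, ("rack3", "rack3"): 0,
}

def hops(a: str, b: str) -> int:
    return HOPS.get((a, b), HOPS.get((b, a), 0))

def placement_cost(candidate: str, peers: list) -> float:
    """peers: (rack, expected_mbps) pairs for components this workload talks to."""
    return sum(hops(candidate, rack) * mbps for rack, mbps in peers)

peers = [("rack1", 400), ("rack2", 150)]  # where the workload's peers already live
best = min(["rack1", "rack2", "rack3"], key=lambda r: placement_cost(r, peers))
print(best)  # rack1 keeps the heaviest flow within a single rack
```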

Location matters a great deal, and yet the very premise of cloud is to abstract topology (location) from the equation and remove it from consideration as part of the provisioning process.

Early in the life of public cloud there was concern over not knowing who your "neighbor tenant" might be on a given physical server, because there was little transparency into the decision-making process that governs provisioning of instances in public cloud environments. Those decisions appeared, and still appear, to be made based largely on your preference for the "size" of an instance. Obviously, Amazon or Azure or Google is not going to provision a "large" instance where only a "small" will fit.

But the question of where, topologically, that "large" instance might end up residing is still unanswered. It might be two hops away or one virtual hop away. You can't know whether your entire application, all its components, has been launched on the same physical server or not. And that can have dire consequences in a model that's "built to fail," because if all your eggs are in one basket and the basket breaks... well, minutes of downtime is still downtime.
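
Here is a minimal sketch of that "eggs in one basket" check, assuming you can discover (or are told) which host each instance landed on: verify that the application's components don't all share a single physical host. The placement map is hypothetical; where a provider exposes placement or anti-affinity visibility at all, a real check would query that instead.

```python
# A sketch of the "eggs in one basket" check: verify that an application's
# components don't all share one physical host. The placement map is
# hypothetical; a real check would query whatever placement or anti-affinity
# visibility the provider exposes, if any.
from collections import defaultdict

placements = {            # instance -> physical host (illustrative)
    "web-1": "host-a", "web-2": "host-a",
    "db-1":  "host-b", "db-2":  "host-a",
}

by_host = defaultdict(list)
for instance, host in placements.items():
    by_host[host].append(instance)

for host, instances in by_host.items():
    if len(instances) > 1:
        print(f"warning: {instances} share {host}; one failure takes them all down")
```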

The next evolutionary step in cloud (besides the emergence of much-needed value-added services) is more intelligent provisioning, driven by better feedback loops regarding the relationship between the combination of compute, network and storage resources and the application. Big (Operational) Data is going to be as important to IT as Big (Customer) Data is to the business, as more and more applications and services become business-critical.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
