By Jörg-Peter Elbers, Achim Autenrieth
February 1, 2013 10:30 AM EST
Using OpenFlow to extend software-defined networking (SDN) to the optical layer is a compelling prospect for enterprises seeking to achieve joint orchestration of information technology (IT) and network resources for cloud services, to virtualize the network and to more simply manage interconnections of distributed data centers that require synchronization.
Today's fragmented, specialized management and control approaches are fraught with proprietary protocols and management systems, limited scalability and configuration complexity. With an OpenFlow-enabled transport network, an enterprise could instead engage in a kind of "one-stop shopping" for control of cloud computing, storage and networking resources - all via one unified application programming interface (API). The benefits could include significantly simplified configuration, management and scaling of large-scale enterprise infrastructures through integration and automation.
That's a new role for OpenFlow, demanding strategic tailoring of the protocol for the optical transport domain. Demonstration and development of the capability are closely watched by enterprises that are under incessant pressure to cost-effectively meet ever-increasing demand for bandwidth and services.
Virtualization's New Frontier
Servers and storage have been virtualized in the enterprise; the next great frontier for virtualization is the network.
Because of the substantial cost savings and performance benefits that it can deliver, SDN-based virtualization is of prime interest to enterprises for a wide range of applications. OpenFlow has emerged as one of the most popular SDN protocols. Web 2.0 network operators and national research and education network (NREN) operators, especially, like OpenFlow.
With OpenFlow, an abstraction of the network's packet switches can be generated and flow-forwarding behavior can be specified across an infrastructure via an external controller. Operations can be substantially automated and streamlined by breaking up the monolithically integrated control and forwarding paradigm of today's switches.
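The match-action model behind that abstraction can be sketched in a few lines. This is an illustrative toy, not any controller's real API: a flow table is a priority-ordered list of entries, each pairing header-field matches with forwarding actions, and the external controller programs forwarding simply by installing entries.

```python
from dataclasses import dataclass

# Hypothetical sketch of OpenFlow's match-action abstraction.
# An external controller populates the flow table; the switch only
# matches headers against entries and applies the stored actions.

@dataclass
class FlowEntry:
    priority: int
    match: dict    # header fields to match, e.g. {"ip_dst": "10.0.0.2"}
    actions: list  # forwarding actions, e.g. ["output:2"]

def lookup(flow_table, packet_headers):
    """Return the actions of the highest-priority matching entry,
    or None on a table miss (which would go to the controller)."""
    for entry in sorted(flow_table, key=lambda e: -e.priority):
        if all(packet_headers.get(k) == v for k, v in entry.match.items()):
            return entry.actions
    return None

# The controller "programs" the switch by installing entries:
table = [
    FlowEntry(priority=10, match={"ip_dst": "10.0.0.2"}, actions=["output:2"]),
    FlowEntry(priority=1,  match={},                     actions=["drop"]),
]

print(lookup(table, {"ip_dst": "10.0.0.2"}))  # ['output:2']
print(lookup(table, {"ip_dst": "10.0.0.9"}))  # ['drop'] - wildcard rule
```

The key point of the model is visible even at this scale: forwarding behavior lives in data installed from outside, not in logic baked into the switch.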
Using OpenFlow, could SDN be extended across layers to create a scenario in which - with a single instruction - the controller could jointly create virtual machines and enable enterprise network administrators to reserve computing, networking and storage resources in one stroke?
It is an obviously compelling notion for enterprise network staffs who desperately need to simplify operations. The problem, however, is that OpenFlow deployment and development have largely been limited to the electrical packet layer, whereas interconnection beyond the data center typically relies on optical transport technology. Furthermore, the optical domain is where things get hazy for many enterprise network administrators. Their comfort zone tends to be packets - not wavelengths and optics.
The result is that cloud computing is currently decoupled from transport-network control and operation. In today's cloud implementations, the network exists as a static, separate entity: there is no interaction between cloud computing processes and the statically configured network, and the two effectively speak different languages.
Converging cloud computing and networking requires a more dynamic mode of control and operation, but enterprises largely have judged integrating management of the optical network into the data-center environment to be too complex.
To extend OpenFlow from its established role in the electrical packet domain to the optical layer (and, thereby, extend SDN across multiple network layers), a range of optical-specific concerns must be tackled.
Crafting and Experimenting
Within the European Commission's FP7 ICT Work Programme is a collaborative project, "OpenFlow in Europe - Linking Infrastructure and Applications" (OFELIA), that provides researchers with a test bed in which to experiment with SDN applications and virtual multi-layer networks over shared network infrastructure.
Via standardized, secure interfaces through GÉANT, a high-bandwidth interconnection of European R&E networks, researchers develop, run and control experiments using packet switches and application servers at the University of Essex and seven other test-bed facilities throughout Europe.
OFELIA hosts a prototype implementation of dynamic control of wavelength-switched optical networks via OpenFlow. Bandwidth, latency and power consumption can be adjusted to meet the specific requirements of specific applications.
To make it happen, key OpenFlow additions had to be engineered for the protocol to effectively control the optical domain. Optical-specific considerations were required to adapt OpenFlow from the packet world. In an electrical switch, a packet can travel from any ingress port to any egress port, and in a time-division multiplexing (TDM) device a signal can be switched from any time slot to any other. The optical domain, however, introduces strict switching constraints with regard to wavelength continuity, optical impairments, optical power leveling on the line side, and so on.
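The wavelength-continuity constraint is the clearest contrast with the packet world, and it can be shown in a few lines. The sketch below is illustrative (the hop names and channel sets are invented, not drawn from the OFELIA test bed): a transparent lightpath must use the same wavelength on every hop, so the usable set is the intersection of the free wavelengths along the route.

```python
# Illustrative wavelength-continuity check for a transparent optical
# path. In a packet or TDM switch, any ingress can reach any egress;
# here, a single wavelength must be free on EVERY hop of the route.

def continuous_wavelengths(path_free_lambdas):
    """Intersect each hop's free-wavelength set; the result is the
    set of wavelengths usable end to end without conversion."""
    usable = set(path_free_lambdas[0])
    for free in path_free_lambdas[1:]:
        usable &= set(free)
    return usable

# Free channels per hop along a candidate route (hypothetical data):
hops = [
    {"ch31", "ch32", "ch35"},  # hop A -> B
    {"ch32", "ch33", "ch35"},  # hop B -> C
    {"ch35", "ch36"},          # hop C -> D
]

print(sorted(continuous_wavelengths(hops)))  # ['ch35'] - only ch35 works end to end
```

A controller extended for the optical layer must solve this kind of constraint (plus impairments and power leveling) before it can install a "flow", whereas a packet flow entry needs no such feasibility check.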
Augmenting OpenFlow to address those optical-specific concerns has resulted in an OFELIA prototype that demonstrates a truly transparent, wavelength-switched optical network. The research community is able to experiment with the capability via a flexible, Web-services approach; commercial enterprises, too, are interested in trialing the capability for their specific applications and environments.
OpenFlow is not sufficient in itself to enable the complete transformation that enterprise network administrators envision, to SDN-enable virtualization across all layers of their infrastructures. The additions to OpenFlow that were engineered for the OFELIA test bed provide only the bridge between the optical layer and packet layer and allow integration into a cloud operating system such as OpenStack.
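What that bridge enables is joint provisioning of compute and connectivity behind one call. The sketch below is purely hypothetical - none of these class or method names come from OpenFlow, OpenStack or the OFELIA prototype - but it illustrates the "one stroke" pattern: a cloud operating system asks an orchestrator for a VM and an inter-data-center path together.

```python
# Hypothetical orchestration sketch: reserve compute and an
# inter-data-center transport path in a single operation.
# All names here are illustrative stand-ins, not real APIs.

class FakeCompute:
    """Stand-in for a cloud OS compute service."""
    def create_vm(self, spec):
        return {"id": "vm-1", **spec}

class FakeSDN:
    """Stand-in for a controller spanning packet and optical layers."""
    def create_path(self, src, dst, bandwidth_gbps):
        return {"src": src, "dst": dst, "gbps": bandwidth_gbps,
                "wavelength": "ch35"}  # chosen by the optical layer

class CloudOrchestrator:
    def __init__(self, compute, sdn_controller):
        self.compute = compute
        self.sdn = sdn_controller

    def provision(self, vm_spec, src_dc, dst_dc, gbps):
        """One stroke: create the VM and reserve the network path."""
        vm = self.compute.create_vm(vm_spec)
        path = self.sdn.create_path(src_dc, dst_dc, bandwidth_gbps=gbps)
        return {"vm": vm, "path": path}

orch = CloudOrchestrator(FakeCompute(), FakeSDN())
result = orch.provision({"cpus": 4}, "DC-A", "DC-B", gbps=10)
print(result["path"]["wavelength"])  # the optical detail stays hidden
```

The design point is that the administrator never touches the wavelength: the optical layer is encapsulated behind the same orchestration interface as compute and packet networking.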
But that is one very important bridge, and the promise for enterprise network administrators is considerable. The OpenFlow innovation could seamlessly integrate the optical transport network under a common management umbrella with an enterprise's routers and switches - all via one familiar interface. Management of the optical domain could become as simple as management of Ethernet boxes - an encapsulation of virtual resources that enterprise network administrators could handle with familiar tools and processes. That's a significant breakthrough. With many enterprises already considering OpenFlow-based control for their packet networks, extending the framework to the wavelength-switched optical layer would be a natural migration.
Virtualization has developed over phases in enterprise networking. First, resource virtualization inside data centers delivered economic savings through enhanced utilization, scalability and redundancy. Data-center virtualization conveyed greater infrastructure flexibility, higher availability and better workload balancing. The next frontier, network virtualization, promises true platform agility and, with it, a host of long-sought-after enterprise capabilities: capacity on-demand, adaptive infrastructure and dynamic service automation, among them. Adapting OpenFlow and extending SDN to the optical transport domain comprise an important step toward that vision.