Four Attributes of Openness in Modern Communications


Openness is one of the most-cited advantages of cloud-based applications, Software-Defined Networking (SDN) and Network Functions Virtualization (NFV).  But what does open really mean in this context?  I posit that four key attributes define openness: documented, standardized, open sourced and inclusive. Any claim of openness by a supplier should be evaluated against these criteria, if for no other reason than to clarify the supplier's position.

A Beginning of Openness – Documented and Supported

The narrowest and most basic criterion for openness is that the interfaces to a software application are well documented and accessible to someone other than the author or publisher.  For example, a proprietary application like an email program might have application programming interfaces (APIs) that allow another application to send an email.  This aspect of openness usually arises when the popularity of a proprietary program creates a desire to interact with it programmatically. A good example of this type of openness is the Microsoft Outlook Messaging API (MAPI).  This API is documented on the Microsoft web site, including a getting-started guide, a concepts document, sample programs and a programming reference.  Many software suppliers provide this kind of open API as a means to drive adoption of their software. However, the API is subject to change by the supplier, and users of such APIs are then stuck with ongoing maintenance challenges that may affect compatibility.

Moving Forward – Standardized Openness

The desire for stable multi-vendor interoperability often leads to a consortium or standards body defining a standardized interface or protocol.  This effort may take years, but if successful it drastically increases the likelihood of smooth interoperability among different implementations of the interface.  Such interoperability can also foster an ecosystem of compatible and complementary programs. An example of a standardized protocol is the Simple Mail Transfer Protocol (SMTP).  SMTP was originally defined by the IETF in RFC 821 as a standard for e-mail transmission across Internet Protocol (IP) networks.  SMTP has been adopted not only by standards-based email programs, but also by proprietary email programs such as Microsoft Outlook. Because SMTP is standardized by the IETF, changes to it take place in a controlled and technically sound manner.
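Because SMTP is a published standard, any client can interoperate with any compliant server. As a minimal sketch using Python's standard library, the snippet below composes a standards-conformant message; the actual delivery call is shown in a comment because the server name `mail.example.com` is a placeholder, not a real host.

```python
from email.message import EmailMessage

def build_message(sender, recipient, subject, body):
    # EmailMessage handles the standard header formatting for us,
    # so the result is deliverable by any SMTP implementation.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

msg = build_message(
    "alice@example.com", "bob@example.com",
    "Interop test",
    "SMTP lets any compliant client talk to any compliant server.")

# Delivery would be one call against any standards-compliant server
# (mail.example.com is a placeholder host):
#
#     import smtplib
#     with smtplib.SMTP("mail.example.com") as server:
#         server.send_message(msg)

print(msg["Subject"])  # → Interop test
```

The point of the sketch is not the library itself but the contract: because the wire protocol is fixed by the IETF, the same message can be handed to any vendor's mail server without modification.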

A Milestone – Open Source

Driven by market need, industry momentum or the success of a standardized protocol or interface, a group of developers may choose to create an open source implementation of a design in order to multiply their collective efforts.  Such an implementation has the advantages of being free (usually) and suitable for augmentation and/or incorporation into larger projects.  Software developers also cite an added benefit of open source: an improved ability to understand the operation and interfaces of a piece of software, along with the possibility of looking inside the software to analyze and fix issues. There are numerous examples of open source software, but the most important is Linux and its family of drivers, utilities and applications.  The popularity of Linux has driven a virtuous cycle in which innovation spurs increased usage, which in turn spurs further innovation.

The End Goal – All of the Above, and Adoption by an Inclusive Ecosystem

The proof of success for any open system is its adoption and use by an ecosystem of suppliers that seeks to include, rather than isolate or exclude, participants.  Again, Linux is a prime example. Thousands of products, components and applications have been developed around the Linux operating system. Its success has been a key enabler of cloud-based computing, which in turn has led to other developments such as OpenStack and OpenFlow. One might argue that Linux is open but not standardized.  However, its popularity and open source nature have led to de facto standardization under the auspices of linux.org.  For most people, the most important factor in standardization is adoption and control by a neutral organization, rather than the pedigree of the organization.

What does all of this mean for a classic communications equipment supplier such as Overture?  This is a very timely question for us as we roll out our Ensemble Open Service Architecture, which features a set of software components designed to optimize service creation, activation and assurance.  Our customers are demanding open, standard and interoperable solutions.  The challenge for us is to support that demand, and to work with an inclusive ecosystem of partners and industry forums to do so.  Of course, that is exactly what we are doing.


More Stories By Bob Gourley

Bob Gourley writes on enterprise IT. He is a founder of Crucial Point and publisher of CTOvision.com.
