Real-time AI for Smart Networks – Building an OpenStack ‘Policy Brain’

DCAE – Real-Time Analytics and Cloud Management
Another of the main foundations of the AT&T Domain 2.0 program is the ‘DCAE’ framework: Data Collection, Analytics and Events. In short, it is their Big Data platform for enabling smart management.

“In the D2 vision, virtualized functions across various layers of functionality are expected to be instantiated in a significantly dynamic manner that requires the ability to provide real-time responses to actionable events from virtualized resources, ECOMP applications, as well as requests from customers, AT&T partners and other providers.

In order to engineer, plan, bill and assure these dynamic services, DCAE within the ECOMP framework gathers key performance, usage, telemetry and events from the dynamic, multi-vendor virtualized infrastructure in order to compute various analytics and respond with appropriate actions based on any observed anomalies or significant events. These significant events include application events that lead to resource scaling, configuration changes, and other activities as well as faults and performance degradations requiring healing.

The collected data and computed analytics are stored for persistence as well as use by other applications for business and operations (e.g., billing, ticketing).”

More importantly, DCAE has to perform a lot of these functions in real-time.
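As a rough illustration of that closed loop, here is a minimal sketch in Python, assuming hypothetical event fields, thresholds and function names rather than the actual DCAE interfaces: telemetry is persisted for later business use, anomalies are detected, and a corrective action (scaling or healing) is emitted.

```python
# A minimal sketch of a DCAE-style closed loop. All names and thresholds
# are hypothetical illustrations, not the real ECOMP/DCAE APIs.
import json
from dataclasses import dataclass


@dataclass
class TelemetryEvent:
    vnf_id: str    # which virtualized function reported
    metric: str    # e.g. "cpu_util" or "packet_loss"
    value: float


THRESHOLDS = {"cpu_util": 0.85, "packet_loss": 0.01}  # illustrative limits


def detect_anomaly(event: TelemetryEvent) -> bool:
    """A trivial threshold check standing in for real analytics."""
    return event.value > THRESHOLDS.get(event.metric, float("inf"))


def choose_action(event: TelemetryEvent) -> dict:
    """Map an observed anomaly to a scaling or healing action."""
    if event.metric == "cpu_util":
        return {"action": "scale_out", "vnf_id": event.vnf_id}
    return {"action": "heal", "vnf_id": event.vnf_id}


def archive(event: TelemetryEvent) -> None:
    pass  # stand-in for persisting to the analytics store (billing, ticketing)


def publish_action(action: dict) -> None:
    print(json.dumps(action))  # stand-in for the orchestration hand-off


def process_stream(events) -> None:
    """Persist every event; respond to anomalies in (near) real time."""
    for event in events:
        archive(event)
        if detect_anomaly(event):
            publish_action(choose_action(event))


process_stream([TelemetryEvent("vFW-1", "cpu_util", 0.92)])
```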

This real-time ingredient is the secret sauce, as telcos begin establishing the platform for a monumental shift of all IT, not just telecoms, to a Cloud-enabled real-time mode.

A&AI – Active and Available Inventory
‘A&AI’ – Active and Available Inventory – defines how this will integrate with AT&T's existing legacy OSS platforms to achieve this real-time capability:

“A&AI is the ECOMP component that provides realtime views of Domain 2.0 Resources, Services, Products and their relationships. The views provided by Active and Available Inventory relate data managed by multiple ECOMP Platforms, Business Support Systems (BSS), Operation Support Systems (OSS), and network applications to form a “top to bottom” view ranging from the Products customers buy to the Resources that form the raw material for creating the Products.

Active and Available Inventory not only forms a registry of Products, Services, and Resources, it also maintains up-to-date views of the relationships between these inventory items. To deliver the vision of the dynamism of Domain 2.0, Active and Available Inventory will manage these multi-dimensional relationships in real-time.”
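To make that “top to bottom” view concrete, here is a minimal sketch of an inventory registry modelled as a graph, assuming a simplified Product/Service/Resource hierarchy with hypothetical names; AT&T's actual A&AI schema is far richer.

```python
# A hypothetical sketch of an A&AI-style inventory graph; entity names
# and relationships are illustrative assumptions, not AT&T's schema.
from collections import defaultdict


class InventoryGraph:
    """Registry of Products, Services and Resources plus their relationships."""

    def __init__(self):
        self.items = {}                # item_id -> item type
        self.edges = defaultdict(set)  # item_id -> related item_ids

    def register(self, item_id: str, item_type: str) -> None:
        self.items[item_id] = item_type

    def relate(self, parent: str, child: str) -> None:
        """Record an up-to-date relationship between two inventory items."""
        self.edges[parent].add(child)

    def resources_for_product(self, product_id: str):
        """Walk the 'top to bottom' view from a Product down to its Resources."""
        stack, seen = [product_id], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            if self.items.get(node) == "Resource":
                yield node
            stack.extend(self.edges[node])


# Usage: a Product realized by a Service running on a virtual Resource.
g = InventoryGraph()
g.register("vpn-product", "Product")
g.register("vpn-service", "Service")
g.register("vFW-vm-7", "Resource")
g.relate("vpn-product", "vpn-service")
g.relate("vpn-service", "vFW-vm-7")
print(list(g.resources_for_product("vpn-product")))  # ['vFW-vm-7']
```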

Policy Brain

All of this data needs to be processed as part of a collective view, indeed a collective intelligence, of the network as a whole. AT&T describes how they intend to build a “Policy Brain” for this purpose, setting the scene for an exciting exploration of how this capacity can evolve through AI techniques.

“D2 Policy will utilize rather than replace various technologies; examples of possible policy areas are shown in the following table. These will be used, e.g., via translation capabilities, to achieve the best possible solution that takes advantage of helpful technologies while still providing in effect a single D2.0 Policy “brain”.”

This will be achieved through uniting the following (a rough code sketch follows the list):

  • Policy standards, such as XACML, TOSCA and YANG
  • Implementation technologies – OpenStack policy modules such as Congress and Heat, plus OpenDaylight Group Based Policy (GBP)
  • Programmed business rules – via rules engines and languages such as Drools and Ruby
  • Integration with other systems, such as IDAM and the ‘Astra’ security system, for processing security-related events
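As a thought experiment, the sketch below shows one way such a unifying layer could look: each policy technology is wrapped in an adapter that translates its decisions into a common form, and a single ‘brain’ consults them all. Every class, function and rule name here is a hypothetical illustration, not part of ECOMP, XACML, Congress or Drools.

```python
# A hypothetical sketch of a single policy "brain" fronting several
# policy technologies; adapter and rule names are assumptions only.
from typing import Callable, Dict, Optional

# An adapter translates one policy technology (XACML, Congress, Drools
# rules, ...) into a common form: event in, decision out (None = no match).
PolicyAdapter = Callable[[dict], Optional[dict]]


class PolicyBrain:
    def __init__(self) -> None:
        self.adapters: Dict[str, PolicyAdapter] = {}

    def register(self, name: str, adapter: PolicyAdapter) -> None:
        self.adapters[name] = adapter

    def decide(self, event: dict) -> Optional[dict]:
        """Consult each underlying policy technology; first match wins."""
        for name, adapter in self.adapters.items():
            decision = adapter(event)
            if decision is not None:
                return {"source": name, **decision}
        return None


def cpu_rule(event: dict) -> Optional[dict]:
    """A toy business rule standing in for a translated Drools policy."""
    if event.get("metric") == "cpu_util" and event.get("value", 0) > 0.85:
        return {"action": "scale_out", "target": event.get("vnf_id")}
    return None


brain = PolicyBrain()
brain.register("business-rules", cpu_rule)
print(brain.decide({"metric": "cpu_util", "value": 0.9, "vnf_id": "vFW-1"}))
```

The design point matches the translation capability the whitepaper mentions: the underlying policy technologies stay in place, and only their decisions are normalized behind one “brain”.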
