APIs: Essential for Delivering Storage in Enterprise Cloud Infrastructures

As new storage platforms evolve, native API support is a must

It’s pretty easy to pick holes in current legacy storage products, especially when it comes to integration with both public and private cloud deployments. However, it’s worth discussing exactly what is required when implementing cloud frameworks, because the way storage is deployed in them is radically different from the traditional model of storage operations. In this post we will look at why traditional methods of storage management need to change and how that affects the way the hardware itself is used. That leads on to APIs and why they are essential to driving cloud deployments effectively.

The Legacy View

Legacy Provisioning Process
For the last 10 years or so, the traditional view of storage management has consisted of a number of Storage Administrators using a GUI, CLIs and/or scripts to process storage requests as they are generated by business users. The process is highly manual, with many interactions between the requestor, the storage admin delivering the work and other intermediaries covering things like billing, change control, capacity management and workload scheduling. This makes the overall process people-intensive and, not surprisingly, elongates delivery times. Many end users will also recall asking for a specific requirement only to be told they could have something “off the shelf” – i.e. storage in a standard LUN size and with a specific RAID protection.

This was done for obvious reasons. Firstly, the configuration of large arrays was predicated on pre-planning and a fixed design, usually created at hardware installation time; once defined and in use, it couldn’t be changed (or at least not without significant impact and cost). Secondly, reducing requirements to a smaller standard subset makes the provisioning process easier. As well as being rigid in configuration, many legacy arrays assume the creation and provisioning of LUNs is an infrequent task: many require requests to be packaged and executed in batch, and certainly can’t cope easily with concurrent requests. Although it is possible to automate some provisioning processes using CLIs and scripts, this doesn’t address the real requirements of an on-demand model.
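To illustrate the limitation, here is a minimal sketch of the kind of script-driven provisioning described above. It assumes a purely hypothetical vendor CLI called `legacy_storcli` (real arrays each have their own syntax); the point is that each request runs serially and synchronously, which is why scripting alone never delivers an on-demand model.

```python
import subprocess

# Hypothetical vendor CLI; real arrays each have their own commands and flags.
CLI = "legacy_storcli"

def provision_lun(array, size_gb, raid, host):
    """Create and map a single LUN by shelling out to the vendor CLI.

    Runs synchronously: the script blocks until the array completes each
    step, and there is no safe way to run two of these concurrently
    against an array that serialises configuration changes.
    """
    subprocess.run(
        [CLI, "create-lun", "--array", array, "--size", f"{size_gb}GB", "--raid", raid],
        check=True,
    )
    subprocess.run(
        [CLI, "map-lun", "--array", array, "--host", host],
        check=True,
    )

# Requests are typically gathered up and executed in a batch window.
if __name__ == "__main__":
    batch = [("array01", 500, "RAID5", "dbhost01"),
             ("array01", 500, "RAID5", "dbhost02")]
    for req in batch:
        provision_lun(*req)   # one at a time, no concurrency
```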

The New World

API Provisioning Process
As we scale up to ever larger IT deployments, and especially within service-based or “cloud” configurations, the idea of having large amounts of human intervention in the provisioning process simply doesn’t work. Instead, we need to move to a model of “storage on demand”, where an external agent – a user or orchestration software – can request storage through a portal and see the request actioned in real time, or at least within a matter of minutes or hours. This kind of operation can only be delivered where the hardware has been designed for the purpose. Where previously storage administrators were involved in every provisioning request, those requests will now be actioned within a provisioning framework defined by the administrator or a storage architect.
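As a rough illustration, the request from a portal or orchestrator need be no more than a small, structured payload posted to the storage platform’s API. The endpoint and field names below are assumptions for the sake of the sketch, not any particular vendor’s interface.

```python
import json
import urllib.request

# Hypothetical provisioning endpoint exposed by the storage platform.
API_URL = "https://storage.example.com/api/v1/volumes"

request_body = {
    "size_gb": 500,
    "resiliency": "raid6",         # availability/resiliency tier
    "performance_tier": "gold",    # performance class
    "snapshot_policy": "daily-7",  # snapshot schedule
    "requestor": "cloud-portal",
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(request_body).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# The orchestration layer fires the request and moves on; fulfilment is
# tracked separately (see the asynchronous handling discussed below).
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```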

Framework
What do we mean by framework? It’s about defining the set of parameters within which allocations take place. These could include:

  • LUN Size
  • Resiliency/Availability
  • Performance
  • Security credentials
  • Snapshot policy
  • Capacity-on-demand LUNs
The architect chooses which specific hardware components are used to meet these requirements. There are also operational limitations (see the sketch after this list):
  • Maximum number of concurrent requests
  • Maximum number of provisioning requests per hour
  • Ability to suspend or reject provisioning requests by array
  • Restrict requests by array capacity
  • Restrict requests by user based on capacity guidelines
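One way to picture the framework is as a policy object that every incoming request is validated against before the array ever sees it. The sketch below is illustrative only; the parameter names simply mirror the two lists above rather than any shipping product.

```python
from dataclasses import dataclass

@dataclass
class ProvisioningPolicy:
    """Allocation parameters and operational limits set by the storage architect (hypothetical)."""
    max_lun_size_gb: int = 2048
    allowed_raid_levels: tuple = ("raid5", "raid6", "raid10")
    snapshot_policies: tuple = ("none", "daily-7", "hourly-24")
    thin_provisioned: bool = True           # capacity-on-demand LUNs
    max_concurrent_requests: int = 8        # operational limits
    max_requests_per_hour: int = 200
    max_array_capacity_pct: int = 85        # reject requests above this fill level
    per_user_quota_gb: int = 10_000

def validate(policy: ProvisioningPolicy, size_gb: int, raid: str,
             array_used_pct: int, user_total_gb: int) -> bool:
    """Return True if a request fits inside the architect-defined framework."""
    return (size_gb <= policy.max_lun_size_gb
            and raid in policy.allowed_raid_levels
            and array_used_pct < policy.max_array_capacity_pct
            and user_total_gb + size_gb <= policy.per_user_quota_gb)
```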
The provisioning framework also needs the ability to work asynchronously and autonomously; that is, to accept, process and acknowledge provisioning requests without the requestor having to maintain a permanent session to the array. Once a request completes, the requestor is alerted via a callback mechanism or discovers it by checking the request’s status. Obviously there is also a need for integration with monitoring frameworks, in order to track hardware and performance issues.
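In code terms, asynchronous operation simply means the framework acknowledges a request immediately with an identifier and reports completion later, either through a callback or when the requestor checks back. The toy sketch below, using only Python’s standard library, illustrates the pattern; it is not a real array interface.

```python
import queue
import threading
import uuid

requests_q = queue.Queue()
status = {}  # request_id -> "pending" | "completed"

def submit(spec: dict, callback=None) -> str:
    """Accept a provisioning request and return immediately with an ID."""
    request_id = str(uuid.uuid4())
    status[request_id] = "pending"
    requests_q.put((request_id, spec, callback))
    return request_id

def worker():
    """Background thread: carve out storage and notify the requestor."""
    while True:
        request_id, spec, callback = requests_q.get()
        # ... array-side provisioning would happen here ...
        status[request_id] = "completed"
        if callback:
            callback(request_id)          # push notification
        requests_q.task_done()

threading.Thread(target=worker, daemon=True).start()

# The requestor either registers a callback...
rid = submit({"size_gb": 500}, callback=lambda r: print(f"{r} done"))
# ...or polls for completion later without holding a session open.
requests_q.join()
print(status[rid])
```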

Designed for API
Of course there’s a big question around whether APIs can be retro-fitted to existing storage. In classic IT tradition, the answer is “it depends”. Without a doubt, no new storage array should be released without native API functionality. Whether an API can be fitted to existing technology, however, depends on how flexible the existing configuration process is. It’s possible to create an API wrapper and build the automation into a middleware layer; this is how products such as iWave’s Storage Automator work. But adding these features to existing storage products can be costly and still leave an imperfect solution.
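The wrapper approach amounts to putting a thin, modern API in front of the array’s existing CLI or element manager and handling queuing and serialisation in the middleware. A rough sketch of the idea, again using an entirely hypothetical legacy CLI, might look like this:

```python
import subprocess
import threading

class LegacyArrayAdapter:
    """Middleware wrapper exposing an API over a script-driven legacy array.

    The lock serialises configuration changes because the underlying
    array (hypothetically) cannot handle concurrent requests.
    """
    def __init__(self, array_name: str, cli: str = "legacy_storcli"):
        self.array_name = array_name
        self.cli = cli
        self._lock = threading.Lock()

    def create_volume(self, size_gb: int, raid: str) -> dict:
        with self._lock:  # one change at a time against the legacy array
            subprocess.run(
                [self.cli, "create-lun", "--array", self.array_name,
                 "--size", f"{size_gb}GB", "--raid", raid],
                check=True,
            )
        return {"array": self.array_name, "size_gb": size_gb, "raid": raid}
```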

The Storage Architect Take
As new storage platforms evolve, native API support is a must. The Storage Administrator will simply be required to deploy the infrastructure and plug it into a higher-level framework, from which provisioning will be entirely automated. Vendors offering this level of functionality will be the most attractive to service providers looking to make the cost of acquiring and managing storage as low as possible. We’re about to see a paradigm shift in the way storage is managed, and possibly an end to the storage administrator.
