
@CloudExpo: Blog Feed Post

Weighing the Options for Onboarding Data into the Cloud


One of the questions we hear most frequently is “how do I get my data into the cloud?” For many organizations, the benefits of expanding on-premise data storage to include hybrid cloud storage have begun to resonate, but they struggle to get started as they determine how to move data into the cloud. The decision on how to onboard initial data to the cloud, or what we call the initial ingest, is one that cannot be overlooked.


While there is more than one way to perform the initial ingest, it should come as no surprise that the best solution varies from case to case. Relevant factors include the amount of data intended for ingestion, the amount of available bandwidth, and the timeframe in which you want to load the data. Typically, most organizations will decide on one of the following three methods for the initial ingest:

  • Use existing bandwidth to perform the transfer over time
  • Increase or “burst” bandwidth for the duration of the transfer
  • Ship media directly to a cloud provider

Use existing bandwidth
Calculating how long it takes to upload a large amount of data across a WAN involves a bit of straightforward arithmetic. For instance, an uplink speed of 100Mbit/sec should be able to push nearly 1TB per day.
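That arithmetic can be sketched as a quick estimator (a minimal illustration; the function name and the 0.7 default utilization factor are assumptions, not part of the original post):

```python
def transfer_days(data_tb, link_mbps, utilization=0.7):
    """Estimate days needed to upload data_tb terabytes over a link_mbps uplink.

    utilization is the fraction of raw bandwidth realistically available
    after protocol overhead and competing traffic (assumed 0.7 by default).
    """
    bits = data_tb * 8 * 10**12                      # decimal TB -> bits
    seconds = bits / (link_mbps * 10**6 * utilization)
    return seconds / 86400

# At full utilization, a 100 Mbit/sec uplink moves a terabyte in
# just under a day, matching the "nearly 1TB per day" rule of thumb.
```

Running the numbers this way also makes the trade-off concrete: 10 TB over the same 100 Mbit/sec link at 70% utilization takes roughly two weeks.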

While this approach sounds cut and dried, in practice organizations need to consider a few additional factors:

  • Subtract typical WAN usage to more accurately calculate available bandwidth
  • Employ bandwidth throttling and scheduling to minimize impact on existing applications
  • Cache/buffer the data so users and applications can continue to access it during the ingest process – sometimes starting with a large buffer and shrinking it over time
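The throttling point above can be sketched in a few lines (a hypothetical illustration only; production deployments would more likely rely on network QoS or transfer tools with built-in rate limits than on a hand-rolled loop like this):

```python
import time

def throttled_copy(src, dst, limit_mbps=50, chunk_bytes=1 << 20):
    """Copy src to dst (file-like objects), capped at roughly limit_mbps.

    Sleeps after each chunk so the average transfer rate stays near the
    limit, leaving bandwidth headroom for production traffic on the WAN.
    """
    chunk_seconds = (chunk_bytes * 8) / (limit_mbps * 10**6)
    while True:
        start = time.monotonic()
        chunk = src.read(chunk_bytes)
        if not chunk:
            break
        dst.write(chunk)
        elapsed = time.monotonic() - start
        if elapsed < chunk_seconds:
            time.sleep(chunk_seconds - elapsed)
```

Scheduling follows the same idea: run the copy with a low limit during business hours and a higher one overnight.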

Temporarily increase bandwidth
For circumstances where existing bandwidth will not onboard data into the cloud in a timely manner, another option is to temporarily increase bandwidth during the upload process. Some telcos and internet providers offer bursting capability for short durations lasting weeks or months. Once the ingest completes, bandwidth can be restored as before to accommodate the normal course of data accesses and updates.

An alternative to increasing bandwidth is using a temporary colocation or data center facility that has higher-bandwidth access to the cloud provider. This adds the additional costs of transportation, equipment setup and leasing but may offer a cost-effective compromise.

Physically ship media
Ultimately, if data cannot be onboarded in a timely manner via network (let’s say it’s a few PB in size), shipping physical media to a cloud provider is the next option. While this option may seem deceptively easy, it’s important not to ignore best practices when physically shipping media.

Whereas many organizations have adopted a “zero trust” model for their data already stored in the cloud (meaning all data is encrypted with a set of keys maintained locally), transporting data requires similar safeguards.
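One half of that safeguard — verifying that shipped media arrives unaltered — can be sketched with a checksum manifest (a standard-library illustration; the encryption itself would be handled by a dedicated crypto library before staging, and the function name here is hypothetical):

```python
import hashlib

def build_manifest(paths, chunk_bytes=1 << 20):
    """Build a {path: sha256-hex} manifest for files staged onto shipping media.

    The manifest travels separately from the media, so the receiving side
    can verify nothing was altered or corrupted in transit.
    """
    digests = {}
    for path in paths:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Hash in chunks so arbitrarily large files fit in memory.
            for chunk in iter(lambda: f.read(chunk_bytes), b""):
                h.update(chunk)
        digests[path] = h.hexdigest()
    return digests
```

Re-running the same function at the destination and comparing the two manifests confirms the ingest completed intact.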

This week, TwinStrata announced the latest release of CloudArray, which includes a secure import process that encrypts and encapsulates data into the object format used in the cloud before the media is shipped. Following the same security practice used for storing data online in the cloud avoids security compromises that could lead to data breaches.

The bottom line
While there are benefits to expanding on-premise storage infrastructure with a secure, hybrid cloud strategy, often the starting point involves answering the question of how to get initial data there. Choosing the right option can satisfy the need for timeliness while mitigating risks around security and disruption.

The post Weighing the options for onboarding data into the cloud appeared first on TwinStrata.


More Stories By Nicos Vekiarides

Nicos Vekiarides is the Chief Executive Officer & Co-Founder of TwinStrata. He has spent over 20 years in enterprise data storage, both as a business manager and as an entrepreneur and founder in startup companies.

Prior to TwinStrata, he served as VP of Product Strategy and Technology at Incipient, Inc., where he helped deliver the industry's first storage virtualization solution embedded in a switch. Prior to Incipient, he was General Manager of the storage virtualization business at Hewlett-Packard. Vekiarides came to HP with the acquisition of StorageApps where he was the founding VP of Engineering. At StorageApps, he built a team that brought to market the industry's first storage virtualization appliance. Prior to StorageApps, he spent a number of years in the data storage industry working at Sun Microsystems and Encore Computer. At Encore, he architected and delivered Encore Computer's SP data replication products that were a key factor in the acquisition of Encore's storage division by Sun Microsystems.
