By Shathabheesha
February 11, 2013 08:00 AM EST
2011 ended with the popularization of an idea: bringing VMs (virtual machines) onto the cloud. Recent years have seen great advancements in both cloud computing and virtualization. On one hand, there is the ability to pool various resources to provide Software as a Service, Infrastructure as a Service, and Platform as a Service; at its most basic, this is what cloud computing describes. On the other hand, virtual machines provide agility, flexibility, and scalability to cloud resources by allowing vendors to copy, move, and manipulate their VMs at will. The term virtual machine essentially describes partitioning the resources of a single physical computer into several virtual computers within it. VMware and VirtualBox are commonly used virtualization systems on desktops. Cloud computing, in effect, stands for many computers pretending to be one computing environment, so a cloud will naturally run many virtualized systems to maximize its resources.
Keeping this in mind, we can now look into the security issues that arise within a cloud computing scenario. As more and more organizations follow the "Into the Cloud" concept, malicious hackers keep finding ways to get their hands on valuable information by manipulating safeguards and breaching the security layers (if any) of cloud environments. One issue is that the cloud computing scenario is not as transparent as it claims to be: the service user has no clue about how their information is processed and stored, and cannot directly control the flow of data storage and processing. The service provider, in turn, is usually not aware of the details of the service running in its environment. Possible attacks on the cloud computing environment can thus be classified into:
- Resource attacks: These include manipulating the available resources to mount a large-scale botnet attack. Such attacks target either cloud providers or service providers.
- Data attacks: These include unauthorized modification of sensitive data at nodes, or configuration changes that enable a sniffing attack via a specific device. These attacks are focused on cloud providers, service providers, and service users.
- Denial of Service attacks: Creating a new virtual machine is not a difficult task, so creating rogue VMs and allocating huge amounts of space to them can lead to a Denial of Service for service providers when they try to create a new VM on the cloud. This kind of attack is generally called virtual machine sprawling.
- Backdoor: Another threat in a virtual environment empowered by cloud computing is the use of backdoor VMs that leak sensitive information and can destroy data privacy. Virtual machines indirectly allow anyone with access to the VM's host disk files to take a snapshot or an illegal copy of the whole system, which can lead to corporate espionage and piracy of legitimate products.
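As a concrete illustration of the VM sprawling threat, a provider can guard provisioning with a per-tenant quota check before any new VM is created. The sketch below is illustrative only; the tenant names, VM list shape, and quota value are assumptions, not part of any real cloud API.

```python
# Hypothetical quota check to resist VM sprawl: refuse to provision a
# new VM once a tenant has reached its quota. Names are illustrative.
from collections import Counter

def can_provision(existing_vms, tenant, quota=10):
    """Return True if `tenant` may create another VM under `quota`."""
    counts = Counter(owner for owner, _vm_id in existing_vms)
    return counts[tenant] < quota

vms = [("acme", "vm-1"), ("acme", "vm-2"), ("globex", "vm-3")]
print(can_provision(vms, "acme", quota=2))   # acme is at quota: False
```

A real provisioning gateway would also cap the storage allocated per VM, since sprawl attacks exhaust disk as well as VM slots.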
With so many obvious security issues (a lot more can be added to the list), we need to enumerate some steps that can be used to secure virtualization in cloud computing.
The most neglected aspect of any organization is its physical security. An advanced social engineer can take advantage of the weak physical security policies an organization has put in place. Thus, it's important to have a consistent, context-aware security policy when it comes to controlling access to a data center. Traffic between virtual machines also needs to be monitored closely, using at least a few standard monitoring tools.
After thoroughly enhancing physical security, it's time to check security on the inside. A well-configured gateway should be able to enforce security when any virtual machine is reconfigured, migrated, or added. This will help prevent VM sprawls and rogue VMs. Another approach that might help enhance internal security is the use of third-party validation checks, performed in accordance with security standards.
In the above figure, we see that the service provider and cloud provider work together, bound by a Service Level Agreement. The cloud is used to run various instances, and service end users pay per use of the cloud. The following section describes an approach that can be used to check the integrity of virtual systems running inside the cloud.
Checking virtual systems for integrity increases the capability to monitor and secure environments. One of the primary focuses of this integrity check should be seamless integration with existing virtualization systems like VMware and VirtualBox. This leads to file integrity checking and increased protection against data loss within VMs. Combining agentless anti-malware intrusion detection and prevention in one single virtual appliance (unlike isolated point security solutions) contributes greatly toward VM integrity checks, reducing operational overhead while adding zero footprint.
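The core of a file integrity check is simple: record cryptographic digests of files while the VM image is known-good, then periodically compare. The following is a minimal sketch under that assumption; the baseline format and function names are illustrative, not from any specific product.

```python
# Minimal file-integrity check: compare current SHA-256 digests against
# a trusted baseline recorded when the VM image was known-good.
import hashlib
from pathlib import Path

def digest(path):
    """SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify(baseline):
    """Return the list of paths whose digests no longer match the baseline."""
    return [p for p, h in baseline.items() if digest(p) != h]
```

In practice the baseline itself must be stored outside the guest (e.g., on the host or in a signed store), or an attacker who modifies a file can simply update the baseline to match.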
A server on a cloud may be used to deploy web applications, and in this scenario an OWASP Top Ten vulnerability check has to be performed. Data on a cloud should be protected with suitable encryption and data-protection algorithms; using these, we can check the integrity of the user or system profile trying to access disk files on the VMs. Profiles lacking security protections can be considered infected by malware. Working with a ratio of one user to one machine also greatly reduces risks in virtual computing platforms. To enhance security even further, after a particular environment is used it's best to sanitize the system (reload) and destroy all the residual data. Using incoming IP addresses to determine scope on Windows-based machines, and SSH configuration settings on Linux machines, helps maintain a secure one-to-one connection.
Lightweight Directory Access Protocol (LDAP) and Cloud Computing
LDAP is, as the name suggests, a lightweight version of DAP (Directory Access Protocol), implemented with smaller pieces of code. It helps locate organizations, individuals, and other files or resources over the network. In a cloud environment, manual tasks are automated using a concept known as virtual system patterns, which enable fast and repeatable deployment of systems. Dedicated LDAP servers are not typically necessary, but LDAP services have to be considered when designing an efficient virtual system pattern. Extending LDAP servers to cloud management carries existing security policies over into the cloud infrastructure, and also allows users to remotely manage and operate within the infrastructure.
Various security aspects to be considered:
1. Granular access control
2. Role-based access control
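The two aspects above can be combined: roles grant sets of fine-grained permissions, and each access is checked at the granularity of an individual operation. The sketch below is a toy model; the role and permission names are invented for illustration.

```python
# Toy model of role-based access control with granular permissions.
# Roles map to sets of fine-grained "resource:action" permissions.
ROLES = {
    "admin":  {"vm:create", "vm:delete", "ldap:write"},
    "viewer": {"vm:read"},
}

def is_allowed(user_roles, permission):
    """Granular check: does any of the user's roles grant `permission`?"""
    return any(permission in ROLES.get(role, set()) for role in user_roles)
```

A directory service such as LDAP would typically store the user-to-role mapping, so the same policy applies consistently across all machines in the pattern.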
The directory synchronization client (DSC) is a client-resident application. Only one instance of the DSC can run at a time; multiple instances may lead to inconsistencies in the data being updated. If any user is added or removed, the DSC updates the information on its next scheduled update. Clients then have the option to merge data from multiple DSCs and synchronize. For web security, clients on the network don't need to register separately, provided that the DSC in use is set up for NTLM identification.
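The single-instance rule for the DSC is commonly enforced with an exclusive lock file: the second instance fails to create the file and exits. This is a generic sketch, not the DSC's actual mechanism; the lock path is illustrative.

```python
# Sketch of a single-instance guard: O_CREAT | O_EXCL makes file
# creation atomic, so only one process can hold the lock at a time.
import os
import tempfile

LOCK_PATH = os.path.join(tempfile.gettempdir(), "dsc.lock")

def acquire_single_instance(lock_path=LOCK_PATH):
    """Return a lock fd if this is the only running instance, else None."""
    try:
        return os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return None

def release(fd, lock_path=LOCK_PATH):
    """Release the lock so a future instance may start."""
    os.close(fd)
    os.unlink(lock_path)
```

A production implementation would also handle stale locks left by a crashed process, for example by recording the PID in the lock file and checking whether it is still alive.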
Host-Side Architecture for Securing Virtualization in Cloud Environment
The security model described here is a purely host-side architecture that can be placed in a cloud system "as is" without changing any aspect of the cloud. The system assumes the attacker resides, in some form, within the guest VM, and that the host system is trustworthy. The system is also asynchronous in nature and therefore easier to hide from an attacker, since asynchronicity prevents timing-analysis attacks from detecting it. When a guest system is placed on the network, it is susceptible to various kinds of attacks such as viruses, code injections (in web applications), and buffer overflows. Other lesser-known attacks on clouds include DoS, keystroke analysis, and traffic-rate estimation. In addition, an exploitation framework like Metasploit can easily attack a buffer overflow vulnerability and compromise the entire environment.
The above approach basically monitors key components. It takes into account the fact that the key attacks will target the kernel and the middleware, so integrity checks are in place for these modules. Overall, the system checks for malicious modifications in the kernel components. The design considers attacks from outside the cloud as well as from sibling virtual machines. In the above figure, the dotted lines stand for monitoring data and the red lines symbolize malicious data. The system is totally transparent to the guest VMs, as it is an entirely host-integrated architecture.
The implementation of this system basically starts with attaching a few modules onto the hosts. The following are the modules along with their functions:
Interceptor: The first module that all host traffic encounters. The interceptor doesn't block any traffic, so the presence of a third-party security system shouldn't be detected by an attacker; thus, the attacker's activities can be logged in more detail. This feature also allows the system to be made more intelligent. The module is responsible for monitoring suspicious guest activities, and it also plays a role in replacing/restoring affected modules in case of an attack.
Warning Recorder: The result of the interceptor's analysis is sent directly to this module, where a warning pool is created for security checks. The warnings generated are prioritized for future reference.
Evaluator and hasher: This module performs security checks based on the priorities of the warning pool created by the warning recorder. An increase in warnings leads to a security alert.
Actuator: The actuator makes the final decision on whether to issue a security alert, after receiving confirmation from the evaluator, hasher, and warning recorder.
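The warning-pool-with-priorities design maps naturally onto a priority queue. The following sketch models the recorder and the actuator's decision; the priority scale and alert threshold are assumptions for illustration, not values from the described system.

```python
# Sketch of the warning recorder -> evaluator -> actuator chain. The
# warning pool is a priority queue (0 = highest priority); the actuator
# alerts once enough high-priority warnings accumulate. Threshold is
# illustrative.
import heapq

class WarningRecorder:
    def __init__(self):
        self.pool = []  # heap of (priority, message) tuples

    def record(self, priority, message):
        heapq.heappush(self.pool, (priority, message))

def should_alert(recorder, threshold=3):
    """Actuator decision: alert when the pool holds `threshold` or more
    high-priority warnings (priority <= 1)."""
    high = [w for w in recorder.pool if w[0] <= 1]
    return len(high) >= threshold
```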
This system analyzes memory footprints and checks for both abnormal memory usage and abnormal connection attempts. This kind of detection of malicious activity is called anomaly-based detection. Once a system is compromised, the malware tries to infect other systems in the network until the entire unit is owned by the attacker. Targets of this type of attack also include command-and-control servers, as in the case of botnets. In either case, there is an increase in memory activity and in connection attempts originating from a single point in the environment.
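A minimal form of such anomaly-based detection is a statistical threshold: flag a measurement that deviates too far from the history of normal behavior. The three-standard-deviation cutoff below is a common rule of thumb, assumed here for illustration rather than taken from the described system.

```python
# Illustrative anomaly check on a per-VM metric (e.g., connection
# attempts per minute): flag samples more than k standard deviations
# from the historical mean. k=3 is an assumed rule-of-thumb cutoff.
from statistics import mean, stdev

def is_anomalous(history, sample, k=3.0):
    mu, sigma = mean(history), stdev(history)
    return abs(sample - mu) > k * sigma
```

The same check can run in parallel over memory-usage samples, so a sudden spike in either metric from a single point in the environment raises a warning.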
Another key strategy used by attackers is to hide the processes they are using from the system's process list via a dynamic data attack. The modules of this protection system therefore perform periodic checks of the kernel scheduler: by scanning the scheduler's task structures, the system can detect the hidden processes, thereby nullifying the attack.
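The hidden-process check reduces to a set difference: any PID the kernel scheduler knows about but the process list does not show is suspect. The sketch below assumes the two PID sets have already been collected (the hard part in practice); the inputs are illustrative.

```python
# Cross-view hidden-process check: a PID the scheduler is running but
# the process list omits has been hidden by a rootkit-style attack.
# Collecting the two PID sets from a live kernel is assumed done elsewhere.
def hidden_pids(scheduler_pids, process_list_pids):
    return sorted(set(scheduler_pids) - set(process_list_pids))

print(hidden_pids({1, 42, 1337}, {1, 42}))  # [1337]
```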
This approach has been followed by two of the main open source cloud distributions, namely Eucalyptus and OpenECP. In both implementations, the system remains transparent to the guest VM and the modules are attached to the key components of the architecture.
The system claims to be CPU-free in nature (as it's asynchronous), though it has shown some complex behavior on I/O operations. This characteristic is attributed to the constant file integrity checks and analysis done by the warning recorder.
In this article, we have seen a novel architecture design that aims to secure virtualization in cloud environments. The architecture is purely host-integrated and remains transparent to the guest VMs; it assumes the host is trustworthy and that attacks originate from the guests. The rule of thumb in security says that anything and everything can be penetrated with time and patience. But an intelligent security consultant can make things difficult for an attacker by integrating transparent systems that remain invisible and take hackers considerable time to detect under normal scenarios.