Integrating Active Directory into Windows Azure Virtual Machines

Active Directory Deployment Considerations and Implementation Steps in Windows Azure

Migrating traditional client/server applications to Windows Azure Virtual Machines is what Don Noonan does every day. The majority of these workloads use Active Directory Domain Services as their authentication provider, or, in other words, classic Windows authentication. In this post, Don walks us through the best-practice, high-level architecture and the basic building blocks of creating a private forest within Windows Azure.

If Active Directory is not available, you'd better be
As we all know, if AD is down, so is your app. Imagine setting up a single domain controller responsible for both name resolution (DNS) and authentication: you've just created another synonym for single point of failure. At a minimum, you should deploy two (2) domain controllers, and they should be created as part of an Availability Set. This ensures that at least one (1) domain controller is always available to answer authentication and name resolution requests. If you're considering saving a few bucks by deploying a single domain controller in non-production environments, let me save you a few more. The first call you get from development or QA will cost you at least six months of compute, and telling a dozen upset people on a conference call that you wanted to save the company $50/month will sound pretty bad…

A private forest for me? Oh, you shouldn't have
There are currently two major scenarios for providing Windows authentication in Windows Azure Virtual Machines:

  • Deploy a new private forest
  • Extend an existing on-premises forest

In this blog we'll cover deploying a new private forest. Here is a quick Visio diagram of a classic 3-tier application (using Windows Azure features) to get us started:

As you can see, we have a management subnet that contains our domain controllers, as well as separate database and application “tiers”.

Stop Talking and Start Deploying
As with any new deployment to Windows Azure Virtual Machines, you will perform the following high-level steps (a quick PowerShell sketch of steps 1 and 3 appears after the list):

  1. Create an affinity group (See Bob Hunt’s Article in the Series)
  2. Create a virtual network (See Bob Hunt’s Article in the Series)
  3. Create a storage account (See Kevin Remde’s Article in the Series)
  4. Create virtual machines (See Tommy Patterson’s Article in the Series)
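
For reference, here is a minimal PowerShell sketch of steps 1 and 3, assuming the "skyag" affinity group used later in the network configuration; the storage account name and location are placeholders, and the linked articles in the series cover the full walkthroughs.

# Step 1: create an affinity group so related resources are placed close together
# (the location is just an example)
New-AzureAffinityGroup -Name 'skyag' -Location 'East US' -Label 'Sky private forest'

# Step 3: create a storage account in the same affinity group to hold the VM disks
# ('skystorage01' is a placeholder; storage account names must be globally unique)
New-AzureStorageAccount -StorageAccountName 'skystorage01' -AffinityGroup 'skyag' -Label 'Sky VM storage'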

While creating the virtual network, you will need to specify that the domain controllers will also be providing name resolution for all of the servers in your deployment. You can do this in the Windows Azure management portal as well as through the management web service. Here is how you do this via PowerShell:

Specifying custom DNS servers using PowerShell

Example command line:

Set-AzureVNetConfig -ConfigurationPath "C:\networkConfiguration.xml"

Contents of C:\networkConfiguration.xml:

<NetworkConfiguration>
  <VirtualNetworkConfiguration>
    <Dns>
      <DnsServers>
        <DnsServer name="skydc01" IPAddress="10.1.1.4" />
        <DnsServer name="skydc02" IPAddress="10.1.1.5" />
      </DnsServers>
    </Dns>
    <VirtualNetworkSites>
      <VirtualNetworkSite name="skyvn" AffinityGroup="skyag">
        <AddressSpace>
          <AddressPrefix>10.1.0.0/16</AddressPrefix>
        </AddressSpace>
        <Subnets>
          <Subnet name="Management">
            <AddressPrefix>10.1.1.0/24</AddressPrefix>
          </Subnet>
          <Subnet name="Database">
            <AddressPrefix>10.1.2.0/24</AddressPrefix>
          </Subnet>
          <Subnet name="Middleware">
            <AddressPrefix>10.1.3.0/24</AddressPrefix>
          </Subnet>
          <Subnet name="Application">
            <AddressPrefix>10.1.4.0/24</AddressPrefix>
          </Subnet>
        </Subnets>
        <DnsServersRef>
          <DnsServerRef name="skydc01" />
          <DnsServerRef name="skydc02" />
        </DnsServersRef>
      </VirtualNetworkSite>
    </VirtualNetworkSites>
  </VirtualNetworkConfiguration>
</NetworkConfiguration>
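
Note that Set-AzureVNetConfig applies the entire configuration file to the subscription, so it's worth exporting whatever configuration is currently in place before overwriting it. A quick sanity check (the file path is just an example):

# Export the subscription's current virtual network configuration for review
Get-AzureVNetConfig -ExportToFile "C:\currentNetworkConfiguration.xml"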

In the example above, the IP addresses assume the domain controllers are the first virtual machines created on the Management subnet (Windows Azure reserves the first few addresses in each subnet, so the first machine deployed typically receives the .4 address). Let's make sure that's true by creating them now:

Creating Highly Available Domain Controllers using PowerShell

Relevant excerpts from createService.ps1:

# Common settings for both domain controllers (same image, size, subnet, and availability set)
$instanceSize = 'Small'
$imageName = 'MSFT__Win2K8R2SP1-Datacenter-201210.01-en.us-30GB.vhd'
$subnetName = 'Management'
$availabilitySetName = 'skydc'

# First domain controller (skydc01)
$password = '@skyDc01'
$vmName = 'skydc01'
$skydc01 = New-AzureVMConfig -Name $vmName -AvailabilitySetName $availabilitySetName -ImageName $imageName -InstanceSize $instanceSize |
Add-AzureProvisioningConfig -Windows -Password $password |
Set-AzureSubnet $subnetName

# Second domain controller (skydc02)
$password = '@skyDc02'
$vmName = 'skydc02'
$skydc02 = New-AzureVMConfig -Name $vmName -AvailabilitySetName $availabilitySetName -ImageName $imageName -InstanceSize $instanceSize |
Add-AzureProvisioningConfig -Windows -Password $password |
Set-AzureSubnet $subnetName
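
The excerpts above only build the two VM configurations; they still have to be submitted to Windows Azure. A minimal sketch of that step is shown below, assuming a new cloud service named 'skysvc' (a placeholder) created in the 'skyag' affinity group and joined to the 'skyvn' virtual network defined earlier:

# Create the cloud service and both domain controller VMs in a single call
New-AzureVM -ServiceName 'skysvc' -AffinityGroup 'skyag' -VNetName 'skyvn' -VMs $skydc01,$skydc02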

Once you've created the servers, you will need to make them domain controllers, a process known as promotion.

Promoting a Server to a Domain Controller using DCPROMO or PowerShell

Depending on which operating system you have chosen, you can automate forest creation via the command line. In the following examples, be sure to replace DOMAIN_HERE with the desired domain name, and replace the passwords with the temporary password you assigned to the local administrator account on the first (primary) server.

Windows Server 2008 R2 – Create a new forest using DCPROMO

dcpromo.exe /unattend:C:\primaryDomainController.txt

Contents of C:\primaryDomainController.txt:

[DCInstall]
; New forest promotion
ReplicaOrNewDomain=Domain
NewDomain=Forest
NewDomainDNSName=[DOMAIN_HERE].com
ForestLevel=4
DomainNetbiosName=DOMAIN_HERE
DomainLevel=4
InstallDNS=Yes
ConfirmGc=Yes
CreateDNSDelegation=No
DatabasePath="C:\Windows\NTDS"
LogPath="C:\Windows\NTDS"
SYSVOLPath="C:\Windows\SYSVOL"
SafeModeAdminPassword=@skyDc01
RebootOnCompletion=Yes

Windows Server 2012 – Create a new forest using PowerShell

C:\primaryDomainController.ps1

Contents of C:\primaryDomainController.ps1:

Import-Module ADDSDeployment
Install-ADDSForest `
-CreateDnsDelegation:$false `
-DatabasePath "C:\Windows\NTDS" `
-DomainMode "Win2012" `
-DomainName "[DOMAIN_HERE].com" `
-DomainNetbiosName "DOMAIN_HERE" `
-ForestMode "Win2012" `
-InstallDns:$true `
-LogPath "C:\Windows\NTDS" `
-NoRebootOnCompletion:$false `
-SysvolPath "C:\Windows\SYSVOL" `
-Force:$true

Part of your homework will be to promote the second server (skydc02) as an additional domain controller in the new forest; the answer files above need only slight changes, and a rough sketch of the PowerShell route follows.
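
As a starting point (not the full answer), here is a minimal sketch of the Windows Server 2012 PowerShell route for the second server; the domain name placeholder and the credential prompt are assumptions you will need to adapt:

Import-Module ADDSDeployment
# Promote skydc02 as an additional domain controller in the existing forest;
# you will be prompted for domain administrator credentials
Install-ADDSDomainController `
-DomainName "[DOMAIN_HERE].com" `
-Credential (Get-Credential) `
-InstallDns:$true `
-DatabasePath "C:\Windows\NTDS" `
-LogPath "C:\Windows\NTDS" `
-SysvolPath "C:\Windows\SYSVOL" `
-NoRebootOnCompletion:$false `
-Force:$true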

What's Next?

Creating the rest of the servers required by your application seems like the logical next step. However, there are a handful of important tasks I like to complete before creating ANY additional virtual machines (a rough PowerShell sketch follows the list):

  1. Create domain user accounts that will be used for future system administration.
  2. Create containers for major objects such as server computer accounts.
  3. Create core group policies for significant items such as:

  • Remote Desktop Services – Enable Keep-Alives (article posted previously at Skylera)
  • User Account Control
  • Windows Firewall
  • Windows Update
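
As one illustration (not the author's exact procedure), these tasks can be scripted from one of the domain controllers using the ActiveDirectory and GroupPolicy modules; every name and password below is a placeholder:

Import-Module ActiveDirectory
Import-Module GroupPolicy

# Dedicated account for future system administration
New-ADUser -Name 'skyadmin' -SamAccountName 'skyadmin' `
-AccountPassword (ConvertTo-SecureString 'P@ssw0rd!Placeholder' -AsPlainText -Force) -Enabled $true

# Container (organizational unit) for server computer accounts
New-ADOrganizationalUnit -Name 'Servers' -Path 'DC=DOMAIN_HERE,DC=com'

# Core group policy, linked to the new organizational unit
New-GPO -Name 'Core Server Policy' | New-GPLink -Target 'OU=Servers,DC=DOMAIN_HERE,DC=com'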

Important Considerations

When creating a private forest, weigh the administrative overhead involved against the level of isolation required. For example, you may want a single forest for all pre-production environments so that you only need to perform user account tasks in one place. This is easy to do in Windows Azure.

Written by Don Noonan (Don's Blog at Skylera)

Edited by Tommy Patterson (Tommy's Blog on Virtuallycloud9.com)

Be Sure to Read Up on the Rest of the Series for 31 Days of Servers in the Cloud!

http://virtuallycloud9.com/index.php/31-days-of-servers-in-the-cloud-series/

More Stories By Tommy Patterson

Tommy Patterson began his virtualization adventure during the initial release of VMware's ESX Server. At a time when most admins were adopting virtualization as a lab-only solution, he pushed through the performance hurdles to quickly bring production applications into virtualization. Since the early 2000s, Tommy has spent most of his career in a consulting role providing assessments, engineering, planning, and implementation assistance to many members of the Fortune 500. Troubleshooting complicated scenarios and incorporating best practices into customers' production virtualization systems has been his passion for many years. Now he shares his knowledge of virtualization and cloud computing as a Technology Evangelist on the Microsoft US Developer and Platform Evangelism team.
