Automatic Static IP Addresses for Azure VMs

Azure Infrastructure

EDIT: Updated the hyperlink to GitHub; this is now published here: https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-automatic-static-ip

During a recent project, I needed to use only static IP addresses for my virtual machines, but having to manually look up the next available IP address for each one seemed counterproductive.

What’s the problem anyway

For systems that require a static IP address (like Active Directory), or systems that rely on external (non-self-updating) DNS, Azure's default behavior is problematic because a dynamic private IP address is assigned. You can absolutely assign a static IP address by specifying it in your Azure Resource Manager (ARM) template or PowerShell script:

{
  "name": "ipconfig1",
  "properties": {
    "privateIPAllocationMethod": "Static",
    "privateIPAddress": "192.168.0.4"
  }
}
However, there are a couple of issues with this:

1. I need to know the IP address ahead of time. Unlike the ASM model, where we had the Test-AzureStaticVNetIP cmdlet (a quick example follows this list), there is no ARM equivalent.

2. Azure’s DHCP system isn’t always aware that this IP address is taken.
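For reference, here's what that check looked like in the classic (ASM) world. This is just a sketch using the classic Azure PowerShell module; the VNet name and address are placeholders:

# Classic (ASM) module only -- there was no ARM equivalent at the time of writing
$check = Test-AzureStaticVNetIP -VNetName "MyVNet" -IPAddress "192.168.0.4"
$check.IsAvailable          # True if the address is free
$check.AvailableAddresses   # suggested alternatives if it isn't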

What’s the solution

I'm going to detail the approach I used to solve this, and I'd be happy to hear about others. What we can do is let the Azure Virtual Network's DHCP system allocate the IP address and then switch it over to a static IP. This is simple to do from the Azure portal, as shown in several articles such as this one, but we need a more automated approach. By using linked templates, we can create a Network Interface Card (NIC) with a dynamic IP address and then update that NIC with its own IP address, setting it to static.
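For a single machine, the same dynamic-then-static switch can be done interactively. Here's a minimal sketch with the AzureRM PowerShell module; the resource group and NIC names are hypothetical:

# Grab the NIC after the VM has been deployed with a dynamic address
$nic = Get-AzureRmNetworkInterface -Name "myVmNic" -ResourceGroupName "myResourceGroup"

# Keep whatever address DHCP handed out, but mark the allocation as static
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
Set-AzureRmNetworkInterface -NetworkInterface $nic

The linked-template approach below does essentially the same thing, but entirely within the ARM deployment.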

The ARM template to create the NIC will look as follows:

 

    {
      "apiVersion": "2015-06-15",
      "type": "Microsoft.Network/networkInterfaces",
      "name": "[variables('nicName')]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]"
      ],
      "properties": {
        "ipConfigurations": [
          {
            "name": "ipconfig1",
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "subnet": {
                "id": "[variables('SubnetRef')]"
              }
            }
          }
        ]
      }
    },
    {
      "type": "Microsoft.Resources/deployments",
      "name": "updateip",
      "apiVersion": "2015-01-01",
      "dependsOn": [
        "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
      ],
      "properties": {
        "mode": "Incremental",
        "templateLink": {
          "uri": "[variables('updateip_templateUri')]",
          "contentVersion": "1.0.0.0"
        },
        "parameters": {
          "nicName": {
            "value": "[variables('nicName')]"
          },
          "SubnetRef": {
            "value": "[variables('SubnetRef')]"
          },
          "privateIp": {
            "value": "[reference(concat('Microsoft.Network/networkInterfaces/', variables('nicName'))).ipConfigurations[0].properties.privateIPAddress]"
          }
        }
      }
    }
  ],
  "outputs": {
    "privateIp": {
      "type": "string",
      "value": "[reference(variables('nicName')).ipConfigurations[0].properties.privateIPAddress]"
    }
  }

 

I'm showing two resources. First is the NIC, created as you normally would, with a dynamic IP address. The second is a deployment resource (the linked template) that depends on the NIC. In this resource we pass the private IP address as a parameter; its value comes from the NIC resource we just created. The linked template simply declares the NIC again, but with a static IP. Here's how that looks:

 

    {
      "type": "Microsoft.Network/networkInterfaces",
      "name": "[parameters('nicName')]",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "tags": {
        "Role": "Web Server"
      },
      "dependsOn": [
      ],
      "properties": {
        "ipConfigurations": [
          {
            "name": "ipconfig1",
            "properties": {
              "privateIPAllocationMethod": "Static",
              "privateIPAddress": "[parameters('privateIp')]",
              "subnet": {
                "id": "[parameters('SubnetRef')]"
              }
            }
          }
        ]
      }
    }

Note that we are using "Static" allocation and passing the parameter for the privateIPAddress. When called, this template overwrites the original NIC properties, so we can also include other items such as tags.

Try it out

The complete example is available on GitHub. Please note, you will need to customize the parameters and also update the location of the linked template (it currently points to a non-existent location). For example, you could update it to point directly to my GitHub repository or use your own storage account.
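If you prefer to deploy from PowerShell rather than the portal, something along these lines should work with the AzureRM module (the resource group name and file paths are placeholders):

New-AzureRmResourceGroup -Name "static-ip-demo" -Location "East US"

$deployment = New-AzureRmResourceGroupDeployment -ResourceGroupName "static-ip-demo" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json"

# The template output returns the address that was allocated dynamically and then made static
$deployment.Outputs["privateIp"].Value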


Profile Import fails when running Central Administration over ssl

SharePoint

I’m documenting this issue I discovered last week because I haven’t seen it mentioned elsewhere.

The issue occurs only when using the new SharePoint Active Directory Import in SharePoint 2013 and your Central Administration site is running over SSL. When you first set up your profile sync, you have three choices under "Configure Synchronization Settings":

  1. Use SharePoint Profile Synchronization (This one leverages the built-in FIM)
  2. Use SharePoint Active Directory Import (This is the direct AD Import)
  3. Enable External Identity Provider

 

I haven't tested with #1 and #3, so this issue may not apply to those. I originally configured Central Administration to run over port 443 (the default SSL port), and my IIS bindings only have the "https" type installed. Selecting my certificate in IIS gives me the proper host header: https://spadmin.mynetwork.com.

I did test alternate ports without success; with http, any port works, whether the default port 80 or some other port like 8888. If everything is set up correctly and working, Spence Harbar (@harbars) has a great blog post which explains how to set this up: http://www.harbar.net/archive/2012/07/23/sp13adi.aspx. Refer to that post to see what healthy ULS logs look like. But what if you have a configuration similar to mine?

You’ll see something like this first:

CreateProfileImportExportId: Could not InitializeProfileImportExportProcess for end point 'direct://spadmin.mynetwork.com:443/_vti_bin/profileimportexportservice.asmx?ApplicationID=03063851-9e17-4f94-8db6-a71f7b967bd3', retries left '0': Exception 0

And then:

ActiveDirectory Import failed for ConnectionForectName 'mynetwork.com', ConnectionSynchronizationOU 'DC=mynetwork,DC=com', ConnectionUserName 'mynetwork\spupssync'. Error message: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> Microsoft.Office.Server.UserProfiles.UserProfileApplicationNotAvailableException: No User Profile Application 

The Workaround

As far as I know, there is no true fix; it's a bug, and I hope it will be addressed by an update. To get this working, you must run Central Administration without SSL. As a workaround, you can put a load balancer in front of Central Administration so that client-to-load-balancer traffic is encrypted while traffic from the load balancer to Central Administration is not.


SharePoint on Windows Azure – 10 Tips

SharePoint

Tip #1: Growing Your SharePoint Farm

So you have a bunch of servers in your farm but you need one or two more. The steps I outlined before will work with some modification. Now that you've already created your Cloud Service, you have to do things a bit differently: if you run the New-AzureVM command like before, you'll likely see the "DNS name already taken" error that I cover in Tip #2. To add additional servers to your Cloud Service, follow the guidance from the previous articles about connecting to Windows Azure, specifying your Cloud Service, storage account, and other pertinent information. Then define your new server or servers as we did before.

## Create SP App3 
$size = "Small"
$vhdname = "Arch-SPApp3.vhd" 
$vmStorageLocation = $mediaLocation + $vhdname
$spwebnew = New-AzureVMConfig -Name 'Arch-SPApp3' -AvailabilitySetName $avsetwfe -ImageName $spimage -InstanceSize $size -MediaLocation $vmStorageLocation |
    Add-AzureEndpoint -Name 'https' -LBSetName 'lbhttps' -LocalPort 443 -PublicPort 443 -Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm' |
    Add-AzureEndpoint -Name "RemoteDesktop" -Protocol TCP -PublicPort (Get-Random -Max 65000 -Min 20000) -LocalPort 3389 |
    Set-AzureSubnet $subnetName |
    Add-AzureProvisioningConfig -Password 'pass@word1' -Windows

The example I’m showing would create a new Windows server without SharePoint. Then here’s the part to really pay attention to, the New-AzureVM command:

New-AzureVM -DeploymentName "NewSharePointServer" -ServiceName $serviceName -VNetName $vnetname -VMs $spwebnew

Here, we only need to give the -ServiceName, -VNetName, and -VMs parameters (plus an optional -DeploymentName). If you put in more information, especially the -AffinityGroup, you'll get the "DNS name already taken" error.

Tip #2: DNS name already taken


If you get this, it usually means this is the second time you've run New-AzureVM for the Cloud Service. If everything is OK and you just want to add a server, see Tip #1. If something is wrong, remove the Cloud Service and try the New-AzureVM cmdlet again:

Remove-AzureService -ServiceName "ArchSharePoint" -Force

Tip #3: Finish Up

Remember to create the healthcheck file. In our examples, we specified -ProbePath '/healthcheck/iisstart.htm', so we need to go to each server that we want the Azure load balancer to use and add that file. To do so, open IIS Manager and expand the Default Web Site. Right-click the Default Web Site and select Add Virtual Directory. Give it a name (Alias) of "healthcheck" and choose a path (I used C:\Inetpub\wwwroot). Finally, copy the "iisstart.htm" from the root of the Default Web Site to the healthcheck folder, or just create an empty file called "iisstart.htm".
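If you'd rather script this than click through IIS Manager, here's a rough sketch using the WebAdministration module (run it on each web server; the physical path is just my choice):

Import-Module WebAdministration

$path = "C:\Inetpub\wwwroot\healthcheck"
New-Item -ItemType Directory -Path $path -Force | Out-Null

# Create the "healthcheck" virtual directory under the Default Web Site
New-WebVirtualDirectory -Site "Default Web Site" -Name "healthcheck" -PhysicalPath $path

# Copy the existing iisstart.htm (or create an empty one) for the probe to hit
Copy-Item "C:\Inetpub\wwwroot\iisstart.htm" -Destination $path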


Tip #4: Automate

If you're going to be doing this repeatedly (for example, to perform a bunch of tests), automate. Lucky for us, Ram Gopinathan has written a script to do just that: http://gallery.technet.microsoft.com/PowerShell-Script-for-f43bb414. Ram's script uses an input file so you can easily document what you're doing, then it provisions all your VMs. After you're done, it tears everything down, nice and neat.

Tip #5: CSUpload Error: Too many arguments for command Add-Disk

This error occurred when using Add-Disk:


The problem was the dashes. I had copied the command from Notepad, PowerGUI, and other tools, and the dash character doesn't translate correctly at the command prompt. To resolve this, just re-type the dashes.

Tip #6: CSUpload Error: CSUpload is expecting a page blob

I had copied down a vhd to make some modifications and then tried to upload it later. Here’s what I kept getting:


In fact, CloudXplorer and the other tools I tried reported the same thing. What's the fix? Use AzCopy (read up on how to get it). AzCopy is great because it can multi-thread, so my uploads were very fast. By specifying the /BlobType:page parameter, you make sure the VHD is uploaded as a page blob rather than a block blob. Here's how my command looks:

azcopy E:\ftproot\vhd http://storageacct.blob.core.windows.net/vhds/ /blobtype:page /destkey:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/xxxxxxxxxxxxxxxxxxx/== /Z /V

Tip #7: No RemoteDesktop Endpoint

I also realized that none of my VMs had an endpoint for Remote Desktop. We can add that by using a command like this:

$vmname = "Arch-SQL1"
Get-AzureVM -ServiceName "ArchSharePoint" -Name $vmname | Add-AzureEndpoint -Name "RemoteDesktop" -Protocol "TCP" -PublicPort 53223 -LocalPort 3389 | Update-AzureVM

Remember to use a different PublicPort for each VM in a particular Cloud Service. I just incremented by one (53223, 53224, etc.). Or, this should work too:

Get-AzureVM -ServiceName $servicename | ForEach-Object {
    $_ | Add-AzureEndpoint -Name $endpointname -Protocol TCP -PublicPort (Get-Random -Max 65000 -Min 20000) -LocalPort 3389 | Update-AzureVM
}

It's possible, though unlikely, that Get-Random could pick the same random number twice. I suppose you could script a check for ports that are already in use, but that wasn't worth the effort here; a quick way to list them is shown below.
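For completeness, here's a quick way to see which public ports are already taken in a Cloud Service before you pick one. This is just a sketch; $servicename is the same variable used above:

# Lists every endpoint on every VM in the Cloud Service, with public and local ports
Get-AzureVM -ServiceName $servicename |
    Get-AzureEndpoint |
    Select-Object Name, Port, LocalPort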

Tip #8: Office Web Apps Tip

I published my SharePoint farm publicly by creating an endpoint on port 443. This allows me to browse SharePoint from anywhere using https://archsharepoint.cloudapp.net. However, I couldn't use my Office Web Apps servers since they were only accessible from within Azure. Office Web Apps was configured to use https and run on port 443. I could publish an endpoint, but I had already used 443 for SharePoint (and I don't want to use an insecure port like 80). So, I picked a random port, 9443. Now, how do we get SharePoint to know about this? SharePoint will PULL information from Office Web Apps, so first we need to configure Office Web Apps with this information (do this on any Office Web Apps server):

# Set the ExternalUrl on any Office Web Apps Server
New-OfficeWebAppsFarm -ExternalUrl https://archsharepoint.cloudapp.net:9443

Then, on the SharePoint server, New-SPWopiBinding will bind to the public address on the public port (https://archsharepoint.cloudapp.net:9443). You'll also need an endpoint on the WAC server's VM for a TCP connection from public port 9443 to private port 443. A sketch of both steps follows.
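Here's a sketch of those two steps using the names from this article. The endpoint name is my own, and double-check the New-SPWopiBinding syntax for a non-default port before relying on it:

# Azure side: map public port 9443 to the WAC server's private port 443
Get-AzureVM -ServiceName "ArchSharePoint" -Name "Arch-WAC1" |
    Add-AzureEndpoint -Name "wac-https" -Protocol TCP -PublicPort 9443 -LocalPort 443 |
    Update-AzureVM

# SharePoint side: bind to the external Office Web Apps URL
New-SPWopiBinding -ServerName "archsharepoint.cloudapp.net:9443"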

Tip #9: Blocked RDP in Firewall

During testing of ports and protocols, I blocked all connections on one of my servers. Obviously, that included the RDP port. If you do that, the only real option is to download the VHD for that server, mount it on-premises somewhere (which will give you console access), fix the configuration, and then upload it back to Windows Azure. It's a tedious process, so if you don't really need the machine or its data, just re-create it.
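If you do go down that road, the outline looks roughly like this. It's only a sketch, assuming the classic Azure module plus local Hyper-V; the blob URL, VHD name, and local paths are made up, and you'd need to detach or delete the original disk before re-uploading:

# Download the locked-out server's VHD
Save-AzureVhd -Source "https://wahidstore.blob.core.windows.net/vhds/Arch-Blocked.vhd" `
    -LocalFilePath "D:\repair\Arch-Blocked.vhd"

# Attach it to a local Hyper-V VM to get console access, then fix the firewall rules
New-VM -Name "RepairVM" -MemoryStartupBytes 2GB -VHDPath "D:\repair\Arch-Blocked.vhd"

# Shut the repair VM down, then push the fixed VHD back up
Add-AzureVhd -Destination "https://wahidstore.blob.core.windows.net/vhds/Arch-Blocked.vhd" `
    -LocalFilePath "D:\repair\Arch-Blocked.vhd" -OverWrite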

Tip #10: Reduce costs

UPDATE: 6/10/2013

As of June 3rd, we are no longer billed for VMs in a stopped state, great news! See Scott Guthrie’s Blog for more info.

If you're testing scenarios over a period of weeks or months and you don't necessarily use your VMs every day, you can cut some costs. For some of my work, I might go a week or two without ever logging in to my SharePoint farms in Windows Azure. All the while, though, you're being charged for compute hours. The only way to stop that is to delete your VMs; powering them down doesn't cut your costs, because you're still charged for compute hours. You can use Tip #4 to do a complete "tear down," or use some PowerShell snippets to do a partial tear down. I'll show you an example that should be pretty easy to follow. First, we'll back up our VM configs:

#Region Func: BackupVMConfigs
function BackupVMConfigs () 
{
	param
    (
        [Parameter(Mandatory=$True)][ValidateNotNullOrEmpty()]
        [String]$CloudService,
        [Parameter(Mandatory=$True)][ValidateNotNullOrEmpty()]
        [String]$Folder
	)
If (Get-AzureVM -ServiceName $cloudservice) 
	{
		Write-Host "Backing up SharePoint VM Configurations..."
		Try {
			Get-AzureVM -ServiceName $cloudservice | foreach {
			$path = $folder + $_.Name + '.xml'
			Export-AzureVM -ServiceName $cloudservice -Name $_.Name -Path $path 
			}
			}
		Catch 
			{
			Write-Output $_
			Throw " - Error occurred. Not continuing since VM configuration may not have been saved and we don't want to delete them."
			}	
	}
Else { Write-Host " - Skipping $cloudservice because it doesn't seem to exist." -ForegroundColor Yellow}
Write-Host "- Done backing up. Removing VM's from Cloud Service..." -ForegroundColor Yellow
}
#EndRegion

To use that function, you'd type BackupVMConfigs -CloudService "<cloud service name>" -Folder "<folder path>". I've used that function in the next snippet, where we actually remove the Cloud Service deployment:

#Region Func: Remove SP Deployment
function RemoveSP {
$CloudService = "ArchSharePoint"
BackupVMConfigs -CloudService $CloudService -Folder "C:\Scripts\Azure\SharePoint\"
#Removing all VMs while keeping the cloud service/DNS name
Remove-AzureDeployment -ServiceName $cloudservice -Slot Production -Force
Write-Host "Removed all SharePoint VMs"
}
#EndRegion

Once you run the "RemoveSP" function, you're no longer incurring compute charges. The actual VHDs are still there in your Windows Azure storage account, though. The charges for storage are very small, almost insignificant, so we keep those. Later, when you're ready to fire up your farm, you can do the inverse. First, I have a function to do the actual import of the VMs:

#Region Func: ImportVMs ($CloudService, $Folder)
function ImportVMs ()
{
	param
    (
        [Parameter(Mandatory=$True)][ValidateNotNullOrEmpty()]
        [String]$CloudService,
        [Parameter(Mandatory=$True)]
		[ValidatePattern('\\$')]
		[ValidateScript({Test-Path $_ -PathType 'Container'})] 
        [String]$Folder
	)
$cs = Get-AzureVM -ServiceName $cloudservice
If ($cs.Count -ge 1) 
	{
	Write-Host "It looks like (at least some) of the VMs are up. Skipping restore..." -ForegroundColor Green
	return
	}
Try 
	{
	Write-Host " - Importing VMs..." -ForegroundColor Yellow
	$script:vms = @()
	Get-ChildItem $folder | foreach {
		$path = $folder + $_
		$script:vms += Import-AzureVM -Path $path
		}
	}
Catch
	{
	Write-Output $_
	Throw " - Error occurred."
	}	
}
#EndRegion

And here’s where I’m using it:

#Region Restore SharePoint Deployment
function RestoreSharePoint 
{
$cloudservice = "ArchSharePoint"
$folder = "C:\Scripts\Azure\SharePoint\" #must include trailing \
ImportVMs -CloudService $CloudService -Folder $Folder
Write-Host " - Restoring takes about 30 minutes..." -ForegroundColor Yellow
Measure-Command {New-AzureVM -ServiceName $cloudservice -VNetName "vNet-Arch" -VMs $vms}
}
<# 
#This is used if the Cloud Service was deleted
## Cloud Service Parameters
$serviceLabel = $cloudservice
$serviceDesc = "Architecture SharePoint Farm" 
$ag = 'AG-SharePoint-Arch'
Measure-Command {New-AzureVM -ServiceName $cloudservice -ServiceLabel $serviceLabel -ServiceDescription $serviceDesc -AffinityGroup $ag -VNetName $vNet -VMs $vms}
#>
#EndRegion

Running the “RestoreSharePoint” function will call the ImportVMs function with the right parameters and attach your VHDs to some shiny new VMs based on the backups you took before.

Conclusion

That's the end of our series. I hope I've achieved my goal of consolidating all the information needed to understand and set up SharePoint farms on Windows Azure IaaS. If you haven't started playing around yet, go back to the introduction to learn how to sign up for a free account. A big thanks to my colleagues who helped me throughout the series: Paul Stubbs, Len Cardinal, Kosma Zygouras, Shane Laney.


SharePoint on Windows Azure – Part 4: SharePoint Farm

SharePoint
This entry is part 4 of 4 in the series SharePoint on Windows Azure

Let's review the previous articles in this series:

Part 1: Introduction – We showed how to set up tools to interact with Windows Azure.

Part 2: Storage – We showed how to set up Windows Azure storage, how to upload VHDs, and some tools to manage storage.

Part 3: Networking – We showed how virtual networking works, the advantages of using Cloud Services and how to setup the first virtual machine.

Endpoints

Let's talk about endpoints, load balancing and probes for a minute since we'll use these concepts a bit later on. By default, when you create a VM, an endpoint is automatically created. This is what allows you to use RDP (Remote Desktop Protocol) to connect to your VM; you can see it listed on the VM's Endpoints page in the management portal.

Windows Azure assigns a random high port number for the public port that maps to the RDP port (3389). If you're familiar with NAT in networking, this is similar in concept. Note that this endpoint is not load balanced.

So, how do we use this for our SharePoint farm? To get a load-balanced port that's publicly accessible, we simply define the same local port and public port for each VM. For example, we can open port 80 (public) and map it to port 80 (private) for our SharePoint web servers. If we do this a second time for another VM, Windows Azure recognizes that we want this port to be load balanced across all the virtual machines we've specified.

Let’s also take a minute to talk about probes. A probe is a method used to check the health of a server, especially in a load balanced set. For example, you may have 3 SharePoint Web Servers but one of them is down. The probe can detect this and prevent users from hitting that server. This is also useful in cases where you just don’t want users accessing one of the servers (to apply updates, let’s say).

Following the sample from Paul Stubbs, we’ll use this snippet:

Add-AzureEndpoint -Name 'http' -LBSetName 'lbhttp' -LocalPort 80 -PublicPort 80 `
-Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm'

In this example, we're asking Windows Azure to use the "http" protocol on port 80 and target the iisstart.htm page. For this to work, the machine must have a page at that path that responds. If not, the probe will fail and users won't be routed to that machine. For our scenario, we'll keep the IIS "Default Web Site" to use for probing purposes.

SharePoint Servers

Now, it gets much easier. We’re going to provision the rest of the SharePoint Farm. Once again, let’s do a quick connection test. Open Windows Azure PowerShell and type in the following:

Get-AzureSubscription | Select SubscriptionName, CurrentStorageAccount

Hopefully you can still connect. Next, we’ll assign some values to our account variables as we did before. Remember to use the values you got when you ran the last command.

# your imported subscription name
$subscriptionName = "Windows Azure Internal Consumption"
$storageAccount = "wahidstore"
Select-AzureSubscription $subscriptionName
Set-AzureSubscription $subscriptionName -CurrentStorageAccount $storageAccount

Now, we're going to specify parameters for a new Cloud Service. In my previous example, I used CS-ArchSP2013; for this one, I'm going to call it ArchSharePoint. Use any name that's available. To test whether a name is available, use Test-AzureName -Service '<your chosen name>'.

# Cloud Service Parameters
$serviceName = "ArchSharePoint"
$serviceLabel = "ArchSharePoint"
$serviceDesc = "Cloud Service for SharePoint Farm"

Now, I'll specify the networking parameters using the same $vnetname, $ag, and $primaryDNS as before, plus the $subnetName I created earlier:

# Network Parameters
$vnetname = 'vNet-Arch'
$subnetName = 'SharePoint'
$ag = 'AG-SharePoint-Arch'
$primaryDNS = '192.168.1.4'

If you’re creating a brand new farm, rather than using uploaded images, specify these two parameters too:

# VM image names taken from: Get-AzureVMImage | select Label, ImageName | fl
$spimage = 'MSFT__Win2K8R2SP1-120612-1520-121206-01-en-us-30GB.vhd'
$sqlimage = 'MSFT__Sql-Server-11EVAL-11.0.2215.0-05152012-en-us-30GB.vhd'

Next, we’ll specify a few availability sets. Think of availability sets as a way to configure high availability. We’re telling Windows Azure that VMs in an availability set have the same role. This instructs Windows Azure to put these VMs on different racks and/or different servers to avoid any single points of failure.

# Availability Sets
$avsetwfe = 'avset-arch-web'
$avsetapps = 'avset-arch-apps'
$avsetsql = 'avset-arch-sql'

Specify a location for these VMs. I’ve created a folder under my “vhds” container called “Arch” and I want these VMs to go there.


# MediaLocation
$mediaLocation = "http://wahidstore.blob.core.windows.net/vhds/Arch/"

The next thing we're going to specify is the VM configuration. Since there's a lot going on in the next set of PowerShell scripts, let me explain. The $size variable is self-explanatory; I'm going to use "Large", which gives me 4 cores and 7 GB of RAM. The recommendation here is to use smaller VMs that you can scale out (by adding more) rather than Extra Large VMs. You pay for compute hours based on the size, so several smaller VMs are often better than fewer larger ones.

For $vmStorageLocation, we're going to specify the $mediaLocation above, plus a name for the VHD. If you uploaded VMs, use the name of the VHD you uploaded.

I'm going to skip it below, but you can use the following command to automate joining the domain by piping it after $vmStorageLocation:

$vmStorageLocation | 
Add-AzureProvisioningConfig -WindowsDomain -Password $dompwd `
-Domain $domain -DomainUserName $domuser -DomainPassword $dompwd `
-MachineObjectOU $advmou -JoinDomain $joindom

So, let’s define our VM configuration:

## Create SP Web1
$size = "Large"
$vmStorageLocation = $mediaLocation + "Arch-SPWeb1.vhd"
$spweb1 = New-AzureVMConfig -Name 'Arch-SPWeb1' -AvailabilitySetName $avsetwfe `
    -DiskName $vmStorageLocation -InstanceSize $size |
    Add-AzureEndpoint -Name 'https' -LBSetName 'lbhttps' -LocalPort 443 -PublicPort 443 `
    -Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm' |
    Set-AzureSubnet $subnetName

Note: If you're creating new VMs rather than using uploaded VMs, make the following changes:

  • Change -DiskName $vmStorageLocation to -ImageName $spimage (or $sqlimage for the SQL servers).

  • After -InstanceSize $size, add -MediaLocation $vmStorageLocation.

  • After Set-AzureSubnet $subnetName, add the following line:

Add-AzureProvisioningConfig -Password 'pass@word1' -Windows

We can copy the above set of commands as a template to build out our farm. In my case, I’ve got another SharePoint Web Server, a SharePoint App Server, and an Office Web Apps server.

## Create SP Web2
$size = "Large"
$vmStorageLocation = $mediaLocation + "Arch-SPWeb2.vhd"
$spweb2 = New-AzureVMConfig -Name 'Arch-SPWeb2' -AvailabilitySetName $avsetwfe `
    -DiskName $vmStorageLocation -InstanceSize $size |
    Add-AzureEndpoint -Name 'https' -LBSetName 'lbhttps' -LocalPort 443 -PublicPort 443 `
    -Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm' |
    Set-AzureSubnet $subnetName

## Create SP App1
$size = "Large"
$vmStorageLocation = $mediaLocation + "Arch-SPApp1.vhd"
$spapp1 = New-AzureVMConfig -Name 'Arch-SPApp1' -AvailabilitySetName $avsetwfe `
    -DiskName $vmStorageLocation -InstanceSize $size |
    Add-AzureEndpoint -Name 'https' -LBSetName 'lbhttps' -LocalPort 443 -PublicPort 443 `
    -Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm' |
    Set-AzureSubnet $subnetName

## Create WAC
$size = "Large"
$vmStorageLocation = $mediaLocation + "Arch-WAC1.vhd"
$wac1 = New-AzureVMConfig -Name 'Arch-WAC1' -AvailabilitySetName $avsetwfe `
    -DiskName $vmStorageLocation -InstanceSize $size |
    Add-AzureEndpoint -Name 'https' -LBSetName 'lbhttps' -LocalPort 443 -PublicPort 443 `
    -Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm' |
    Set-AzureSubnet $subnetName

Similarly, I’m going to specify the configuration for the SQL Server. There’s not much different here except that I don’t need to create an endpoint. However, I do want to create a data disk. Refer to Part 2 on storage for an explanation:

## Create SQL Server1 
$size = "Large"
$vmStorageLocation = $mediaLocation + "Arch-SQL1.vhd"
$spsql1 = New-AzureVMConfig -Name 'Arch-SQL1' -AvailabilitySetName $avsetsql `
    -DiskName $vmStorageLocation -InstanceSize $size |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel 'data' -LUN 0 -HostCaching "None" |
    Set-AzureSubnet $subnetName

Now, DOUBLE-CHECK everything before you move forward; undoing this configuration can be complicated. We have just created our configurations, so nothing has actually happened yet. That's what the following commands do:

$dns1 = New-AzureDns -Name 'DNS1' -IPAddress $primaryDNS
New-AzureVM -ServiceName $serviceName -ServiceLabel $serviceLabel `
-ServiceDescription $serviceDesc `
-AffinityGroup $ag -VNetName $vnetname -DnsSettings $dns1 `
-VMs $spweb1,$spweb2,$spapp1,$wac1, $spsql1

Hit Enter and away it goes. If you run into any problems, I'll try to cover the ones I've encountered in the next article. If everything goes smoothly, the deployment will work through each VM in turn. This will take a few minutes, so sit back and enjoy some coffee. In my demo, this was taking a while on the second VM and I really wanted to hit CTRL+C. DON'T. Just be patient.
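Once it finishes, a quick sanity check with the same $serviceName variable will show each VM's status and internal IP address:

# Each VM should eventually show ReadyRole and an address from the SharePoint subnet
Get-AzureVM -ServiceName $serviceName |
    Select-Object Name, InstanceSize, InstanceStatus, IpAddress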


SharePoint on Windows Azure – Part 3: Networking

SharePoint
This entry is part 3 of 4 in the series SharePoint on Windows Azure

By now, I hope you’ve read Part 1: Introduction to get everything set up and ready.

Much of the documentation on Windows Azure is confusing when it comes to networking. There are a few reasons for this:

  • Networking, and specifically virtual networks, was released later in the lifecycle.
  • Since Windows Azure has its beginnings in PaaS, the PaaS concepts don't always map cleanly to IaaS.
  • Terminology changed; for example, "Cloud Service" was first called "Hosted Service," and some APIs still use the old term.

Networking Definitions

If you keep those things in mind, and pay attention to the date of publication for whatever article you’re reading, it should help you understand what’s being written about. In this article, I’m going to highlight just a few networking concepts as they relate to IaaS and then we’ll start creating our network.

There are 3 definitions we need to know to start:

Virtual Private Network (VPN) – A VPN creates a private network that operates like a local network but spans greater distances. We’re not going to create a VPN for our scenarios, however you could do so to protect your Windows Azure virtual machines from external access.

Virtual Network – A virtual network in Windows Azure is a container where you define the IP address ranges your virtual machines may use. Windows Azure uses infinite-lease DHCP addresses and you can’t assign static IP addresses.

  • By defining your IP address ranges in a virtual network, you can control which IPs are assigned to your virtual machines.
  • This is also where you can define a DNS server. Any virtual machine in a specific virtual network will be automatically assigned the DNS server specified.
  • Finally, you must assign an Affinity Group to your virtual network. This way, all VMs in your virtual network can be hosted closely together (i.e., same datacenter).

Cloud Service – Formerly called Hosted Service. A Cloud Service comes from the PaaS world, but allow me to describe how it's useful in the IaaS world. A Cloud Service is also another container. You can assign a subnet to a Cloud Service, which gives you better control over how your IP addresses are assigned. We'll use this concept later for our domain controllers. Cloud Services have other benefits as well:

  • Separation of "roles," which allows you to start up one Cloud Service before others. For example, our domain controllers will be in their own Cloud Service, and we want this one to start up first since everything else depends on it.
  • Separation of “roles” also permits us to have different external entry points. For example, we’ll open up ports 80 and 443 for SharePoint but we don’t want these open for our domain controllers.
  • By creating two or more endpoints within a Cloud Service that have the same port number, Windows Azure recognizes a need for a load balancer and automatically load balances those endpoints.
  • You have the ability to export and import configurations of a Cloud Service. We’ll look at this in more detail later and why it’s so important.
  • Virtual Machines within a Cloud Service can communicate freely over any port and protocol. Cloud Services within a Virtual Network can also communicate freely over any port and protocol.

So with that, I hope I've convinced you to use Virtual Networks and Cloud Services wisely to design your network and SharePoint farms. Without these concepts, you could have problems with farms communicating with each other, DNS issues, security issues from exposing too many endpoints, and a complicated load-balancing setup.

Paul Stubbs (@paulstubbs) was the source again for most of this information and I’ve added a link to his blog in the resources section.

Virtual Network

We want to create a virtual network and an associated Affinity Group. We'll do this using PowerShell because it's easier, repeatable, and it documents what we're doing. First, we need to create an XML-based configuration file. Use your favorite text editor (mine is Notepad++) and create a new file with the following contents:

<NetworkConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
  <VirtualNetworkConfiguration>
    <Dns>
      <DnsServers>
        <DnsServer name="DNS1" IPAddress="192.168.1.4" />
      </DnsServers>
    </Dns>
    <VirtualNetworkSites>
      <VirtualNetworkSite name="vNet-Arch" AffinityGroup="AG-Arch-SharePoint">
        <AddressSpace>
          <AddressPrefix>192.168.0.0/16</AddressPrefix>
        </AddressSpace>
        <Subnets>
          <Subnet name="DomainControllers">
            <AddressPrefix>192.168.1.0/29</AddressPrefix>
          </Subnet>
          <Subnet name="SharePoint">
            <AddressPrefix>192.168.2.0/24</AddressPrefix>
          </Subnet>
        </Subnets>
        <DnsServersRef>
          <DnsServerRef name="DNS1" />
        </DnsServersRef>
      </VirtualNetworkSite>
    </VirtualNetworkSites>
  </VirtualNetworkConfiguration>
</NetworkConfiguration>

Name this file SharePointFarmVNET.xml. Feel free to change the network range and subnets to your desire. However, keep the following things in mind:

  • By default, Windows Azure assigns the first VM an x.x.x.4 address. We’ll spin up our domain controller first, which will be our DNS server so it will have that .4 address. You should probably keep this as is.
  • We want our domain controllers to have a small IP range. In CIDR notation, a /30 gives you just 2 usable host addresses but Windows Azure doesn’t allow this. The next smallest is a /29, which gives 6 usable host addresses so that’s what we’ll use.
  • You can add as many DNS addresses as you want.
  • All the Subnets must be part of the overall Address Prefix. Search for "subnet calc" online to find calculators if you want more specific IP address ranges, and read up on "CIDR notation" (a quick sanity check of the math follows this list).
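The usual back-of-the-envelope formula is 2^(32 - prefix length) - 2 usable hosts per subnet; here's a one-liner to check it:

# A /29 leaves 2^(32-29) - 2 = 6 usable host addresses
$prefixLength = 29
[math]::Pow(2, 32 - $prefixLength) - 2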

Now that we’ve defined our network, let’s create it. First, let’s do a quick connection test. Open Windows Azure PowerShell and type in the following:

Get-AzureSubscription | Select SubscriptionName, CurrentStorageAccount 

Next, we'll assign those values to some variables. I'm going to use my values; be sure to substitute your own.

$subscriptionName = "Windows Azure Wahid"
$storageAccount = "wahidstore" 
Select-AzureSubscription $subscriptionName 
Set-AzureSubscription $subscriptionName -CurrentStorageAccount $storageAccount

Next, we need to pick a location for our Affinity Group. To see a list, use:

Get-AzureLocation | Select Name 

Select one and specify it below for $AGLocation. The Affinity Group is one of the most important decisions you'll make; everything will be tied to it. I made a mistake once and chose the wrong Affinity Group, and to change it I had to delete my virtual network, storage account, VMs, and a bunch of other things. The Affinity Group you choose here must be in the same datacenter as your disks (storage account).

# Affinity Group parameters 
$AGLocation = "East US"
$AGDesc = "SharePoint 2013 Architecture Affinity Group"
$AGName = "AG-Arch-SharePoint" 
$AGLabel = "AG-Arch-SharePoint" 

Now we’ll create the Affinity Group:

# Create a new Affinity Group 
New-AzureAffinityGroup -Location $AGLocation -Description $AGDesc ` 
-Name $AGName -Label $AGLabel 

We shouldn’t have a virtual network configuration but if you do and want to clear it, use:

Remove-AzureVNetConfig -ErrorAction SilentlyContinue 

Finally, we apply the network configuration:

# Apply new network. Either use the full path or run PowerShell from the location of the XML file.
$configPath = "C:\Scripts\Azure\SharePointFarmVNET.xml" 
Set-AzureVNetConfig -ConfigurationPath $configPath

That’s it, we’ve created an Affinity Group and virtual network. To verify and check our results, type in:

Get-AzureVNetConfig | Select -ExpandProperty XMLConfiguration 

Domain Controllers

Next, we’ll create our Cloud Service container and domain controller. Most of this should be self-explanatory.

  • If you’ve uploaded your vhds, just follow along.
  • If you're creating brand new vhds, pay special attention to the notes.

For $diskname, put in the value of the disk that has been uploaded. And for $subnet, $vnetname, and $ag, use the values from the previous steps, when you created them.

Remember, csupload adds the disks to the repository for you but CloudXPlorer or other tools may not. First, check to see if the disk is in the repository by typing:

Get-AzureDisk | select diskname 

If it’s not there, you’ll have to add it now. For example,

Add-AzureDisk -DiskName "Arch-DC-1.vhd" -MediaLocation "https://wahidstore.blob.core.windows.net/vhds/Arch-DC-1.vhd" -OS Windows 

Let’s continue. The commands below setup the variables, create a VM configuration and setup the Cloud Service variable.

Note: If you don’t have an existing vhd in Azure and just want to create a new one, comment out the $diskname and uncomment the last 3 lines.

## Domain Controller 1 Parameters
$vmName = 'Arch-DC-1'
$diskName = "Arch-DC-1.vhd"
$size = "ExtraSmall"
$deploymentName = "SP2013-DC1-Deployment"
$deploymentlabel = "SharePoint 2013 DC1 Deployment"
$subnet = "DomainControllers"
#$imageName = 'MSFT__Win2K8R2SP1-120612-1520-121206-01-en-us-30GB.vhd'
#$vmStorageLocation = "http://wahidstore.blob.core.windows.net/vhds/Arch-DC-1.vhd"
#$password = "pass@word1"

Now we create the VM configuration.

Note: If you’re creating a new vhd instead of using one that was uploaded, uncomment and use the second set of commands instead of the one that follows.

## Create VM Configuration if using uploaded vhd
$dc1 = New-AzureVMConfig -Name $vmName -InstanceSize $size -DiskName $diskName | Add-AzureEndpoint -Name 'RemoteDesktop' -LocalPort 3389 -PublicPort (Get-Random -Min 10000 -Max 65000) -Protocol tcp
Set-AzureSubnet -SubnetNames $subnet -VM $dc1

I found that when creating VMs this way, no endpoints are created. If you want to be able to use Remote Desktop to administer the machine, you need to add the endpoint like I’ve done above (using Add-AzureEndpoint).

## Create VM Configuration if creating new VM
#$dc1 = New-AzureVMConfig -Name $vmName -InstanceSize $size -ImageName $imageName -MediaLocation $vmStorageLocation 
#Add-AzureProvisioningConfig -Windows -Password $password -VM $dc1
#Set-AzureSubnet -SubnetNames $subnet -VM $dc1 

The rest is the same whether you're attaching an existing vhd or creating from an image:

## Cloud Service Parameters
$serviceName = "ArchDC"
$serviceLabel = "ArchDC"
$serviceDesc = "Architecture Domain Controllers"
$vnetname = 'vNet-Arch'
$ag = 'AG-Arch-SharePoint'

When we do the next part, New-AzureVM, it will first create the Cloud Service for us and then create the VM using the disk we specified.

## Create the VM(s)
New-AzureVM -ServiceName $serviceName -ServiceLabel $serviceLabel -ServiceDescription $serviceDesc -AffinityGroup $ag -VNetName $vnetname -VMs $dc1


So let's wrap up. We've created a virtual network with two subnets, created a Cloud Service in that virtual network, and created a virtual machine in that Cloud Service.

What’s next? We’ll create our SharePoint Farm, including the SQL Servers.

Resources

Paul Stubbs Blog: http://blogs.msdn.com/b/pstubbs


				