Infamous “Object reference not set to an instance of an object” for Azure Disk Encryption

Azure Infrastructure

I’m working on encrypting a RedHat 7.2 VM that uses Managed Disks. Keep in mind that to work with Managed Disks in PowerShell, you should upgrade to the latest AzureRM module (version 3.7.0 as of this writing). The command to start the encryption process is the same for Windows as it is for Linux:

Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName $resourceGroupName -VMName $vmNameForEncryption `
-AadClientID $aadClientID -AadClientSecret $aadClientSecret -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl `
-DiskEncryptionKeyVaultId $keyVaultResourceId -VolumeType OS
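Before running this, it’s worth confirming which AzureRM version you actually have. A quick sketch (assumes the module was installed via PowerShellGet):

```powershell
# List the installed AzureRM module versions (needs 3.7.0 or later for Managed Disks)
Get-Module -ListAvailable -Name AzureRM | Select-Object Name, Version

# Upgrade if you're on an older version (requires PowerShellGet)
Install-Module -Name AzureRM -Force
```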

However, when executing this command for a Linux VM which uses Managed Disks, it fails:

Set-AzureRmVMDiskEncryptionExtension : Object reference not set to an instance of an object.
At line:1 char:1
+ Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName $resourceGrou ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~



Automatic Static IP Addresses for Azure VMs

Azure Infrastructure

EDIT: Updated the hyperlink to GitHub; this is now published here: https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-automatic-static-ip

During a recent project, I had a need to use only static IP addresses for my virtual machines. However, having to look up the next available IP address seemed counter-intuitive.

What’s the problem anyway

For systems that require a static IP address (like Active Directory), or systems that rely on external (non-self-updating) DNS, Azure’s default behavior is problematic because a dynamic private IP address is assigned. You can absolutely assign a static IP address by specifying it in your Azure Resource Manager (ARM) template or PowerShell script:

{
  "name": "ipconfig1",
  "properties": {
    "privateIPAllocationMethod": "Static",
    "privateIPAddress": "192.168.0.4"
  }
}

However, there are a couple of issues with this:

1. I need to know the IP address ahead of time. Unlike the ASM model, which had the Test-AzureStaticVNetIP cmdlet, there is no ARM equivalent.

2. Azure’s DHCP system isn’t always aware that this IP address is taken.

What’s the solution

I’m going to detail the approach I used to solve this; I’d be happy to hear about other approaches. What we can do is let the Azure Virtual Network’s DHCP system allocate the IP address and then switch it over to a static IP. This is simple to do from the Azure portal, as shown in several articles such as this one. However, we need a more automated approach. By using linked templates, we can create a Network Interface Card (NIC) with a dynamic IP address and then update that NIC with its own IP address, setting it to static.

The ARM template to create the NIC will look as follows:

 

    {
      "apiVersion": "2015-06-15",
      "type": "Microsoft.Network/networkInterfaces",
      "name": "[variables('nicName')]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]"
      ],
      "properties": {
        "ipConfigurations": [
          {
            "name": "ipconfig1",
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "subnet": {
                "id": "[variables('SubnetRef')]"
              }
            }
          }
        ]
      }
    },
    {
      "type": "Microsoft.Resources/deployments",
      "name": "[concat('updateip')]",
      "apiVersion": "2015-01-01",
      "dependsOn": [
        "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
      ],
      "properties": {
        "mode": "Incremental",
        "templateLink": {
          "uri": "[variables('updateip_templateUri')]",
          "contentVersion": "1.0.0.0"
        },
        "parameters": {
          "nicName": {
            "value": "[variables('nicName')]"
          },
          "SubnetRef": {
            "value": "[variables('SubnetRef')]"
          },
          "privateIp": {
            "value": "[reference(concat('Microsoft.Network/networkInterfaces/', variables('nicName'))).ipConfigurations[0].properties.privateIPAddress]"
          }
        }
      }
    }
  ],
  "outputs": {
    "privateIp": {
      "type": "string",
      "value": "[reference(variables('nicName')).ipConfigurations[0].properties.privateIPAddress]"
    }
  }

 

I’m showing two resources. The first is the NIC, created as you normally would with a dynamic IP address. The second is a deployment (linked template) that depends on the creation of the NIC. In this resource we pass the private IP address as a parameter; its value comes from the existing NIC resource. The linked template simply specifies the creation of the NIC again, but with a static IP. Here’s how that looks:

 

    {
      "type": "Microsoft.Network/networkInterfaces",
      "name": "[parameters('nicName')]",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "tags": {
        "Role": "Web Server"
      },
      "dependsOn": [
      ],
      "properties": {
        "ipConfigurations": [
          {
            "name": "ipconfig1",
            "properties": {
              "privateIPAllocationMethod": "Static",
              "privateIPAddress": "[parameters('privateIp')]",
              "subnet": {
                "id": "[parameters('SubnetRef')]"
              }
            }
          }
        ]
      }
    }

Note that we are using “Static” allocation and supplying the privateIPAddress parameter. When called, this template overwrites the original NIC properties, so we can also include other items such as tags.
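As an aside, the same dynamic-to-static flip can be done imperatively with PowerShell once the NIC exists; a sketch (the NIC and resource group names here are placeholders, not from the template above):

```powershell
# Fetch the NIC after its dynamic IP has been allocated
$nic = Get-AzureRmNetworkInterface -Name "myNic" -ResourceGroupName "myResourceGroup"

# Flip the allocation method; the address already assigned to the ipconfig is retained
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
Set-AzureRmNetworkInterface -NetworkInterface $nic
```

This is handy for one-off fixes, while the linked-template approach keeps the behavior inside a single declarative deployment.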

Try it out

The complete example is available on GitHub. Please note, you will need to customize the parameters and also update the location of the linked template (it currently points to a non-existent location). For example, you could update it to point directly to my GitHub repository or use your own storage account.


Profile Import fails when running Central Administration over ssl

SharePoint

I’m documenting this issue I discovered last week because I haven’t seen it mentioned elsewhere.

The issue occurs only when using the new SharePoint Active Directory Import in SharePoint 2013 while your Central Administration site is running over SSL. When you first set up your profile sync, you have three choices under “Configure Synchronization Settings”:

  1. Use SharePoint Profile Synchronization (This one leverages the built-in FIM)
  2. Use SharePoint Active Directory Import (This is the direct AD Import)
  3. Enable External Identity Provider

 

I haven’t tested with #1 and #3, so this issue may not apply to those. I originally configured Central Administration to run over port 443 (the default SSL port), and my IIS bindings have only the “https” type installed. Selecting my certificate in IIS gives me the proper host header: https://spadmin.mynetwork.com.

I did test alternate SSL ports, without success. With http, any port works, whether the default port 80 or some other port like 8888. If everything is set up correctly and working, Spence Harbar (@harbars) has a great blog post which explains how to set this up: http://www.harbar.net/archive/2012/07/23/sp13adi.aspx. Refer to that post to see what healthy ULS logs look like. But what if you have a configuration similar to mine?

You’ll see something like this first:

CreateProfileImportExportId: Could not InitializeProfileImportExportProcess for end point 'direct://spadmin.mynetwork.com:443/_vti_bin/profileimportexportservice.asmx?ApplicationID=03063851-9e17-4f94-8db6-a71f7b967bd3', retries left '0': Exception 0

And then:

ActiveDirectory Import failed for ConnectionForectName 'mynetwork.com', ConnectionSynchronizationOU 'DC=mynetwork,DC=com', ConnectionUserName 'mynetwork\spupssync'. Error message: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> Microsoft.Office.Server.UserProfiles.UserProfileApplicationNotAvailableException: No User Profile Application 

The Workaround

As far as I know, there is no true fix; it’s a bug, and I hope it will be addressed by an update. To get this working, you must run Central Administration without SSL. As a workaround, though, you can put a load balancer in front of Central Administration so that client-to-load-balancer communications are encrypted while traffic from the load balancer to Central Administration is unencrypted.


SharePoint on Windows Azure – 10 Tips

SharePoint

Tip #1: Growing Your SharePoint Farm

So you have a bunch of servers in your farm, but you need one or two more. The steps I outlined before will work with some modification. Now that you’ve already created your Cloud Service, you have to do things a bit differently. If you run the New-AzureVM command as before, you’ll likely see the "DNS name already taken" error that I’ve highlighted further down. To add servers to your Cloud Service, follow the guidance from the previous articles about connecting to Windows Azure, specifying your Cloud Service, storage account, and other pertinent information. Then define your new server or servers as we did before.

## Create SP App3
$size = "Small"
$vhdname = "Arch-SPApp3.vhd"
$vmStorageLocation = $mediaLocation + $vhdname
$spwebnew = New-AzureVMConfig -Name 'Arch-SPApp3' -AvailabilitySetName $avsetwfe -ImageName $spimage -InstanceSize $size -MediaLocation $vmStorageLocation |
    Add-AzureEndpoint -Name 'https' -LBSetName 'lbhttps' -LocalPort 443 -PublicPort 443 -Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm' |
    Add-AzureEndpoint -Name "RemoteDesktop" -Protocol TCP -PublicPort (Get-Random -max 65000 -min 20000) -LocalPort 3389 |
    Set-AzureSubnet $subnetName |
    Add-AzureProvisioningConfig -Password 'pass@word1' -Windows

The example I’m showing would create a new Windows server without SharePoint. Then here’s the part to really pay attention to, the New-AzureVM command:

New-AzureVM -DeploymentName "NewSharePointServer" -ServiceName $serviceName -VNetName $vnetname -VMs $spwebnew

Here, we only need to give the -ServiceName, -VNetName, and -VMs parameters. If you put in more information, especially the AffinityGroup, you’ll get the DNS already taken error.

Tip #2: DNS name already taken


If you get this, it usually means this is the second time you’ve run the New-AzureVM for the Cloud Service. If everything is OK and you just want to add a server, see tip #1. If something is wrong, remove the Cloud Service and try it (the New-AzureVM cmdlets) again:

Remove-AzureService -ServiceName "ArchSharePoint" -Force

Tip #3: Finish Up

Remember to create the healthcheck file. In our examples, we specified -ProbePath '/healthcheck/iisstart.htm'. So we need to go to each server that we want the Azure load balancer to use and add that file. To do so, open IIS Manager and expand the Default Web Site. Right-click the Default Web Site and select Add Virtual Directory. Give it a name (Alias) of “healthcheck” and choose a path (I used C:\Inetpub\wwwroot). Finally, copy the “iisstart.htm” from the root of the Default Web Site to the healthcheck folder, or just create an empty file called “iisstart.htm”. Here’s how it looks when it’s finished.
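If you’d rather script those IIS steps than click through the Manager, something like this should do it (a sketch using the WebAdministration module; paths assume the defaults mentioned above):

```powershell
Import-Module WebAdministration

# Create the folder and the probe file
New-Item -Path "C:\Inetpub\wwwroot\healthcheck" -ItemType Directory -Force | Out-Null
Copy-Item "C:\Inetpub\wwwroot\iisstart.htm" "C:\Inetpub\wwwroot\healthcheck\iisstart.htm"

# Add the "healthcheck" virtual directory under the Default Web Site
New-WebVirtualDirectory -Site "Default Web Site" -Name "healthcheck" -PhysicalPath "C:\Inetpub\wwwroot\healthcheck"
```

Run it on each load-balanced server so the probe succeeds everywhere.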


Tip #4: Automate

If you’re going to be doing this repeatedly (for example, to perform a bunch of tests), automate. Lucky for us, Ram Gopinathan has written a script to do just that: http://gallery.technet.microsoft.com/PowerShell-Script-for-f43bb414. Ram’s script uses an input file so you can easily document what you’re doing, then it provisions all your VMs. After you’re done, it’ll tear everything down, nice and neatly.

Tip #5: CSUpload Error: Too many arguments for command Add-Disk

This error occurred when using Add-Disk:


The problem was the dashes. I had used Notepad, PowerGUI, and other tools, and the dash characters they produce don’t translate correctly at the command prompt. To resolve this, just re-type the dashes.

Tip #6: CSUpload Error: CSUpload is expecting a page blob

I had copied down a vhd to make some modifications and then tried to upload it later. Here’s what I kept getting:


In fact, CloudXplorer and the other tools I tried reported the same thing. What’s the fix? Use AzCopy. See the link to read up on how to get it. AzCopy is great because it can multi-thread; my uploads using AzCopy were very fast. By specifying the /BlobType:page parameter, you ensure the VHD is uploaded as a page blob rather than a block blob. Here’s how my command looks: azcopy E:\ftproot\vhd http://storageacct.blob.core.windows.net/vhds/ /blobtype:page /destkey:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/xxxxxxxxxxxxxxxxxxx/== /Z /V

Tip #7: No RemoteDesktop Endpoint

I also realized that none of my VMs had an endpoint for Remote Desktop. We can add that by using a command like this:

$vmname = "Arch-SQL1"
Get-AzureVM -ServiceName "ArchSharePoint" -Name $vmname | Add-AzureEndpoint -Name "RemoteDesktop" -Protocol "TCP" -PublicPort 53223 -LocalPort 3389 | Update-AzureVM

Remember to use a different PublicPort for each VM in a particular Cloud Service. I just incremented by one (53223, 53224, etc.). Or, this should work too:

Get-AzureVM -ServiceName $servicename | ForEach-Object {
    $_ | Add-AzureEndpoint -Name "RemoteDesktop" -Protocol TCP -PublicPort (Get-Random -max 65000 -min 20000) -LocalPort 3389 | Update-AzureVM
}

I understand that it’s possible, though not likely, for Get-Random to pick the same number twice. I suppose you could script it further to check whether a port is already in use, but that wasn’t worth the effort here.
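If the collision risk bothers you, incrementing a counter instead of calling Get-Random guarantees unique ports; a sketch using the variables from above:

```powershell
# Assign sequential public RDP ports (53223, 53224, ...) so no two VMs collide
$port = 53223
Get-AzureVM -ServiceName $servicename | ForEach-Object {
    $_ | Add-AzureEndpoint -Name "RemoteDesktop" -Protocol TCP -PublicPort $port -LocalPort 3389 | Update-AzureVM
    $port++
}
```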

Tip #8: Office Web Apps Tip

I published my SharePoint farm publicly by creating an endpoint at 443. This allows me to browse SharePoint from anywhere using https://<servicename>.cloudapp.net. However, I couldn’t use my Office Web Apps servers since they were only accessible from within Azure. The Office Web Apps servers were configured to use https and run on port 443. I could publish an endpoint, but I had already used 443 for SharePoint (and I don’t want to use an insecure port, like 80). So I picked a random port, 9443. Now, how do we get SharePoint to know about this? SharePoint will PULL information from Office Web Apps, so first we need to configure Office Web Apps with this information (do this on any Office Web Apps server):

# Set the ExternalUrl on any Office Web Apps Server
New-OfficeWebAppsFarm -ExternalUrl https://archsharepoint.cloudapp.net:9443

Then, on the SharePoint server, New-SPWopiBinding will bind to the public address on the public port (https://archsharepoint.cloudapp.net:9443). Create an endpoint in the WAC server for a TCP connection on public port 9443 to private port 443.
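That endpoint can be created with the same Add-AzureEndpoint pattern we’ve been using; a sketch (the VM and endpoint names follow my farm and are otherwise arbitrary):

```powershell
# On the WAC VM: map public port 9443 to private port 443
Get-AzureVM -ServiceName "ArchSharePoint" -Name "Arch-WAC1" |
    Add-AzureEndpoint -Name "wac-https" -Protocol TCP -PublicPort 9443 -LocalPort 443 |
    Update-AzureVM
```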

Tip #9: Blocked RDP in Firewall

During testing of ports and protocols, I blocked all connections on one of my servers, which obviously included the RDP port. If you do that, the only thing you can really do is download the VHD for that server, mount it on-premises somewhere (which gives you console access), change the firewall settings, and upload it back to Windows Azure. It’s a tedious process, so if you don’t really need the machine or its data, just re-create it.
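If you do need the machine, the download step looks roughly like this (Save-AzureVhd is part of the Windows Azure PowerShell module; the URL and local path are placeholders):

```powershell
# Download the page blob VHD so it can be mounted and repaired on-premises
Save-AzureVhd -Source "http://storageacct.blob.core.windows.net/vhds/Arch-SQL1.vhd" `
    -LocalFilePath "E:\vhds\Arch-SQL1.vhd" -NumberOfThreads 8
```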

Tip #10: Reduce costs

UPDATE: 6/10/2013

As of June 3rd, we are no longer billed for VMs in a stopped state, great news! See Scott Guthrie’s Blog for more info.

If you’re testing scenarios over a period of weeks or months and you don’t necessarily use your VMs every day, you can cut some costs. For some of my work, I might go a week or two without ever logging in to my SharePoint farms in Windows Azure. All the while, though, you’re being charged for compute hours. The only way to stop that is to delete your VMs. Powering down doesn’t cut your costs; you’re still charged for compute hours. You can use tip #4 to do a complete “tear down,” or use some PowerShell snippets to do a partial tear down. I’ll show you an example that should be pretty easy to follow. First, we’ll back up our VM configs:

#Region Func: BackupVMConfigs
function BackupVMConfigs ()
{
    param
    (
        [Parameter(Mandatory=$True)][ValidateNotNullOrEmpty()]
        [String]$CloudService,
        [Parameter(Mandatory=$True)][ValidateNotNullOrEmpty()]
        [String]$Folder
    )
    If (Get-AzureVM -ServiceName $CloudService)
    {
        Write-Host "Backing up SharePoint VM Configurations..."
        Try
        {
            Get-AzureVM -ServiceName $CloudService | ForEach-Object {
                $path = $Folder + $_.Name + '.xml'
                Export-AzureVM -ServiceName $CloudService -Name $_.Name -Path $path
            }
        }
        Catch
        {
            Write-Output $_
            Throw " - Error occurred. Not continuing since VM configuration may not have been saved and we don't want to delete them."
        }
    }
    Else { Write-Host " - Skipping $CloudService because it doesn't seem to exist." -ForegroundColor Yellow }
    Write-Host "- Done backing up. Removing VMs from Cloud Service..." -ForegroundColor Yellow
}
#EndRegion

To use that function, you’d type BackupVMConfigs -CloudService "<cloud service name>" -Folder "<folder path>". I’ve used that function in the next snippet, where we actually remove the Cloud Service deployment:

#Region Func: Remove SP Deployment
function RemoveSP {
$CloudService = "ArchSharePoint"
BackupVMConfigs -CloudService $CloudService -Folder "C:\Scripts\Azure\SharePoint\"
#Removing all VMs while keeping the cloud service/DNS name
Remove-AzureDeployment -ServiceName $cloudservice -Slot Production -Force
Write-Host "Removed all SharePoint VMs"
}
#EndRegion

Once you run the “RemoveSP” function, you’re no longer incurring compute charges. The actual VHDs are still there in your Windows Azure storage account, though. The charges for storage are very small, almost insignificant, so we leave those in place. Later, when you’re ready to fire up your farm, you can do the inverse. First, I have a function to do the actual import of the VMs:

#Region Func: ImportVMs ($CloudService, $Folder)
function ImportVMs ()
{
    param
    (
        [Parameter(Mandatory=$True)][ValidateNotNullOrEmpty()]
        [String]$CloudService,
        [Parameter(Mandatory=$True)]
        [ValidatePattern('\\$')] # folder must end with a trailing backslash
        [ValidateScript({Test-Path $_ -PathType 'Container'})]
        [String]$Folder
    )
    $cs = Get-AzureVM -ServiceName $CloudService
    If ($cs.Count -ge 1)
    {
        Write-Host "It looks like (at least some) of the VMs are up. Skipping restore..." -ForegroundColor Green
        Return
    }
    Try
    {
        Write-Host " - Importing VMs..." -ForegroundColor Yellow
        $script:vms = @()
        Get-ChildItem $Folder | ForEach-Object {
            $path = $Folder + $_
            $script:vms += Import-AzureVM -Path $path
        }
    }
    Catch
    {
        Write-Output $_
        Throw " - Error occurred."
    }
}
#EndRegion

And here’s where I’m using it:

#Region Restore SharePoint Deployment
function RestoreSharePoint 
{
$cloudservice = "ArchSharePoint"
$folder = "C:\Scripts\Azure\SharePoint\" #must include trailing \
ImportVMs -CloudService $CloudService -Folder $Folder
Write-Host " - Restoring takes about 30 minutes..." -ForegroundColor Yellow
Measure-Command {New-AzureVM -ServiceName $cloudservice -VNetName "vNet-Arch" -VMs $vms}
}
<# 
#This is used if the Cloud Service was deleted
## Cloud Service Parameters
$serviceLabel = $cloudservice
$serviceDesc = "Architecture SharePoint Farm" 
$ag = 'AG-SharePoint-Arch'
Measure-Command {New-AzureVM -ServiceName $cloudservice -ServiceLabel $serviceLabel -ServiceDescription $serviceDesc -AffinityGroup $ag -VNetName $vNet -VMs $vms}
#>
#EndRegion

Running the “RestoreSharePoint” function will call the ImportVMs function with the right parameters and attach your VHDs to some shiny new VMs based on the backups you took before.

Conclusion

That’s the end of our series. I hope I’ve achieved my goal of consolidating all the information needed to understand and setup SharePoint farms on Windows Azure IaaS. If you haven’t started playing around yet, go back to the introduction to learn how to sign up for a free account. A big thanks to my colleagues who helped me throughout the series: Paul Stubbs, Len Cardinal, Kosma Zygouras, Shane Laney.


SharePoint on Windows Azure – Part 4: SharePoint Farm

SharePoint

Let’s review the previous articles in this series:

Part 1: Introduction – We showed how to set up tools to interact with Windows Azure.

Part 2: Storage – We showed how to set up Windows Azure storage, how to upload VHDs, and some tools to manage storage.

Part 3: Networking – We showed how virtual networking works, the advantages of using Cloud Services, and how to set up the first virtual machine.

Endpoints

Let’s talk about endpoints, load balancing and probes for a minute since we’ll use this concept a bit later on. By default, when you create a VM an endpoint is automatically created. This is what allows you to use RDP (Remote Desktop Protocol) to connect to your VM. Here’s how that looks in the management portal:

Windows Azure assigns a random high port number for the public port that maps to the RDP port (3389). If you’re familiar with NAT in networking, this is similar in concept. Note, this one is not load balanced.

So, how do we use this for our SharePoint Farm? To get a load-balanced port that’s publicly accessible, we simply define the same local port and public port for each VM. For example, we can open port 80 (public) and map it to port 80 (private) for our SharePoint Web Servers. If we do this a second time, Windows Azure recognizes that we want this port to be load balanced across all the virtual machines we’ve specified.

Let’s also take a minute to talk about probes. A probe is a method used to check the health of a server, especially in a load balanced set. For example, you may have 3 SharePoint Web Servers but one of them is down. The probe can detect this and prevent users from hitting that server. This is also useful in cases where you just don’t want users accessing one of the servers (to apply updates, let’s say).

Following the sample from Paul Stubbs, we’ll use this snippet:

Add-AzureEndpoint -Name 'http' -LBSetName 'lbhttp' -LocalPort 80 -PublicPort 80 `
-Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm'

In this example, we’re asking Windows Azure to use the “http” protocol on port 80 and target the iisstart.htm page. For this to work, the machine must have a live website at that path. If not, the probe will fail and users won’t be routed to that machine. For our scenario, we’ll keep the IIS “Default Web Site” to use for probing purposes.

SharePoint Servers

Now, it gets much easier. We’re going to provision the rest of the SharePoint Farm. Once again, let’s do a quick connection test. Open Windows Azure PowerShell and type in the following:

Get-AzureSubscription | Select SubscriptionName, CurrentStorageAccount

Hopefully you can still connect. Next, we’ll assign some values to our account variables as we did before. Remember to use the values you got when you ran the last command.

# your imported subscription name
$subscriptionName = "Windows Azure Internal Consumption"
$storageAccount = "wahidstore"
Select-AzureSubscription $subscriptionName
Set-AzureSubscription $subscriptionName -CurrentStorageAccount $storageAccount

Now, we’re going to specify parameters for a new Cloud Service. In my previous example, I used CS-ArchSP2013. For this one, I’m going to call it ArchSharePoint. Use any name that’s available; to test whether a name is available, use Test-AzureName -Service '<your chosen name>'.

# Cloud Service Parameters
$serviceName = "ArchSharePoint"
$serviceLabel = "ArchSharePoint"
$serviceDesc = "Cloud Service for SharePoint Farm"

Now, I’ll specify the networking parameters using the same $vnetname, $ag, and $primaryDNS as before, plus the $subnetName I created:

# Network Parameters
$vnetname = 'vNet-Arch'
$subnetName = 'SharePoint'
$ag = 'AG-SharePoint-Arch'
$primaryDNS = '192.168.1.4'

If you’re creating a brand new farm, rather than using uploaded images, specify these two parameters too:

# VM image names taken from: Get-AzureVMImage | select Label, ImageName | fl
$spimage = 'MSFT__Win2K8R2SP1-120612-1520-121206-01-en-us-30GB.vhd'
$sqlimage = 'MSFT__Sql-Server-11EVAL-11.0.2215.0-05152012-en-us-30GB.vhd'

Next, we’ll specify a few availability sets. Think of availability sets as a way to configure high availability. We’re telling Windows Azure that VMs in an availability set have the same role. This instructs Windows Azure to put these VMs on different racks and/or different servers to avoid any single points of failure.

# Availability Sets
$avsetwfe = 'avset-arch-web'
$avsetapps = 'avset-arch-apps'
$avsetsql = 'avset-arch-sql'

Specify a location for these VMs. I’ve created a folder under my “vhds” container called “Arch” and I want these VMs to go there.


# MediaLocation
$mediaLocation = "http://wahidstore.blob.core.windows.net/vhds/Arch/"

The next thing we’re going to specify is the VM configuration. Since there’s a lot going on in the next set of PowerShell scripts, let me explain. The $size variable is self-explanatory. I’m going to use “Large”, which gives me 4 cores and 7 GB of RAM. The recommendation here is to use smaller VMs that you can scale out (by adding more), rather than Extra Large VMs. You pay for compute hours based on the size, so several smaller ones are often better than fewer larger ones.

For $vmStorageLocation, we’re going to specify the $mediaLocation above, plus a name for the VHD. If you uploaded VMs, use the name of the VHD you uploaded.

I’m going to skip it below, but you can use the following command to automate joining the domain by piping it after $vmStorageLocation:

$vmStorageLocation | 
Add-AzureProvisioningConfig -WindowsDomain -Password $dompwd `
-Domain $domain -DomainUserName $domuser -DomainPassword $dompwd `
-MachineObjectOU $advmou -JoinDomain $joindom

So, let’s define our VM configuration:

## Create SP Web1
$size = "Large"
$vmStorageLocation = $mediaLocation + "Arch-SPWeb1.vhd"
$spweb1 = New-AzureVMConfig -Name 'Arch-SPWeb1' -AvailabilitySetName $avsetwfe `
    -DiskName $vmStorageLocation -InstanceSize $size |
    Add-AzureEndpoint -Name 'https' -LBSetName 'lbhttps' -LocalPort 443 -PublicPort 443 `
    -Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm' |
    Set-AzureSubnet $subnetName

Note: If you’re creating new VMs rather than using uploaded VMs, make the following changes.

  1. Change -DiskName $vmStorageLocation to -ImageName $spimage (or $sqlimage for the SQL servers).

  2. After -InstanceSize $size, add -MediaLocation $vmStorageLocation.

  3. After Set-AzureSubnet $subnetName, add the following line:

Add-AzureProvisioningConfig -Password 'pass@word1' -Windows

We can copy the above set of commands as a template to build out our farm. In my case, I’ve got another SharePoint Web Server, a SharePoint App Server, and an Office Web Apps server.

## Create SP Web2
$size = "Large"
$vmStorageLocation = $mediaLocation + "Arch-SPWeb2.vhd"
$spweb2 = New-AzureVMConfig -Name 'Arch-SPWeb2' -AvailabilitySetName $avsetwfe `
    -DiskName $vmStorageLocation -InstanceSize $size |
    Add-AzureEndpoint -Name 'https' -LBSetName 'lbhttps' -LocalPort 443 -PublicPort 443 `
    -Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm' |
    Set-AzureSubnet $subnetName

## Create SP App1
$size = "Large"
$vmStorageLocation = $mediaLocation + "Arch-SPApp1.vhd"
$spapp1 = New-AzureVMConfig -Name 'Arch-SPApp1' -AvailabilitySetName $avsetwfe `
    -DiskName $vmStorageLocation -InstanceSize $size |
    Add-AzureEndpoint -Name 'https' -LBSetName 'lbhttps' -LocalPort 443 -PublicPort 443 `
    -Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm' |
    Set-AzureSubnet $subnetName

## Create WAC
$size = "Large"
$vmStorageLocation = $mediaLocation + "Arch-WAC1.vhd"
$wac1 = New-AzureVMConfig -Name 'Arch-WAC1' -AvailabilitySetName $avsetwfe `
    -DiskName $vmStorageLocation -InstanceSize $size |
    Add-AzureEndpoint -Name 'https' -LBSetName 'lbhttps' -LocalPort 443 -PublicPort 443 `
    -Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm' |
    Set-AzureSubnet $subnetName

Similarly, I’m going to specify the configuration for the SQL Server. There’s not much difference here except that I don’t need to create an endpoint. However, I do want to create a data disk. Refer to Part 2 on storage for an explanation:

## Create SQL Server1
$size = "Large"
$vmStorageLocation = $mediaLocation + "Arch-SQL1.vhd"
$spsql1 = New-AzureVMConfig -Name 'Arch-SQL1' -AvailabilitySetName $avsetsql `
    -DiskName $vmStorageLocation -InstanceSize $size |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel 'data' -LUN 0 -HostCaching "None" |
    Set-AzureSubnet $subnetName

Now, DOUBLE-CHECK everything before you move forward. Undoing this configuration can be complicated. We have just created our configurations; nothing has actually happened yet. That’s what the following commands do:

$dns1 = New-AzureDns -Name 'DNS1' -IPAddress $primaryDNS
New-AzureVM -ServiceName $serviceName -ServiceLabel $serviceLabel `
-ServiceDescription $serviceDesc `
-AffinityGroup $ag -VNetName $vnetname -DnsSettings $dns1 `
-VMs $spweb1,$spweb2,$spapp1,$wac1, $spsql1

Hit Enter and away it goes. If you run into any problems, I’ll try to cover the ones I’ve encountered in the next article. If everything went smoothly, you’ll see something like the screenshot below. This will take a few minutes, so sit back and enjoy some coffee. In my demo, this was taking a while on the second VM and I really wanted to hit CTRL+C or something. DON’T – just be patient.
