Profile Import fails when running Central Administration over ssl

SharePoint

I’m documenting this issue I discovered last week because I haven’t seen it mentioned elsewhere.

The issue occurs only with the new SharePoint Active Directory Import in SharePoint 2013, and only when your Central Administration site runs over SSL. When you first set up your profile sync, you have three choices under “Configure Synchronization Settings”:

  1. Use SharePoint Profile Synchronization (This one leverages the built-in FIM)
  2. Use SharePoint Active Directory Import (This is the direct AD Import)
  3. Enable External Identity Provider

 

I haven’t tested with #1 and #3, so this issue may not apply to those. I originally configured Central Administration to run over port 443 (the default SSL port), and my IIS bindings have only the “https” type installed. Selecting my certificate in IIS gives me the proper host header: https://spadmin.mynetwork.com.

I did test alternate ports, without success. When using http, any port will work, whether the default port 80 or another port like 8888. If everything is set up correctly and working, Spence Harbar (@harbars) has a great blog post which explains how to set this up: http://www.harbar.net/archive/2012/07/23/sp13adi.aspx. Refer to that post to see how healthy ULS logs look. But what if you have a configuration like mine?

You’ll see something like this first:

CreateProfileImportExportId: Could not InitializeProfileImportExportProcess for end point 'direct://spadmin.mynetwork.com:443/_vti_bin/profileimportexportservice.asmx?ApplicationID=03063851-9e17-4f94-8db6-a71f7b967bd3', retries left '0': Exception 0

And then:

ActiveDirectory Import failed for ConnectionForectName 'mynetwork.com', ConnectionSynchronizationOU 'DC=mynetwork,DC=com', ConnectionUserName 'mynetwork\spupssync'. Error message: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> Microsoft.Office.Server.UserProfiles.UserProfileApplicationNotAvailableException: No User Profile Application 

The Workaround

As far as I know, there is no true fix; it’s a bug, and I hope it will be addressed by an update. To get this working you must run Central Administration without SSL. As a workaround, however, you can put a load balancer in front of Central Administration so that client-to-server communications are encrypted while traffic from the load balancer to Central Administration is unencrypted.


SharePoint on Windows Azure – 10 Tips

SharePoint

Tip #1: Growing Your SharePoint Farm

So you have a bunch of servers in your farm but you need one or two more. The steps I outlined before will work with some modification. Now that you’ve already created your Cloud Service, you have to do things a bit differently. If you run the New-AzureVM command as before, you’ll likely see the “DNS name already taken” error that I’ve highlighted further down. To add servers to your Cloud Service, follow the guidance from the previous articles about connecting to Windows Azure, specifying your Cloud Service, storage account, and other pertinent information. Then define your new server or servers as we did before.

## Create SP App3 
$size = "Small"
$vhdname = "Arch-SPApp3.vhd" 
$vmStorageLocation = $mediaLocation + $vhdname
$spwebnew = New-AzureVMConfig -Name 'Arch-SPApp3' -AvailabilitySetName $avsetwfe -ImageName $spimage -InstanceSize $size -MediaLocation $vmStorageLocation |
Add-AzureEndpoint -Name 'https' -LBSetName 'lbhttps' -LocalPort 443 -PublicPort 443 -Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm' |
Add-AzureEndpoint -Name "RemoteDesktop" -Protocol TCP -PublicPort (Get-Random -max 65000 -min 20000) -LocalPort 3389 |
Set-AzureSubnet $subnetName |
Add-AzureProvisioningConfig -Password 'pass@word1' -Windows

The example I’m showing would create a new Windows server without SharePoint. Then here’s the part to really pay attention to, the New-AzureVM command:

New-AzureVM -DeploymentName "NewSharePointServer" -ServiceName $serviceName -VNetName $vnetname -VMs $spwebnew

Here, we only need the -DeploymentName, -ServiceName, -VNetName, and -VMs parameters. If you put in more information, especially the AffinityGroup, you’ll get the “DNS name already taken” error.

Tip #2: DNS name already taken

[Screenshot: “DNS name already taken” error]

If you get this, it usually means this is the second time you’ve run the New-AzureVM for the Cloud Service. If everything is OK and you just want to add a server, see tip #1. If something is wrong, remove the Cloud Service and try it (the New-AzureVM cmdlets) again:

Remove-AzureService -ServiceName "ArchSharePoint" -Force

Tip #3: Finish Up

Remember to create the healthcheck file. In our examples, we specified -ProbePath ‘/healthcheck/iisstart.htm’, so we need to go to each server that we want our Azure load balancer to use and add that file. To do so, open IIS Manager, right-click the Default Web Site, and select Add Virtual Directory. Give it an alias of “healthcheck” and choose a path (I used C:\Inetpub\wwwroot). Finally, copy “iisstart.htm” from the root of the Default Web Site to the healthcheck folder, or just create an empty file called “iisstart.htm”. Here’s how it looks when it’s finished.

[Screenshot: healthcheck virtual directory in IIS Manager]

Tip #4: Automate

If you’re going to be doing this repeatedly (for example, to perform a bunch of tests), automate. Lucky for us, Ram Gopinathan has written a script to do just that: http://gallery.technet.microsoft.com/PowerShell-Script-for-f43bb414. Ram’s script uses an input file, so you can easily document what you’re doing; it then provisions all your VMs. After you’re done, it’ll tear everything down, nice and neatly.

Tip #5: CSUpload Error: Too many arguments for command Add-Disk

This error occurred when using Add-Disk:

[Screenshot: Add-Disk “Too many arguments for command” error]

The problem was the dashes. I had used Notepad, PowerGUI, and other tools, and somewhere along the way the hyphens were converted into dash characters that don’t translate at the command prompt. To resolve this, just re-type the dashes.

Tip #6: CSUpload Error: CSUpload is expecting a page blob

I had copied down a vhd to make some modifications and then tried to upload it later. Here’s what I kept getting:

[Screenshot: “CSUpload is expecting a page blob” error]

In fact, CloudXplorer and the other tools I tried reported the same thing. What’s the fix? Use AzCopy. See the link to read up on how to get it. AzCopy is great because it can multi-thread; my uploads with it were very fast. By specifying the /BlobType:page parameter, you can upload a block blob as a page blob. Here’s how my command looks: azcopy E:\ftproot\vhd http://storageacct.blob.core.windows.net/vhds/ /blobtype:page /destkey:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/xxxxxxxxxxxxxxxxxxx/== /Z /V

Tip #7: No RemoteDesktop Endpoint

I also realized that none of my VMs had an endpoint for Remote Desktop. We can add that by using a command like this:

$vmname = "Arch-SQL1"
Get-AzureVM -ServiceName "ArchSharePoint" -Name $vmname | Add-AzureEndpoint -Name "RemoteDesktop" -Protocol "TCP" -PublicPort 53223 -LocalPort 3389 | Update-AzureVM

Remember to use a different PublicPort for each VM in a particular Cloud Service. I just incremented by one (53223, 53224, etc.). Or, this should work too:

Get-AzureVM -ServiceName $servicename | ForEach-Object { $_ | Add-AzureEndpoint -Name "RemoteDesktop" -Protocol TCP -PublicPort (Get-Random -max 65000 -min 20000) -LocalPort 3389 | Update-AzureVM }

I understand that it’s possible, though not likely, for Get-Random to pick the same number twice. I suppose you could script a check for ports already in use, but that wasn’t worth the effort here.
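If you are curious about the actual risk, the collision chance follows the birthday problem. Here is a quick sketch (in Python rather than PowerShell, purely to illustrate the math; the 45,000 figure comes from Get-Random’s inclusive minimum and exclusive maximum, i.e. ports 20000 through 64999):

```python
# Birthday-problem estimate of the chance that at least two VMs end up
# with the same random public RDP port. Get-Random -min 20000 -max 65000
# draws from 45,000 possible ports (the -max bound is exclusive).
def collision_probability(n_vms, n_ports=45000):
    p_all_distinct = 1.0
    for i in range(n_vms):
        p_all_distinct *= (n_ports - i) / n_ports
    return 1 - p_all_distinct

# Even for a 10-VM farm the risk is only about 0.1%.
print(round(collision_probability(10), 4))
```

For typical farm sizes the chance is well under one percent, which is why simply re-running the command on the rare collision is a reasonable strategy.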

Tip #8: Office Web Apps Tip

I published my SharePoint farm publicly by creating an endpoint at 443. This allows me to browse SharePoint from anywhere using https://<servicename>.cloudapp.net. However, I couldn’t use my Office Web Apps servers since they were only accessible from within Azure. The Office Web Apps servers were configured to use https and run on port 443. I could publish an endpoint, but I had already used 443 for SharePoint (and I didn’t want to use an insecure port, like 80). So, I picked a random port, 9443. Now, how do we get SharePoint to know about this? SharePoint will PULL information from Office Web Apps, so first we need to configure Office Web Apps with this information (do this on any Office Web Apps server):

# Set the ExternalUrl on any Office Web Apps Server
New-OfficeWebAppsFarm -ExternalUrl https://archsharepoint.cloudapp.net:9443

Then, on the SharePoint server, New-SPWopiBinding will bind to the public address on the public port (https://archsharepoint.cloudapp.net:9443). Create an endpoint in the WAC server for a TCP connection on public port 9443 to private port 443.

Tip #9: Blocked RDP in Firewall

During testing of ports and protocols, I blocked all connections on one of my servers. Obviously, that included the RDP port. If you do that, about the only remedy is to download the VHD for that server, mount it on-premises somewhere (which gives you console access), change the firewall rules, and then upload it back to Windows Azure. It’s a tedious process, so if you don’t really need the machine or its data, just re-create it.

Tip #10: Reduce costs

UPDATE: 6/10/2013

As of June 3rd, we are no longer billed for VMs in a stopped state, great news! See Scott Guthrie’s Blog for more info.

If you’re testing scenarios over a period of weeks or months and you don’t necessarily use your VMs every day, you can cut some costs. For some of my work, I might go a week or two without ever logging in to my SharePoint farms in Windows Azure. All the while, though, you’re being charged for compute hours. The only way to stop that is to delete your VMs; powering down doesn’t cut your costs, you’re still charged for compute hours. You can use tip #4 to do a complete “tear down,” or use some PowerShell snippets to do a partial tear down. I’ll show you an example that should be pretty easy to follow. First, we’ll back up our VM configs:

#Region Func: BackupVMConfigs
function BackupVMConfigs () 
{
	param
    (
        [Parameter(Mandatory=$True)][ValidateNotNullOrEmpty()]
        [String]$CloudService,
        [Parameter(Mandatory=$True)][ValidateNotNullOrEmpty()]
        [String]$Folder
	)
If (Get-AzureVM -ServiceName $cloudservice) 
	{
		Write-Host "Backing up SharePoint VM Configurations..."
		Try {
			Get-AzureVM -ServiceName $cloudservice | foreach {
			$path = $folder + $_.Name + '.xml'
			Export-AzureVM -ServiceName $cloudservice -Name $_.Name -Path $path 
			}
			}
		Catch 
			{
			Write-Output $_
			Throw " - Error occurred. Not continuing since VM configuration may not have been saved and we don't want to delete them."
			}	
	}
Else { Write-Host " - Skipping $cloudservice because it doesn't seem to exist." -ForegroundColor Yellow}
Write-Host "- Done backing up. Removing VM's from Cloud Service..." -ForegroundColor Yellow
}
#EndRegion

To use that function, you’d type in BackupVMConfigs -CloudService "<CloudServiceName>" -Folder "<FolderPath>". I’ve used that function in the next snippet, where we actually remove the Cloud Service deployment:

#Region Func: Remove SP Deployment
function RemoveSP {
$CloudService = "ArchSharePoint"
BackupVMConfigs -CloudService $CloudService -Folder "C:\Scripts\Azure\SharePoint\"
#Removing all VMs while keeping the cloud service/DNS name
Remove-AzureDeployment -ServiceName $cloudservice -Slot Production -Force
Write-Host "Removed all SharePoint VMs"
}
#EndRegion

Once you run the “RemoveSP” function, you’re no longer incurring compute charges. The actual VHDs are still there in your Windows Azure storage account, though. The charges for storage are very small, almost insignificant, so we leave those in place. Later, when you’re ready to fire up your farm, you can do the inverse. First, I have a function to do the actual import of the VMs:

#Region Func: ImportVMs ($CloudService, $Folder)
function ImportVMs ()
{
	param
    (
        [Parameter(Mandatory=$True)][ValidateNotNullOrEmpty()]
        [String]$CloudService,
        [Parameter(Mandatory=$True)]
		[ValidatePattern('\\$')]
		[ValidateScript({Test-Path $_ -PathType 'Container'})] 
        [String]$Folder
	)
$cs = Get-AzureVM -ServiceName $cloudservice
If ($cs.Count -ge 1) 
	{
	Write-Host "It looks like (at least some) of the VMs are up. Skipping restore..." -ForegroundColor Green
	return
	}
Try 
	{
	Write-Host " - Importing VMs..." -ForegroundColor Yellow
	$script:vms = @()
	Get-ChildItem $folder | foreach {
		$path = $folder + $_
		$script:vms += Import-AzureVM -Path $path
		}
	}
Catch
	{
	Write-Output $_
	Throw " - Error occurred."
	}	
}
#EndRegion

And here’s where I’m using it:

#Region Restore SharePoint Deployment
function RestoreSharePoint 
{
$cloudservice = "ArchSharePoint"
$folder = "C:\Scripts\Azure\SharePoint\" #must include trailing \
ImportVMs -CloudService $CloudService -Folder $Folder
Write-Host " - Restoring takes about 30 minutes..." -ForegroundColor Yellow
Measure-Command {New-AzureVM -ServiceName $cloudservice -VNetName "vNet-Arch" -VMs $vms}
}
<# 
#This is used if the Cloud Service was deleted
## Cloud Service Parameters
$serviceLabel = $cloudservice
$serviceDesc = "Architecture SharePoint Farm" 
$ag = 'AG-SharePoint-Arch'
Measure-Command {New-AzureVM -ServiceName $cloudservice -ServiceLabel $serviceLabel -ServiceDescription $serviceDesc -AffinityGroup $ag -VNetName $vNet -VMs $vms}
#>
#EndRegion

Running the “RestoreSharePoint” function will call the ImportVMs function with the right parameters and attach your VHDs to some shiny new VMs based on the backups you took before.

Conclusion

That’s the end of our series. I hope I’ve achieved my goal of consolidating all the information needed to understand and setup SharePoint farms on Windows Azure IaaS. If you haven’t started playing around yet, go back to the introduction to learn how to sign up for a free account. A big thanks to my colleagues who helped me throughout the series: Paul Stubbs, Len Cardinal, Kosma Zygouras, Shane Laney.


SharePoint on Windows Azure – Part 4: SharePoint Farm

SharePoint

Let’s review the previous articles in this series:

Part 1: Introduction – We showed how to set up tools to interact with Windows Azure.

Part 2: Storage – We showed how to set up Windows Azure storage, how to upload VHDs, and some tools to manage storage.

Part 3: Networking – We showed how virtual networking works, the advantages of using Cloud Services, and how to set up the first virtual machine.

Endpoints

Let’s talk about endpoints, load balancing and probes for a minute since we’ll use this concept a bit later on. By default, when you create a VM an endpoint is automatically created. This is what allows you to use RDP (Remote Desktop Protocol) to connect to your VM. Here’s how that looks in the management portal:
[Screenshot: automatically created RDP endpoint in the management portal]

Windows Azure assigns a random high port number for the public port that maps to the RDP port (3389). If you’re familiar with NAT in networking, this is similar in concept. Note, this one is not load balanced.

So, how do we use this for our SharePoint farm? To get a load-balanced port that’s publicly accessible, we simply define the same local port and public port for each VM. For example, we can open port 80 (public) and map it to port 80 (private) for our SharePoint web servers. When we do this a second time, Windows Azure recognizes that we want this port load balanced across all the virtual machines we’ve specified.

Let’s also take a minute to talk about probes. A probe is a method used to check the health of a server, especially in a load balanced set. For example, you may have 3 SharePoint Web Servers but one of them is down. The probe can detect this and prevent users from hitting that server. This is also useful in cases where you just don’t want users accessing one of the servers (to apply updates, let’s say).
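To make the probe’s effect concrete, here is a small illustrative model (in Python; the server names and the round-robin policy are my own simplification, since the real probing and routing are done by the Azure load balancer) of how traffic skips a server whose probe has failed:

```python
from itertools import cycle

def make_router(servers, healthy):
    """Round-robin over servers, skipping any whose health probe failed."""
    ring = cycle(servers)
    def next_server():
        for _ in range(len(servers)):
            candidate = next(ring)
            if healthy[candidate]:
                return candidate
        raise RuntimeError("no healthy servers in the load-balanced set")
    return next_server

# Three web servers; SPWeb2's probe (e.g. GET /healthcheck/iisstart.htm) failed.
servers = ["SPWeb1", "SPWeb2", "SPWeb3"]
healthy = {"SPWeb1": True, "SPWeb2": False, "SPWeb3": True}
route = make_router(servers, healthy)
print([route() for _ in range(4)])  # SPWeb2 never receives a request
```

Deliberately failing a server’s probe (by removing its healthcheck file, say) is exactly the “take it out of rotation to apply updates” trick mentioned above.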

Following the sample from Paul Stubbs, we’ll use this snippet:

Add-AzureEndpoint -Name 'http' -LBSetName 'lbhttp' -LocalPort 80 -PublicPort 80 `
-Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm'

In this example, we’re asking Windows Azure to probe over the “http” protocol on port 80, targeting the iisstart.htm page. For this to work, the machine must have a live website at that path. If not, the probe will fail and users won’t be routed to that machine. For our scenario, we’ll keep the IIS “Default Web Site” to use for probing purposes.

SharePoint Servers

Now, it gets much easier. We’re going to provision the rest of the SharePoint Farm. Once again, let’s do a quick connection test. Open Windows Azure PowerShell and type in the following:

Get-AzureSubscription | Select SubscriptionName, CurrentStorageAccount

Hopefully you can still connect. Next, we’ll assign some values to our account variables as we did before. Remember to use the values you got when you ran the last command.

# your imported subscription name
$subscriptionName = "Windows Azure Internal Consumption"
$storageAccount = "wahidstore"
Select-AzureSubscription $subscriptionName
Set-AzureSubscription $subscriptionName -CurrentStorageAccount $storageAccount

Now, we’re going to specify parameters for a new Cloud Service. In my previous example, I used CS-ArchSP2013; for this one, I’m going to call it ArchSharePoint. Use any name that’s available. To test whether a name is available, use Test-AzureName -Service '<your chosen name>'

# Cloud Service Parameters
$serviceName = "ArchSharePoint"
$serviceLabel = "ArchSharePoint"
$serviceDesc = "Cloud Service for SharePoint Farm"

Now, I’ll specify the networking parameters using the same $vnetname, $ag, and $primaryDNS as before, along with the $subnetName I created earlier:

# Network Parameters
$vnetname = 'vNet-Arch'
$subnetName = 'SharePoint'
$ag = 'AG-SharePoint-Arch'
$primaryDNS = '192.168.1.4'

If you’re creating a brand new farm, rather than using uploaded images, specify these two parameters too:

# VM image names taken from: Get-AzureVMImage | select Label, ImageName | fl
$spimage = 'MSFT__Win2K8R2SP1-120612-1520-121206-01-en-us-30GB.vhd'
$sqlimage = 'MSFT__Sql-Server-11EVAL-11.0.2215.0-05152012-en-us-30GB.vhd'

Next, we’ll specify a few availability sets. Think of availability sets as a way to configure high availability. We’re telling Windows Azure that VMs in an availability set have the same role. This instructs Windows Azure to put these VMs on different racks and/or different servers to avoid any single points of failure.

# Availability Sets
$avsetwfe = 'avset-arch-web'
$avsetapps = 'avset-arch-apps'
$avsetsql = 'avset-arch-sql'

Specify a location for these VMs. I’ve created a folder under my “vhds” container called “Arch” and I want these VMs to go there.

[Screenshot: “Arch” folder under the vhds container]

# MediaLocation
$mediaLocation = "http://wahidstore.blob.core.windows.net/vhds/Arch/"

The next thing we’re going to specify is the VM configuration. Since there’s a lot going on in the next set of PowerShell scripts, let me explain. The $size variable is self-explanatory; I’m going to use “Large”, which gives me 4 cores and 7 GB of RAM. The recommendation here is to use smaller VMs that you can scale out (by adding more), rather than Extra Large VMs. You pay for compute hours based on the size, so several smaller VMs are often better than fewer larger ones.
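To put rough numbers on that reasoning, here is an illustrative comparison (in Python; the hourly rates are made-up placeholders chosen only because real rates roughly double with each size step, so do not treat them as Azure pricing):

```python
# Hypothetical per-hour rates that double with each size step -- NOT real
# Azure prices, just the general shape of the pricing table.
rates = {"Small": 0.09, "Medium": 0.18, "Large": 0.36, "ExtraLarge": 0.72}

def monthly_cost(size, count, hours=730):
    """Compute-hour charges per month for `count` VMs of a given size."""
    return rates[size] * count * hours

# Same raw capacity, same cost, but two Large VMs give you redundancy
# and let you scale out or in one VM at a time.
print(monthly_cost("Large", 2) == monthly_cost("ExtraLarge", 1))  # True
```

Because the cost works out the same under doubling rates, the smaller-VM approach buys resilience and scaling granularity essentially for free.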

For $vmStorageLocation, we’re going to specify the $mediaLocation above, plus a name for the VHD. If you uploaded VMs, use the name of the VHD you uploaded.

I’m going to skip it below, but you can use the following command to automate joining the domain by piping it after the VM configuration:

# Pipe this after New-AzureVMConfig to join the VM to the domain:
Add-AzureProvisioningConfig -WindowsDomain -Password $dompwd `
-Domain $domain -DomainUserName $domuser -DomainPassword $dompwd `
-MachineObjectOU $advmou -JoinDomain $joindom

So, let’s define our VM configuration:

## Create SP Web1
$size = "Large"
$vmStorageLocation = $mediaLocation + "Arch-SPWeb1.vhd"
$spweb1 = New-AzureVMConfig -Name 'Arch-SPWeb1' -AvailabilitySetName $avsetwfe `
-DiskName $vmStorageLocation -InstanceSize $size |
Add-AzureEndpoint -Name 'https' -LBSetName 'lbhttps' -LocalPort 443 -PublicPort 443 `
-Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm' |
Set-AzureSubnet $subnetName

Note: If you’re creating new VMs, rather than using uploaded VMs, make the following changes.

§ Change –DiskName $vmStorageLocation to –ImageName $spimage (or $sqlimage for the SQL servers).

§ After –InstanceSize $size, type in –MediaLocation $vmStorageLocation

§ After Set-AzureSubnet $subnetName, add the following line:

Add-AzureProvisioningConfig -Password 'pass@word1' -Windows

We can copy the above set of commands as a template to build out our farm. In my case, I’ve got another SharePoint Web Server, a SharePoint App Server, and an Office Web Apps server.

## Create SP Web2
$size = "Large"
$vmStorageLocation = $mediaLocation + "Arch-SPWeb2.vhd"
$spweb2 = New-AzureVMConfig -Name 'Arch-SPWeb2' -AvailabilitySetName $avsetwfe `
-DiskName $vmStorageLocation -InstanceSize $size |
Add-AzureEndpoint -Name 'https' -LBSetName 'lbhttps' -LocalPort 443 -PublicPort 443 `
-Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm' |
Set-AzureSubnet $subnetName
 
## Create SP App1
$size = "Large"
$vmStorageLocation = $mediaLocation + "Arch-SPApp1.vhd"
$spapp1 = New-AzureVMConfig -Name 'Arch-SPApp1' -AvailabilitySetName $avsetwfe `
-DiskName $vmStorageLocation -InstanceSize $size |
Add-AzureEndpoint -Name 'https' -LBSetName 'lbhttps' -LocalPort 443 -PublicPort 443 `
-Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm' |
Set-AzureSubnet $subnetName
 
## Create WAC
$size = "Large"
$vmStorageLocation = $mediaLocation + "Arch-WAC1.vhd"
$wac1 = New-AzureVMConfig -Name 'Arch-WAC1' -AvailabilitySetName $avsetwfe `
-DiskName $vmStorageLocation -InstanceSize $size |
Add-AzureEndpoint -Name 'https' -LBSetName 'lbhttps' -LocalPort 443 -PublicPort 443 `
-Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm' |
Set-AzureSubnet $subnetName

Similarly, I’m going to specify the configuration for the SQL Server. There’s not much difference here except that I don’t need to create an endpoint. However, I do want to create a data disk; refer to Part 2 on storage for an explanation:

## Create SQL Server1
$size = "Large"
$vmStorageLocation = $mediaLocation + "Arch-SQL1.vhd"
$spsql1 = New-AzureVMConfig -Name 'Arch-SQL1' -AvailabilitySetName $avsetsql `
-DiskName $vmStorageLocation -InstanceSize $size |
Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel 'data' -LUN 0 -HostCaching "None" |
Set-AzureSubnet $subnetName

Now, DOUBLE-CHECK everything before you move forward; undoing this configuration can be complicated. So far we have only created configurations, nothing has actually happened yet. That’s what the following commands do:

$dns1 = New-AzureDns -Name 'DNS1' -IPAddress $primaryDNS
New-AzureVM -ServiceName $serviceName -ServiceLabel $serviceLabel `
-ServiceDescription $serviceDesc `
-AffinityGroup $ag -VNetName $vnetname -DnsSettings $dns1 `
-VMs $spweb1,$spweb2,$spapp1,$wac1, $spsql1

Hit Enter and away it goes. If you run into any problems, I’ll try to cover the ones I’ve encountered in the next article. If everything went smoothly, you’ll see something like the screenshot below. This will take a few minutes, so sit back and enjoy some coffee. In my demo, this was taking a while on the second VM and I really wanted to hit CTRL+C. DON’T; just be patient.


SharePoint on Windows Azure – Part 3: Networking

SharePoint

By now, I hope you’ve read Part 1: Introduction to get everything set up and ready.

Much of the documentation on Windows Azure is confusing when it comes to networking. There are a few reasons for this:

  • Networking, and specifically virtual networks, was released later in the lifecycle.
  • Since Windows Azure has its beginnings in PaaS, many of the concepts don’t map directly to IaaS.
  • Terminology changed; for example, “Cloud Service” was first called “Hosted Service,” and some APIs still use the old term.

Networking Definitions

If you keep those things in mind, and pay attention to the date of publication for whatever article you’re reading, it should help you understand what’s being written about. In this article, I’m going to highlight just a few networking concepts as they relate to IaaS and then we’ll start creating our network.

There are 3 definitions we need to know to start:

Virtual Private Network (VPN) – A VPN creates a private network that operates like a local network but spans greater distances. We’re not going to create a VPN for our scenarios, however you could do so to protect your Windows Azure virtual machines from external access.

Virtual Network – A virtual network in Windows Azure is a container where you define the IP address ranges your virtual machines may use. Windows Azure uses infinite-lease DHCP addresses and you can’t assign static IP addresses.

  • By defining your IP addresses with a virtual network, you can control which IP’s are given to your virtual machines.
  • This is also where you can define a DNS server. Any virtual machine in a specific virtual network will be automatically assigned the DNS server specified.
  • Finally, you must assign an Affinity Group to your virtual network. This way, all VMs in your virtual network can be hosted closely together (i.e., same datacenter).

Cloud Service – Formerly called Hosted Service. A Cloud Service comes from the PaaS world, but allow me to describe how it’s useful in the IaaS world. A Cloud Service is another container. You can assign a subnet to a Cloud Service, which gives you better control of how your IP addresses are assigned. We’ll use this concept later for our domain controllers. Cloud Services have other benefits as well:

  • Separation of “roles,” which allows you to start up one Cloud Service before others. For example, our domain controllers will be in their own Cloud Service, and we want that one to start up first since everything else depends on it.
  • Separation of “roles” also permits us to have different external entry points. For example, we’ll open up ports 80 and 443 for SharePoint but we don’t want these open for our domain controllers.
  • By creating two or more endpoints within a Cloud Service that have the same port number, Windows Azure recognizes a need for a load balancer and automatically load balances those endpoints.
  • You have the ability to export and import configurations of a Cloud Service. We’ll look at this in more detail later and why it’s so important.
  • Virtual Machines within a Cloud Service can communicate freely over any port and protocol. Cloud Services within a Virtual Network can also communicate freely over any port and protocol.

So with that, I hope I’ve convinced you to use Virtual Networks and Cloud Services wisely to design your network and SharePoint Farms. Without using these concepts, you could have problems with farms communicating with each other, DNS issues, security issues due to exposing too many endpoints and complicated setup of load balancing.

Paul Stubbs (@paulstubbs) was the source again for most of this information and I’ve added a link to his blog in the resources section.

Virtual Network

We want to create a virtual network and an associated Affinity Group. We’ll do this using PowerShell because it’s easier, repeatable, and it documents what we’re doing. First, we need to create an XML-based configuration file. Use your favorite text editor (mine is Notepad++) and create a new file with the following contents:

<NetworkConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
 
<VirtualNetworkConfiguration>
 <Dns>
   <DnsServers>
     <DnsServer name="DNS1" IPAddress="192.168.1.4" />
   </DnsServers>
 </Dns>
   <VirtualNetworkSites>
     <VirtualNetworkSite name="vNet-Corp" AffinityGroup="AG-Arch-SharePoint">
       <AddressSpace>
         <AddressPrefix>192.168.0.0/16</AddressPrefix>
       </AddressSpace>
     <Subnets>
       <Subnet name="DomainControllers">
         <AddressPrefix>192.168.1.0/29</AddressPrefix>
       </Subnet>
       <Subnet name="SharePoint">
         <AddressPrefix>192.168.2.0/24</AddressPrefix>
       </Subnet>
     </Subnets>
         <DnsServersRef>
             <DnsServerRef name="DNS1" />
         </DnsServersRef>
       </VirtualNetworkSite>
     </VirtualNetworkSites>
   </VirtualNetworkConfiguration>
</NetworkConfiguration>

Name this file SharePointFarmVNET.xml. Feel free to change the network range and subnets as you like. However, keep the following things in mind:

  • By default, Windows Azure assigns the first VM an x.x.x.4 address. We’ll spin up our domain controller first, which will be our DNS server so it will have that .4 address. You should probably keep this as is.
  • We want our domain controllers to have a small IP range. In CIDR notation, a /30 gives you just 2 usable host addresses but Windows Azure doesn’t allow this. The next smallest is a /29, which gives 6 usable host addresses so that’s what we’ll use.
  • You can add as many DNS addresses as you want.
  • All the Subnets must be a part of the overall Address Prefix. Search for “subnet calc” online to find calculators if you want more specific IP address ranges and read up on “CIDR notation.”
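Before pasting ranges into SharePointFarmVNET.xml, you can sanity-check them with a few lines of Python’s ipaddress module (a quick verification sketch, offered here as an aside since the article itself uses PowerShell, built on the ranges from the file above):

```python
import ipaddress

# The AddressPrefix values from SharePointFarmVNET.xml.
address_space = ipaddress.ip_network("192.168.0.0/16")
subnets = {
    "DomainControllers": ipaddress.ip_network("192.168.1.0/29"),
    "SharePoint": ipaddress.ip_network("192.168.2.0/24"),
}

for name, net in subnets.items():
    # Every subnet must fall inside the overall address space...
    assert net.subnet_of(address_space), f"{name} is outside the address space"
    # ...and usable hosts = total addresses minus network and broadcast,
    # which is where the "/29 gives 6 usable hosts" figure comes from.
    print(name, "usable hosts:", net.num_addresses - 2)
```

Running this prints 6 usable hosts for the /29 and 254 for the /24, matching the subnetting math described above.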

Now that we’ve defined our network, let’s create it. First, let’s do a quick connection test. Open Windows Azure PowerShell and type in the following:

Get-AzureSubscription | Select SubscriptionName, CurrentStorageAccount

Next, we’ll assign those values to some variables. I’m going to use my values; be sure to substitute your own.

$subscriptionName = "Windows Azure Wahid"
$storageAccount = "wahidstore" 
Select-AzureSubscription $subscriptionName 
Set-AzureSubscription $subscriptionName -CurrentStorageAccount $storageAccount

Next, we need to pick a location for our Affinity Group. To see the list of locations, use:

Get-AzureLocation | Select Name

Select one and specify it below for $AGLocation. The Affinity Group is one of the most important decisions you’ll make; everything will be tied to it. I once chose the wrong Affinity Group, and to change it I had to delete my virtual network, storage account, VMs, and a bunch of other things. The Affinity Group you choose here must be in the same datacenter as your disks (storage account).

# Affinity Group parameters 
$AGLocation = "East US"
$AGDesc = "SharePoint 2013 Architecture Affinity Group"
$AGName = "AG-Arch-SharePoint" 
$AGLabel = "AG-Arch-SharePoint"

Now we’ll create the Affinity Group:

# Create a new Affinity Group 
New-AzureAffinityGroup -Location $AGLocation -Description $AGDesc -Name $AGName -Label $AGLabel

You shouldn’t have an existing virtual network configuration, but if you do and want to clear it, use:

Remove-AzureVNetConfig -ErrorAction SilentlyContinue

Finally, we apply the network configuration:

# Apply new network. Either use the full path or run PowerShell from the location of the XML file.
$configPath = "C:\Scripts\Azure\SharePointFarmVNET.xml" 
Set-AzureVNetConfig -ConfigurationPath $configPath

That’s it: we’ve created an Affinity Group and a virtual network. To verify the results, type in:

Get-AzureVNetConfig | Select -ExpandProperty XMLConfiguration

Domain Controllers

Next, we’ll create our Cloud Service container and domain controller. Most of this should be self-explanatory.

  • If you’ve uploaded your vhds, just follow along.
  • If you’re creating brand new vhds, pay special attention to the notes.

For $diskName, put in the name of the disk that has been uploaded. And for $subnet, $vnetname, and $ag, use the values from the previous steps, when you created them.

Remember, csupload adds the disks to the repository for you but CloudXPlorer or other tools may not. First, check to see if the disk is in the repository by typing:

Get-AzureDisk | select diskname

If it’s not there, you’ll have to add it now. For example,

Add-AzureDisk -DiskName "Arch-DC-1.vhd" -MediaLocation "https://wahidstore.blob.core.windows.net/vhds/Arch-DC-1.vhd" -OS Windows

Let’s continue. The commands below set up the variables, create a VM configuration, and set up the Cloud Service variable.

Note: If you don’t have an existing vhd in Azure and just want to create a new one, comment out the $diskName line and uncomment the last three lines.

## Domain Controller 1 Parameters
$vmName = 'Arch-DC-1'
$diskName = "Arch-DC-1.vhd"
$size = "ExtraSmall"
$deploymentName = "SP2013-DC1-Deployment"
$deploymentlabel = "SharePoint 2013 DC1 Deployment"
$subnet = "DomainControllers"
#$imageName = 'MSFT__Win2K8R2SP1-120612-1520-121206-01-en-us-30GB.vhd'
#$vmStorageLocation = "http://wahidstore.blob.core.windows.net/vhds/Arch-DC-1.vhd"
#$password = "pass@word1"

Now we create the VM configuration.

Note: If you’re creating a new vhd instead of using one that was uploaded, uncomment and use the second set of commands instead of the one that follows.

## Create VM Configuration if using uploaded vhd
$dc1 = New-AzureVMConfig -Name $vmName -InstanceSize $size -DiskName $diskName | Add-AzureEndpoint -Name 'RemoteDesktop' -LocalPort 3389 -PublicPort (Get-Random -Min 10000 -Max 65000) -Protocol tcp
Set-AzureSubnet -SubnetNames $subnet -VM $dc1

I found that when creating VMs this way, no endpoints are created. If you want to be able to use Remote Desktop to administer the machine, you need to add the endpoint like I’ve done above (using Add-AzureEndpoint).

## Create VM Configuration if creating new VM
#$dc1 = New-AzureVMConfig -Name $vmName -InstanceSize $size -ImageName $imageName -MediaLocation $vmStorageLocation 
#Add-AzureProvisioningConfig -Windows -Password $password -VM $dc1
#Set-AzureSubnet -SubnetNames $subnet -VM $dc1

The rest is the same whether you’re attaching an existing vhd or creating from an image:

## Cloud Service Parameters 
$serviceName = "ArchDC"
$serviceLabel = "ArchDC"
$serviceDesc = "Architecture Domain Controllers"
$vnetname = 'vNet-Arch'
$ag = 'AG-Arch-SharePoint'

When we run New-AzureVM in the next step, it will first create the Cloud Service for us and then create the VM using the disk we specified.

## Create the VM(s)
New-AzureVM -ServiceName $serviceName -ServiceLabel $serviceLabel -ServiceDescription $serviceDesc -AffinityGroup $ag -VNetName $vnetname -VMs $dc1

So let’s wrap up. We’ve created a virtual network with 2 subnets, created a Cloud Service in that virtual network, and created a virtual machine in that Cloud Service.

What’s next? We’ll create our SharePoint Farm, including the SQL Servers.

Resources

Paul Stubbs Blog: http://blogs.msdn.com/b/pstubbs


				

SharePoint on Windows Azure – Part 2: Storage

SharePoint

In Part 1 of “SharePoint on Windows Azure” we gave a brief introduction and set up our basic tools to communicate with Windows Azure. There’s one more we’ll use that deals specifically with storage so let’s set that up now.

Storage Account Setup

First we need a storage account in Windows Azure. One may have been created when you signed up so let’s check first and if not, we’ll create one.

  1. In the Windows Azure management portal, click on “Storage” to see if a Storage Account exists. If one does, skip to step #6; otherwise, continue to step #2.
  2. To create a new Storage Account, click the New button at the bottom of the screen and click Data Services.
  3. Select Storage and Quick Create. Fill in a unique name for the URL.
  4. The Affinity Group may be blank; don’t worry if it is. Also, uncheck “Enable Geo-replication” if you want. We don’t need that for Dev/Test farms.
  5. Click Create Storage Account.
  6. Now we need to copy the key. Click on the Storage Account to get to the dashboard view.
  7. At the bottom, click “Manage Keys” and copy the Account Name and Primary Access Key somewhere temporary (like Notepad or OneNote). We’ll use these coming up soon.
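If you’d rather script step 7 than use the portal, the classic Azure PowerShell module can read the same values. A sketch, assuming the example account name “wahidstore”:

```powershell
# Read the account name and primary access key for an existing storage account.
# "wahidstore" is an example; substitute your own account name.
Get-AzureStorageKey -StorageAccountName "wahidstore" | Select StorageAccountName, Primary
```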

Viewing Storage

There are several tools you can use to view Windows Azure Storage; I’ve put a link to a list of them in the resources section at the end. For now, we’ll use the freeware version (version 1) of ClumsyLeaf CloudXPlorer (http://clumsyleaf.com/products/cloudxplorer). After you’ve downloaded and installed CloudXPlorer, we need to add a new account:

  1. Open CloudXPlorer and click File…, Accounts…, New…, Windows Azure account.
  2. Type in the Account Name you copied down in Step #7 of the earlier procedure.
  3. For Secret Key, type in the Primary Access Key you copied down in Step #7 of the earlier procedure.
  4. Check “Use SSL/TLS” and click OK. The account has been added, so close the Manage Accounts window if it’s still open.

Back in CloudXPlorer, you’ll see your account in a tree view. It will be mostly empty now, but you should have one container called “vhds.” This is where we can place our images and disks. Before we continue, let’s spend a minute on the difference between an image and a disk.

Image vs. Disk

A disk is a vhd of a virtual machine that may or may not contain an operating system. When you attach it to a virtual machine in Windows Azure, it boots and expects everything to be ready. In contrast, an image is a vhd of a virtual machine that has been sysprepped. This means a machine deployed from it will get a new SID (security identifier), a new product key, and new Administrator credentials. This type of vhd can be used to deploy virtual machines, whereas a disk cannot.
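For reference, generalizing a machine so its vhd can serve as an image is done with Sysprep inside the VM before you shut it down and upload. A typical invocation looks like this (the exact flags depend on your scenario):

```
%WINDIR%\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown
```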

What’s confusing is that when researching how to upload a vhd, many people land on this article: https://www.windowsazure.com/en-us/manage/windows/common-tasks/upload-a-vhd/. In my opinion, both the URL and the title, “Creating and Uploading a Virtual Hard Disk that Contains the Windows Server Operating System,” are inaccurate. The instructions are good if you want to create an “image,” but not if you want to create a “disk” in Windows Azure; Windows Azure treats them differently. I uploaded a non-sysprepped vhd and attached it to a new VM, and it tried to provision endlessly. In any case, that article *should* be titled “Creating and Uploading a Virtual Hard Disk that Contains the Windows Server Operating System as an Image,” and should link to another article for people who just want to upload a vhd as a disk.

Working with disks

Next, we’re going to upload a vhd. Unfortunately, there’s no support for vmdk or iso files (yet), so if you want your files in Windows Azure, you need to create a vhd. Let’s take a break and go over a few notes about vhds in Windows Azure. For a vhd to “work,” it must meet these guidelines:

  • No dynamic disks. The vhd must be a fixed disk.
  • Maximum size must be under 127 GB.
  • No support for vhdx (need to verify this one)
  • Must be uploaded as a page blob (rather than a block blob).

If you create a vhd in Hyper-V for use in Windows Azure, make sure it’s a fixed disk and smaller than 127 GB. If you have existing dynamic disks you want to upload, you can use the “Edit Disk” feature in Hyper-V to convert them to fixed disks. With that, let’s look at two upload tools.
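If you’d rather script the conversion than use the Hyper-V GUI, the Hyper-V PowerShell module (Server 2012 and later) can do it. A sketch; the paths are hypothetical:

```powershell
# Convert a dynamic vhd to the fixed format Windows Azure requires.
# Source and destination paths are examples; substitute your own.
Convert-VHD -Path "D:\VHDs\Arch-DC-1-dynamic.vhd" -DestinationPath "D:\VHDs\Arch-DC-1.vhd" -VHDType Fixed
```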

The first tool is GUI-based: CloudXPlorer. To upload a vhd using CloudXPlorer, DON’T CLICK THE UPLOAD BUTTON. If you look carefully, that button is for block blobs. When you right-click, you’ll also see an “Upload” link in the context menu; again, that’s for block blobs, and as stated, we need to upload page blobs. The right option is to right-click in a container or folder and click “Upload page blob.” I’ve pasted a link in the resources section called “Understanding Block blobs and Page blobs” if you want to read up on it. Once you click “Upload page blob,” you just select your .vhd file and away it goes.

The second tool is called “CSUpload,” a command-line program included as part of the Windows Azure SDK that you downloaded in Part 1. I believe csupload has one major advantage: it’s multi-threaded, so your uploads will go faster. CloudXPlorer is only multi-threaded for block blobs. To upload a .vhd using csupload:

  1. Open Windows Azure Command Prompt (in your Start Menu or Start Screen)
  2. Type in:

    csupload add-disk -destination "https://<storage account name>.blob.core.windows.net/vhds/MyDemoDisk.vhd" -Label "MyDemoDisk" -LiteralPath "<path to>\MyDemoDisk.vhd"

    Where <storage account name> is your Storage Account name, MyDemoDisk.vhd is the name of your vhd file and <path to> is the full path to the location of the vhd file.

    Note: “csupload add-disk” uploads the vhd, converts dynamic disks to fixed disks on the fly, and adds your disk to the repository. In contrast, the Windows Azure PowerShell version, “Add-AzureDisk,” only adds the disk to the repository after it is uploaded. It doesn’t upload or convert dynamic disks to fixed disks.

  3. As stated earlier, the “-” character can get messed up when you copy and paste, so re-type it if it doesn’t work. For a 40GB file on a fast connection (5Mbps), I saw about 30 minute upload times.

You can use CloudXplorer to add containers and folders if you want to better organize your vhds. Just click on the parent object and then right-click in the content pane. Select “New” container or folder. In my screenshot, I’m in the “vhds” container so only New Folders are available. If I click on my account “wahids acct,” I’d see the New Container option.

Note: Windows Azure Management Portal doesn’t show tree views, so all your disks will appear in one single view. The folders you create will all show up as “$$$.$$$”

OS Disks vs. Data Disks

In Windows Azure, OS Disks are those that contain a bootable operating system. A Data Disk is a non-bootable disk usually created to hold data (such as database files). In addition, all Windows Azure virtual machines get a “Temporary Storage” disk. The volumes are usually represented as follows:

Drive Letter    Type                                                 Best Used For
C               OS Disk, System Drive                                OS only
D               Temporary Storage, non-persistent drive              Pagefile, TempDB
E               Data Disk, not created by default or automatically   Databases, Program Files

When creating disks (or adding them) in Windows Azure, there are two ways to specify that a disk is an OS Disk.

  1. First, in the Windows Azure management portal on the “Create Disk” dialog box you’ll put a check mark on the box “The VHD contains an operating system” and select the OS from the drop down box.
  2. If you’re using PowerShell, you’ll just specify the –OS parameter followed by either “Windows” or “Linux” during the Add-AzureDisk command.

By default, disks (either kind) in Windows Azure use caching to increase performance. This is great for OS Disks because it leads to really fast boot times. On Data Disks, however, it isn’t ideal: we want to be assured our data is written immediately, so we want to disable caching. We can do this with Set-AzureDataDisk -HostCaching "None" for existing disks, or as part of Add-AzureDataDisk when we’re creating new data disks.

Note: Changing the cache on a running VM will reboot the VM.
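As a sketch (the service and VM names are hypothetical), turning caching off on an already-attached data disk looks like this with the classic cmdlets:

```powershell
# Disable host caching on the data disk at LUN 0, then push the change to Azure.
# "ArchSQL" and "Arch-SQL-1" are example names; substitute your own.
Get-AzureVM -ServiceName "ArchSQL" -Name "Arch-SQL-1" |
    Set-AzureDataDisk -LUN 0 -HostCaching "None" |
    Update-AzureVM
```

Remember the note above: on a running VM this change triggers a reboot.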

 

Some Best Practices

For the type of machines we’ll be using, I’m including some best practices I learned from a presentation by Corey Sanders (@CoreySandersWA) and included a link to his Channel 9 videos in the resources section.

SQL Servers: Keep the TempDB data and log files on the D: drive (the temporary, non-persistent disk) that Windows Azure gives us automatically; that drive is optimized for this kind of temporary data store. Keep your actual database files on a separate data disk, or even on multiple data disks. As those disks get used more often, Windows Azure optimizes their performance. You may even get better performance by using separate storage accounts; just make sure they’re in the same region/Affinity Group as your VM.

Domain Controllers: Like any other applications we host, we shouldn’t put the database on the C: drive. If you’re setting up a DC from scratch, create a data disk and use that to store the Active Directory databases and folders when asked during dcpromo.

SharePoint: The same pattern applies here. Keep the data separate, install SharePoint on a data disk.
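For all three cases, attaching an empty data disk can be scripted. A hedged sketch (the names, size, and LUN are example values):

```powershell
# Create and attach a new, empty 100 GB data disk with host caching off.
# "ArchSQL" and "Arch-SQL-1" are example names; substitute your own.
Get-AzureVM -ServiceName "ArchSQL" -Name "Arch-SQL-1" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel "Data" -LUN 0 -HostCaching "None" |
    Update-AzureVM
```

After it’s attached, initialize and format the disk inside the VM before moving data onto it.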

Finally, not really a best practice, but a note on storage costs. You pay for what you use, so creating a 700 GB data disk won’t immediately cost you 700 GB worth of storage; as the drive fills up, you pay for what you’re using. However, there is no “trim” functionality as of this writing, which means that even if you delete files, your costs won’t be reduced. One workaround, usable in rare cases (such as when you know the data won’t grow to fill up the space again), is to create a second disk and copy the data over. Use your new disk and delete the old one.

Conclusion

I’ll talk about creating a VM from scratch later in the series. For our purposes, I’m going to assume you have created a domain controller and SQL Server (and possibly SharePoint servers) outside of Windows Azure (in Hyper-V). Upload all your vhds using csupload (you can write a batch file).

If you want to create your VMs from scratch in Windows Azure, you can still follow along in Part 3.

Next time, we’ll start building out our virtual network in Windows Azure. We won’t need our disks initially, but will want at least our domain controller to be ready.

Resources:
