Isolating Azure Storage Accounts for greater Virtual Machine resiliency

In my day-to-day role as an Azure Solution Architect I get involved in some pretty substantial and very complex deployments for customers, which require a lot of planning and design work. One thing I have found, especially in this new cloud world, is that there are about a dozen ways to solve a customer's problem, and each one would be technically right. It typically comes down to what gives the customer the best solution without breaking the bank.

One of the more complex issues I've found when working on IaaS deployments with a large number of virtual machines is ensuring that the storage account design is sound. One accepted practice is to group common VM tiers into shared storage accounts and, of course, to place each VM into an availability set to ensure they fall under Microsoft's 99.95% SLA. Digging a bit deeper, this practice isn't as resilient as one might think. Sure, having VMs in an availability set spreads them across separate fault and update domains, but what about storage? If both of my VMs are in the same storage account and the underlying storage is unavailable, then what happens?

Is this right? Do I need to place each VM into its own storage account for greater resiliency? After doing a bit of research I found this great article:

In particular this section caught my attention (I have highlighted the key points):

Are you using premium storage and separate storage accounts for each of your virtual machines?

It is a best practice to use premium storage for your production virtual machines. In addition, you should make sure that you use a separate storage account for each virtual machine (this is true for small-scale deployments. For larger deployments you can re-use storage accounts for multiple machines but there is a balancing that needs to be done to ensure you are balanced across update domains and across tiers of your application).

So it seems premium storage and separate storage accounts are the way to go. Things get even more interesting. Read on…

Not only should you use premium storage and separate storage accounts for your VMs, you also need to name the storage accounts following a specific naming convention, or you run the risk of the storage partitions being co-located on the same partition server. That caught my attention. Luckily I was sent this article: and the section that really cleared everything up for me was this:

Partition Naming Convention

…naming conventions such as lexical ordering (e.g. msftpayroll, msftperformance, msftemployees, etc) or using time-stamps (log20160101, log20160102, log20160103, etc) will lend itself to the partitions being potentially co-located on the same partition server, until a load balancing operation splits them out into smaller ranges.
You can follow some best practices to reduce the frequency of such operations.

  • Examine the naming convention you use for accounts, containers, blobs, tables and queues, closely. Consider prefixing account names with a 3-digit hash using a hashing function that best suits your needs.
  • If you organize your data using timestamps or numerical identifiers, you have to ensure you are not using an append-only (or prepend-only) traffic pattern. These patterns are not suitable for a range-based partitioning system, and could lead to all the traffic going to a single partition and limiting the system from effectively load balancing. For instance, if you have daily operations that use a blob object with a timestamp such as yyyymmdd, then all the traffic for that daily operation is directed to a single object which is served by a single partition server.

So from the above information it seems that the following holds true:

1) Use Premium storage in conjunction with separate storage accounts. This also gets around the IOPS limits per storage account, but note that there is a hard limit of 200 storage accounts per subscription.

2) Prefix your storage accounts with a random 3-character hash per storage account to ensure that the accounts are properly spread across load-balanced partition servers. For example, naming your storage accounts storageaccount1, storageaccount2 isn't sufficient. Go with something like fxwstorage1, bcdstorage2, etc. to ensure that the storage accounts are load balanced correctly. Luckily for us we can use ARM templates to provision storage accounts using this naming convention, but that's for another post…
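As a sketch, here is one way you could generate such a prefix with PowerShell. This is purely my own illustration (the function name and base names are made up), not an official recommendation:

```powershell
# Hypothetical helper: derive a 3-character hash prefix from a base name so
# account names don't sort lexically onto the same partition server.
function Get-PrefixedStorageAccountName {
    param([string]$BaseName)

    # MD5 is used here purely to get a stable, well-distributed prefix
    $md5   = [System.Security.Cryptography.MD5]::Create()
    $bytes = $md5.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($BaseName))
    $hash  = -join ($bytes | ForEach-Object { $_.ToString("x2") })

    # Storage account names must be 3-24 lowercase letters and numbers
    return ($hash.Substring(0, 3) + $BaseName).ToLower()
}

Get-PrefixedStorageAccountName -BaseName "storage1"
Get-PrefixedStorageAccountName -BaseName "storage2"
```

You could then feed the generated names into your ARM template parameters rather than hand-picking prefixes.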

Posted in Microsoft Azure, Storage, storage accounts, VMs

This blog is moving to a new home..

It's been a long while, but I am moving this blog to somewhere more permanent.

I am moving it to

Lots of good stuff coming I promise 🙂

Posted in Uncategorized

How to flush the Azure Redis Cache with PowerShell

Recently I was working with a customer who wanted an easy way to flush their Azure-based Redis cache of all key/value pairs. One developer suggested iterating over each of the collection elements and removing each iterated item; another suggestion was to delete and recreate the Redis cache from scratch. Both are valid suggestions, but neither is an efficient way to simply flush the Azure Redis cache of all data.

So I have written a simple PowerShell script to flush the cache for you.

You will need the 'StackExchange.Redis.dll' to be in the same directory as the script, as there isn't a REST API you can easily call, so you need to call the client DLL directly. You can easily get the DLL via the Visual Studio NuGet package and just copy it to the script folder.

From there the script is pretty self-explanatory.

The code is below:

##Global variables
[string]$RedisCacheHost = "<CACHE_ENDPOINT>"
[string]$RedisCacheKey  = "<CACHE_KEY>"
[int]$RedisCachePort    = 6380

#Flushes the Azure Redis cache
function FlushCache
{
    param(
        [string][parameter(mandatory)]$RedisCacheHost,
        [string][parameter(mandatory)]$RedisCacheKey,
        [int][parameter(mandatory)]$RedisCachePort
    )

    Write-Host "Flushing cache on host - $RedisCacheHost" -ForegroundColor Yellow

    #Connection string (allowAdmin=true is required for FlushAllDatabases)
    $redis_connstr = "$RedisCacheHost,ssl=true,password=$RedisCacheKey,allowAdmin=true"

    #Add the Redis type from the assembly
    Add-Type -Path "StackExchange.Redis.dll" -PassThru | Out-Null

    #Open a connection
    $redis_cache = [StackExchange.Redis.ConnectionMultiplexer]::Connect($redis_connstr, $null)

    #Flush the cache
    $redisServer = $redis_cache.GetServer($RedisCacheHost, $RedisCachePort, $null)
    $redisServer.FlushAllDatabases()

    #Dispose of the connection
    $redis_cache.Dispose()

    Write-Host "Cache flush done" -ForegroundColor Yellow
}

#Get subscription details
$subscription = Get-AzureSubscription -Current -ErrorAction Stop
if ($subscription -eq $null)
{
    Write-Host "Windows Azure subscription is not configured or the specified subscription name is invalid."
    Write-Host "Use Get-AzurePublishSettingsFile and Import-AzurePublishSettingsFile first"
    return
}

#Call the function
FlushCache -RedisCacheHost $RedisCacheHost -RedisCacheKey $RedisCacheKey -RedisCachePort $RedisCachePort

Posted in Microsoft Azure, PowerShell, Redis Cache

AWS EC2 Windows Instance – Get instance details

When provisioning an AWS EC2 fleet, either manually or via a launch configuration, it is very useful to be able to determine whether the instances are operational and whether the Elastic Load Balancer is correctly redirecting HTTP requests and spreading the load evenly amongst the active EC2 instances.

The best way to achieve this is to have the instance "self provision" the functionality via UserData or via a launch configuration that is injected when the instance is created. The code can be stored in an S3 bucket and copied to the instance on start-up.

PowerShell handles the automation process for you via UserData or by a launch configuration and does the following:

1) Install-WindowsFeature etc… – Installs IIS and all subcomponents. This can take some time – approx. 5 minutes per instance.

2) wget https://<bucketname> -outfile c:\inetpub\wwwroot\ – Gets the zipped code from the S3 bucket and copies it locally to the /inetpub folder within the EC2 instance.

3) [System.Reflection.Assembly]::LoadWithPartialName('System.IO.Compression.FileSystem')
[System.IO.Compression.ZipFile]::ExtractToDirectory("c:\inetpub\wwwroot\", "c:\inetpub\wwwroot\aspxless") – Unzips the code into a separate folder and makes it ready for use under the \wwwroot\aspxless folder.
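Putting the three steps together, a UserData block along these lines could be used. Note the bucket name and zip file name below are placeholders for illustration, not the real ones:

```powershell
<powershell>
# 1) Install IIS and all subcomponents (approx. 5 minutes per instance)
Install-WindowsFeature Web-Server -IncludeAllSubFeature -IncludeManagementTools

# 2) Pull the zipped code from the S3 bucket (placeholder bucket/file names)
wget "https://my-bucket.s3.amazonaws.com/aspxless.zip" -OutFile "c:\inetpub\wwwroot\aspxless.zip"

# 3) Unzip it into \wwwroot\aspxless ready for use
[System.Reflection.Assembly]::LoadWithPartialName('System.IO.Compression.FileSystem')
[System.IO.Compression.ZipFile]::ExtractToDirectory("c:\inetpub\wwwroot\aspxless.zip", "c:\inetpub\wwwroot\aspxless")
</powershell>
```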

Note: Make sure that the S3 bucket where the zip file is stored allows the s3:GetObject action, otherwise you will get a permission denied error. Securing this via an IAM role is highly recommended, but for the purposes of the demo the "*" principal will suffice.
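For reference, a minimal bucket policy granting that action might look like this (the bucket name is a placeholder; in production, scope the Principal down via an IAM role rather than using "*"):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```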

The code within the default.aspx page queries the instance metadata and returns the following:

  1. InstanceID
  2. Public Host Name
  3. Public IP
  4. Instance Type
  5. Availability Zone

You would then query the instance data via the following URL: http://<ELB_End_Point_Or_PublicIP>/aspxless/default.aspx
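Under the covers, the page reads those values from the EC2 instance metadata service. A rough PowerShell equivalent of what the default.aspx code does (my own sketch, not the actual page code) would be:

```powershell
# Each value the page displays comes from the instance metadata endpoint,
# which is reachable from within any EC2 instance.
$meta = "http://169.254.169.254/latest/meta-data"

$instanceId   = Invoke-RestMethod "$meta/instance-id"
$publicHost   = Invoke-RestMethod "$meta/public-hostname"
$publicIp     = Invoke-RestMethod "$meta/public-ipv4"
$instanceType = Invoke-RestMethod "$meta/instance-type"
$az           = Invoke-RestMethod "$meta/placement/availability-zone"

Write-Host "InstanceID:        $instanceId"
Write-Host "Public Host Name:  $publicHost"
Write-Host "Public IP:         $publicIp"
Write-Host "Instance Type:     $instanceType"
Write-Host "Availability Zone: $az"
```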

The complete code for the walkthrough above can be found here:

Posted in Amazon Web Services, AWS, EC2

CyberPuzzles…a look back at 1997

Back in 1997-1998 I was a pretty decent Java developer, especially since the language was only a year or so old. I had the very rare privilege of working on some very cutting-edge code for some online puzzles during my time at OzEmail, who at the time were Australia's largest Internet service provider.

At the time there was nothing like what we had developed anywhere on the internet; we looked, and it didn't exist. We had to build it from scratch. So we did, and it was amazing for me to be involved with this project.

We built a very sophisticated online crossword puzzle amongst others (Word Chain, Hidden Word, Broken Word, etc.) that leveraged some pretty smart code: it would read a data set that we defined, stored in a SQL Server 6.5 database, and render based on the dataset it was fed. We had a built-in system for clues and scoring, audio and animation, all built around some central themes for kids and adults alike. Testing was fun, as you can imagine, as we had to support all the main browsers of the day such as Internet Explorer 3/4 and Netscape Navigator. And you guessed it, they had implemented Java applet runtimes in different ways. We lived by the Java mantra of the time: "Write once, debug everywhere".

We leveraged custom built parsers, Object Oriented Programming (OOP), SQL Server 6.5 database calls to host the puzzle data, multithreading for loading images and audio, keyboard and mouse input support and the list goes on…

Just to put things into context here was the technology state of play back in 1997:

1) There was NO C#, .NET or ASP.NET. It would be 3-5 years before that technology came along and became mainstream.

2) Java applets were all the rage. There was no AngularJS or Flash; we had to handle all the animations and user input ourselves.

3) The cloud didn’t exist but working for a hosting provider we managed to have access to as many servers as we needed.

4) Developer tools were very text-based. We had Visual Studio with Visual J++, and Eclipse didn't exist, so we decided to develop in Symantec Visual Cafe, which you can read more about here:


Times have really changed, but I have to say (without bragging or anything) that being involved in this project was a really big highlight of my career, and I really do believe that the CyberPuzzles code was some of the best code I have ever written in my 20+ years in IT 🙂

The code is pretty self-explanatory, albeit complex, and reading through it brings back some very fond memories. I wanted to dedicate this release to my manager at the time, Neil Reading, who passed away in late 1998, and to my great mentor Bryan Richards, from whom I learnt a lot at the time.

So for all those Java coders out there who want to have a look at what Java development was like in 1997-98, you can download the CyberPuzzles Crossword code from GitHub here:

The code comes without any warranty and has zero support, but it is fun to read and have a look through.


Posted in Applet, Java, Retro Code

I passed – AWS Certified Solutions Architect – Associate Level exam!

I sat and passed my first of many AWS exams today and am very stoked with the results.

Keep an eye on my blog for LOTS of AWS related posts coming soon.

Posted in Amazon Web Services, AWS, Cloud

IaaS IOPs Test – Azure Vs. AWS

I was curious recently about comparing disk IOPS between AWS and Microsoft Azure for IaaS virtual machines, and the one thing that really surprised me was how much control and configurability you have when tuning and configuring disks. I decided to set up equivalent test labs in Azure and AWS, run some disk performance tests, and compare the results. No doubt there is something I missed and further tweaks I could make; for the most part I kept things at the defaults where possible to keep the tests comparable.

Here are the two test lab configurations I used:


  • Azure VM: A4 – 8 cores, 14GB memory
  • OS: Windows Server 2012 Datacenter
  • Region: East Asia
  • VHD attached disk – Size: 100GB – NTFS Formatted. Host Cache Preference: None


  • AWS EC2: c3.2xlarge 8vCPUs 15GB memory 2X80 SSDs
  • OS: Microsoft Windows Server 2012 Base – ami-ab563191
  • Region/AZ: Sydney – ap-southeast-2a
  • EBS attached disk – Size: 100GB – NTFS Formatted, General purpose SSD, IOPS 300 / 3000

The tool that I ran to generate the results was CrystalDiskMark. One factor that no doubt skewed the test results was the fact that AWS has EBS volumes backed by SSDs, whereas Azure doesn't yet have SSDs for its VHD attached disks. This may change in the future, but for now Azure has a hard limit of 500 IOPS per disk. One way around this limit is to attach multiple volumes and create a striped volume, something I will try in the future.

The tests themselves were very simple: start with a 100MB sample data file, then use a 500MB file, then increase it to a 1000MB file. For the Azure part of the test I both enabled and disabled the host cache preference setting on the attached disk, and the results proved to be very interesting.

For the tests I simply attached a VHD for Azure and an EBS volume for AWS, mounted as an empty drive in the VM to test against. I didn't use any striping of volumes, but that may come in a follow-up article. The volumes were formatted as NTFS simple volumes.
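For the record, here is roughly how the striping approach could work using Storage Spaces on Windows Server 2012. The pool and disk names below are made up, and I haven't benchmarked this configuration yet:

```powershell
# Pool all attached data disks and stripe them into a single simple
# (non-resilient) virtual disk to aggregate the per-disk IOPS cap.
$disks     = Get-PhysicalDisk -CanPool $true
$subsystem = (Get-StorageSubSystem).FriendlyName

New-StoragePool -FriendlyName "DataPool" -StorageSubSystemFriendlyName $subsystem -PhysicalDisks $disks |
    New-VirtualDisk -FriendlyName "StripedDisk" -ResiliencySettingName Simple -UseMaximumSize -NumberOfColumns $disks.Count |
    Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data" -Confirm:$false
```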

So for now here are the results:

All figures are in MB/s.

Test                 | Provider | Seq Read | 512K Read | 4K Read | 4K QD32 Read | Seq Write | 512K Write | 4K Write | 4K QD32 Write
Test 1 – 100MB file  | Azure    | 21.57    | 17.18     | 0.227   | 2.035        | 14.26     | 9.271      | 0.466    | 2.005
Test 2 – 500MB file  | Azure    | 17.64    | 12.20     | 0.157   | 1.921        | 12.45     | 11.42      | 0.476    | 1.988
Test 3 – 1000MB file | Azure    | 23.56    | 12.80     | 0.162   | 20.31        | 12.46     | 11.13      | 0.483    | 2.041
Test 1 – 100MB file  | AWS      | 136.7    | 136.6     | 12.53   | 12.53        | 116.9     | 112.0      | 6.574    | 12.53
Test 2 – 500MB file  | AWS      | 136.7    | 136.8     | 12.53   | 12.54        | 118.9     | 115.6      | 6.659    | 12.53
Test 3 – 1000MB file | AWS      | 136.7    | 136.9     | 12.45   | 12.53        | 119.4     | 112.9      | 6.706    | 12.54

So from the above it appears that AWS is the clear winner… but when I enabled the host cache preference setting in Azure for the attached VHD volume, the results took a turn in Azure's favour, especially for disk read operations. Very interesting. One thing I will try in the next round of tests is the provisioned IOPS that AWS has to offer; no doubt that will change things significantly.

Here are the results in Azure when I enabled the Host cache preference: Read/write for the attached VHD volume:

All figures are in MB/s.

Test                 | Provider | Seq Read | 512K Read | 4K Read | 4K QD32 Read | Seq Write | 512K Write | 4K Write | 4K QD32 Write
Test 1 – 100MB file  | Azure    | 1041     | 943.7     | 52.42   | 183.5        | 23.51     | 45.60      | 2.034    | 2.038
Test 2 – 500MB file  | Azure    | 1055     | 916.3     | 49.96   | 181.7        | 12.17     | 56.38      | 2.039    | 2.034
Test 3 – 1000MB file | Azure    | 1081     | 940.2     | 48.90   | 182.9        | 12.82     | 48.67      | 2.035    | 2.032

Take from these results what you will; these are just some initial observations. I will look at doing some SQL IaaS disk performance comparisons between AWS and Azure in a future article.




Posted in AWS, Microsoft Azure

Changing pace in my career

Recently I joined a new organisation (yes, I left Microsoft late last year). I joined Readify as a Lead SharePoint Consultant and delivered on all things SharePointy. I recently got offered a new role in the same company as a Pre-Sales Technical Specialist. It was a great opportunity, and I figured it was time to move out of technical delivery and into a pre-sales role.

Yes, I have moved to the dark side and am giving a pre-sales role a go.

In my new role I will be working with the Readify sales team and providing support from a technical perspective.

The biggest changes I have found so far are:

  • No more timesheets or utilisation-based targets
  • Engaging with customers from a very early stage to understand their requirements
  • Doing a LOT more presenting and meeting customers
  • Preparing and delivering CIE (Customer Immersion Experience) events
  • Needing to be across a lot more technologies, even non-Microsoft ones
  • Getting my head around more pure cloud solutions that involve Office 365, Azure and possibly AWS
  • Focusing on the end-state solution for a customer instead of just a point solution
  • Working with Microsoft account teams in a very different way

It's going to be a fun ride, and I will share my thoughts and any tips and tricks I pick up along the way.

Posted in Personal, Technical PreSales

Patching Office Web Apps to Service Pack 1.

Here are some steps to patch your Office Web Apps (OWA) servers to the recently released Service Pack 1 build. I have tested these at a few clients; let me know if you have any problems or questions.

Firstly, read this article and understand the steps involved:

Here are the steps to follow:

1) Remove the WOPI binding from the Central Administration server using the following PowerShell:

   Remove-SPWOPIBinding -All -Confirm:$false

2) Remove the machines from the OWA farm; to be run on each machine in the OWA farm (effectively scrapping the farm):

   Import-Module OfficeWebApps
   Remove-OfficeWebAppsMachine

3) Patch each OWA server with OWA SP1 and reboot each OWA server.

4) Recreate the OWA farm on the first OWA server using the following PowerShell command. Make sure that you specify the -SSLOffloaded parameter if you are using a load balancer. (Reference:

Change to suit your environment of course:

   Import-Module OfficeWebApps
   New-OfficeWebAppsFarm -InternalURL "http://Contoso-WAC" -AllowHttp -EditingEnabled

5) On each subsequent OWA server run the following PowerShell to rejoin the machine to the farm:

   Import-Module OfficeWebApps
   New-OfficeWebAppsMachine -MachineToJoin OWA1.<FQDN> -Confirm:$false

6) On the first OWA server run the following PowerShell to do a quick visual check on the farm and machine status:

   Import-Module OfficeWebApps

   #Get the farm status
   Get-OfficeWebAppsFarm

   #Get the machines in the farm
   (Get-OfficeWebAppsFarm).Machines

7) Reconnect SharePoint to the OWA farm. Run the following PowerShell on the Central Administration server:

   #Change to your OWA load-balanced URL (can be a single server if there is only one)
   $WACServer = "officeweb.<FQDN>"

   #Set the WOPI zone
   $WOPIZone = "internal-https"

   #Configure the SP farm to connect to the OWA farm
   New-SPWOPIBinding -ServerName $WACServer

   #Set the WOPI zone and confirm it
   Set-SPWOPIZone -Zone $WOPIZone
   Get-SPWOPIZone

   #Allow HTTP over OAuth
   $config = (Get-SPSecurityTokenServiceConfig)
   $config.AllowOAuthOverHttp = $true
   $config.Update()

Also make sure you test from a browser using a non-service account, otherwise you might see some errors or unexpected behaviour.

Happy hunting!


Posted in OWA, Service Pack 1, SharePoint 2013

Create a Folder and set the ProgID programmatically using the SharePoint 2013 CSOM

Following up from my post:

Here is how to create a folder and set the ProgID on that folder in a library using the CSOM.

Setting the ProgID using the normal API isn't supported, as it's read-only, but luckily there is another way. (As there usually is with SharePoint 🙂 )

   //Set the ProgID - In this case it's a OneNote notebook
   listItem.set_item("HTML_x0020_File_x0020_Type", "OneNote.Notebook");

The function to create a folder and set the ProgID is as follows. I haven't included the callback functions in the snippet, but you get the idea.

   //Creates a folder in the given list and sets the ProgID on it
   function addFolderToLibrary(listName, folderName) {

       var listItem;

       //get the list details - assuming you have the spWeb context
       var theList = spWeb.get_lists().getByTitle(listName);

       //Set the list item creation info
       var itemCreateInfo = new SP.ListItemCreationInformation();
       itemCreateInfo.set_underlyingObjectType(SP.FileSystemObjectType.folder);
       itemCreateInfo.set_leafName(folderName);

       //Add the item
       listItem = theList.addItem(itemCreateInfo);

       //Set the ProgID - In this case it's a OneNote notebook
       listItem.set_item("HTML_x0020_File_x0020_Type", "OneNote.Notebook");

       //apply any updates
       listItem.update();
       clientContext.load(listItem);

       //commit the changes
       clientContext.executeQueryAsync(
           Function.createDelegate(this, onListUpdateSuccess),
           Function.createDelegate(this, onListUpdateFail)
       );
   }

Posted in CSOM, JavaScript, Office 365, SharePoint 2013, SharePoint Online