Isolating Azure Storage Accounts for greater Virtual Machine resiliency

In my day-to-day role as an Azure Solution Architect I get involved in some pretty substantial and very complex deployments for customers, which require a lot of planning and design work. One thing I have found, especially in this new cloud world, is that there are about a dozen ways to solve a customer’s problem, and each one would be technically right. It typically comes down to what gives the customer the best solution without breaking the bank.

One of the more complex issues I’ve found when working on IaaS deployments with a large number of virtual machines is ensuring that the storage account design is sound. One of the accepted practices is to group common VM tiers into shared storage accounts and, of course, to place each VM into an availability set to ensure that they fall under Microsoft’s 99.95% SLA. Digging a bit deeper, this practice isn’t as resilient as one might think. Sure, having VMs in an availability set spreads the VMs across separate fault and update domains, but what about storage? If both of my VMs are in the same storage account and the underlying storage is unavailable, then what happens?

Is this right? Do I need to place each VM into its own storage account for greater resiliency? After doing a bit of research I found this great article:

In particular, this section caught my attention (I have highlighted the key points):

Are you using premium storage and separate storage accounts for each of your virtual machines?

It is a best practice to use premium storage for your production virtual machines. In addition, you should make sure that you use a separate storage account for each virtual machine (this is true for small-scale deployments. For larger deployments you can re-use storage accounts for multiple machines but there is a balancing that needs to be done to ensure you are balanced across update domains and across tiers of your application).

So it seems premium storage and separate storage accounts are the way to go. Things get even more interesting. Read on…

Not only should you use premium storage and separate storage accounts for your VMs, you also need to name the storage accounts following a specific naming convention, or you run the risk of the storage partitions potentially being co-located on the same partition server. That caught my attention. Luckily I was sent this article: and the section that really cleared everything up for me was this:

Partition Naming Convention

…naming conventions such as lexical ordering (e.g. msftpayroll, msftperformance, msftemployees, etc) or using time-stamps (log20160101, log20160102, log20160103, etc) will lend itself to the partitions being potentially co-located on the same partition server, until a load balancing operation splits them out into smaller ranges.
You can follow some best practices to reduce the frequency of such operations.

  • Examine the naming convention you use for accounts, containers, blobs, tables and queues, closely. Consider prefixing account names with a 3-digit hash using a hashing function that best suits your needs.
  • If you organize your data using timestamps or numerical identifiers, you have to ensure you are not using an append-only (or prepend-only) traffic pattern. These patterns are not suitable for a range-based partitioning system, and could lead to all the traffic going to a single partition and limiting the system from effectively load balancing. For instance, if you have daily operations that use a blob object with a timestamp such as yyyymmdd, then all the traffic for that daily operation is directed to a single object which is served by a single partition server.
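To make the hash-prefix advice concrete, here is a minimal PowerShell sketch of how you might derive such a prefix from a base account name. The function name and the choice of MD5 are purely illustrative assumptions, not something prescribed by the article quoted above:

```powershell
# Sketch: derive a short hash prefix so storage account names don't sort
# lexically onto the same partition server. Names here are illustrative.
function Get-HashedStorageAccountName
{
    param([string][parameter(Mandatory)]$BaseName)

    # MD5 is used only to produce a stable pseudo-random prefix, not for security
    $md5   = [System.Security.Cryptography.MD5]::Create()
    $bytes = $md5.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($BaseName))

    # Take the first 3 hex characters; storage account names must be
    # 3-24 lowercase letters and digits
    $prefix = [System.BitConverter]::ToString($bytes).Replace("-", "").Substring(0, 3)

    ($prefix + $BaseName).ToLower()
}

Get-HashedStorageAccountName "storage1"   # produces something like "xxxstorage1"
```

Because the prefix is derived from the base name, the result is deterministic, so re-running a deployment yields the same account names while still breaking up lexical ordering.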

So from the above information it seems that the following holds true:

1) Use Premium storage in conjunction with separate storage accounts. This also gets around any per-storage-account IOPS limits, but note that there is a hard limit of 200 storage accounts per subscription.

2) Prefix your storage account names with a random 3-digit hash to ensure that the storage accounts are properly spread across load-balanced partition servers. For example, naming your storage accounts storageaccount1, storageaccount2 isn’t sufficient. Go with something like fxwstorage1, bcdstorage2, etc., to ensure the storage accounts are load balanced correctly. Luckily for us, we can use ARM templates to provision storage accounts using the naming convention mentioned above, but that’s for another post….
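As a small taste of the ARM approach, a template can use the built-in uniqueString() function to generate a hash-style prefix at deployment time. The fragment below is a sketch only; the base name, SKU, and API version are illustrative assumptions:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "variables": {
    "storageAccountName": "[concat(substring(uniqueString(resourceGroup().id), 0, 3), 'storage1')]"
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[variables('storageAccountName')]",
      "apiVersion": "2016-01-01",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Premium_LRS" },
      "kind": "Storage",
      "properties": {}
    }
  ]
}
```

uniqueString() is deterministic for a given input, so scoping it to resourceGroup().id gives each resource group its own stable prefix without lexical ordering across accounts.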

How to flush the Azure Redis Cache with PowerShell

Recently I was working with a customer and they wanted an easy way to flush their Azure-based Redis cache of all key/value pairs. One developer suggested iterating over each of the collection elements and removing the iterated item; another suggestion was to delete and recreate the Redis cache from scratch. Both are valid suggestions, but neither is an efficient way to simply flush the Azure Redis cache of all data.

So I have written a simple PowerShell script to flush the cache for you.

You will need the ‘StackExchange.Redis.dll’ assembly to be in the same directory as the script, as there isn’t a REST API you can easily call, so you need to call the client DLL directly. You can easily get it via the Visual Studio NuGet package and just copy it to the script folder.

From there the script is pretty self-explanatory.

The code is below:

## Global variables
[string]$RedisCacheHost = "<CACHE_ENDPOINT>"
[string]$RedisCacheKey  = "<CACHE_KEY>"
[int]$RedisCachePort    = 6380

# Flushes the Azure Redis cache
function FlushCache
{
    param(
        [string][parameter(Mandatory)]$RedisCacheHost,
        [string][parameter(Mandatory)]$RedisCacheKey,
        [int][parameter(Mandatory)]$RedisCachePort
    )

    Write-Host "Flushing cache on host - $RedisCacheHost" -ForegroundColor Yellow

    # Connection string (allowAdmin=true is required for FlushAllDatabases)
    $redis_connstr = "$RedisCacheHost,ssl=true,password=$RedisCacheKey,allowAdmin=true"

    # Add the Redis types from the assembly in the script folder
    Add-Type -Path "StackExchange.Redis.dll" -PassThru | Out-Null

    # Open a connection
    $redis_cache = [StackExchange.Redis.ConnectionMultiplexer]::Connect($redis_connstr, $null)

    # Flush the cache
    $redisServer = $redis_cache.GetServer($RedisCacheHost, $RedisCachePort, $null)
    $redisServer.FlushAllDatabases()

    # Dispose of the connection
    $redis_cache.Dispose()

    Write-Host "Cache flush done" -ForegroundColor Yellow
}

# Get subscription details
$subscription = Get-AzureSubscription -Current -ErrorAction Stop
if ($subscription -eq $null)
{
    Write-Host "Windows Azure subscription is not configured or the specified subscription name is invalid."
    Write-Host "Use Get-AzurePublishSettingsFile and Import-AzurePublishSettingsFile first"
    return
}

# Call the function
FlushCache -RedisCacheHost $RedisCacheHost -RedisCacheKey $RedisCacheKey -RedisCachePort $RedisCachePort

CyberPuzzles…a look back at 1997

Back in 1997-1998 I was a pretty decent Java developer, especially since the language was only a year or so old. I had the very rare privilege of working on some very cutting-edge code for some online puzzles during my time at OzEmail, who at the time were Australia’s largest Internet service provider.

At the time there was nothing like what we had developed anywhere on the internet; we looked, and it didn’t exist. We had to build it from scratch. So we did, and it was amazing for me to be involved with this project.

We built a very sophisticated online crossword puzzle, amongst others (Word Chain, Hidden Word, Broken Word, etc.), that leveraged some pretty smart code: it would read a data set that we defined, stored in a SQL Server 6.5 database, and render based on the dataset it was fed. We had a built-in system for clues and scoring, plus audio and animation, all built around some central themes for kids and adults alike. Testing was fun, as you can imagine, as we had to support all the main browsers of the day, such as Internet Explorer 3/4 and Netscape Navigator. And, you guessed it, they had each implemented their Java applet runtimes in different ways. We lived by the Java mantra of the time: “Write once, debug everywhere”.

We leveraged custom-built parsers, object-oriented programming (OOP), SQL Server 6.5 database calls to host the puzzle data, multithreading for loading images and audio, keyboard and mouse input support, and the list goes on…

Just to put things into context here was the technology state of play back in 1997:

1) There was NO C#, .NET or ASP.NET. It would be 3-5 years before that technology came along and became mainstream.

2) Java applets were all the rage. There was no AngularJS or Flash; we had to handle all the animation and user input ourselves.

3) The cloud didn’t exist, but working for a hosting provider we managed to have access to as many servers as we needed.

4) Developer tools were very text-based. We had Visual Studio with Visual J++, and Eclipse didn’t exist, but we decided to develop in Symantec Visual Cafe, which you can read more about here: and which looked like this:


Times have really changed, but I have to say (without bragging or anything) that being involved in this project was a really big highlight of my career, and I really do believe that the CyberPuzzles code was some of the best code I have ever written in my 20+ years in IT 🙂

The code is pretty self-explanatory but pretty complex, and reading through it brings back some very fond memories. I wanted to dedicate this release to my manager at the time, Neil Reading, who passed away in late 1998, and to my great mentor Bryan Richards, from whom I learnt a lot at the time.

So for all those Java coders out there who want to have a look at what Java development was like in 1997-98, you can download the CyberPuzzles Crossword code from GitHub here:

The code comes without any warranty and has zero support, but it’s fun to read and have a look at.