Automating Secure Access to KeyVault (Part 1)

In this post I plan to continue on from my earlier post on Secure Authentication of Azure Resource Management Deployments and show how to control access to Azure KeyVault from a PowerShell script.  A lot of this setup can be done from the Azure Portal and with tools like makecert.exe, as was done in that earlier post, but to achieve full, stand-alone automation we will want to replace tools like makecert.exe with techniques that do not pop up password dialogs or require similar blocking user interaction.  There are a lot of details to consider, so I’ll break this up into a few simpler parts and tie it all together at the end.

1. Automating Secure Password Generation

If you just want to generate secure passwords, a great tool is PWGen for Windows.  But as mentioned already, in the case of automation we don’t want to block the process with a UI requiring direct user interaction.  To do this with PowerShell — or any other .Net language — we might be tempted to try the built-in Random class.  In many cases this is good enough, but to secure your authentication with Azure — and especially with KeyVault, where you will store secrets and keys — you want to be as secure as possible.  So let’s consider some general rules of thumb to apply:

  • Use a password that is long enough.  With the increasing speed of processors and the greater availability of distributed computing, we need a password long enough that a brute-force attack won’t recover it in a reasonably short amount of time.  There are many tools online that can analyze your password and show the approximate amount of computing power needed to break it.  I find this one simple, with quick and clear results.  Common techniques like character substitutions do not add much security; what these analyzers quickly show is that length is usually the more significant factor in making a password hard to break.  As of April 2016 I generally consider a 16-character password to be long enough, but this may have to increase again in a year or two.  Also note that if you are just protecting a short-lived resource like a certificate (e.g. as in the previous article) you may easily get away with a shorter password.
  • Avoid Dictionary Words.  Even with long passwords, using plain dictionary words, or words altered with character substitution, is still not optimal.  The online password analyzers do not highlight the fact that dictionary words and character substitutions can greatly reduce the search space that a brute-force attack must cover.
  • Store your password securely.  Some people go to great lengths to create secure passwords and then, because they are hard to remember, write them down on a sticky note (or a close electronic, plain-text equivalent).  Or worse still, they keep them secure but then share them with a co-worker over unencrypted email or similar communication.  There are simple solutions to these issues.  Use a password manager.  There are too many to list, but I’ll mention a free/open-source, cross-platform password manager that I often use called KeePass.  Password managers require you to memorize just one secure password (the one for the password manager db) that gives you access to the rest.  Similarly, if you have to share a password or any other secure information with a co-worker, you should encrypt your email or other communications: GPG and OpenPGP supply simple tools to encrypt files, emails, and other data using (asymmetric) Public-Key Cryptography.
  • Maximize Entropy.  Information Theory introduces a concept called Shannon Entropy, which is analogous to the entropy many people learned about in physics and thermodynamics.  Though this is a very interesting topic, I won’t spend time on it here except to say that for a given set of data there is a measurable (calculable) amount of entropy associated with it.  By maximizing the entropy in a set of information you are effectively maximizing its randomness and making it harder for brute-force attacks to extract the underlying information (see the sketch right after this list for a rough way to quantify this).
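To make that last point a bit more concrete: for a password of length L drawn uniformly at random from an alphabet of N characters, the theoretical entropy is L * log2(N) bits.  Here is a minimal sketch (Get-PasswordEntropyBits is a hypothetical helper for illustration, not part of anything built later in this series):

function Get-PasswordEntropyBits
{
   param
   (
      [Int]$Length = 16,
      [Int]$AlphabetSize = 69   # 26 lower + 26 upper + 10 digits + 7 punctuation marks
   )
   # Entropy of a uniformly random string: Length * log2(AlphabetSize)
   $Length * [Math]::Log($AlphabetSize, 2)
}
Get-PasswordEntropyBits -Length 16 -AlphabetSize 69   # roughly 97.7 bits

This is also why length beats character substitution: each extra character adds about 6 bits here, while swapping ‘a’ for ‘@’ adds essentially nothing if the attacker’s dictionary already includes common substitutions.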

So how do we maximize entropy?  Various OSes (and hardware) try to capture entropy from sources of randomness attached to them.  Linux, BSD, and many other *nix OSes have a kernel entropy pool, often exposed through /dev/random.  These entropy pools gather randomness from several sources in the system; even tools like PWGen will gather entropy from random mouse movements.  For Windows and .Net we have the RNGCryptoServiceProvider class, which does a much better job of maximizing entropy (i.e. randomness) than the simpler .Net Random class.  RNGCryptoServiceProvider is even secure enough to comply with the U.S. Government FIPS 140 standard.

So how do we use this?

Here is a simple PowerShell function that takes a single parameter, the password character length (defaulting to 16), and returns a secure password generated from the RNGCryptoServiceProvider:

function New-SecurePassword
{
  [CmdletBinding()]
  [OutputType([String])]
  param
  (
    [Parameter(Mandatory = $false)]
    [Int]$PasswordLength = 16
  )
  # Build the pool of permitted password characters.
  $lowerCase = [Char[]]([Char]'a'..[Char]'z')
  $upperCase = [Char[]]([Char]'A'..[Char]'Z')
  $digits = [Char[]]([Char]'0'..[Char]'9')
  $validPunctuation = [Char[]]'!.-+*^_'
  $validCharacters = $lowerCase + $upperCase + $digits + $validPunctuation
  # Fill a buffer with cryptographically strong random bytes, one per password character.
  $bytes = New-Object -TypeName 'System.Byte[]' -ArgumentList $PasswordLength
  $cryptoRNG = New-Object -TypeName 'System.Security.Cryptography.RNGCryptoServiceProvider'
  $cryptoRNG.GetBytes($bytes)
  $newPassword = ''
  foreach ($rndByte in $bytes)
  {
    $newPassword += $validCharacters[($rndByte % $validCharacters.Length)]
  }
  $newPassword
}

Note that here I am breaking the permitted password characters into upper- and lower-case (Latin alphabet) letters, the decimal digits 0-9, and a collection of permitted punctuation characters.  There are some punctuation characters that I’ve deliberately left out, by personal preference, because they may cause difficulty if you store passwords in XML or similar markup.  Feel free to add or remove punctuation characters, but keep in mind that each additional permitted character effectively increases the search space a brute-force attack must cover.  You may even be able to add some extra, non-ASCII characters, but in many cases the system you are authenticating against may not permit them (I haven’t tried this, so I don’t know what Azure allows, or the other tools used later in this series).
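As a quick usage sketch, assuming the function above is loaded in your session (New-SecurePassword returns the password as a plain String):

# Generate a 24-character password:
$plainPassword = New-SecurePassword -PasswordLength 24
# Convert it for cmdlets and APIs that expect a SecureString:
$securePassword = $plainPassword | ConvertTo-SecureString -AsPlainText -Force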

I’ll use this function in my next post, in which I’ll demonstrate the next step: automating certificate generation.  To facilitate this, I typically put commonly used PowerShell functions like this one in a module so they can easily be reused for different purposes.


PowerShell: Simplify Complex Function Calls

Simplify the handling of complex functions or cmdlets using ScriptBlocks and Closures to capture parameter subsets.

A major component of DevOps is automation, and for automation work in Azure the recommended tool for the job is PowerShell.  Microsoft provides a large number of PowerShell cmdlets for managing all aspects of Azure, including the classic Service API and ARM (Azure Resource Manager).  But one of my frustrations with PowerShell is the tendency toward very long parameter sets when invoking cmdlets or functions.  The result is usually an invocation that is one very long line or is wrapped across several lines, either of which hurts the readability of the code.  There are a number of ways to work around this: wrapping data and code in an object (now much easier in PowerShell 5); passing a config object to a wrapper function; or, what I’ll cover in this post, using script blocks and closures.

First some background.  In PowerShell (2.0 or later), a ScriptBlock is a list of statements enclosed in curly braces that can be passed around and executed.  A script block can even accept parameters.  This is effectively a Lambda (or anonymous) function, though unfortunately, due to PowerShell syntax, invocation is slightly different from typical functions or cmdlets.  A Closure (sometimes Lexical Closure) is a function (or lambda function) that captures state (i.e. shared variables) from its enclosing (lexical) scope.  Don’t worry if that doesn’t make sense yet; hopefully it will by the end of this post.
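As a trivial illustration of that invocation difference, here is a script block assigned to a variable and then invoked with the call operator (&):

$double = {
   param
   (
      [Int]$x
   )
   2 * $x
}
& $double -x 21   # returns 42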

The Problem and Solution

At Lixar, on a project I was working on, we needed to send various types of deployment emails, including pre-deployment warnings, deployment start and completion emails, and various warning and error emails when issues are encountered.  A colleague helped by writing a separate script using the System.Net.Mail.SmtpClient object.  I won’t go into the details of the script here, but the resulting invocation included a very long set of parameters and was used many times throughout the parent deployment script:

Notifications/DeploymentEmail.ps1 -EmailTemplate initialemail_template.cshtml -Project $($ProjectName) -Version "${BaseVersion}.${BuildNumber}" -Environment $($EnvDisplayName) -ExpectedDeploymentTime $($ExpectedDeployment) -Type $($DevType) -Name $($DeployerName) -Attachments $($ReleaseNotesPath) -RecipientEmailList $($EmailGroup) -YourEmail $($SenderAccount) -Password $($AccountPassword) -additionalMsg $($appsMsg)

Sure, not all parameters are required all the time and the script could be refactored, but sometimes you just have to work with what you are given. With a function like the one above you may want to come up with three or more simpler functions (e.g. status emails, error emails, warning emails) that have the common parameters baked in.
That example is too long for illustration purposes, though, so let’s continue with this simpler, admittedly contrived, example:

function Get-MessageObject
{
   param
   (
      [String]$MsgType,
      [String]$Message,
      [String]$CurrentDayOfWeek
   )
   $result = New-Object -TypeName psobject -Property @{
                       'MessageType' = $MsgType
                       'Message'    = $Message
                       'DayOfWeek'  = $CurrentDayOfWeek
                       }
   $result
}

Here we have a function that takes three parameters, but we can imagine three common usages based on message type: Info, Warning, and Error. Similarly, unless this is a long-running script, we might want to avoid determining the CurrentDayOfWeek every time we call this function.
In functional programming there is a common technique called Partial Application, which applies only a subset of a function’s parameters and returns another function requiring only the remaining parameters.  A way of doing this in languages that don’t have built-in Partial Application is to wrap the function in a Lambda function.  As alluded to earlier, this is where PowerShell’s ScriptBlocks become useful:

$MsgType = 'Info'
$currentDay = Get-Date -UFormat '%A'
$getInfoObject = {
   param
   (
      [String]$Message
   )
   Get-MessageObject -MsgType $MsgType -Message $Message -CurrentDayOfWeek $currentDay
}
$myInfo = & $getInfoObject -Message 'Have a nice day!'
$myInfo

Message          MessageType DayOfWeek
-------          ----------- ---------
Have a nice day! Info        Wednesday

This ScriptBlock is an invocable object that takes a single parameter (Message). The other parameter values ($MsgType and $currentDay) are inherited from the parent scope. The ScriptBlock is assigned to the variable $getInfoObject, which can then be invoked (with the & operator) with different Message parameters.
But we won’t stop there. As mentioned, the “baked in” parameters get their values from the parent scope, but they aren’t really baked in yet: changes in the parent scope may change the behaviour of our script block. Furthermore, what if we were wrapping a script, or a function imported from a module, where the path to that entity may be redefined and reloaded (e.g. loading different implementations)? We most likely want dependencies like functions or scripts to be baked in when we define the script block.
Consider the following, final function:

function New-MessageObjectGeneratorForToday
{
   param
   (
      [String]$MsgType
   )
   $currentDay = (Get-Date -UFormat '%A')
   {
      param
      (
         [String]$Message
      )
      Get-MessageObject -MsgType $MsgType -Message $Message -CurrentDayOfWeek $currentDay
   }.GetNewClosure()
}
$getErrorMsg = New-MessageObjectGeneratorForToday -MsgType 'Error'
& $getErrorMsg -Message 'No error here.  Move along...'

Message                       MessageType DayOfWeek
-------                       ----------- ---------
No error here.  Move along... Error       Wednesday

Here we convert our ScriptBlock into a Closure using the ScriptBlock’s GetNewClosure() method, which bakes in the $MsgType and $currentDay values. We also wrap this in a function that takes the single MsgType parameter and dynamically determines the current day, both of which are baked into the closure. The function’s return value is the closure. (Note that we don’t need the return keyword: in PowerShell, objects accumulated from statements within a function are automatically returned.)

There is a lot more that you can do with ScriptBlocks and Closures in PowerShell. ScriptBlocks are also useful when working with PowerShell jobs and workflows, and Closures can be used to define Continuations or Callbacks. The underlying functional techniques apply in a lot of languages, so they may be worth learning even if PowerShell is not your thing; in many other languages, dealing with Lambdas, Partial Application, and Closures is much easier.

Wrapping this up, we can look back at that initial example of an email script referenced many times within a parent deployment script.  The technique of generating a closure can easily be adapted to simplify reuse of that complex email script, and the resulting closure(s) can be passed around to other functions so they can generate email messages as needed.  Note that there is an exception here: if you are using Start-Job, PowerShell Workflows, or some other technique that spawns a new PowerShell session or process, the enclosed variables will not be passed along unless you reference them with the $using: scope modifier, as in the sketch below.
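Here is a minimal sketch (PowerShell 3.0 or later) of pulling a parent-scope variable into a job’s separate session:

$currentDay = Get-Date -UFormat '%A'
# Without $using:, $currentDay would be undefined in the job's new session:
$job = Start-Job -ScriptBlock { "Today is $using:currentDay" }
Receive-Job -Job $job -Wait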

Update:

I just discovered how to convert the Closures, as created above, into functions:

$function:global:getErrorMsg = New-MessageObjectGeneratorForToday -MsgType 'Error'
getErrorMsg -Message 'This is now a function.'

Message                 MessageType DayOfWeek
-------                 ----------- ---------
This is now a function. Error      Wednesday

getErrorMsg now acts like any other normally defined function; however, you cannot include a “-” character in the name this way, so these functions cannot comply with the common <Verb>-<Noun> naming scheme of PowerShell functions and cmdlets.

Secure Authentication of Azure Resource Management Deployments

Best practices for authenticating AzureRM cmdlets in a Continuous Deployment system.

Overview

In my current job much of my day-to-day activity is spent developing and maintaining tools for build and deployment automation.  In particular, most of my work involves the Microsoft Azure cloud, and the current and preferred approach for provisioning and deploying is to use ARM (Azure Resource Manager).

Many of the tools and scripts I write for deployments are written in PowerShell using the Azure PowerShell cmdlets, and these are generally run from a build system (e.g. Bamboo, TeamCity, …).  As such, we need to be able to authenticate the PowerShell session with Azure at run time in what is effectively a headless environment.

AzureRM vs Azure Service Management

It is important to clarify some key differences between the (classic, or perhaps legacy) Azure Service Management cmdlets and the newer Azure Resource Manager cmdlets.  The Service Management cmdlets are typically authenticated by importing a PublishSettings file (see Get-AzurePublishSettingsFile and Import-AzurePublishSettingsFile).  This “permanently” sets up your environment to use management certificates to authenticate with Azure.  This is not the most secure solution, since there is no clear association between the certificate thumbprints and user accounts.  Also note that the Service Management API (and its associated authentication) cannot be used for ARM deployments and related tasks.
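For reference, the classic flow looks roughly like this (the file path is just a placeholder):

# Opens a browser to download the .publishsettings file for your subscription(s):
Get-AzurePublishSettingsFile
# Import the downloaded file to set up management-certificate authentication:
Import-AzurePublishSettingsFile -PublishSettingsFile 'C:\temp\MySubscription.publishsettings'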

Things are quite different with the AzureRM cmdlets.  In a typical PowerShell session you will use Add-AzureRmAccount (or Login-AzureRmAccount), which will prompt you for your Azure credentials.  But here is where you might run into issues in your automated environment: how do you secure the credentials used for AzureRM authentication?

Approaches to AzureRM Authentication

Simple AccountName-Password Authentication

$azureAccountName = 'your account name'
$azurePassword = 'your password' | ConvertTo-SecureString -AsPlainText -Force
$psCred = New-Object System.Management.Automation.PSCredential -ArgumentList ($azureAccountName, $azurePassword)
Login-AzureRmAccount -Credential $psCred

This approach is the easiest: you can store your account name and password somewhere secure and load them at run time. But in a build system where everything is logged, your credentials will show up in the logs, so this is not something you want to do.

One might think saving the credentials object (or the password SecureString) to a file would be fine:

$psCred = New-Object System.Management.Automation.PSCredential -ArgumentList ('username', ('password' | ConvertTo-SecureString -AsPlainText -Force))
$psCred | Out-File -FilePath .\myCreds

But there are problems here with the way secure strings are encrypted and tied to the local machine and user. You can work around this using a custom encryption key (see the sketch below), but there are better, more secure ways that I’ll cover later in this article.
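For completeness, here is a sketch of that custom-key workaround; note that the 256-bit key itself then becomes a secret you must protect:

# Generate a random 256-bit AES key (16- and 24-byte keys also work):
$key = New-Object -TypeName 'System.Byte[]' -ArgumentList 32
$rng = New-Object -TypeName 'System.Security.Cryptography.RNGCryptoServiceProvider'
$rng.GetBytes($key)
# Encrypt with the key instead of the default machine/user-bound DPAPI encryption:
'password' | ConvertTo-SecureString -AsPlainText -Force |
    ConvertFrom-SecureString -Key $key | Out-File -FilePath .\myCreds.txt
# Later, on any machine that has the key:
$securePassword = Get-Content -Path .\myCreds.txt | ConvertTo-SecureString -Key $key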

Saving your Azure Account Profile

A more elegant solution, in my opinion, is to use a saved AzureRmProfile; see Save-AzureRmProfile and Select-AzureRmProfile.  This approach is great and even has the advantage of letting you manage a separate profile for each Azure subscription you deal with — but we still have some issues, and here’s where we get to the real purpose of this article.  The key problem with this approach is that the authenticated session stored in your profile will expire.  I’ve searched for documentation on the lifespan of the authenticated session stored in these profiles and haven’t been able to find it, but it appears it may last for several days.  For some uses this might be adequate, but realistically, when we are talking about CD (Continuous Deployment) and automation, we want something longer lived than that.
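A minimal sketch of the profile approach (the path is just a placeholder):

# Authenticate interactively once, then save the session to disk:
Login-AzureRmAccount
Save-AzureRmProfile -Path 'C:\secure\mySubscription.json'
# Later, until the stored token expires, restore the session without a prompt:
Select-AzureRmProfile -Path 'C:\secure\mySubscription.json'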

Other Issues with User Credentials and Profiles

On a deeper level, the techniques outlined so far do not feel right to me for a number of reasons:

  1. Tying an automated process to user credentials is a bad idea.  In particular, if the person who set up your deployment automation leaves your company, things will break when their account is disabled.  Similarly, just changing their password will break the automated process.
  2. DevOps personnel will typically have Admin/Owner roles in the subscriptions they are deploying to, so using their credentials may grant too many privileges to your automated processes.

Using an Azure AD ServicePrincipal for Authentication

So here I’m getting to what I now consider a best practice for authenticating your ARM PowerShell scripts and tools with Azure.  Azure Active Directory (AAD) allows you to register applications (e.g. your deployment tools) and create a ServicePrincipal for authentication and access control.  This leads to a number of crucial advantages over the previously mentioned authentication methods:

  1. Your deployment PowerShell scripts are not tied to a particular user.  Each tool or application can be registered separately and have a unique ApplicationId within AAD.
  2. RBAC (Role Based Access Control).  This seems to be a big topic in Azure these days, and for good reason: RBAC lets you control the granularity of what your users, applications, and service principals can see and do.
  3. Fine-grained control over when you want access to expire.

There are a number of approaches that can be used with AAD but in this article I will only cover the case of using a self-signed certificate with KeyCredential authentication.  Note that I discovered much of this while figuring out access control on Azure KeyVault, something I will cover in detail in a future article.

Generate Your Self-Signed Certificate:

$now = Get-Date
$startDate = (Get-Date $now -UFormat '%m/%d/%Y')
$endDate = (Get-Date $now.AddMonths(1) -UFormat '%m/%d/%Y')
& "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.1A\Bin\x64\makecert.exe" -sv aad-auth.pvk -n "cn=AAD Authentication" aad-auth.cer -b $startDate -e $endDate -len 2048 -r
# Convert your key and cert into a PFX file.
& "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.1A\Bin\pvk2pfx.exe" -pvk aad-auth.pvk -spc aad-auth.cer -pfx aad-auth.pfx -po 'your password'

This code (run in a PowerShell console) will generate a self-signed certificate that is good for one month only. You can adjust the certificate lifespan to whatever you feel is appropriate for your organization and project.

The paths to the makecert.exe and pvk2pfx.exe utilities may vary depending on your setup.  Also note that makecert.exe and pvk2pfx.exe will prompt you a few times for passwords via pop-up dialog boxes.  Remember to use a secure password to protect your keys and certificates.

Create a KeyCredential Object:

$cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList @($PfxPath, $PfxPassword)
# print out your cert thumbprint:
$cert.Thumbprint
# Get the key value as Base64:
$keyValue = [System.Convert]::ToBase64String($cert.GetRawCertData())
# Create the KeyCredential object:
Add-Type -Path 'C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ResourceManager\AzureResourceManager\AzureRM.Resources\Microsoft.Azure.Commands.Resources.dll'
$credentialEndDate = $now.AddDays(14)
$keyId = [guid]::NewGuid()
$keyCredential = New-Object -TypeName Microsoft.Azure.Commands.Resources.Models.ActiveDirectory.PSADKeyCredential
$keyCredential.StartDate = $now
$keyCredential.EndDate = $credentialEndDate
$keyCredential.KeyId = $keyId
$keyCredential.Type = 'AsymmetricX509Cert'
$keyCredential.Usage = 'Verify'
$keyCredential.Value = $keyValue

Note that I am using the variables $PfxPath for the full path to the new PFX file and $PfxPassword for the secure password protecting it.
An important point here is the use of a different, shorter lifespan (14 days) for the KeyCredential object compared to the certificate (1 month). You could create a very long-lived certificate and simply control expiry through the KeyCredential object.

Create Your AAD Application using the KeyCredential for Authentication:

$pfxBaseName = (Get-Item -Path $PfxPath).BaseName
$azureAdApplication = New-AzureRmADApplication -DisplayName $pfxBaseName -HomePage ('https://{0}' -f $pfxBaseName) -IdentifierUris ('https://{0}' -f $pfxBaseName) -KeyCredentials $keyCredential

Note that the URLs do not have to resolve to real hosts. Here, for convenience, I am using the base name of the PFX file to define my application’s display name and URLs. You can use something else if you find it more helpful.

Create Your Service Principal:

$applicationId = ($azureAdApplication.ApplicationId).Guid
$servicePrincipal = New-AzureRmADServicePrincipal -ApplicationId $applicationId
# Output the ApplicationId:
$applicationId
# Output the ServicePrincipal ObjectId:
$servicePrincipal.Id.Guid

Save the ApplicationId and ServicePrincipalId for convenience; you can always look them up later in AAD using PowerShell.
Note that you must wait 15 to 20 seconds for this operation to complete. Testing with the Get-AzureRmADServicePrincipal cmdlet does not help: it will return a valid response even before the service principal is ready for use.
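In a fully scripted setup the simplest option is an explicit pause:

# Polling with Get-AzureRmADServicePrincipal does not help here (see above),
# so just wait out the propagation delay:
Start-Sleep -Seconds 20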

Add a Role for your New Application:

$role = New-AzureRmRoleAssignment -RoleDefinitionName 'Owner' -ServicePrincipalName $applicationId

Note that in this example I am not restricting the scope of the application’s access, so it defaults to the current Azure subscription. New-AzureRmRoleAssignment does allow more fine-grained control, letting you restrict access to a particular ResourceGroup or Resource.
Also note that I am granting the ‘Owner’ role to this application. You can use the Get-AzureRmRoleDefinition cmdlet to see the other roles available in your account or subscription.
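For example, a more restrictive assignment might look like this (‘MyResourceGroup’ is a placeholder name):

# Grant only the 'Contributor' role, limited to a single resource group:
New-AzureRmRoleAssignment -RoleDefinitionName 'Contributor' `
    -ServicePrincipalName $applicationId `
    -ResourceGroupName 'MyResourceGroup'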

Login and Test Your New Application Authentication:

First you must import your PFX file into your CurrentUser\My certificate store.  This is most easily done by double-clicking the PFX file and providing your password.  (In a future post I will provide details on how to automate this too.)
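Until then, a minimal sketch that should work on Windows 8 / Server 2012 or later, reusing the $PfxPath and $PfxPassword variables from above:

# Import the PFX without the interactive wizard (PKI module;
# -Password expects a SecureString):
Import-PfxCertificate -FilePath $PfxPath -CertStoreLocation 'Cert:\CurrentUser\My' -Password $PfxPassword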

# Retrieve a subscription object:
$subscription = Get-AzureRmSubscription | where {$_.SubscriptionName -eq 'your subscription name'}
Login-AzureRmAccount -CertificateThumbprint $cert.Thumbprint -ApplicationId $azureAdApplication.ApplicationId -ServicePrincipal -TenantId $subscription.TenantId
# If you don't get an error it worked!
# Now verify this with an AzureRM cmdlet to list all Resource Groups in the subscription:
Get-AzureRmResourceGroup

Summary

And that is it. All you need to do is install the PFX (certificate) on your build machine(s) and use the ApplicationId (GUID) and subscription TenantId to authenticate your deployment scripts.
Note that if you do nothing, your KeyCredentials will eventually expire, and eventually the certificate will too. This is a good thing in terms of security in case any of these details leak out; however, it is still recommended that you store your PFX files and associated passwords in a secure location (I keep a separate, secure Git repo for keys, certificates, and other secrets that very few people have access to).
If you think these credentials have been compromised in any way, you can easily revoke them using the Remove-AzureRmADApplication cmdlet. Removing the AAD application will also remove the associated Service Principal and its Role assignment. Using the snippets above you can easily automate the creation of a new AAD application and Service Principal.
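For example (using the application object created earlier):

# Revoke access by deleting the AAD application; the Service Principal
# and Role assignment go with it:
Remove-AzureRmADApplication -ObjectId $azureAdApplication.ObjectId -Force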