Category Archives: Azure

Step-by-Step guide to manage Azure Storage using Azure CLI 2.0 – Part 02

This is the last part of my blog post series covering Azure CLI 2.0 functions. If you didn’t read part 01 yet, please read it before starting on this. You can find it at http://www.rebeladmin.com/2017/10/step-step-guide-manage-azure-storage-using-azure-cli-2-0-part-01/

In my demo setup, I have two VMs running. One was created using Azure Managed disks. In part 01 I explained how to add an additional disk; it currently has a 100 GB additional disk attached.

Expand Disks

Let’s see how we can expand disks using Azure CLI. Before doing this, make sure you are logged in to Azure CLI using az login

Let’s start with expanding Azure Managed disks. First, we can verify the VM’s storage configuration using,

az vm show --resource-group rebeladminrg01 --name REBLEVM101

clistore1

In there I have two disks. One is the OS disk, called osdisk_6469626e28, and the other is a data disk called DataDisk01.

We can’t increase the size of a disk on a running VM, not even a data disk. So first we need to deallocate the VM. We can do it using,

az vm deallocate --resource-group rebeladminrg01 --name REBLEVM101

in the above command, --resource-group defines the resource group the VM belongs to and --name defines the VM name.

clistore2

Once it is completed we can increase the disk sizes.

I need to expand the OS disk size to 150 GB. I can do it using,

az disk update --resource-group rebeladminrg01 --name osdisk_6469626e28 --size-gb 150

clistore3

I would also like to expand the data disk to 150 GB. I can do it using,

az disk update --resource-group rebeladminrg01 --name DataDisk01 --size-gb 150

clistore4

in the above commands, --resource-group defines the resource group the disks belong to and --name defines the disk name.
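Before starting the VM again, we can confirm the new sizes if needed. For example (using --query to pick the diskSizeGb attribute from the output),

az disk show --resource-group rebeladminrg01 --name DataDisk01 --query diskSizeGb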

After finishing, we can start the VM using,

az vm start --resource-group rebeladminrg01 --name REBLEVM101

Once the VM is up we can go in and expand the disk at the OS level.

clistore5
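For a Windows VM this can be done through Disk Management, or with PowerShell similar to the following sketch (assuming the expanded data disk uses drive letter F:),

Resize-Partition -DriveLetter F -Size (Get-PartitionSupportedSize -DriveLetter F).SizeMax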

If you are looking to expand unmanaged disks, it can be done via the interface or Azure CLI 1.0. More info can be found at https://docs.microsoft.com/en-us/azure/virtual-machines/linux/expand-disks-nodejs

The document itself is for a Linux VM but the expand part works the same way.

Snapshots

We can also take snapshots of a disk as a quick recovery option. A snapshot is a full copy of a disk at the time it is taken. It can be kept as a backup or attached to another machine for troubleshooting.

In my demo, I am going to take a snapshot of an Azure Managed OS disk. Before doing that I need to find the disk ID. It can be done using

az vm show --resource-group rebeladminrg01 --name REBLEVM101

clistore6

Then we can take the snapshot using,

az snapshot create -g rebeladminrg01 --source "/subscriptions/xxxxx/resourceGroups/REBELADMINRG01/providers/Microsoft.Compute/disks/osdisk_6469626e28" --name vm101osDisk-backup

clistore7

in the above, --source defines the disk ID and --name defines the snapshot name.
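If the snapshot ever needs to be restored, a new managed disk can be created from it. For example (the disk name here is just an example),

az disk create --resource-group rebeladminrg01 --name osdisk-restored --source vm101osDisk-backup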

If it is an unmanaged disk, snapshots work in a different way. You can read more about it at https://docs.microsoft.com/en-us/azure/virtual-machines/linux/incremental-snapshots

Convert to Managed Disks

If required, we can convert a VM with unmanaged disks to managed disks. To do that, first we need to deallocate the VM.

az vm deallocate --resource-group rebeladminrg01 --name REBELVM102

Then we can start the conversion process using,

az vm convert --resource-group rebeladminrg01 --name REBELVM102

Once the process is finished it will start the VM.

clistore8

clistore9

Manage blobs

We can also create, manage and delete blobs using Azure CLI.

To create a container we can simply use,

az storage container create --name datastorage01 --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=xxxxxxx/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

in the above, --name defines the container name. AccountName specifies the storage account name and AccountKey specifies the auth key for the storage account.

By default, the container data is set to private. If needed, it can be set to public read access for blobs only (blob) or public read and list access to the whole container (container). This can be defined using --public-access
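For example, a container with public read access for blobs could be created like this (the container name is just an example, and the connection string is the same value used above),

az storage container create --name publicdata --public-access blob --connection-string "<connection string>"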

Once the container is created we can upload a blob using,

az storage blob upload --file C:\myzip1.zip --container-name datastorage01 --name myzip1.zip --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=xxxxxx/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

in the above, --file defines the local file path. --container-name defines the container name it is uploading to. --name defines the blob name once it is uploaded.

clistore10

To verify, we can list the blobs in the container using,

az storage blob list --container-name datastorage01 --output table --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=xxxxxx/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

clistore11

We can download a blob to local storage using,

az storage blob download --container-name datastorage01 --name myzip1.zip --file C:\myzip2.zip --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=xxxx/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

in the above, --file defines the path and name the blob will have when downloaded to local storage.

clistore12

We can delete a blob using a command similar to,

az storage blob delete --container-name datastorage01 --name myzip1.zip --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=1WzgTd/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

clistore13

This marks the end of this blog post and I hope it was useful. If you have any questions feel free to contact me on rebeladm@live.com. Also follow me on twitter @rebeladm to get updates about new blog posts.

Step-by-Step guide to manage Azure Storage using Azure CLI 2.0 – Part 01

This is another part of my blog post series covering Azure CLI 2.0 functions. If you have not read the earlier posts yet, you can find them at the following links.

Step-by-step guide to start with azure cli 2.0 – http://www.rebeladmin.com/2017/08/step-step-guide-start-azure-cli-2-0/

Step-by-step guide to create azure vm using azure cli 2.0 – http://www.rebeladmin.com/2017/08/step-step-guide-create-azure-vm-using-azure-cli-2-0/

In part 01 of this blog post, we are going to look into managing disks using Azure CLI.

First things first, I am going to log in to Azure CLI with a privileged account. This can be done using az login

I have a Windows VM set up under my subscription. I can view its details using az vm show --resource-group rebeladminrg01 --name REBLEVM101

In the above, --resource-group defines the resource group name and --name defines the VM name.

sto1

In this VM, I have a disk with a size of 128 GB. It is an Azure Managed disk.

sto2

I would like to add a couple of disks to this VM. Adding an “Azure Managed” disk is the simplest way, as it simplifies the disk management process. The only things you need to worry about are the disk type and size.

az vm disk attach -g rebeladminrg01 --vm-name REBLEVM101 --disk DataDisk01 --new --size-gb 100

The above creates a managed disk called DataDisk01 under the rebeladminrg01 resource group. It is 100 GB in size. It is also attached to the REBLEVM101 VM.

We can verify it by running,

az disk show --name DataDisk01 --resource-group rebeladminrg01

sto4

If needed, we can also use “unmanaged” disks. First, I am going to create a new storage account for it.

az storage account create --location westus --name rebelstorage01 --resource-group rebeladminrg01 --sku Standard_LRS

sto5

The above creates a storage account called rebelstorage01 under the westus region. It is created under the rebeladminrg01 resource group and uses Standard_LRS storage.

Before configuring the storage, we first need to get the storage account connection string so it can be used with the storage commands.

To do that, run

az storage account show-connection-string --name rebelstorage01 --resource-group rebeladminrg01

then copy the connection string value and use it with

az storage container create --name data --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=oJOjFskwKlDBisEiGREBEsMRWnDbOA+q6stySqXKT1MsBiPZeJPThnfnkGgG9AgudKmJ/5CCl65cGcMIAZGQhg=="

The above will create a container called data under the storage account.
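Instead of passing --connection-string with every command, the value can also be set as an environment variable, which the Azure CLI storage commands read automatically. For example, on Windows,

set AZURE_STORAGE_CONNECTION_STRING=<connection string>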

Let’s go ahead and add a new unmanaged disk to a VM. 

Note – You cannot add an unmanaged disk to a VM created with managed disks.

az vm unmanaged-disk attach -g rebeladminrg01 --vm-name REBELVM3 --new -n DataDisk6 --vhd-uri https://rebelstorage01.blob.core.windows.net/data/2.vhd --size-gb 100

in the above, rebeladminrg01 is the resource group where the Azure VM is located. REBELVM3 is the VM name. I am creating a new disk called DataDisk6 on the data/2.vhd path. Its size is 100 GB.

sto6

In order to detach a disk from a VM we can use the following commands.

If it is an unmanaged disk we can use,

az vm unmanaged-disk detach --name DataDisk6 --resource-group rebeladminrg01 --vm-name REBELVM3

The above command will detach the unmanaged disk called DataDisk6 from the REBELVM3 VM.

sto7

If it is a managed disk we can use,

az vm disk detach -g rebeladminrg01 --vm-name REBLEVM101 -n DataDisk02

The above will remove the data disk called DataDisk02 from the REBLEVM101 VM.

sto8

This is the end of part 01 of this post. Hope this was useful and if you have any questions feel free to contact me on rebeladm@live.com. Also follow me on twitter @rebeladm to get updates about new blog posts.

Azure AD Connect Staging Mode

Azure AD Connect is the tool used to connect an on-premises directory service with Azure AD. It allows users to use the same on-premises IDs and passwords to authenticate to Azure AD, Office 365 or other applications hosted in Azure. Azure AD Connect can be installed on any server if it meets the following requirements:

The AD forest functional level must be Windows Server 2003 or later. 

If you plan to use the feature password writeback, then the Domain Controllers must be on Windows Server 2008 (with latest SP) or later. If your DCs are on 2008 (pre-R2), then you must also apply hotfix KB2386717.

The domain controller used by Azure AD must be writable. It is not supported to use a RODC (read-only domain controller) and Azure AD Connect does not follow any write redirects.

It is not supported to use on-premises forests/domains using SLDs (Single Label Domains).

It is not supported to use on-premises forests/domains using "dotted" (name contains a period ".") NetBios names.

Azure AD Connect cannot be installed on Small Business Server or Windows Server Essentials. The server must be using Windows Server standard or better.

The Azure AD Connect server must have a full GUI installed. It is not supported to install on server core.

Azure AD Connect must be installed on Windows Server 2008 or later. This server may be a domain controller or a member server when using express settings. If you use custom settings, then the server can also be stand-alone and does not have to be joined to a domain.

If you install Azure AD Connect on Windows Server 2008 or Windows Server 2008 R2, then make sure to apply the latest hotfixes from Windows Update. The installation is not able to start with an unpatched server.

If you plan to use the feature password synchronization, then the Azure AD Connect server must be on Windows Server 2008 R2 SP1 or later.

If you plan to use a group managed service account, then the Azure AD Connect server must be on Windows Server 2012 or later.

The Azure AD Connect server must have .NET Framework 4.5.1 or later and Microsoft PowerShell 3.0 or later installed.

If Active Directory Federation Services is being deployed, the servers where AD FS or Web Application Proxy are installed must be Windows Server 2012 R2 or later. Windows remote management must be enabled on these servers for remote installation.

If Active Directory Federation Services is being deployed, you need SSL Certificates.

If Active Directory Federation Services is being deployed, then you need to configure name resolution.

If your global administrators have MFA enabled, then the URL https://secure.aadcdn.microsoftonline-p.com must be in the trusted sites list. You are prompted to add this site to the trusted sites list when you are prompted for an MFA challenge and it has not added before. You can use Internet Explorer to add it to your trusted sites.

Azure AD Connect requires a SQL Server database to store identity data. By default a SQL Server 2012 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, you need to point the installation wizard to a different installation of SQL Server.

What is staging mode? 
 
At a given time, only one Azure AD Connect instance can be involved in the sync process for a directory. But this creates a few challenges.
 
Disaster Recovery – If the server with Azure AD Connect is involved in a disaster, it is going to impact the sync process. This can be worse if you are using features such as password pass-through, single sign-on, or password writeback through AD Connect.
Upgrades – If the system running Azure AD Connect needs an upgrade, or if Azure AD Connect itself needs an upgrade, it will impact the sync process. Again, the affordable downtime will depend on the features and the organization’s dependencies on Azure AD Connect and its operations.
Testing New Features – Microsoft keeps adding new features to Azure AD Connect. Before introducing those to production it is always good to simulate and see how they will behave. But if there is only one instance, it is not possible to do so. Even if you have a demo environment, it may not simulate the same impact as production on some occasions.
 
Microsoft introduced the staging mode of Azure AD Connect to overcome the above challenges. Staging mode allows you to maintain another copy of the Azure AD Connect instance on another server. It can have the same config as the primary server. It will connect to Azure AD, receive changes and keep a latest copy to make sure the switchover is as seamless as possible. However, it will not sync the Azure AD Connect configuration from the primary server; it is the engineer’s responsibility to update the staging server’s AD Connect configuration if the primary server’s AD Connect config is modified.
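On any server running Azure AD Connect, we can check whether staging mode is enabled using the ADSync PowerShell module that is installed along with it. For example,

Get-ADSyncScheduler

The StagingModeEnabled value in the output shows whether the instance is in staging mode.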
 
Installation
 
Let’s see how we can configure Azure AD connect in staging mode.
 
1) Prepare a server according to guidelines given in prerequisites section to install Azure AD Connect. 
2) Review the current configuration of Azure AD Connect running on the primary server. You can check this via Azure AD Connect | View current configuration

sta1
 
sta2
 
3) Log in to the server as a Domain Administrator and download the latest Azure AD Connect from https://www.microsoft.com/en-us/download/details.aspx?id=47594
4) During the installation, please select customize option. 
 
sta3
 
5) Then proceed with the configuration according to settings used in primary server. 
6) At the last step of the configuration, select Enable staging mode: When selected, synchronization will not export any data to AD or Azure AD, and then click Install
 
sta4
 
7) Once the installation is completed, in Synchronization Service (Azure AD Connect | Synchronization Service) we can confirm there are no sync jobs.
 
sta5
 
Verify data
 
As I mentioned before, the staging server allows us to simulate an export before making it the primary. This is important if you are implementing new configuration changes.
 
In order to prepare a staged copy of the export,
 
1) Go to Start | Azure AD Connect | Synchronization Service | Connectors 
 
sta6
 
2) Select the Active Directory Domain Services connector and click on Run from the right-hand panel. 
 
sta7
 
3) Then in the next window select Full Import and click OK.
 
sta8
 
4) Repeat the same for the Windows Azure Active Directory (Microsoft) connector
5) Once both jobs are completed, select the Active Directory Domain Services connector and click on Run from the right-hand panel again. But this time select Delta Synchronization, and click OK.
 
sta9
 
6) Repeat the same for the Windows Azure Active Directory (Microsoft) connector
7) Once both jobs are finished, go to the Operations tab and verify the jobs completed successfully.
 
sta10
 
Now we have the staging copy; the next step is to verify the data is presented as expected. To do that we need the help of a PowerShell script.

 
Param(
    [Parameter(Mandatory=$true, HelpMessage="Must be a file generated using csexport 'Name of Connector' export.xml /f:x)")]
    [string]$xmltoimport="%temp%\exportedStage1a.xml",
    [Parameter(Mandatory=$false, HelpMessage="Maximum number of users per output file")][int]$batchsize=1000,
    [Parameter(Mandatory=$false, HelpMessage="Show console output")][bool]$showOutput=$false
)

#LINQ isn't loaded automatically, so force it
[Reflection.Assembly]::Load("System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089") | Out-Null

[int]$count=1
[int]$outputfilecount=1
[array]$objOutputUsers=@()

#XML must be generated using "csexport "Name of Connector" export.xml /f:x"
write-host "Importing XML" -ForegroundColor Yellow

#XmlReader.Create won't properly resolve the file location,
#so expand and then resolve it
$resolvedXMLtoimport=Resolve-Path -Path ([Environment]::ExpandEnvironmentVariables($xmltoimport))

#use an XmlReader to deal with even large files
$result=$reader = [System.Xml.XmlReader]::Create($resolvedXMLtoimport) 
$result=$reader.ReadToDescendant('cs-object')
do 
{
    #create the object placeholder
    #adding them up here means we can enforce consistency
    $objOutputUser=New-Object psobject
    Add-Member -InputObject $objOutputUser -MemberType NoteProperty -Name ID -Value ""
    Add-Member -InputObject $objOutputUser -MemberType NoteProperty -Name Type -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name DN -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name operation -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name UPN -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name displayName -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name sourceAnchor -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name alias -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name primarySMTP -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name onPremisesSamAccountName -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name mail -Value ""

    $user = [System.Xml.Linq.XElement]::ReadFrom($reader)
    if ($showOutput) {Write-Host Found an exported object... -ForegroundColor Green}

    #object id
    $outID=$user.Attribute('id').Value
    if ($showOutput) {Write-Host ID: $outID}
    $objOutputUser.ID=$outID

    #object type
    $outType=$user.Attribute('object-type').Value
    if ($showOutput) {Write-Host Type: $outType}
    $objOutputUser.Type=$outType

    #dn
    $outDN= $user.Element('unapplied-export').Element('delta').Attribute('dn').Value
    if ($showOutput) {Write-Host DN: $outDN}
    $objOutputUser.DN=$outDN

    #operation
    $outOperation= $user.Element('unapplied-export').Element('delta').Attribute('operation').Value
    if ($showOutput) {Write-Host Operation: $outOperation}
    $objOutputUser.operation=$outOperation

    #now that we have the basics, go get the details

    foreach ($attr in $user.Element('unapplied-export-hologram').Element('entry').Elements("attr"))
    {
        $attrvalue=$attr.Attribute('name').Value
        $internalvalue= $attr.Element('value').Value

        switch ($attrvalue)
        {
            "userPrincipalName"
            {
                if ($showOutput) {Write-Host UPN: $internalvalue}
                $objOutputUser.UPN=$internalvalue
            }
            "displayName"
            {
                if ($showOutput) {Write-Host displayName: $internalvalue}
                $objOutputUser.displayName=$internalvalue
            }
            "sourceAnchor"
            {
                if ($showOutput) {Write-Host sourceAnchor: $internalvalue}
                $objOutputUser.sourceAnchor=$internalvalue
            }
            "alias"
            {
                if ($showOutput) {Write-Host alias: $internalvalue}
                $objOutputUser.alias=$internalvalue
            }
            "proxyAddresses"
            {
                if ($showOutput) {Write-Host primarySMTP: ($internalvalue -replace "SMTP:","")}
                $objOutputUser.primarySMTP=$internalvalue -replace "SMTP:",""
            }
        }
    }

    $objOutputUsers += $objOutputUser

    Write-Progress -activity "Processing ${xmltoimport} in batches of ${batchsize}" -status "Batch ${outputfilecount}: " -percentComplete (($objOutputUsers.Count / $batchsize) * 100)

    #every so often, dump the processed users in case we blow up somewhere
    if ($count % $batchsize -eq 0)
    {
        Write-Host Hit the maximum users processed without completion... -ForegroundColor Yellow

        #export the collection of users as a CSV
        Write-Host Writing processedusers${outputfilecount}.csv -ForegroundColor Yellow
        $objOutputUsers | Export-Csv -path processedusers${outputfilecount}.csv -NoTypeInformation

        #increment the output file counter
        $outputfilecount+=1

        #reset the collection and the user counter
        $objOutputUsers = @()
        $count=0
    }

    $count+=1

    #need to bail out of the loop if no more users to process
    if ($reader.NodeType -eq [System.Xml.XmlNodeType]::EndElement)
    {
        break
    }

} while ($reader.Read)

#need to write out any users that didn't get picked up in a batch of 1000
#export the collection of users as a CSV
Write-Host Writing processedusers${outputfilecount}.csv -ForegroundColor Yellow
$objOutputUsers | Export-Csv -path processedusers${outputfilecount}.csv -NoTypeInformation

 
Save this as a .ps1 file on the C: drive.
 
1) Open PowerShell and type cd "C:\Program Files\Microsoft Azure AD Sync\Bin" (if your install path is different use the relevant path)
2) Then run .\csexport "myrebeladmin.onmicrosoft.com - AAD" C:\export.xml /f:x where myrebeladmin.onmicrosoft.com - AAD should be replaced with your Azure AD connector name. This will export the config to C:\export.xml
3) Then type .\analyze.ps1 -xmltoimport C:\export.xml where analyze.ps1 is the script we saved at the beginning of this section.
4) It will then create a CSV file called processedusers1.csv which contains all the changes that will sync to Azure AD.
 
However, this step is not always required. The staging server can be made primary without the import and verify process.
 
How to make it as primary Server?
 
In order to make the staging server the primary server,
 
1) Go to Start | Azure AD Connect | Azure AD Connect
2) Then click on Configure on the next page.
3) On the next page select the option Configure staging mode and click Next
 
 
sta11
 
4) On the next page provide the Azure AD login credentials for the directory sync account.
5) In the next window, untick Enable staging mode and click Next
 
sta12
 
6) In the next window select start the synchronization process… and click Configure
 
sta13
 
This completes the process of promoting the staging server to primary. Hope this was useful and if you have any questions feel free to contact me on rebeladm@live.com. Also follow me on twitter @rebeladm to get updates about new blog posts.

Step-by-Step guide to configure Azure MFA with ADFS 2016

Multifactor authentication (MFA) is commonly used to protect applications and web services which are published to the internet. It helps to verify the authenticity of authentication requests. There are many multifactor service providers; some are cloud based and some require on-premises installations.

Azure MFA was first introduced for use with Azure services and later developed further to support on-premises workload protection too. It is possible to configure Azure MFA with ADFS 2.0 and ADFS 3.0; however, that configuration requires installing an additional MFA server. With ADFS 4.0 (Windows Server 2016) this is made simple and we can integrate Azure MFA without the need for an additional server.

In this post, I am going to walk you through the integration of Azure MFA with ADFS 2016. 

Before we start we need to look into the prerequisites.

1. Valid Azure subscription.

2. Azure Global Administrator account 

3. Existing federated Azure AD setup. More info about this configuration can be found at https://docs.microsoft.com/en-gb/azure/active-directory/connect/active-directory-aadconnect-get-started-custom#configuring-federation-with-ad-fs

4. Windows Server 2016 AD FS installed on-premises

5. Enterprise Administrator Account to configure MFA

6. Users with Azure MFA enabled – http://www.rebeladmin.com/2016/01/step-by-step-guide-to-configure-mfa-multi-factor-authentication-for-azure-users/

7. Windows Azure Active Directory module for Windows PowerShell installed in ADFS server

Create Certificate in each ADFS server to use with Azure MFA 

The first step of the configuration is to generate a certificate for Azure MFA. This needs to be performed on every ADFS server in the farm. In order to generate the certificate, you can use the following in PowerShell.

$certbase64 = New-AdfsAzureMfaTenantCertificate -TenantID "Your Tenant ID"

Please replace "Your Tenant ID" with the actual Azure tenant ID. You can find the tenant ID by running Login-AzureRmAccount in Azure AD PowerShell.

Once it is generated, the certificate will be under the local computer certificates.

cert1

Add new credentials to connect with Auth Client SPN

Now we have the certificate, but we need to tell the Azure Multi-Factor Auth Client to use it as a credential to connect with AD FS.

Before that, we need to connect to Azure AD using Azure PowerShell. We can do that using:

Connect-MsolService

Then it will prompt for login; make sure to use an Azure Global Administrator account to connect.

After that execute the command,

New-MsolServicePrincipalCredential -AppPrincipalId 981f26a1-7f43-403b-a875-f8b09b8cd720 -Type asymmetric -Usage verify -Value $certbase64

In the above command, AppPrincipalId defines the GUID for Azure Multi-Factor Auth Client.
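To verify the new credential was added, we can list the credentials attached to the same service principal using the MSOnline module. For example,

Get-MsolServicePrincipalCredential -AppPrincipalId 981f26a1-7f43-403b-a875-f8b09b8cd720 -ReturnKeyValues $false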

Configure ADFS farm to use Azure MFA

Now we have the components ready and the next step is to configure the ADFS farm to use Azure MFA. In order to do that, run the following PowerShell command.

Set-AdfsAzureMfaTenant -TenantId "Your Tenant ID" -ClientId 981f26a1-7f43-403b-a875-f8b09b8cd720

In the above command replace "Your Tenant ID" with your Azure tenant ID. ClientId in the command represents the GUID for the Azure Multi-Factor Auth Client.

cert2

Once it is completed, restart the ADFS service.
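This can be done from the Services console or with PowerShell, for example,

Restart-Service adfssrv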

Enable Azure MFA globally

The last step of the configuration is to enable Azure MFA for authentication. In order to do that, log in to the ADFS server and go to Server Manager > Tools > AD FS Management. Then, in the MMC, go to Service > Authentication Methods and in the Actions panel, click on Edit Primary Authentication Method.

cert3

This opens up the window to configure global authentication methods. It has two tabs, and we can see Azure MFA on both.

cert4

By selecting each box, you can enable MFA for intranet and extranet. 

This completes the configuration. Now you can use Azure MFA with your ADFS farm. Hope this was useful and if you have any questions feel free to contact me on rebeladm@live.com

Azure Active Directory Seamless Single Sign-On (Azure AD Seamless SSO)

I am sure most of you are aware of what single sign-on (SSO) is in an Active Directory infrastructure and how it works. When we extend identity infrastructures to Azure by using Azure AD, it also allows us to extend single sign-on capabilities to authenticate to cloud workloads. This can be done using an on-premises ADFS farm. Password Hash Synchronization or Pass-through Authentication allows users to use the same user name and password to log in to cloud applications, but this is not “seamless” access. Even though they are using the same user name and password, when logging in to Azure workloads they will be prompted for the password.

In my example below, I have an Azure AD instance integrated with on-premises AD using Pass-through Authentication. In there I have a user, R272845. I logged in to a domain-joined computer with this user and tried to access an application published using Azure. When I typed the URL and pressed enter, it redirected me to the Azure AD login page.

sso1

sso2

Azure Active Directory Seamless Single Sign-On is a feature which allows users to authenticate to Azure AD without providing a password again when logging in from a domain-joined/corporate device. It can be integrated with Password Hash Synchronization or Pass-through Authentication. This is still in preview, which means it cannot be used in production environments yet. However, if it doesn’t work in the environment, it will always fall back to the typical Azure AD authentication page, so it will not prevent you from accessing any application. This feature is not supported if you are using the ADFS option already.

According to Microsoft, the following can be listed as key features of Azure Active Directory Seamless Single Sign-On (Azure AD Seamless SSO):

Users are automatically signed into both on-premises and cloud-based applications.

Users don't have to enter their passwords repeatedly.

No additional components needed on-premises to make this work.

Works with any method of cloud authentication – Password Hash Synchronization or Pass-through Authentication.

Can be rolled out to some or all your users using Group Policy.

Register non-Windows 10 devices with Azure AD without the need for any AD FS infrastructure. This capability needs you to use version 2.1 or later of the workplace-join client.

Seamless SSO is an opportunistic feature. If it fails for any reason, the user sign-in experience goes back to its regular behavior – i.e., the user needs to enter their password on the sign-in page.

It can be enabled via Azure AD Connect.

It is a free feature, and you don't need any paid editions of Azure AD to use it.

It is supported on web browser-based clients and Office clients that support modern authentication on platforms and browsers capable of Kerberos authentication

According to Microsoft, the following environments are supported.

OS\Browser     Internet Explorer   Edge   Google Chrome   Mozilla Firefox   Safari
Windows 10     Yes                 No     Yes             Yes*              N/A
Windows 8.1    Yes                 N/A    Yes             Yes*              N/A
Windows 8      Yes                 N/A    Yes             Yes*              N/A
Windows 7      Yes                 N/A    Yes             Yes*              N/A
Mac OS X       N/A                 N/A    Yes             Yes*              Yes*

* additional configuration required

The current release (at the time this blog post was written) does not support the Edge browser. Also, this feature will not work when users use private browsing mode in Firefox or when users have Enhanced Protected Mode enabled in IE.

How does it work?

Before we look into the configuration, let’s go ahead and see how it really works. In the following example, a user is trying to access a cloud-based application (integrated with Azure) using his on-premises username, password and domain-joined device.

Also, it is important to know what happens in the corporate infrastructure when seamless SSO is enabled.

The system will create an AZUREADSSOACCT computer object in on-premises AD to represent Azure AD.

AZUREADSSOACCT computer account’s Kerberos decryption key is shared with Azure AD.

Two Kerberos service principal names (SPNs) are created to represent two URLs that are used during Azure AD sign-in, which are https://autologon.microsoftazuread-sso.com and https://aadg.windows.net.nsatc.net
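Once the feature is enabled, these objects can be checked from a domain controller. For example, the SPNs registered on the computer account can be listed with,

setspn -L AZUREADSSOACCT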

sso3

1. The user accesses the application URL using his browser, from his domain-joined device in the corporate network.

2. If the user is not signed in already, he is pointed to the Azure AD sign-in page, where he types his user name.

3. Azure AD challenges the user back via the browser, using a 401 response, to provide a Kerberos ticket.

4. The browser requests a Kerberos ticket for the AZUREADSSOACCT computer object from on-premises AD. This account was created in on-premises AD as part of the process in order to represent Azure AD.

5. On-premises AD locates the AZUREADSSOACCT computer object and returns the Kerberos ticket to the browser, encrypted using the computer object’s secret.

6. The browser forwards the Kerberos ticket to Azure AD.

7. Azure AD decrypts the Kerberos ticket using the Kerberos decryption key (this was shared with Azure AD when the SSO feature was enabled).

8. After evaluation, Azure AD passes the response back to the user (with additional steps such as MFA if required).

9. The user is allowed to access the application.

Prerequisites

In order to implement this feature, we need the following,

1. Domain Admin / Enterprise Admin account to install and configure Azure AD Connect in on-premises 

2. Global Administrator Account for Azure subscription – in order to create custom domain, configure AD connect etc.

3. Latest Azure AD Connect https://www.microsoft.com/en-us/download/details.aspx?id=47594 – if you have an older Azure AD Connect version installed, you need to upgrade it to the latest before configuring this feature.

4. Azure AD Connect must be able to communicate with *.msappproxy.net URLs over port 443. If connectivity is controlled via IP addresses, the range of Azure IP addresses can be found at https://www.microsoft.com/en-us/download/details.aspx?id=41653

5. Add https://autologon.microsoftazuread-sso.com and https://aadg.windows.net.nsatc.net to the browser intranet zone. If users are using IE and Chrome, this can be done using group policy. I have written a blog post before about how to create a policy targeting IE. You can find it here

6. Firefox needs the above URLs added to the trusted Kerberos site list to do Kerberos authentication. To do that go to the Firefox browser > type about:config in the address bar > in the list look for network.negotiate-auth.trusted-uris > right click and select Modify > type "https://autologon.microsoftazuread-sso.com, https://aadg.windows.net.nsatc.net" and click OK

7. If it is macOS, the device needs to be joined to AD. More details can be found here

Configure Azure AD Seamless SSO
 
Configuration of this feature is straightforward; basically, it’s just ticking one box.
 
If it is a fresh Azure AD Connect installation, select the customize option under express settings.
 
sso4
 
Then on the User sign-in page select the appropriate sign-in option and then select the Enable single sign-on option.
 
sso5
 
If you have an existing Azure AD Connect instance running, double click on the Azure AD Connect shortcut. In the initial window click on Configure.
 
sso6
 
On the Additional tasks page click on Change user sign-in and then click on Next.
 
sso7
 
In the next window, type the Azure AD sync account user name and password and click on Next.
 
sso8
 
Then on the User sign-in page select the Enable single sign-on option and then click Next
 
sso9
 
On the next page, enter the credentials for an on-premises domain admin account and click Next.
 
sso10
 
At the end click on Configure to complete the process. 
 
sso11
 
This completes the configuration and the next step is to verify that SSO is configured. The first thing to check is whether it created the computer object called AZUREADSSOACCT under on-premises AD. You will be able to find it under the default Computers OU.
 
sso12
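This can also be checked quickly with PowerShell, assuming the Active Directory module is available, for example,

Get-ADComputer AZUREADSSOACCT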
 
Then log in to the Azure Portal and go to Azure Active Directory > Azure AD Connect. Under the user sign-in options we can see the seamless single sign-on option is enabled.
 
sso13
 
This means it’s all good. The next step is to check if it is working as expected. In order to do that I am logging in to a corporate device with the same user I used earlier, R272845, and trying to access the same app URL.
 
This time, all I needed to type was the user name and it logged me in. Nice!
 
sso14
 
Note – Before testing make sure you added the two Azure AD URLs to the intranet zone as I mentioned in the prerequisites section.
 
Hope this information was useful and if you have any questions feel free to contact me on rebeladm@live.com

Azure Active Directory Pass-through Authentication

When organizations want to use the same user names and passwords to log in to on-premises and cloud workloads (Azure), there are two options. One is to sync user names and password hashes from on-premises Active Directory to Azure AD. The other option is to deploy an ADFS farm on-premises and use it to authenticate cloud-based logins, but that needs additional planning and resources. On-premises AD uses hash values (which are generated by a hash algorithm) as passwords. They are NOT saved as clear text, and it is almost impossible to revert a hash to the original password even if someone has the hash value. There is a misunderstanding about this, as some people still think Azure AD password sync uses clear text passwords. Every two minutes, the Azure AD Connect server retrieves password hashes from the on-premises AD and syncs them with Azure AD on a per-user basis in chronological order. From a technical point of view, I do not see a reason why people should avoid password hash sync to Azure AD. However, there are company policies and compliance requirements which do not accept any form of identity sync to an external system, even in hash format. Azure Active Directory Pass-through Authentication was introduced by Microsoft to answer these requirements. It allows users to authenticate to cloud workloads using the same passwords they are using on-premises, without syncing their password hash values to Azure AD. This feature is currently in preview, which means it is still not supported in production environments. But it is not too early to try it in development environments.

According to Microsoft, the following can be listed as key features of Pass-through Authentication.

Users use the same passwords to sign into both on-premises and cloud-based applications.

Users spend less time talking to the IT helpdesk resolving password-related issues.

Users can complete self-service password management tasks in the cloud.

No need for complex on-premises deployments or network configuration.

Needs just a lightweight agent to be installed on-premises.

No management overhead. The agent automatically receives improvements and bug fixes.

On-premises passwords are never stored in the cloud in any form.

The agent only makes outbound connections from within your network. Therefore, there is no requirement to install the agent in a perimeter network, also known as a DMZ.

Protects your user accounts by working seamlessly with Azure AD Conditional Access policies, including Multi-Factor Authentication (MFA), and by filtering out brute force password attacks.

Additional agents can be installed on multiple on-premises servers to provide high availability of sign-in requests.

Multi-forest environments are supported if there are forest trusts between your AD forests and if name suffix routing is correctly configured.

It is a free feature, and you don't need any paid editions of Azure AD to use it.

It can be enabled via Azure AD Connect.

It protects your on-premises accounts against brute force password attacks in the cloud.

How does it work?

Let’s see how it really works. In the following example, a user is trying to access a cloud-based application (integrated with Azure) using his on-premises username and password. This organization is using pass-through authentication.

pt1

1. The user accesses the application URL using his browser.

2. In order to authenticate to the application, the user is directed to the Azure Active Directory sign-in page. The user then types the user name and password and clicks on the sign-in button.

3. Azure AD receives the data and encrypts the password using a public key, which is used to verify the data authenticity. Then it places it in a queue where it will wait until the pass-through agent retrieves it.

4. The on-premises pass-through agent retrieves the data from the Azure AD queue (using an outbound connection).

5. The agent decrypts the password using the private key available to it.

6. The agent validates the user name and password information with on-premises Active Directory. It uses the same mechanism as ADFS.

7. On-premises AD evaluates the request and provides the response. It can be success, failure, password-expired or account lockout.

8. The pass-through agent passes the response back to Azure AD.

9. Azure AD evaluates the response and passes it back to the user.

10. If the response was success, the user is allowed to access the application.

Prerequisites
 
In order to implement this feature, we need the following,
 
1. Domain Admin / Enterprise Admin account to install and configure Azure AD Connect on-premises
2. Global Administrator Account for the Azure subscription – in order to create a custom domain, configure AD Connect, etc.
3. On-premises servers running Windows Server 2012 R2 or later to install Azure AD Connect and the pass-through agent.
4. Latest Azure AD Connect https://www.microsoft.com/en-us/download/details.aspx?id=47594 – if you have an older Azure AD Connect version installed, you need to upgrade it to the latest before configuring this feature.
5. Allow outbound communication to Azure via TCP ports 80 and 443 from the servers which will run Azure AD Connect and the authentication agents. You can find the Azure datacenter IP ranges at https://www.microsoft.com/en-us/download/details.aspx?id=41653
 

Configure Azure Active Directory Pass-through Authentication
 
Once we have all the prerequisites ready, we can look into the configuration. If you are running Azure AD Connect for the first time, make sure to use the custom method.
 
pt1-1
 
Then in the User sign-in options, select Pass-through authentication and continue.
 
pt1-2
 
If you are already running it on servers, first run Azure AD Connect as administrator. Then click on Configure.
 
pt2
 
Then on the next page, select the Change user sign-in option and click Next.
 
pt3
 
In the next window type the Azure AD sync account login details and then click Next.
 
pt4
 
In the next window, select Pass-through authentication and click Next
 
pt5
 
Note – If you have the Azure AD App Proxy Connector installed on the same Azure AD Connect server, you will receive an error saying "Pass-through authentication cannot be configured on this machine because Azure AD Connect agent is already installed". To fix it, uninstall the Azure AD App Proxy Connector and then reconfigure AD Connect. After that you can reinstall the Azure AD App Proxy Connector.
 
Once it finishes the configuration, click on Configure to complete the process.
 
pt6
 
Once the process is completed, log in to the Azure Portal and then go to Azure Active Directory > Azure AD Connect. There we can see pass-through authentication is enabled.
 
pt7
 
And if you click on it, it will show the connected agents’ status.
 
pt8
 
At this stage users from on-premises should be able to sign in to their cloud applications by using pass-through authentication.
 
In order to add high availability, we can install the agent on multiple domain-joined servers. It can be downloaded from the pass-through authentication page.
 
pt9
 
This completes the implementation of pass-through authentication and I hope this post was useful. If you have any questions, feel free to contact me on rebeladm@live.com

Step-By-Step guide to create Azure VM using Azure CLI 2.0

In my previous blog post I explained what Azure CLI is and how we can integrate it with a Windows system. If you didn’t read it yet please look into it before continuing with this post. You can find it at http://www.rebeladmin.com/2017/08/step-step-guide-start-azure-cli-2-0/

In this blog post I am going to demonstrate how we can create an Azure VM using Azure CLI.

1) Log in to Azure CLI using az login (this is explained in my first blog post. If you are using Cloud Shell this is not necessary; all you need to do is launch it in the portal)

clivm1

2) The next step in the process is to create a resource group. Before we create it, we need to know the available locations, so we can create the resource group in the relevant geographical location. To list the locations, run az account list-locations

clivm2

In my demo I am going to create a resource group called “rebeladminrg01” under west us. The command for that task will be az group create --name rebeladminrg01 --location westus. In the above, --name specifies the resource group name and --location specifies the geographical location.

clivm3

3) The next step is to create a virtual network under my new resource group. For that I am going to use

az network vnet create --name rebeladminVNet --resource-group rebeladminrg01 --location westus --address-prefix 10.10.0.0/16

In the above command, --name specifies the virtual network name; in the sample, it is rebeladminVNet. --resource-group defines the resource group it belongs to. --location specifies the geographical location it belongs to. --address-prefix specifies the address space associated with the virtual network.

clivm4

4) Now we have the virtual network; the next step is to create subnet 10.10.20.0/24 under the virtual network rebeladminVNet. In order to do that I am going to use,

az network vnet subnet create --address-prefix 10.10.20.0/24 --name rebeladminsub1 --resource-group rebeladminrg01 --vnet-name rebeladminVNet

in the above, --address-prefix specifies the address space for the subnet. --name specifies the name of the subnet. --resource-group specifies the resource group the new subnet belongs to. --vnet-name specifies the virtual network it belongs to.

clivm5

5) Let’s also associate a new public IP address with the virtual network, so we can use it to connect from outside to the new VM that we are about to create.

az network public-ip create --name rebeladminpubip1 --resource-group rebeladminrg01 --location westus --allocation-method dynamic

In the above, --name specifies the name of the public IP instance. --resource-group defines the resource group name it belongs to. --location specifies the geographical location the resource belongs to. --allocation-method specifies the public IP allocation method; it can be static or dynamic assignment. In this demo, I am going to use the dynamic method.

clivm6

6) The next step in the process is to create a NIC so we can attach it to the VM.

az network nic create --resource-group rebeladminrg01 --name rebeladminNic1 --vnet-name rebeladminVNet --subnet rebeladminsub1 --public-ip-address rebeladminpubip1

in the above sample, --resource-group defines the resource group name it belongs to. --vnet-name specifies the virtual network it belongs to. --subnet specifies the subnet it is associated with. --public-ip-address specifies the public IP address this NIC will be associated with.

clivm7

Now we have the components needed for the VM (except storage; I will cover storage in a different post. Here I will be using Azure Managed Disks). We can review the details of the resources we created using az resource list -g rebeladminrg01. This will list the resources under resource group rebeladminrg01.

clivm8

Some data, such as subnet info, will not be displayed by the above command. It can be viewed using the list command combined with the resource group and parent resource. As an example, to view the subnet info under the virtual network we can use,

az network vnet subnet list --vnet-name rebeladminVNet -g rebeladminrg01

in the above, --vnet-name specifies the virtual network name and -g specifies the resource group name.

clivm9

7) Now it’s all ready; let’s create the first Windows VM using the resources we created in the previous steps.

az vm create --resource-group rebeladminrg01 --location westus --nics rebeladminNic1 --name REBLEVM101 --image win2016datacenter --admin-username rebeladmin --admin-password Pa$$w0rd123456

in the above, --resource-group specifies the resource group the VM belongs to. --nics specifies the network interface associated with the VM. --name is the VM name. --image specifies the virtual machine image to use for the VM. You can get the entire image list using az vm image list --output table --all

In the sample, --admin-username defines the admin user name for the new VM and --admin-password defines the VM password.

clivm10

This creates the VM successfully.

clivm11
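Depending on the network security group rules, RDP traffic may still need to be allowed to reach the VM. If needed, that can be opened with az vm open-port, for example (assuming the default RDP port 3389),

az vm open-port --resource-group rebeladminrg01 --name REBLEVM101 --port 3389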

In this demo, I explained how to create a VM using Azure CLI. Hope this was useful and in the next post on Azure CLI I will cover storage. If you have any questions, feel free to contact me on rebeladm@live.com

Step-by-Step Guide to Start with Azure CLI 2.0

There are many ways to create, manage and remove resources from an Azure subscription. For users who prefer a GUI there are the Azure classic portal and the Azure Resource Manager portal. For PowerShell lovers Azure has the Azure PowerShell module. Apart from that there are other methods, such as Terraform, which simplify Azure resource management (I already wrote articles about it; if you want to know more, search for “terraform” on the blog). Azure CLI is also a command-line tool, introduced by Microsoft, which can be used to manage Azure resources. It can be used from multiple platforms such as Linux, Mac OS and Windows. This blog post explains how we can configure a Windows system to use Azure CLI.

There are two ways we can connect to Azure CLI.

Using Azure Portal

Azure also allows the use of a web-based version of Azure CLI under the name “Cloud Shell”. This can easily be opened through the browser. In order to access it,

1) Log in to Azure Portal

2) Click on the Cloud Shell icon on the top right-hand side

cli1

3) When you do this for the first time it will ask to create an Azure file share. You can select the relevant subscription and click on “Create Storage”

cli2

4) Once the storage is created, it will load up shell access through the browser.

cli3

Using Windows Computer

We can also use Azure CLI from a local computer. As I said, it is not only supported for use with Windows systems; it is supported on Linux and Mac OS too. In this demo, I am going to demonstrate how to configure it on a Windows system.

Azure CLI uses Python, so our configuration will be based on a Python installation.

1) Log in to computer as an administrator

2) Go to https://www.python.org/downloads/ and download python

cli4

3) Once the file is downloaded, run it as administrator to install. During the installation, make sure to select the “Add Python 3.6 to PATH” option. It will then allow you to use python commands without navigating to the installation location.

cli5

4) Once installation is completed, open the Windows command line and type python --version. This will confirm the Python installation. (It is recommended to open the command line as administrator, otherwise it will say PATH records are not added, as we ran the installation as administrator.)

cli6

5) The next step is to install the Azure CLI libraries. In order to do that run pip install --user azure-cli

cli7

6) Once it is completed, move to C:\Users\[Admin User]\AppData\Roaming\Python\Python36\Scripts and run the command az. This will verify the Azure CLI integration. If it needs to run from anywhere, add it to the PATH.

cli8

7) Now let’s try to log in to Azure using Azure CLI. In order to do that we can use az login -u azureusername -p password. The problem with this method is that the password needs to be typed in as clear text. Instead of that we can use the more secure browser-based login. To do that type az login in the command line.

It then gives a link and a code to use for authentication.

cli9

8) Once it is open in the browser it asks for the verification code. Once it is entered, click on Continue

cli10

On the next page, it verifies the Azure login and then confirms the connection.

cli11

When we go back to Azure CLI, we can see it has successfully logged in and is showing the subscription data.

cli12

This confirms the successful connection to Azure using Azure CLI. This is the end of this post and in the next post let’s see how we can add, manage and remove Azure resources via Azure CLI. Hope this was helpful and if you have any questions feel free to contact me on rebeladm@live.com

Setting up Azure Virtual Machines with Terraform

In my previous article about Terraform, I explained what Terraform is and what it can do. I also explained how to set it up and how we can use it with Azure to simplify infrastructure configuration. If you didn’t read it before you can view it using this link

In this post, we are going to look further into Azure infrastructure setup using Terraform.

Before that let’s look into a sample configuration of an Azure resource and see how the syntax is used.

resource "azurerm_resource_group" "test" {

  name     = "acctestrg"

  location = "West US"

}

 resource "azurerm_virtual_network" "test" {

  name                = "acctvn"

  address_space       = ["10.0.0.0/16"]

  location            = "West US"

  resource_group_name = "${azurerm_resource_group.test.name}"

}

The above code creates an Azure resource group and an Azure virtual network. In the code, azurerm_resource_group and azurerm_virtual_network define the Azure resource type. The text test defines the name for that resource instance. This is not the Azure resource group or Azure virtual network name; it is the instance name. So, if you have another resource group it can be test2. Actual resource names are defined using the name attribute. So, in the above code the actual resource name for the resource group is acctestrg and for the virtual network it is acctvn.

In the above example, the new virtual network needs to be placed under the acctestrg resource group. In the code it is defined using,

resource_group_name = "${azurerm_resource_group.test.name}"

In there, azurerm_resource_group.test refers to the related resource group instance; in our example, it is test. Then .name calls the value of the name attribute under that particular resource group.

In the plan stage Terraform creates the execution plan. It does not process the code top to bottom; it evaluates the code and then builds the plan logically. Therefore, it no longer considers the resource order. Let’s try it with an example,

resource "azurerm_virtual_network" "test" {

  name                = "acctvn"

  address_space       = ["10.0.0.0/16"]

  location            = "West US"

  resource_group_name = "${azurerm_resource_group.test.name}"

}

 resource "azurerm_resource_group" "test2" {

  name     = "acctestrg2"

  location = "West US"

}

 resource "azurerm_virtual_network" "test2" {

  name                = "acctvn2"

  address_space       = ["11.0.0.0/16"]

  location            = "West US"

  resource_group_name = "${azurerm_resource_group.test2.name}"

}

 resource "azurerm_resource_group" "test" {

  name     = "acctestrg"

  location = "West US"

}

In the above example, I am creating two resource groups and two virtual networks. If you look into the highlighted sections, I placed the code related to a virtual network before creating its resource group. But when I run terraform plan it creates the execution plan in the correct order.

tf1 
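The full workflow used here is the standard Terraform one; from the directory holding the configuration file, for example,

terraform init
terraform plan
terraform apply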

And once it is executed, it creates the expected resources.

tf2

As the next step of the demo, let’s see how we can create virtual machines in Azure using Terraform.

resource "azurerm_virtual_machine" "testvm" {

  name                  = "acctvm"

  location              = "West US"

  resource_group_name   = "${azurerm_resource_group.test.name}"

  network_interface_ids = ["${azurerm_network_interface.test.id}"]

  vm_size               = "Standard_A0"

}

The above code is an example of creating a VM in Azure. In the code sample, azurerm_virtual_machine defines the resource type. testvm is the resource instance name. acctvm is the name of the virtual machine. According to the code the resource will be deployed under the West US region. resource_group_name defines the resource group it belongs to. network_interface_ids defines the network interface ID for the VM. vm_size defines the Azure VM template. The template list for a region can be listed using the following Azure CLI command.

az vm list-sizes --location west-us

This will list all the available VM sizes in the West US region.

tf3

An Azure VM also needs other components such as a virtual network, storage, and an operating system. Let’s see how we can add these to the configuration.

Earlier in the post, I shared samples for creating a resource group and a virtual network. The next step is to add a subnet under the virtual network.

resource "azurerm_subnet" "sub1" {

  name                 = "acctsub1"

  resource_group_name  = "${azurerm_resource_group.test.name}"

  virtual_network_name = "${azurerm_virtual_network.test.name}"

  address_prefix       = "10.0.2.0/24"

}

In the above, I am creating the subnet 10.0.2.0/24 under the virtual network and resource group I already have. In the code, azurerm_subnet defines the resource type, sub1 is the instance name, and acctsub1 is the subnet name. resource_group_name defines which resource group it belongs to, virtual_network_name defines which Azure virtual network it is associated with, and address_prefix specifies the subnet range.
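A virtual network can hold multiple subnets, each defined as its own resource block with a non-overlapping prefix. A hypothetical second subnet under the same network could look like this:

resource "azurerm_subnet" "sub2" {
  name                 = "acctsub2"
  resource_group_name  = "${azurerm_resource_group.test.name}"
  virtual_network_name = "${azurerm_virtual_network.test.name}"
  address_prefix       = "10.0.3.0/24"
}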

Now we have a subnet associated with the network. We also need a public IP address in order to connect to the VM from the internet.

resource "azurerm_public_ip" "pub1" {

  name                         = "pub1"

  location                     = "West US"

  resource_group_name          = "${azurerm_resource_group.test.name}"

  public_ip_address_allocation = "dynamic"

}

According to the above, I am creating a public IP instance called pub1 under the same resource group, with its IP allocation set to Dynamic. If needed, it can be static as well.
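If a fixed address is required, only the allocation attribute changes:

public_ip_address_allocation = "static"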

The next step is to create a network interface for the VM.

resource "azurerm_network_interface" "ni1" {

  name                = "acctni1"

  location            = "West US"

  resource_group_name = "${azurerm_resource_group.test.name}"

ip_configuration {

    name                          = "lan1"

    subnet_id                     = "${azurerm_subnet.test.id}"

   private_ip_address_allocation = "dynamic"

   public_ip_address_id  = "${azurerm_public_ip.pub1.id}"

  }

In the above, azurerm_network_interface is the resource type for the network interface, and the interface being created is named acctni1. The second part of the code, starting with ip_configuration, defines the IP configuration for the network interface: subnet_id defines the subnet it belongs to, private_ip_address_allocation defines the IP allocation method (Dynamic or Static), and public_ip_address_id associates the interface with the public IP created in the previous step. Without this association, you will not be able to connect to the VM remotely once it is deployed.

The next thing we need for the VM is storage. Let's start with creating a storage account.

resource "azurerm_storage_account" "asa1" {

  name                = "accsa"

  resource_group_name = "${azurerm_resource_group.test.name}"

  location            = "westus"

  account_type        = "Standard_LRS"

 }

azurerm_storage_account is the resource type and accsa is the name for the account. account_type defines the storage account type; it can be Standard_LRS, Standard_GRS, Standard_RAGRS, Standard_ZRS, or Premium_LRS. More info about these account types can be found at https://docs.microsoft.com/en-us/azure/storage/storage-introduction .

As the next step, we can create a new storage container under the storage account.

resource "azurerm_storage_container" "con1" {

  name                  = "vhds"

  resource_group_name   = "${azurerm_resource_group.test.name}"

  storage_account_name  = "${azurerm_storage_account.test.name}"

  container_access_type = "private"

}

In the above, azurerm_storage_container is the resource type and the container name is vhds. resource_group_name defines the resource group it belongs to and storage_account_name defines the storage account it belongs to. container_access_type can be private, blob, or container. More info about these container types can be found at https://docs.microsoft.com/en-us/azure/storage/storage-introduction

The following image shows what it looks like when using the GUI option.

tf4

By now we have most of the resources ready for the VM. The next step is to define the image for the VM.

storage_image_reference {
  publisher = "MicrosoftWindowsServer"
  offer     = "WindowsServer"
  sku       = "2016-Datacenter"
  version   = "latest"
}

In the above, I am using Windows Server 2016 Datacenter as the image for the VM. The publisher, offer, SKU and version info need to be provided in order to select the correct image. For Windows servers, you can find this info at https://docs.microsoft.com/en-us/azure/virtual-machines/windows/cli-ps-findimage. For Linux, it is available at https://docs.microsoft.com/en-us/azure/virtual-machines/linux/cli-ps-findimage
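These values can also be queried directly from Azure CLI. For example, the following lists the matching Windows Server 2016 Datacenter images:

az vm image list --publisher MicrosoftWindowsServer --offer WindowsServer --sku 2016-Datacenter --all --output table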

The next step is to add the hard disks,

storage_os_disk {
  name          = "myosdisk1"
  vhd_uri       = "${azurerm_storage_account.asa1.primary_blob_endpoint}${azurerm_storage_container.con1.name}/myosdisk1.vhd"
  caching       = "ReadWrite"
  create_option = "FromImage"
}

storage_data_disk {
  name          = "datadisk0"
  vhd_uri       = "${azurerm_storage_account.asa1.primary_blob_endpoint}${azurerm_storage_container.con1.name}/datadisk0.vhd"
  disk_size_gb  = "60"
  create_option = "Empty"
  lun           = 0
}

The above creates two disks: one for the OS and one for data. vhd_uri defines the path of the VHD, which is saved under the storage account created earlier.

Last but not least, we need to define the OS configuration data such as the hostname and administrator account details.

os_profile {
  computer_name  = "rebelpro1"
  admin_username = "rebeladmin"
  admin_password = "Password1234!"
}

In the above, computer_name specifies the hostname of the VM, admin_username specifies the local administrator name, and admin_password specifies the local administrator password.
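Note that, depending on the azurerm provider version, a VM built from a Windows image usually also requires an os_profile_windows_config block. A minimal sketch (my addition, not part of the original walkthrough) would be:

os_profile_windows_config {
  provision_vm_agent = true
}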

Now we have all the components ready to deploy a new VM. Some of the components only need to be created once; for example, virtual networks, subnets and storage accounts do not need to be created for each VM unless there is a valid requirement. Let's put all of this together into one script so it makes more sense.

# Configure the Microsoft Azure Provider
provider "azurerm" {
  subscription_id = "d7xxxxxxxxxxxxxxxxxxxxxx"
  client_id       = "d9xxxxxxxxxxxxxxxxxxxxxx"
  client_secret   = "f1xxxxxxxxxxxxxxxxxxxxxx"
  tenant_id       = "05xxxxxxxxxxxxxxxxxxxxxx"
}

resource "azurerm_resource_group" "rg1" {
  name     = "acctestrg"
  location = "West US"
}

resource "azurerm_virtual_network" "vn1" {
  name                = "vn1"
  address_space       = ["10.0.0.0/16"]
  location            = "West US"
  resource_group_name = "${azurerm_resource_group.rg1.name}"
}

resource "azurerm_public_ip" "pub1" {
  name                         = "pub1"
  location                     = "West US"
  resource_group_name          = "${azurerm_resource_group.rg1.name}"
  public_ip_address_allocation = "dynamic"
}

resource "azurerm_subnet" "sub1" {
  name                 = "sub1"
  resource_group_name  = "${azurerm_resource_group.rg1.name}"
  virtual_network_name = "${azurerm_virtual_network.vn1.name}"
  address_prefix       = "10.0.2.0/24"
}

resource "azurerm_network_interface" "ni1" {
  name                = "ni1"
  location            = "West US"
  resource_group_name = "${azurerm_resource_group.rg1.name}"

  ip_configuration {
    name                          = "config1"
    subnet_id                     = "${azurerm_subnet.sub1.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${azurerm_public_ip.pub1.id}"
  }
}

resource "azurerm_storage_account" "storevm123" {
  name                = "storevm123"
  resource_group_name = "${azurerm_resource_group.rg1.name}"
  location            = "westus"
  account_type        = "Standard_LRS"

  tags {
    environment = "demo"
  }
}

resource "azurerm_storage_container" "cont1" {
  name                  = "vhds"
  resource_group_name   = "${azurerm_resource_group.rg1.name}"
  storage_account_name  = "${azurerm_storage_account.storevm123.name}"
  container_access_type = "private"
}

resource "azurerm_virtual_machine" "vm1" {
  name                  = "vm1"
  location              = "West US"
  resource_group_name   = "${azurerm_resource_group.rg1.name}"
  network_interface_ids = ["${azurerm_network_interface.ni1.id}"]
  vm_size               = "Standard_DS2_v2"

  storage_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2016-Datacenter"
    version   = "latest"
  }

  storage_os_disk {
    name          = "osdisk1"
    vhd_uri       = "${azurerm_storage_account.storevm123.primary_blob_endpoint}${azurerm_storage_container.cont1.name}/osdisk1.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }

  storage_data_disk {
    name          = "datadisk1"
    vhd_uri       = "${azurerm_storage_account.storevm123.primary_blob_endpoint}${azurerm_storage_container.cont1.name}/datadisk1.vhd"
    disk_size_gb  = "60"
    create_option = "Empty"
    lun           = 0
  }

  os_profile {
    computer_name  = "rebelpro1"
    admin_username = "rebeladmin"
    admin_password = "Password1234!"
  }

  tags {
    environment = "demo"
  }
}

Let’s verify the resources using Azure portal.

As we can see, it created all the expected resources under the resource group acctestrg.

tf5

Also, we can see it created the VM as expected.

tf6
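The same check can be done from Azure CLI; for example, the following lists everything deployed under the resource group:

az resource list --resource-group acctestrg --output table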

In this post, we went through the process of creating an Azure VM and its related components using terraform. Hope this was useful and if you have any questions feel free to contact me on rebeladm@live.com

Azure resource setup simplified with terraform

This week I was testing Terraform, a simple tool which can be used to automate Azure resource deployment.

It is easier to explain terraform with a real-world example. I am developing a web application and, as my resource provider, I am using Azure. My first requirement is to set up a development environment. For that I need at least one web server, one database server and connectivity between these two servers. To set up the environment, I log in to the portal and then set up a resource group, storage account and virtual network. After that I start to build the servers, and once they are complete, I set up the web server application and the database server. So even though it looks straightforward, it takes time.

Later in the development process I also need a test platform where I can try my application with different operating systems. I also like to test the application by adding more components, such as load balancers and additional web servers, to the environment. These testing environments are temporary, so each time I need to set up an environment with all the different components and, once the testing process completes, destroy it. When I need to sell it as a solution, I face another challenge, as not everyone wants to run it on Azure. Whether it is another service provider or an on-premises environment, the application should be tested in a similar environment before being sold as a solution, and each provider has its own way of setting things up.

In this scenario I faced a few challenges:

Setting up the required resources for the application takes time, as each component needs to be configured in a certain way.

To set up integration between components, I need to log in to different systems and adjust settings. A single mistake can cause hours of disruption to the project.

Due to the complexity of setting up environments, I may end up keeping test environments running longer than needed, which increases my development cost.

How Terraform can help?

Using Terraform, I can deploy the whole environment by executing a single script file. This script is basically a set of instructions explaining how to set up each and every component. If we need to set up a new server in Azure from scratch, there is a procedure for it: we need to set up the relevant resource groups, network components and storage accounts before we start to build the server. Terraform understands these dependencies and builds the environment accordingly. This also helps to standardize the resource setup process.

Terraform can also be used to configure application settings as part of the environment setup. That means we do not need to log in to systems to make initial software configurations, which prevents human errors.

Once we set up an environment using terraform, we can change it or destroy it using a single command. As an example, let's assume we set up a test environment with two web servers (using terraform) and I have a new requirement to add another web server to the same environment. All I need to do is modify the same script and add a new entry for the new web server. Once I execute it, terraform automatically detects the current state of the environment and only adds the missing components. Destroying is again a single command, and it removes each component in the proper order; for example, it understands that before removing a resource group, it needs to remove all the other components under it.

As the setup and destroy processes are easy with terraform, we do not need to keep non-critical resources running. As an example, if I need to give a POC or show a demo to a customer, all I need to do is execute the pre-created terraform script when needed and destroy the resources afterwards.

Terraform supports different service providers, and it is not only for cloud-based solutions; it supports on-premises solutions as well. As an example, terraform can be used with Azure Pack and Azure Stack to do the same thing in an on-premises Hyper-V environment. It also supports SaaS application configurations. The supported provider list can be found at https://www.terraform.io/docs/providers/index.html

Terraform mainly has three functions.

Plan – Before executing the configuration, it should go through the planning stage. Here terraform builds the execution plan based on the configuration provided by the engineer, and explains what will be created when the configuration is executed.

Apply – In this phase it executes the plan created in the "plan" stage. It also reports back once it has completed the resource setup; if there were errors, it explains them in detail.

Destroy – This is basically to undo the execution plan. By calling this, we can destroy all the resources created by a particular terraform configuration file.

I think it’s enough with the theory, let’s see why it’s so cool.

In my demo, I am going to show how to set up terraform and how to use it to create resources in Azure.

Setup Terraform

In my demo, I am going to use Windows 10 as the system. Terraform is also supported on Linux, macOS and Solaris systems.

1) Go to this link and download the file relevant to the Windows architecture.

2) Then create a folder and move the downloaded terraform.exe file into it.

3) The next step is to add the terraform folder to the binary path so the system knows where to find the terraform commands. To do that, run the PowerShell console as Administrator and then type

$env:Path += ";C:\terraform"

In here, C:\terraform is the folder where I saved terraform.exe.

terra1
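Note that modifying $env:Path this way only lasts for the current PowerShell session. To persist it across sessions, one option (not part of the original demo) is to update the user-level Path variable:

[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\terraform", "User")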

4) As the next step, we can confirm the terraform setup by running terraform in the PowerShell console.

terra2
 
This confirms the Terraform setup; the next step is to configure the Azure side to support terraform.
 
Retrieve Required info from Azure

Terraform uses the Azure ARM API to connect to and manage Azure resources. To connect to Azure, terraform needs the following Azure ARM environment variables, provided via the configuration file.

ARM_SUBSCRIPTION_ID

ARM_CLIENT_ID

ARM_CLIENT_SECRET

ARM_TENANT_ID

To get the ARM_CLIENT_ID, ARM_CLIENT_SECRET and ARM_TENANT_ID values, we need to create a Service Principal in Azure.

To do that, we can use Azure Cloud Shell.

1) Log in to Azure Portal ( https://portal.azure.com ) as a Global Administrator

2) Click on the Cloud Shell button.

terra3

3) It will then open the shell in the same window. If it's your first time using this feature, it will ask you to create a storage account.

terra4

4) The next step is to find the Subscription ID. To do that, type the following and press enter.

az account list

It will then provide an output like the following, where "id" represents the Subscription ID we need.

terra5
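If you only need the id value, it can also be extracted directly with a JMESPath query:

az account show --query id --output tsv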

5) The next step is to create the Service Principal. In order to do that, use,

az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/xxxxxxxxxxxx"

In the above command, xxxxxxxxxxxx should be replaced with the Subscription ID we found in the previous step.

It then gives an output similar to the following.

terra6

In the above image,

appId is equal to the Client ID,

password is equal to the Client Secret,

tenant is equal to the Tenant ID.

Now we have all the information we need in order to connect to Azure through terraform.

Create first configuration

The next step is to create the first terraform configuration file. The file uses the .tf extension. You can use your favorite text editor to create the file; I am using Visual Studio Code, which can be downloaded from https://code.visualstudio.com/

The file does not need to be saved in the same folder as terraform.exe. However, you need to navigate to the folder containing the configuration file before executing the terraform commands. In my demo, it is C:\terraform.

My first configuration is as follows.

 

# Configure the Microsoft Azure Provider
provider "azurerm" {
  subscription_id = "xxxxxxx"
  client_id       = "xxxxxxx"
  client_secret   = "xxxxxxx"
  tenant_id       = "xxxxxxx"
}

resource "azurerm_resource_group" "myterrapro1" {
  name     = "myterrapro1"
  location = "West US"
}

resource "azurerm_virtual_network" "myterrapro1network" {
  name                = "myterrapro1vn"
  address_space       = ["10.11.12.0/24"]
  location            = "West US"
  resource_group_name = "${azurerm_resource_group.myterrapro1.name}"
}

In the above code,

provider "azurerm"

defines the service provider as Azure ARM.

 subscription_id = "xxxxxxx"

                client_id       = " xxxxxxx "

                client_secret   = " xxxxxxx "

                tenant_id       = " xxxxxxx "

The above values should be replaced by the values we collected from Azure earlier.

resource "azurerm_resource_group" "myterrapro1" {

  name     = "myterrapro1"

  location = "West US"

}

The above tells terraform to create a new Azure resource group called myterrapro1 in the West US region.

resource "azurerm_virtual_network" "myterrapro1network" {

  name                = "myterrapro1vn"

  address_space       = ["10.11.12.0/24"]

  location            = "West US"

  resource_group_name = "${azurerm_resource_group.myterrapro1.name}"

The next section creates an Azure virtual network called myterrapro1vn with the address space 10.11.12.0/24. It will be created under the resource group myterrapro1, also in the West US region.

Once the script is ready, save it with the .tf extension.

Then launch PowerShell as Administrator and change the directory to where the script file is saved. In my demo, it's C:\terraform. After that, type the following,

terraform plan

This is the step where terraform builds the execution plan.

terra7

This output shows what will happen when the execution plan is applied.

To apply the plan, type the following and press enter.

terraform apply

Once the process starts, it will show the progress of setting up the resources.

terra8

According to the above image, we can see it successfully created the resources. We can confirm it using the Azure portal.

terra9

terra10

Now we know how to create resources. Let's see how to destroy the resources we created.

In order to do that, you do not need to change anything in the script. All you need to do is issue the command,

terraform destroy

Once we run the command, it will ask for confirmation. Type yes and press enter to proceed with the destroy process.

terra11

As we can see, it removed all the resources in the configuration file. Once it is done, we can also log in to the Azure portal and confirm.

Isn’t this cool?

In this blog post I explained what terraform is and how we can use it to simplify resource setup in Azure. In the next blog post I will share more examples to show its capabilities.

Hope this was useful and if you have any questions feel free to contact me on rebeladm@live.com