What is Content Freshness protection in DFSR?

Healthy replication is a must for an Active Directory environment. The SYSVOL folder on domain controllers contains policies and logon scripts, and it is replicated between domain controllers to maintain an up-to-date, consistent configuration. Before Windows Server 2008, SYSVOL content was replicated among domain controllers using FRS (File Replication Service). With Windows Server 2008, FRS was deprecated and DFS Replication (DFSR) was introduced for SYSVOL replication.

Healthy replication requires healthy communication between domain controllers. Sometimes that communication is interrupted by a domain controller failure or a link failure. Depending on the impact, it is still possible that communication is re-established after a period of time; the domain controller will then try to resume replication and catch up with SYSVOL changes. In such a scenario, we may see event 4012 in Event Viewer.

The DFS Replication service stopped replication on the replicated folder at local path c:\xxx. It has been disconnected from other partners for 70 days, which is longer than the MaxOfflineTimeInDays parameter. Because of this, DFS Replication considers this data to be stale, and will replace it with data from other members of the replication group during the next replication. DFS Replication will move the stale files to the local Conflict folder. No user action is required.

With Windows Server 2008, Microsoft introduced a setting called content freshness protection to protect DFS shares from stale data. DFSR uses a multi-master database similar to Active Directory, and it also has a tombstone time limit similar to Active Directory's; the default value is 60 days. If a member has not replicated for longer than that and then resumes replication later, it can reintroduce stale data, similar to lingering objects in AD. To protect against this, we can define a value for MaxOfflineTimeInDays: if the number of days since the last successful DFS replication is larger than MaxOfflineTimeInDays, replication is blocked.

We can review this value on every domain controller by running,

For /f %m IN ('dsquery server -o rdn') do @echo %m && @wmic /node:"%m" /namespace:\\root\microsoftdfs path DfsrMachineConfig get MaxOfflineTimeInDays

cf1

There are two ways to recover from this. The first method is to increase the value of MaxOfflineTimeInDays. It can be done using,

wmic.exe /namespace:\\root\microsoftdfs path DfsrMachineConfig set MaxOfflineTimeInDays=120

cf2

It is recommended to run this on all domain controllers to maintain the same configuration.
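
If you prefer PowerShell over wmic, here is a minimal sketch (assuming the RSAT AD module and remote CIM/WMI access to each domain controller) that applies the same value everywhere:

# set MaxOfflineTimeInDays to 120 on every domain controller
foreach ($dc in (Get-ADDomainController -Filter *).HostName) {
    $session = New-CimSession -ComputerName $dc
    $cfg = Get-CimInstance -CimSession $session -Namespace "root\microsoftdfs" -ClassName DfsrMachineConfig
    $cfg.MaxOfflineTimeInDays = 120
    Set-CimInstance -CimSession $session -InputObject $cfg
    Remove-CimSession $session
}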

If you are not willing to change this value, you can still recover using a non-authoritative restore. It will remove all conflicting values and take an updated copy from a replication partner.

I have already written an article about non-authoritative restore of SYSVOL; it can be found at http://www.rebeladmin.com/2017/08/non-authoritative-authoritative-sysvol-restore-dfs-replication/

This is not only for SYSVOL replication; content freshness protection applies to DFS Replication in general.

Hope this was useful, and if you have any questions feel free to contact me on rebeladm@live.com. Also follow me on Twitter @rebeladm to get updates about new blog posts.


Step-by-Step guide to setup Fine-Grained Password Policies

In an AD environment, we can use password policy to define password security requirements. These settings are located under Computer Configuration | Policies | Windows Settings | Security Settings | Account Policies.

fine1

Before Windows Server 2008, only one password policy could be applied to users in a domain. But in an environment, some accounts may require additional protection based on user roles. As an example, an 8-character complex password may be enough for sales users, but it is not too much to ask for a domain admin account. With Windows Server 2008, Microsoft introduced Fine-Grained Password Policies, which allow different password policies to be applied to specific users and groups. In order to use this feature,

1) Your domain functional level should be at least Windows Server 2008.

2) You need a Domain/Enterprise Admin account to create policies.

Similar to group policies, objects may sometimes end up with multiple password policies applied to them, but at any given time an object can only have one effective password policy. Each Fine-Grained Password Policy has a precedence value, an integer defined during policy setup. A lower precedence value means a higher priority: if multiple policies are applied to an object, the policy with the lowest precedence value wins. Also, a policy linked directly to a user object always wins.
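
Once policies are in place, you can check which one is effective for a particular user with the Get-ADUserResultantPasswordPolicy cmdlet; for example (the user name here is just an illustration),

Get-ADUserResultantPasswordPolicy -Identity R143869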

We can create the policies using Active Directory Administrative Center or PowerShell. In this demo, I am going to use the PowerShell method.

New-ADFineGrainedPasswordPolicy -Name "Tech Admin Password Policy" -Precedence 1 `

-MinPasswordLength 12 -MaxPasswordAge "30" -MinPasswordAge "7" `

-PasswordHistoryCount 50 -ComplexityEnabled:$true `

-LockoutDuration "8:00" `

-LockoutObservationWindow "8:00" -LockoutThreshold 3 `

-ReversibleEncryptionEnabled:$false

In the above sample, I am creating a new fine-grained password policy called “Tech Admin Password Policy”. New-ADFineGrainedPasswordPolicy is the cmdlet to create a new policy. Precedence defines the precedence. The LockoutDuration and LockoutObservationWindow values are defined in hours. LockoutThreshold defines the number of failed login attempts allowed.

More info about the syntax can be found using,

Get-Help New-ADFineGrainedPasswordPolicy

You can also view examples using

Get-Help New-ADFineGrainedPasswordPolicy -Examples

fine2

Once the policy is set up, we can verify its settings using,

Get-ADFineGrainedPasswordPolicy -Identity "Tech Admin Password Policy"

fine3

Now we have the policy in place. The next step is to attach it to groups or users. In my demo, I am going to apply this to a group called “IT Admins”.

Add-ADFineGrainedPasswordPolicySubject -Identity "Tech Admin Password Policy" -Subjects "IT Admins"

I am also going to attach it to a user account, R143869.

Add-ADFineGrainedPasswordPolicySubject -Identity "Tech Admin Password Policy" -Subjects "R143869"

We can verify to whom the policy applies using the following,

Get-ADFineGrainedPasswordPolicy -Identity "Tech Admin Password Policy" | Format-Table AppliesTo -AutoSize

fine4

This confirms the configuration. Hope this was useful, and if you have any questions feel free to contact me on rebeladm@live.com. Also follow me on Twitter @rebeladm to get updates about new blog posts.


When will an AD password expire?

In an Active Directory environment, users have to update their passwords when they expire. On some occasions, it is important to know when a user's password will expire.

For a user account, the value for the next password change is saved under the attribute msDS-UserPasswordExpiryTimeComputed.

We can view this value for a user account using a PowerShell command like the following,

Get-ADuser R564441 -Properties msDS-UserPasswordExpiryTimeComputed | select Name, msDS-UserPasswordExpiryTimeComputed 

In the above command, I am retrieving the msDS-UserPasswordExpiryTimeComputed attribute for the user R564441. In the output, I am listing the value of the Name attribute and msDS-UserPasswordExpiryTimeComputed.

ex1

In my example, it returned 131412469385705537, but that doesn't mean anything by itself. It is a Windows file time value, and we need to convert it to a readable format.

I can do it using,

Get-ADuser R564441 -Properties msDS-UserPasswordExpiryTimeComputed | select Name, {[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed")}

In the above, the value was converted to datetime format, and now it gives a readable value.

ex2

We can develop this further to produce a report or send automatic reminders to users. I wrote the following PowerShell script to generate a report covering all the users in AD.

$passwordexpired = $null

$dc = (Get-ADDomain | Select DNSRoot).DNSRoot

$Report= "C:\report.html"

$HTML=@"

<title>Password Validity Period For $dc</title>

<style>

BODY{background-color :LightBlue}

</style>

"@

$passwordexpired = Get-ADUser -filter * -Properties "SamAccountName","pwdLastSet","msDS-UserPasswordExpiryTimeComputed" | Select-Object -Property "SamAccountName",@{Name="Last Password Change";Expression={[datetime]::FromFileTime($_."pwdLastSet")}},@{Name="Next Password Change";Expression={[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed")}}

$passwordexpired | ConvertTo-Html -Property "SamAccountName","Last Password Change","Next Password Change" -head $HTML -body "<H2> Password Validity Period For $dc</H2>" |

Out-File $Report

Invoke-Item C:\report.html

This creates an HTML report like the following. It contains the user name, the last password change date and time, and the date and time the password is going to expire.

ex3

The attributes I used here are SamAccountName, pwdLastSet, and msDS-UserPasswordExpiryTimeComputed. The pwdLastSet attribute holds the date and time of the last password reset.
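
As a sketch of the reminder idea mentioned earlier (the SMTP server, sender address, and 7-day threshold below are assumptions to adjust for your environment), the same attributes can drive automatic email notifications:

# email users whose passwords expire within the next 7 days
$smtpServer = "smtp.yourdomain.com"          # assumption: your SMTP relay
$threshold  = (Get-Date).AddDays(7)

Get-ADUser -Filter {Enabled -eq $true -and PasswordNeverExpires -eq $false} `
 -Properties mail,"msDS-UserPasswordExpiryTimeComputed" | ForEach-Object {
    $expiry = [datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed")
    if ($_.mail -and $expiry -gt (Get-Date) -and $expiry -le $threshold) {
        Send-MailMessage -SmtpServer $smtpServer -From "helpdesk@yourdomain.com" `
         -To $_.mail -Subject "Your password expires on $expiry" `
         -Body "Please change your password before $expiry."
    }
}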

Hope this was useful, and if you have any questions feel free to contact me on rebeladm@live.com. Also follow me on Twitter @rebeladm to get updates about new blog posts.


Step-by-Step guide to configure Azure MFA with ADFS 2016

Multi-factor authentication (MFA) is commonly used to protect applications and web services which are published to the internet. It helps to verify the authenticity of authentication requests. There are many multi-factor service providers; some are cloud-based and some require on-premises installations.

Azure MFA was first introduced for use with Azure services and was later developed further to support on-premises workload protection too. It is possible to configure Azure MFA with ADFS 2.0 and ADFS 3.0; however, that configuration requires an additional MFA server to be installed. With ADFS 4.0 (Windows Server 2016) this is made simple, and we can integrate Azure MFA without the need for an additional server.

In this post, I am going to walk you through the integration of Azure MFA with ADFS 2016. 

Before we start, we need to look into the prerequisites.

1. Valid Azure subscription.

2. Azure Global Administrator account 

3. Existing federated Azure AD setup. More info about this configuration can be found at https://docs.microsoft.com/en-gb/azure/active-directory/connect/active-directory-aadconnect-get-started-custom#configuring-federation-with-ad-fs

4. Windows Server 2016 AD FS installed on-premises

5. Enterprise Administrator Account to configure MFA

6. Users with Azure MFA enabled – http://www.rebeladmin.com/2016/01/step-by-step-guide-to-configure-mfa-multi-factor-authentication-for-azure-users/

7. Windows Azure Active Directory module for Windows PowerShell installed on the ADFS server

Create a certificate on each ADFS server to use with Azure MFA

The first step of the configuration is to generate a certificate for Azure MFA. This needs to be performed on every ADFS server in the farm. In order to generate the certificate, you can use the following in PowerShell.

$certbase64 = New-AdfsAzureMfaTenantCertificate -TenantID "Your Tenant ID"

Please replace "Your Tenant ID" with the actual Azure tenant ID. You can find the tenant ID by running Login-AzureRmAccount in Azure PowerShell.
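
As a hedged example with the AzureRM module, the tenant ID can also be read from the current context after signing in:

Login-AzureRmAccount
(Get-AzureRmContext).Tenant.Id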

Once it is generated, the certificate will be stored under the local computer certificates.

cert1

Add new credentials to connect with the Azure Multi-Factor Auth Client SPN

Now we have the certificate, but we need to tell the Azure Multi-Factor Auth Client to use it as a credential to connect with AD FS.

Before that, we need to connect to Azure AD using Azure PowerShell. We can do that using this:

Connect-MsolService

It will then prompt for login; make sure to use an Azure Global Administrator account to connect.

After that execute the command,

New-MsolServicePrincipalCredential -AppPrincipalId 981f26a1-7f43-403b-a875-f8b09b8cd720 -Type asymmetric -Usage verify -Value $certbase64

In the above command, AppPrincipalId defines the GUID for Azure Multi-Factor Auth Client.

Configure ADFS farm to use Azure MFA

Now we have the components ready, and the next step is to configure the ADFS farm to use Azure MFA. In order to do that, run the following PowerShell command.

Set-AdfsAzureMfaTenant -TenantId "Your Tenant ID" -ClientId 981f26a1-7f43-403b-a875-f8b09b8cd720

In the above command, replace "Your Tenant ID" with your Azure tenant ID. The ClientId in the command represents the GUID for the Azure Multi-Factor Auth Client.

cert2

Once it is completed, restart the ADFS service.

Enable Azure MFA globally

The last step of the configuration is to enable Azure MFA for authentication. In order to do that, log in to the ADFS server and go to Server Manager > Tools > AD FS Management. Then, in the MMC, go to Service > Authentication Methods, and in the Actions panel, click on Edit Primary Authentication Method.

cert3

This opens up the window to configure global authentication methods. It has two tabs, and we can see Azure MFA on both.

cert4

By selecting each box, you can enable MFA for intranet and extranet. 

This completes the configuration. Now you can use Azure MFA with your ADFS farm. Hope this was useful, and if you have any questions feel free to contact me on rebeladm@live.com


Azure Active Directory Seamless Single Sign-On (Azure AD Seamless SSO)

I am sure most of you are aware of what single sign-on (SSO) is in an Active Directory infrastructure and how it works. When we extend identity infrastructures to Azure by using Azure AD, we can also extend single sign-on capabilities for authenticating to cloud workloads. That can be done using an on-premises ADFS farm. Password Hash Synchronization or Pass-through Authentication allows users to use the same user name and password to log in to cloud applications, but this is not “seamless” access: even though they are using the same user name and password, they are still prompted for the password when logging in to Azure workloads.

In my example below, I have an Azure AD instance integrated with on-premises AD using Pass-through Authentication. In there I have a user, R272845. I logged in to a domain-joined computer with this user and tried to access an application published using Azure. When I typed the URL and pressed enter, it redirected me to the Azure AD login page.

sso1

sso2

Azure Active Directory Seamless Single Sign-On is a feature which allows users to authenticate to Azure AD without providing a password again when logging in from a domain-joined/corporate device. It can be integrated with Password Hash Synchronization or Pass-through Authentication. This feature is still in preview, which means it cannot be used in production environments yet. However, if it doesn't work in your environment, it will always fall back to the typical Azure AD authentication page, so it will not prevent you from accessing any application. This feature is not supported if you are already using the ADFS option.

According to Microsoft, the following can be listed as key features of Azure Active Directory Seamless Single Sign-On (Azure AD Seamless SSO):

Users are automatically signed into both on-premises and cloud-based applications.

Users don't have to enter their passwords repeatedly.

No additional components needed on-premises to make this work.

Works with any method of cloud authentication – Password Hash Synchronization or Pass-through Authentication.

Can be rolled out to some or all your users using Group Policy.

Register non-Windows 10 devices with Azure AD without the need for any AD FS infrastructure. This capability needs you to use version 2.1 or later of the workplace-join client.

Seamless SSO is an opportunistic feature. If it fails for any reason, the user sign-in experience goes back to its regular behavior – i.e., the user needs to enter their password on the sign-in page.

It can be enabled via Azure AD Connect.

It is a free feature, and you don't need any paid editions of Azure AD to use it.

It is supported on web browser-based clients and Office clients that support modern authentication, on platforms and browsers capable of Kerberos authentication.

According to Microsoft, the following environments are supported.

OS\Browser  | Internet Explorer | Edge | Google Chrome | Mozilla Firefox                 | Safari
Windows 10  | Yes               | No   | Yes           | Yes, additional config required | N/A
Windows 8.1 | Yes               | N/A  | Yes           | Yes, additional config required | N/A
Windows 8   | Yes               | N/A  | Yes           | Yes, additional config required | N/A
Windows 7   | Yes               | N/A  | Yes           | Yes, additional config required | N/A
Mac OS X    | N/A               | N/A  | Yes           | Yes, additional config required | Yes, additional config required

The current release (at the time this blog post was written) does not support the Edge browser. Also, this feature will not work when users use private browsing mode in Firefox, or when users have Enhanced Protected Mode enabled in IE.

How does it work?

Before we look into the configuration, let's go ahead and see how it really works. In the following example, a user is trying to access a cloud-based application (integrated with Azure) using his on-premises username and password, from a domain-joined device.

Also, it is important to know what happens in the corporate infrastructure when seamless SSO is enabled:

The system creates an AZUREADSSOACCT computer object in the on-premises AD to represent Azure AD.

The AZUREADSSOACCT computer account's Kerberos decryption key is shared with Azure AD.

Two Kerberos service principal names (SPNs) are created to represent the two URLs that are used during Azure AD sign-in, which are https://autologon.microsoftazuread-sso.com and https://aadg.windows.net.nsatc.net.

sso3

1. The user accesses the application URL using his browser, from a domain-joined device in the corporate network.

2. If the user is not signed in already, he is pointed to the Azure AD sign-in page, where he types his user name.

3. Azure AD challenges the user back via the browser, using a 401 response, to provide a Kerberos ticket.

4. The browser requests a Kerberos ticket for the AZUREADSSOACCT computer object from the on-premises AD. This account is created in the on-premises AD as part of the setup process to represent Azure AD.

5. The on-premises AD locates the AZUREADSSOACCT computer object and returns the Kerberos ticket to the browser, encrypted using the computer object's secret.

6. The browser forwards the Kerberos ticket to Azure AD.

7. Azure AD decrypts the Kerberos ticket using the Kerberos decryption key (this was shared with Azure AD when the SSO feature was enabled).

8. After evaluation, Azure AD passes the response back to the user (including additional steps such as MFA, if required).

9. The user is allowed to access the application.

Prerequisites

In order to implement this feature, we need the following,

1. Domain Admin / Enterprise Admin account to install and configure Azure AD Connect on-premises

2. Global Administrator Account for Azure subscription – in order to create custom domain, configure AD connect etc.

3. Latest Azure AD Connect https://www.microsoft.com/en-us/download/details.aspx?id=47594 – if you have an older Azure AD Connect version installed, you need to upgrade it to the latest version before configuring this feature.

4. Azure AD Connect must be able to communicate with *.msappproxy.net URLs over port 443. If connections are controlled via IP addresses, the range of Azure IP addresses can be found at https://www.microsoft.com/en-us/download/details.aspx?id=41653

5. Add https://autologon.microsoftazuread-sso.com and https://aadg.windows.net.nsatc.net to the browser intranet zone. If users are using IE and Chrome, this can be done using group policy. I have written a blog post before about how to create a policy targeting IE. You can find it here

6. Firefox needs the above URLs added to its trusted Kerberos site list to do Kerberos authentication. To do that, go to the Firefox browser > type about:config in the address bar > in the list, look for network.negotiate-auth.trusted-uris > right-click and select Modify > type “https://autologon.microsoftazuread-sso.com, https://aadg.windows.net.nsatc.net” and click OK

7. For Mac OS, the device needs to be joined to AD. More details can be found here

Configure Azure AD Seamless SSO
 
Configuration of this feature is straightforward; basically, it's just ticking one box.
 
If it is a fresh Azure AD Connect installation, select the Customize option under express settings.
 
sso4
 
Then, on the User Sign-in page, select the appropriate sign-in option and then select the Enable single sign-on option.
 
sso5
 
If you have an existing Azure AD Connect instance running, double-click on the Azure AD Connect shortcut. In the initial window, click on Configure.
 
sso6
 
On the Additional tasks page, click on Change user sign-in and then click Next.
 
sso7
 
In the next window, type the Azure AD sync account user name and password and click Next.
 
sso8
 
Then, on the User Sign-in page, select the Enable single sign-on option and click Next.
 
sso9
 
On the next page, enter the credentials for an on-premises domain admin account and click Next.
 
sso10
 
At the end, click on Configure to complete the process.
 
sso11
 
This completes the configuration, and the next step is to verify that SSO is configured. The first thing to check is whether the computer object called AZUREADSSOACCT was created in the on-premises AD. You will be able to find it under the default Computers OU.
 
sso12
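
If you prefer PowerShell, a quick check (assuming the object was created in the default Computers container) is:

Get-ADComputer -Identity AZUREADSSOACCT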
 
Then log in to the Azure Portal and go to Azure Active Directory > Azure AD Connect; under the user sign-in options we can see the Seamless single sign-on option is enabled.
 
sso13
 
This means it's all good. The next step is to check whether it is working as expected. In order to do that, I log in to a corporate device with the same user I used earlier, R272845, and try to access the same app URL.
 
This time, all I needed to type was the user name, and it logged me in. Nice!
 
sso14
 
Note – before testing, make sure you added the two Azure AD URLs to the intranet zone as I mentioned in the prerequisites section.
 
Hope this information was useful and if you have any questions feel free to contact me on rebeladm@live.com

Azure Active Directory Pass-through Authentication

When organizations want to use the same user names and passwords to log in to on-premises and cloud (Azure) workloads, there are two options. One is to sync user names and password hashes from the on-premises Active Directory to Azure AD. The other option is to deploy an ADFS farm on-premises and use it to authenticate cloud-based logins, but that needs additional planning and resources. On-premises AD stores passwords as hash values (generated by a hash algorithm). They are NOT saved as clear text, and it is almost impossible to revert a hash to the original password, even if someone has the hash value. There is a misunderstanding about this, as some people still think Azure AD password sync uses clear-text passwords. Every two minutes, the Azure AD Connect server retrieves password hashes from the on-premises AD and syncs them with Azure AD on a per-user basis in chronological order. From a technical point of view, I do not see a reason why people should avoid password hash sync to Azure AD. However, there are company policies and compliance requirements which do not accept any form of identity sync to an external system, even in hash format. Azure Active Directory Pass-through Authentication was introduced by Microsoft to answer these requirements. It allows users to authenticate to cloud workloads using the same passwords they use on-premises, without syncing their password hash values to Azure AD. This feature is currently in preview, which means it's still not supported in production environments. But it is not too early to try it in development environments.

According to Microsoft, the following can be listed as key features of Pass-through Authentication:

Users use the same passwords to sign into both on-premises and cloud-based applications.

Users spend less time talking to the IT helpdesk resolving password-related issues.

Users can complete self-service password management tasks in the cloud.

No need for complex on-premises deployments or network configuration.

Needs just a lightweight agent to be installed on-premises.

No management overhead. The agent automatically receives improvements and bug fixes.

On-premises passwords are never stored in the cloud in any form.

The agent only makes outbound connections from within your network. Therefore, there is no requirement to install the agent in a perimeter network, also known as a DMZ.

Protects your user accounts by working seamlessly with Azure AD Conditional Access policies, including Multi-Factor Authentication (MFA), and by filtering out brute force password attacks.

Additional agents can be installed on multiple on-premises servers to provide high availability of sign-in requests.

Multi-forest environments are supported if there are forest trusts between your AD forests and if name suffix routing is correctly configured.

It is a free feature, and you don't need any paid editions of Azure AD to use it.

It can be enabled via Azure AD Connect.

It protects your on-premises accounts against brute force password attacks in the cloud.

How does it work?

Let's see how it really works. In the following example, a user is trying to access a cloud-based application (integrated with Azure) using his on-premises username and password. This organization is using pass-through authentication.

pt1

1. The user accesses the application URL using his browser.

2. In order to authenticate to the application, the user is directed to the Azure Active Directory sign-in page. The user then types the user name and password and clicks on the sign-in button.

3. Azure AD receives the data and encrypts the password using a public key, which is used to verify the data authenticity. Then it places it in a queue, where it waits until the pass-through agent retrieves it.

4. The on-premises pass-through agent retrieves the data from the Azure AD queue (using an outbound connection).

5. The agent decrypts the password using the private key available to it.

6. The agent validates the user name and password against the on-premises Active Directory. It uses the same mechanism as ADFS.

7. The on-premises AD evaluates the request and provides a response. It can be success, failure, password-expired, or account lockout.

8. The pass-through agent passes the response back to Azure AD.

9. Azure AD evaluates the response and passes it back to the user.

10. If the response was a success, the user is allowed to access the application.

Prerequisites
 
In order to implement this feature, we need the following,
 
1. Domain Admin / Enterprise Admin account to install and configure Azure AD Connect on-premises
2. Global Administrator account for the Azure subscription – in order to create the custom domain, configure AD Connect, etc.
3. On-premises servers running Windows Server 2012 R2 or later to install Azure AD Connect and the pass-through agent.
4. Latest Azure AD Connect https://www.microsoft.com/en-us/download/details.aspx?id=47594 – if you have an older Azure AD Connect version installed, you need to upgrade it to the latest version before configuring this feature.
5. Allow outbound communication to Azure via TCP ports 80 and 443 from the servers which will host Azure AD Connect and the authentication agents. You can find Azure datacenter IP ranges at https://www.microsoft.com/en-us/download/details.aspx?id=41653
 

Configure Azure Active Directory Pass-through Authentication
 
Once we have all the prerequisites ready, we can look into the configuration. If you are running Azure AD Connect for the first time, make sure to use the custom method.
 
pt1-1
 
Then, for the User sign-in option, select Pass-through authentication and continue.
 
pt1-2
 
If it is already running on your servers, first run Azure AD Connect as administrator. Then click on Configure.
 
pt2
 
Then, on the next page, select the Change user sign-in option and click Next.
 
pt3
 
In the next window, type the Azure AD sync account login details and then click Next.
 
pt4
 
In the next window, select Pass-through authentication and click Next.
 
pt5
 
Note – If you have the Azure AD App Proxy Connector installed on the same Azure AD Connect server, you will receive an error saying "Pass-through authentication cannot be configured on this machine because Azure AD Connect agent is already installed". To fix it, uninstall the Azure AD App Proxy Connector and then reconfigure AD Connect. After that, you can reinstall the Azure AD App Proxy Connector.
 
Once it finishes the configuration, click on Configure to complete the process.
 
pt6
 
Once the process is completed, log in to the Azure Portal and go to Azure Active Directory > Azure AD Connect. In there, we can see pass-through authentication is enabled.
 
pt7
 
And if you click on it, it will show the status of the connected agents.
 
pt8
 
At this stage, users from on-premises should be able to sign in to their cloud applications using pass-through authentication.
 
In order to add high availability, we can install the agent on multiple domain-joined servers. It can be downloaded from the pass-through authentication page.
 
pt9
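
To verify the agent on a given server, checking its Windows service is a quick test. A hedged sketch (the service name below is an assumption based on the agent installer; adjust it if your build differs):

Get-Service -Name AzureADConnectAuthenticationAgent   # assumption: agent service name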
 
This completes the implementation of pass-through authentication and hope this post was useful. If you have any questions, feel free to contact me on rebeladm@live.com


Step-By-Step guide to create Azure VM using Azure CLI 2.0

In my previous blog post I explained what Azure CLI is and how we can integrate it with a Windows system. If you didn't read it yet, please look into it before continuing with this post. You can find it at http://www.rebeladmin.com/2017/08/step-step-guide-start-azure-cli-2-0/

In this blog post I am going to demonstrate how we can create Azure VM using Azure CLI. 

1) Log in to Azure CLI using az login (this is explained in my first blog post; if you are using Cloud Shell this is not necessary — all you need to do is launch it in the portal).

clivm1

2) The next step in the process is to create a resource group. Before we create it, we need to know the available locations, so we can create the resource group in the relevant geographical location. To list the locations, run az account list-locations.

clivm2

In my demo, I am going to create a resource group called “rebeladminrg01” in West US. The command for that task is az group create --name rebeladminrg01 --location westus. In the above, --name specifies the resource group name and --location specifies the geographical location.

clivm3

3) The next step is to create a virtual network under my new resource group. For that, I am going to use

az network vnet create --name rebeladminVNet --resource-group rebeladminrg01 --location westus --address-prefix 10.10.0.0/16

In the above command, --name specifies the virtual network name; in the sample, it is rebeladminVNet. --resource-group defines the resource group it belongs to. --location specifies the geographical location it belongs to. --address-prefix specifies the address space associated with the virtual network.

clivm4

4) Now we have a virtual network; the next step is to create the subnet 10.10.20.0/24 under the virtual network rebeladminVNet. In order to do that, I am going to use,

az network vnet subnet create --address-prefix 10.10.20.0/24 --name rebeladminsub1 --resource-group rebeladminrg01 --vnet-name rebeladminVNet

In the above, --address-prefix specifies the address space for the subnet. --name specifies the name of the subnet. --resource-group specifies the resource group the new subnet belongs to. --vnet-name specifies the virtual network it belongs to.

clivm5

5) Let's also create a new public IP address, so we can use it to connect from external networks to the new VM that we are about to create.

az network public-ip create --name rebeladminpubip1 --resource-group rebeladminrg01 --location westus --allocation-method dynamic

In the above, --name specifies the name of the public IP instance. --resource-group defines the resource group name it belongs to. --location specifies the geographical location the resource belongs to. --allocation-method specifies the public IP allocation method; it can be static or dynamic IP assignment. In this demo, I am going to use the dynamic method.

clivm6

6) The next step in the process is to create a NIC so we can attach it to the VM.

az network nic create --resource-group rebeladminrg01 --name rebeladminNic1 --vnet-name rebeladminVNet --subnet rebeladminsub1 --public-ip-address rebeladminpubip1

In the above sample, --resource-group defines the resource group name it belongs to. --vnet-name specifies the virtual network it belongs to. --subnet specifies the subnet it is associated with. --public-ip-address specifies the public IP address this NIC will be associated with.

clivm7

Now we have the components needed for the VM (except storage; I will cover storage in a different post — here I will be using Azure managed disks). We can review the details of the resources we created using az resource list -g rebeladminrg01. This will list the resources under resource group rebeladminrg01.

clivm8

Some data, such as subnet info, will not be displayed by the above command. Those can be viewed using the list command combined with the resource group and parent resource. As an example, to view the subnet info under the virtual network we can use,

az network vnet subnet list --vnet-name rebeladminVNet -g rebeladminrg01

In the above, --vnet-name specifies the virtual network name and -g specifies the resource group name.

clivm9

7) Now it's all ready; let's create the first Windows VM using the resources we created in the previous steps.

az vm create --resource-group rebeladminrg01 --location westus --nics rebeladminNic1 --name REBLEVM101 --image win2016datacenter --admin-username rebeladmin --admin-password Pa$$w0rd123456

In the above, --resource-group specifies the resource group the VM belongs to. --nics specifies the network interface associated with the VM. --name is the VM name. --image specifies the virtual machine image to be used for the VM. You can get the entire image list using az vm image list --output table --all

In the sample, --admin-username defines the admin user name for the new VM and --admin-password defines the VM password.

clivm10

This creates the VM successfully.

clivm11
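
As a follow-up sketch (reusing the resource names from this demo), you can read the VM's dynamic public IP and open the RDP port before connecting:

az vm show -d -g rebeladminrg01 -n REBLEVM101 --query publicIps -o tsv

az vm open-port -g rebeladminrg01 -n REBLEVM101 --port 3389

The first command prints the public IP assigned to the VM; the second adds a network security group rule allowing inbound RDP.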

In this demo, I explained how to create a VM using Azure CLI. Hope this was useful; in my next post on Azure CLI I will cover storage. If you have any questions, feel free to contact me on rebeladm@live.com


Non-Authoritative and Authoritative SYSVOL Restore (DFS Replication)

Healthy SYSVOL replication is key for every Active Directory infrastructure. When there are SYSVOL replication issues, you may notice:

1. Users and systems are not applying their group policy settings properly.

2. New group policies are not applying to certain users and systems.

3. Group policy object counts differ between domain controllers (inside the SYSVOL folders).

4. Logon scripts are not processing correctly.

Also, at the same time, if you look into Event Viewer you may be able to find events such as,

Event 2213 – The DFS Replication service stopped replication on volume C:. This occurs when a DFSR JET database is not shut down cleanly and Auto Recovery is disabled. To resolve this issue, back up the files in the affected replicated folders, and then use the ResumeReplication WMI method to resume replication.

Recovery Steps:

1. Back up the files in all replicated folders on the volume. Failure to do so may result in data loss due to unexpected conflict resolution during the recovery of the replicated folders.

2. To resume the replication for this volume, use the WMI method ResumeReplication of the DfsrVolumeConfig class. For example, from an elevated command prompt, type the following command:

wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig where volumeGuid="xxxxxxxx" call ResumeReplication

Event 5002 – The DFS Replication service encountered an error communicating with partner <FQDN> for replication group Domain System Volume.

Event 5008 – The DFS Replication service failed to communicate with partner <FQDN> for replication group Home-Replication. This error can occur if the host is unreachable, or if the DFS Replication service is not running on the server.

Event 5014 – The DFS Replication service is stopping communication with partner <FQDN> for replication group Domain System Volume due to an error. The service will retry the connection periodically.
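
The volumeGuid value in the event 2213 command is unique per volume; one way to list the GUIDs (using the same WMI namespace) is:

wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig get volumeGuid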

Some of these errors can be fixed with a simple server reboot or by running the commands described in the error (e.g., the event 2213 description), but if the problem keeps recurring, we need to do a non-authoritative or authoritative SYSVOL restore.

Non-Authoritative Restore 

If only one or a few domain controllers (less than 50%) have replication issues at a given time, we can do a non-authoritative restore. In that scenario, the system will replicate the SYSVOL from the PDC.

Authoritative Restore

If more than 50% of domain controllers have SYSVOL replication issues, it is possible that the entire SYSVOL is corrupted. In such a scenario, we need to go for an authoritative restore. In this process, we first need to restore SYSVOL from backup to the PDC and then force all the other domain controllers to update their SYSVOL copy from the copy on the PDC.

SYSVOL can be replicated using FRS too. This is deprecated since Windows Server 2008, but if you migrated from an older Active Directory environment you may still have FRS for SYSVOL replication. It also supports non-authoritative and authoritative restores, but in this demo I am going to talk only about SYSVOL with DFS Replication.

Non-Authoritative DFS Replication 

In order to perform a non-authoritative replication,

1) Back up the existing SYSVOL – this can be done by copying the SYSVOL folder from the domain controller which has DFS replication issues to a secure location.

2) Log in to Domain Controller as Domain Admin/Enterprise Admin

3) Launch the ADSIEDIT.MSC tool and connect to the Default Naming Context.

sys1

4) Browse to DC=domain,DC=local > OU=Domain Controllers > CN=(DC NAME) > CN=DFSR-LocalSettings > Domain System Volume > SYSVOL Subscription

5) Change the value of the attribute msDFSR-Enabled to FALSE.

sys2

6) Force the AD replication using,

repadmin /syncall /AdP

7) Run the following to install the DFS management tools (unless they are already installed),

Add-WindowsFeature RSAT-DFS-Mgmt-Con

8) Run the following command to update the DFSR global state,

dfsrdiag PollAD

9) Search for the event 4114 to confirm SYSVOL replication is disabled. 

Get-EventLog -Log "DFS Replication" | where {$_.eventID -eq 4114} | fl

10) Change the attribute value back to msDFSR-Enabled=TRUE (step 5)

11) Force the AD replication as in step 6

12) Update the DFSR global state by running the command in step 8

13) Search for events 4614 and 4604 to confirm successful non-authoritative synchronization. 

sys3
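
Step 13 can be checked with the same pattern used in step 9; for example:

Get-EventLog -Log "DFS Replication" | where {$_.eventID -eq 4614 -or $_.eventID -eq 4604} | fl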

All these commands should be run on the domain controllers set as non-authoritative.

Authoritative DFS Replication 

In order to initiate an authoritative DFS replication restore,

1) Log in to PDC FSMO role holder as Domain Administrator or Enterprise Administrator

2) Stop the DFS Replication service (it is recommended to do this on all the domain controllers)

3) Launch ADSIEDIT.MSC tool and connect to Default Naming Context

4) Browse to DC=domain,DC=local > OU=Domain Controllers > CN=(DC NAME) > CN=DFSR-LocalSettings > Domain System Volume > SYSVOL Subscription

5) Update the given attribute values as follows,

msDFSR-Enabled=FALSE

msDFSR-options=1

sys4

6) Modify the following attribute on ALL other domain controllers:

msDFSR-Enabled=FALSE

7) Force the AD replication using,

repadmin /syncall /AdP

8) Start the DFS Replication service on the PDC

9) Search for the event 4114 to verify SYSVOL replication is disabled.

10) Change the following value, which was set in step 5,

msDFSR-Enabled=TRUE

11) Force the AD replication using,

repadmin /syncall /AdP

12) Run the following command to update the DFSR global state,

dfsrdiag PollAD

13) Search for the event 4602 and verify the successful SYSVOL replication. 

14) Start the DFS Replication service on all other domain controllers

15) Search for the event 4114 to verify SYSVOL replication is disabled.

16) Change the following value, which was set in step 6. This needs to be done on ALL domain controllers.

msDFSR-Enabled=TRUE

17) Run the following command to update the DFSR global state,

dfsrdiag PollAD

18) Search for events 4614 and 4604 to confirm successful authoritative synchronization. 

Please note you do not need to run an authoritative restore for every DFS replication issue. It should be the last option.

Hope this was useful and if you have any questions feel free to contact me on rebeladm@live.com 


Step-by-Step Guide to Start with Azure CLI 2.0

There are many ways to create, manage, and remove resources from an Azure subscription. Users who prefer a GUI have the Azure classic portal and the Azure Resource Manager portal. For PowerShell lovers, Azure has the Azure PowerShell module. Apart from that, there are other methods such as Terraform (I have already written articles about it; if you want to know more, search for “terraform” on the blog) which simplify Azure resource management. Azure CLI is another command-line tool introduced by Microsoft which can be used to manage Azure resources. It can be used from multiple platforms such as Linux, Mac OS, and Windows. This blog post explains how we can configure a Windows system to use Azure CLI.

There are two ways in which we can connect to Azure CLI.

Using Azure Portal

Azure also allows you to use a web-based version of Azure CLI with the name “Cloud Shell”. This can easily be opened through the browser. In order to access it,

1) Log in to Azure Portal

2) Click on the Cloud Shell icon in the top right-hand corner

cli1

3) When you do this for the first time, it will ask to create an Azure file share. You can select the relevant subscription and click on “Create Storage”

cli2

4) Once the storage is created, it will load up shell access through the browser.

cli3

Using Windows Computer

We can also use Azure CLI from a local computer. As I said, it is not only supported on Windows systems; it is supported on Linux and Mac OS too. In this demo, I am going to demonstrate how to configure it on a Windows system.

Azure CLI uses Python, so our configuration will be based on a Python installation.

1) Log in to computer as an administrator

2) Go to https://www.python.org/downloads/ and download python

cli4

3) Once the file is downloaded, run it as administrator to install. During the installation, make sure to select the “Add Python 3.6 to PATH” option. This will allow you to use Python commands without navigating to the installation location.

cli5

4) Once installation is completed, open the Windows command line and type python --version. This will confirm the Python installation. (It is recommended to open the command line as administrator; otherwise it will say the PATH records are not added, as we ran the installation as administrator.)

cli6

5) The next step is to install the Azure CLI libraries. In order to do that, run pip install --user azure-cli

cli7

6) Once it is completed, move to C:\Users\[Admin User]\AppData\Roaming\Python\Python36\Scripts and run the command az. This will verify the Azure CLI integration. If it needs to run from anywhere, add this folder to the PATH (a sketch follows below).

cli8
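
As a sketch (the folder below matches the per-user Python 3.6 install location used above; note that setx rewrites the user PATH and truncates very long values, so review your PATH before running it):

setx PATH "%PATH%;C:\Users\[Admin User]\AppData\Roaming\Python\Python36\Scripts"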

7) Now let's try to log in to Azure using Azure CLI. In order to do that, we can use az login -u azureusername -p password. The problem with this method is that the password needs to be typed in as clear text. Instead of that, we can use a more secure browser-based login. To do that, type az login in the command line.

It then gives a link and a code to use for authentication.

cli9

8) Once the link is open in a browser, it asks for the verification code. Once it is entered, click on Continue.

cli10

On the next page, it verifies the Azure login and then confirms the connection.

cli11

When we go back to Azure CLI, we can see it has successfully logged in and is showing the subscription data.

cli12

This confirms a successful connection to Azure using Azure CLI. This is the end of this post; in the next post let's see how we can add, manage, and remove Azure resources via Azure CLI. Hope this was helpful, and if you have any questions feel free to contact me on rebeladm@live.com


Setting up Azure Virtual Machines with Terraform

In my previous article about Terraform, I explained what Terraform is and what it can do. I also explained how to set it up and how we can use it with Azure to simplify infrastructure configuration. If you didn't read it before, you can view it using this link.

In this post, we are going to look further in to Azure infrastructure setup using terraform.

Before that, let's look into a sample configuration of an Azure resource and see how the syntax is used.

resource "azurerm_resource_group" "test" {

  name     = "acctestrg"

  location = "West US"

}

 resource "azurerm_virtual_network" "test" {

  name                = "acctvn"

  address_space       = ["10.0.0.0/16"]

  location            = "West US"

  resource_group_name = "${azurerm_resource_group.test.name}"

}

The above code creates an Azure resource group and an Azure virtual network. In the code, azurerm_resource_group and azurerm_virtual_network define the Azure resource types. The text test defines the name for that resource instance. This is not the Azure resource group or virtual network name; it is the instance name. So, if you have another resource group, it can be test2. The actual resource names are defined using the name attribute. So, in the above code, the actual resource name for the resource group is acctestrg, and for the virtual network it is acctvn.

In the above example, the new virtual network is placed under the acctestrg resource group. In the code, it is defined using,

resource_group_name = "${azurerm_resource_group.test.name}"

In there, azurerm_resource_group.test refers to the related resource group instance; in our example, it is test. Then .name calls the value of the name attribute of that particular resource group.

In the plan stage, Terraform creates the execution plan. It does not process the code top to bottom; it evaluates the code and then builds the plan logically, so it no longer considers the resource order. Let's try it with an example,

resource "azurerm_virtual_network" "test" {

  name                = "acctvn"

  address_space       = ["10.0.0.0/16"]

  location            = "West US"

  resource_group_name = "${azurerm_resource_group.test.name}"

}

 resource "azurerm_resource_group" "test2" {

  name     = "acctestrg2"

  location = "West US"

}

 resource "azurerm_virtual_network" "test2" {

  name                = "acctvn2"

  address_space       = ["11.0.0.0/16"]

  location            = "West US"

  resource_group_name = "${azurerm_resource_group.test2.name}"

}

 resource "azurerm_resource_group" "test" {

  name     = "acctestrg"

  location = "West US"

}

In the above example, I am creating two resource groups and two virtual networks. If you look into the highlighted sections, I placed the code related to a virtual network before creating its resource group. But when I run terraform plan, it creates the execution plan in the correct order.

tf1 

And once it is executed, it creates the expected resources.

tf2
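
For reference, the plan and deployment above use the standard Terraform workflow; these commands are run from the folder that holds the .tf configuration files:

terraform init

terraform plan

terraform apply

terraform init downloads the required provider plugins (azurerm in this case), terraform plan builds and displays the execution plan, and terraform apply executes that plan against the subscription.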

As the next step of the demo, let's see how we can create virtual machines in Azure using Terraform.

resource "azurerm_virtual_machine" "testvm" {

  name                  = "acctvm"

  location              = "West US"

  resource_group_name   = "${azurerm_resource_group.test.name}"

  network_interface_ids = ["${azurerm_network_interface.test.id}"]

  vm_size               = "Standard_A0"

The above code is an example to create a VM in Azure. In the code sample, azurerm_virtual_machine defines the resource type. testvm is the resource instance name. acctvm is the name of the virtual machine. According to the code, the resource will be deployed in the West US region. resource_group_name defines the resource group it belongs to. network_interface_ids defines the network interface IDs for the VM. vm_size defines the Azure VM size. The size list for a region can be listed using the following Azure CLI command.

az vm list-sizes --location westus

This will list all the available VM sizes in the West US region.

tf3

An Azure VM also needs other components, such as a virtual network, storage, an operating system, and so on. Let's see how we can add these to the configuration.

Earlier in the post, I shared samples for creating a resource group and a virtual network. The next step is to add a subnet under the virtual network.

resource "azurerm_subnet" "sub1" {

  name                 = "acctsub1"

  resource_group_name  = "${azurerm_resource_group.test.name}"

  virtual_network_name = "${azurerm_virtual_network.test.name}"

  address_prefix       = "10.0.2.0/24"

}

In the above, I am creating the subnet 10.0.2.0/24 under the virtual network and resource group I already have. In the code, azurerm_subnet defines the resource type. sub1 is the instance name and acctsub1 is the subnet name. resource_group_name defines which resource group it belongs to. virtual_network_name defines which Azure virtual network it is associated with. address_prefix specifies the subnet value.

Now we have a subnet associated with the network. We also need a public IP address in order to connect to the VM from the internet.

resource "azurerm_public_ip" "pub1" {

  name                         = "pub1"

  location                     = "West US"

  resource_group_name          = "${azurerm_resource_group.test.name}"

  public_ip_address_allocation = "dynamic"

}

According to the above, I am creating a public IP instance called pub1 under the same resource group. Its IP allocation is set to dynamic; if needed, it can be static as well.

The next step is to create a network interface for the VM.

resource "azurerm_network_interface" "ni1" {

  name                = "acctni1"

  location            = "West US"

  resource_group_name = "${azurerm_resource_group.test.name}"

ip_configuration {

    name                          = "lan1"

    subnet_id                     = "${azurerm_subnet.sub1.id}"

    private_ip_address_allocation = "dynamic"

    public_ip_address_id          = "${azurerm_public_ip.pub1.id}"

  }

}

In the above, azurerm_network_interface is the resource type for the network interface. The interface name we are creating is acctni1. The second part of the code, which starts with ip_configuration, defines the IP configuration for the network interface. subnet_id defines the subnet it belongs to. private_ip_address_allocation defines the IP allocation method; it can be dynamic or static. public_ip_address_id associates the interface with the public IP created in the previous step. If this is not done, you will not be able to connect to the VM remotely once it is deployed.

The next thing we need for the VM is storage. Let's start with creating a storage account.

resource "azurerm_storage_account" "asa1" {

  name                = "accsa"

  resource_group_name = "${azurerm_resource_group.test.name}"

  location            = "westus"

  account_type        = "Standard_LRS"

 }

azurerm_storage_account is the resource type and accsa is the name for the account. account_type defines the storage account type; it can be Standard_LRS, Standard_GRS, Standard_RAGRS, Standard_ZRS, or Premium_LRS. More info about these account types can be found at https://docs.microsoft.com/en-us/azure/storage/storage-introduction .

As the next step, we can create a new storage container under the storage account.

resource "azurerm_storage_container" "con1" {

  name                  = "vhds"

  resource_group_name   = "${azurerm_resource_group.test.name}"

  storage_account_name  = "${azurerm_storage_account.asa1.name}"

  container_access_type = "private"

}

In the above, azurerm_storage_container is the resource type and its name is vhds. resource_group_name defines the resource group it belongs to and storage_account_name defines the storage account it belongs to. container_access_type can be private, blob, or container. More info about these container types can be found at https://docs.microsoft.com/en-us/azure/storage/storage-introduction

The following image shows what it looks like when using the GUI option.

tf4

By now we have most of the resources ready for the VM. The next step is to define the image for the VM.

  storage_image_reference {

    publisher = "MicrosoftWindowsServer"

    offer     = "WindowsServer"

    sku       = "2016-Datacenter"

    version   = "latest"

  }

In the above, I am using Windows Server 2016 Datacenter as the image for the VM. The publisher, offer, sku, and version info need to be provided in order to select the correct image. For Windows servers, you can find this info at https://docs.microsoft.com/en-us/azure/virtual-machines/windows/cli-ps-findimage. For Linux, this info is available at https://docs.microsoft.com/en-us/azure/virtual-machines/linux/cli-ps-findimage

The next step is to add hard disks,

storage_os_disk {

    name          = "myosdisk1"

    vhd_uri       = "${azurerm_storage_account.asa1.primary_blob_endpoint}${azurerm_storage_container.con1.name}/myosdisk1.vhd"

    caching       = "ReadWrite"

    create_option = "FromImage"

  }

  storage_data_disk {

    name          = "datadisk0"

    vhd_uri       = "${azurerm_storage_account.asa1.primary_blob_endpoint}${azurerm_storage_container.con1.name}/datadisk0.vhd"

    disk_size_gb  = "60"

    create_option = "Empty"

    lun           = 0

  }

The above creates two disks: one for the OS and one for data. vhd_uri defines the path for the VHD, which is saved under the storage account created earlier.

Last but not least, we need to define the OS configuration data, such as the hostname and administrator account details.

  os_profile {

    computer_name  = "rebelpro1"

    admin_username = "rebeladmin"

    admin_password = "Password1234!"

  }

In the above, computer_name specifies the hostname of the VM. admin_username specifies the local administrator name and admin_password specifies the local administrator password.

Now we have all the components ready to deploy a new VM. Some of the components only need to be created once: for example, virtual networks, subnets, and storage accounts do not need to be created for each VM unless there is a valid requirement. Let's put all these together into one script so it makes more sense.

# Configure the Microsoft Azure Provider

provider "azurerm" {

  subscription_id = "d7xxxxxxxxxxxxxxxxxxxxxx"

  client_id       = "d9xxxxxxxxxxxxxxxxxxxxxx"

  client_secret   = "f1xxxxxxxxxxxxxxxxxxxxxx "

  tenant_id       = "05xxxxxxxxxxxxxxxxxxxxxx "

}

resource "azurerm_resource_group" "rg1" {

  name     = "acctestrg"

  location = "West US"

}

resource "azurerm_virtual_network" "vn1" {

  name                = "vn1"

  address_space       = ["10.0.0.0/16"]

  location            = "West US"

  resource_group_name = "${azurerm_resource_group.rg1.name}"

}

resource "azurerm_public_ip" "pub1" {

  name                         = "pub1"

  location                     = "West US"

  resource_group_name          = "${azurerm_resource_group.rg1.name}"

  public_ip_address_allocation = "dynamic"

}

resource "azurerm_subnet" "sub1" {

  name                 = "sub1"

  resource_group_name  = "${azurerm_resource_group.rg1.name}"

  virtual_network_name = "${azurerm_virtual_network.vn1.name}"

  address_prefix       = "10.0.2.0/24"

}

resource "azurerm_network_interface" "ni1" {

  name                = "ni1"

  location            = "West US"

  resource_group_name = "${azurerm_resource_group.rg1.name}"

 

  ip_configuration {

    name                          = "config1"

    subnet_id                     = "${azurerm_subnet.sub1.id}"

    private_ip_address_allocation = "dynamic"

    public_ip_address_id  = "${azurerm_public_ip.pub1.id}"

  }

}

 resource "azurerm_storage_account" "storevm123" {

  name                = "storevm123"

  resource_group_name = "${azurerm_resource_group.rg1.name}"

  location            = "westus"

  account_type        = "Standard_LRS"

 

  tags {

    environment = "demo"

  }

}

 resource "azurerm_storage_container" "cont1" {

  name                  = "vhds"

  resource_group_name   = "${azurerm_resource_group.rg1.name}"

  storage_account_name  = "${azurerm_storage_account.storevm123.name}"

  container_access_type = "private"

}

 resource "azurerm_virtual_machine" "vm1" {

  name                  = "vm1"

  location              = "West US"

  resource_group_name   = "${azurerm_resource_group.rg1.name}"

  network_interface_ids = ["${azurerm_network_interface.ni1.id}"]

  vm_size               = "Standard_DS2_v2"

 

   storage_image_reference {

    publisher = "MicrosoftWindowsServer"

    offer     = "WindowsServer"

    sku       = "2016-Datacenter"

    version   = "latest"

  }

   storage_os_disk {

    name          = "osdisk1"

    vhd_uri       = "${azurerm_storage_account.storevm123.primary_blob_endpoint}${azurerm_storage_container.cont1.name}/osdisk1.vhd"

    caching       = "ReadWrite"

    create_option = "FromImage"

  }

   storage_data_disk {

    name          = "datadisk1"

    vhd_uri       = "${azurerm_storage_account.storevm123.primary_blob_endpoint}${azurerm_storage_container.cont1.name}/datadisk1.vhd"

    disk_size_gb  = "60"

    create_option = "Empty"

    lun           = 0

  }

     os_profile {

    computer_name  = "rebelpro1"

    admin_username = "rebeladmin"

    admin_password = "Password1234!"

  }

   tags {

    environment = "demo"

  }

}

Let's verify the resources using the Azure portal.

As we can see, it has created all the expected resources under the resource group acctestrg.

tf5

Also, we can see it has created the VM as expected.

tf6

In this post, we went through the process of creating an Azure VM and related components using Terraform. Hope this was useful, and if you have any questions feel free to contact me on rebeladm@live.com
