Introducing Change Tracking and Inventory features for Azure VMs

In any infrastructure, users rely on different applications, services, and file shares to get their work done. Each of these components can change periodically. To maintain integrity, provide faster IT support, and identify risks, it is important to track the changes made to these components. Maintaining a software-level inventory helps engineers verify that systems are running the expected software and services. Change tracking and inventory can be handled using software or workflows.

Operations Management Suite (OMS) provides a solution called “Change Tracking” to capture changes in cloud-only, hybrid, or on-premises environments. It can track changes to files, registry entries, software, Windows services, and Linux daemons.

ch1

Microsoft recently released a “Change Tracking and Inventory” solution which can be implemented at the Azure VM level. However, this is not a replacement for the OMS solution. OMS can track changes in any environment, while this new solution is only for cloud servers.

There are a few things I like about this solution:

1. Easy to implement – this feature can be enabled at the VM level with a few clicks.

2. No agents – it doesn’t need agents to track changes or maintain the inventory.

3. No need to log in to the VM – these features do not require any user interaction or system credentials. Engineers can view the visualized data without logging in to the VM.

4. No scheduled scans – it tracks changes automatically. You do not need to create scan jobs or update schedules manually. (In the background, the jobs are controlled by Azure Automation.)

Please note this feature is still in preview, so it is not recommended for production environments. But it is not too early to try its capabilities.

Let’s go ahead and see how we can enable and use this feature. 

1. Log in to the Azure portal (https://portal.azure.com) as a Global Administrator.

2. Go to Virtual Machines and click on the VM you would like to enable this feature on.

3. In the VM panel, under the OPERATIONS section, click on Inventory (preview).

ch2

4. It will load a detail window; click on the purple bar (as in the image below) to enable the feature.

ch3

5. It will then load a new window with information such as the Log Analytics workspace ID and the Automation account ID. Click Enable to proceed.

ch4

6. Once the feature is enabled, we can see the following window. It will take a while to populate the data.

ch5

7. It also enables the Change tracking (preview) feature. 

ch6

8. After some time, you will be able to see data under the Inventory (preview) and Change tracking (preview) windows. Let’s start with Inventory (preview). When I load the window, it first lists the software installed on the system.

ch7

9. This includes information about Windows updates and all other third-party applications. For software, it displays the version number and publisher info. As an example, I installed Acrobat Reader and I can see its info as follows.

ch8

10. Under the Files tab, we can see file details. By default, it doesn’t scan for any files. A system contains thousands of files, and there is no point adding all of them to the inventory. Instead, users can define which folders and files to monitor and add to the inventory. If it’s a folder, it will automatically list all the files under that particular folder. To do that, click on the Edit Settings option.

ch9

11. In the next window, click on the Windows Files tab.

ch10

12. Then click on Add in the next window.

ch11

13. In the next window, type a unique name for the file or folder under Item Name. Then type the folder or file path under Enter Path. Once everything is done, click Save.

ch12

14. Once it’s added, it will show under the Windows Files tab. If you need to disable inventory and change tracking for a file or folder, all you need to do is click on it and click the False button under Enabled.

ch14

15. On a Linux system, file and folder paths can be added using the Linux Files tab.

ch15

16. When we add files here, change tracking is automatically enabled for those files and folders. So, you do not need to add them again under the change tracking feature.

17. Under the Registry Files tab, we can enable registry file tracking and inventory as well. It has pre-defined registry paths, but at the moment I cannot see a way to add a custom path. To enable the feature, click on a registry entry, then click True under Enabled. At the end, click Save.

ch16

18. This also enables tracking for Windows registry files under the change tracking feature.

19. The Windows services tab lists all the services in the system, along with their current status and startup status.

ch17

20. This is just for one VM; if you need to view inventory data for multiple VMs, click on Manage multiple computers.

ch18

21. The new window lists the machines which have this feature enabled.

ch19

22. If you need to add new VMs to the list, it can be done using the Add Azure VM option. It will then allow you to enable the inventory feature.

ch20

23. There is an option to add non-Azure virtual machines too, but that will lead you to OMS.

24. All the other windows are familiar; the only change here is that you can see whether the same event is repeated on different computers.

ch21

25. All the events here can also be viewed using Log Analytics. To access it, click on the Log Analytics option in the main window.

ch22

26. It will then load all the events, and we can find the relevant info using queries or just by browsing through the filters.

ch23
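
If you prefer PowerShell over the portal for this, the same data can be queried with the AzureRM.OperationalInsights module. The following is only a minimal sketch: the resource group and workspace names are placeholders for the values shown in the enable blade, and it assumes the workspace still uses the legacy search syntax of this preview period.

# Log in first with Login-AzureRmAccount; requires the AzureRM.OperationalInsights module
# Placeholder names - replace with your own workspace details
$rg = "rebeladminrg01"
$workspaceName = "myomsworkspace"

# ConfigurationChange records hold the tracked change and inventory data
Get-AzureRmOperationalInsightsSearchResults -ResourceGroupName $rg -WorkspaceName $workspaceName -Query "Type=ConfigurationChange ConfigChangeType=Software" -Top 20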

27. Now we are done with the inventory feature; let’s move on to the Change tracking (preview) feature. To access it, just click on the Change tracking (preview) option under OPERATIONS.

ch24

28. As soon as it loads, it shows the changes for the last 24 hours, as a graph as well as a list.

ch25

29. Using the Time Range dropdown, we can define the time range for the data.

ch26

30. Using the Change Types dropdown, we can select which types of data to view.

ch27

31. The graph itself is really useful for narrowing down a change quickly. All you need to do is move the mouse over the timeline and select the area you would like to dig into by dragging the cursor. It then simply lists the results for that particular time.

ch28

32. Using the Manage multiple computers option, we can view changes for multiple computers in the same window. It works the same way as in the inventory feature.

ch29

33. The Edit Settings option is also the same as in the inventory feature, so I am not going to cover it here.

34. In the main window, there is an option to manage the connection with the Azure Activity Log, where you can enable the integration. You can find more info about the Activity Log at https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-overview-activity-logs

ch30

This marks the end of this blog post. If you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.


Update Management for Azure VM

Keeping your operating systems up to date is critical, as it is the first step towards protecting your systems from emerging threats. It also helps to improve efficiency and user experience. The simplest way to update your Windows operating systems is to use the “Windows Update” feature that comes with every operating system. But this is not enough for enterprises, as it is important to manage Windows updates in a controlled manner. Microsoft has tools such as WSUS and SCCM to manage Windows updates in an infrastructure. When it comes to a hybrid or cloud-only environment, it is important to update your virtual machines running in the cloud as well. Microsoft Operations Management Suite (OMS)’s “Update Management” is a great way to manage updates in any environment (on-premises, cloud-only, or hybrid). It detects and reports missing updates in your environment, and it also allows you to deploy them using Azure Automation.

update1

However, if you are running an Azure environment, Microsoft now has another solution which helps to manage updates for Azure VMs. This is NOT a replacement for OMS Update Management, even though it works similarly. It helps to manage updates at the individual VM level or as a group. This “Update Management” feature is still in preview, but it is not too early to try its capabilities.

There are a few things I like about this feature.

1. No agents or additional configuration – this feature can be enabled on a VM with a few clicks, and it doesn’t require any additional configuration inside the VM. It doesn’t need any agent installation or other configuration such as firewall changes. It’s simple and efficient.

2. No need to log in to the VM – this is ideal for MSPs as well. To manage updates, you do not need to log in to the VM at all. No need to provide credentials to install updates either.

3. Reporting – it lists missing updates and categorizes them based on type, and it lists info about failed deployments. Everything is logged and visualized in an easy-to-understand way.

Let’s see how we can get this setup.

1. To enable this feature, you need to log in to Azure as a Global Administrator.

2. Then click on Virtual Machines to list the VMs.

update2

3. Then click on the VM of your choice.

4. From the left-hand panel, click on Update Management (Preview).

update3

5. In the next window, click on the purple bar (as in the following image) to enable the feature.

update4

6. It will then load the page to enable the feature. As we can see, it also creates a Log Analytics workspace as well as an Automation account. Click Enable to proceed.

update5

7. Once it is enabled, it will take 15-20 minutes to gather information about updates. Once it is finished, we can see new data under the Update Management (Preview) panel.

update6

8. In the Missing updates section, it shows the update name, classification, published date, and a link to see more details about each update.

update7

9. If we click on one of the missing updates, it brings us to the Log Search window, where we can see more details about the update.

update8
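
The missing-update data shown in Log Search can be pulled from PowerShell too. As a rough sketch (the workspace names are placeholders, and the legacy search syntax of this preview period is assumed):

# Requires AzureRM.OperationalInsights; Update records describe the compliance state reported by each computer
Get-AzureRmOperationalInsightsSearchResults -ResourceGroupName "rebeladminrg01" -WorkspaceName "myomsworkspace" -Query "Type=Update UpdateState=Needed" -Top 20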

10. In the Update Management (Preview) panel, let’s click the Manage Multiple Computers option.

update9

11. In that window, we can see all the computers which have this feature enabled and their compliance status. 

update10

12. By clicking on each computer in the list, we can see more details about it in the Log Search window.

update11

13. We can also add Azure VMs to Update Management. To do that, click the Add Azure VM option in the Manage Multiple Computers panel.

update12

14. It will list the VMs in the account; click on the relevant VM you would like to add. Then we can enable the feature on it.

update13

15. Now we have the list of missing updates. The next step is to schedule an update deployment. To do that, go back to the Update Management (Preview) panel and click the Schedule update deployments option.

update14

16. In the new window, the first thing is to define a name for the job. Under Update classification, we can select which updates to consider for the schedule.

update15

17. If we need to exclude any updates, we can do that using the Updates to exclude option, where we define the relevant KB numbers.

update16

18. Under the schedule settings, we can define the time to apply the updates. It can be either a one-time or a recurring job.

update17

19. Using the maintenance window option, we can set how long the VM should stay in maintenance mode.

update18

20. Once it’s done, click Create to create the schedule.

update19

21. If you use the same Schedule update deployments option under the Manage Multiple Computers window, we can create a schedule for multiple computers.

update20

22. Once the schedule is created, we can see it under the Scheduled update deployments tab.

update21

23. This completes the configuration, and once the schedule runs, we can verify the result using the Update Management (Preview) panel.

This marks the end of the blog post; hope it was useful. If you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.


Project “Honolulu” for a better Windows Server management experience

If you worked with Windows Server 2003 or earlier, I am sure you know how painful it was to install and manage roles. We had to go through “Add or Remove Windows Components” and many MMCs. Also, it was recommended to run the “Security Configuration Wizard” before installing roles, as security settings did not come by default with role installation. To address these difficulties, Microsoft introduced “Server Manager” with Windows Server 2008. It replaced many of the wizards Server 2003 had and made role and feature management easy. It was developed further and has been available with every server operating system released after Windows Server 2008.

Project “Honolulu” from Microsoft takes server management to the next level. It is a simple but powerful web-based interface which can be installed on Windows 10 or Windows Server 2016. It can be used to configure and troubleshoot servers locally or remotely.

Why is it good?

1. A single, simplified web console for server management – instead of using multiple MMCs to manage resources, Honolulu gives a simple web-based interface to do it. It also allows you to go from a simple role install to advanced troubleshooting using the same console.

2. Better interconnection – with Honolulu we can connect Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, and Hyper-V 2016/2012 R2/2012 into one console. It also allows you to manage failover clusters and hyper-converged environments from the same console. Microsoft is also working with partners to refine its SDK and extension model.

3. No agents or additional configuration – connecting servers to the console requires no agents or any other additional configuration. The only requirement is connectivity between the gateway server and the member servers.

4. Familiar tools packaged together – the web-based console allows you to access familiar tools from one place. For example, you can access Server Manager, Registry Editor, and firewall tools via the console. Before, we had to use different methods to open those MMCs. This also makes it easy to adopt without additional training.

5. Flexible for integration – the design itself welcomes third parties to create modules and integrate them with Honolulu, so those applications or services can also be managed via the same console.

6. Can be used to manage resources via the internet – the Honolulu web console (web server components) can be published to remote networks, allowing engineers to manage servers without traditional management methods such as VPN or RDP.

Will it replace other management tools? 

At the moment, System Center and Operations Management Suite (OMS) provide advanced infrastructure management capabilities. Project Honolulu will be complementary to those existing tools, but it is not meant to replace them in any way.

Does it have anything to do with Azure?

No, it does not. It can be used with Azure VMs too, and the console doesn’t even need internet access to operate.

When will it be released?

At the moment, it is in technical preview. It will be released in 2018. However, this should not prevent you from testing it and providing feedback to improve it further.

Does it support all operating systems?

It can only be installed on Windows Server 2016 or Windows 10.

Version                        Install Honolulu   Managed node   Managed HCI cluster
Windows Server 2016            Yes                Yes            Future
Windows Server version 1709    Yes                Yes            Yes, under insider program
Windows 10                     Yes                N/A            N/A
Windows Server 2012 R2         No                 Yes            N/A
Windows Server 2012            No                 Yes            N/A

However, if you need to manage Windows Server 2012 R2 and 2012, you need to install Windows Management Framework (WMF) version 5.0 or higher first, as the required PowerShell features are not available in earlier versions.
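
A quick way to confirm this on a 2012/2012 R2 node is to check the PowerShell version; WMF 5.0 ships with PowerShell 5.0, so a major version below 5 means the WMF update is needed first.

# Run on the server you plan to manage; Major should be 5 or higher
$PSVersionTable.PSVersion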

Architecture 
 
Honolulu has two components.
 
Gateway – the gateway manages connected servers via Remote PowerShell and WMI over WinRM. 
Web Server – this is the UI for Honolulu, which users access via HTTPS requests. It can also be published to remote networks to allow users to connect via a web browser.
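
Since the gateway talks to managed nodes over WinRM, a quick connectivity pre-check from the gateway server can be done with the built-in Test-WSMan cmdlet (the server name below is a placeholder):

# Verifies that the WinRM service on the target node is reachable and responding
Test-WSMan -ComputerName REBELTEST02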
 
Feedback is important 
 
The project is still at an early stage, and we can all contribute to making it perfect. Feel free to submit your feedback via https://aka.ms/HonoluluFeedback
 
Let’s take a look!
 
Now it’s time to install and play with it. In my demo environment, I have two Windows Server 2016 servers under the domain therebeladmin.com. I am going to install it on one server and get both servers connected to the console.
 
1. Log in to the server as administrator.
2. Download the Project Honolulu installer (.exe) from Microsoft.
3. Double-click the .exe to install it. In the initial window, accept the license terms and click Next to continue.
4. In the next window, tick the box “Allow project “Honolulu” to modify this machine’s trusted hosts settings”. In the same window, you can select the “Create a desktop shortcut to launch project “Honolulu”” option to create a desktop shortcut.
 
hon1
 
5. In the next window, we can define a port for the management site. For demo purposes, we can use a self-signed certificate to allow HTTPS requests. Once the selections are made, click Install to proceed.
 
hon2
 
6. Once the installation is completed, double-click the shortcut to launch the console.
 
Note: The console is recommended for use with the Edge or Chrome browsers. If you are using IE, it will give an error asking you to use a recommended browser.
 
7. In the initial window, it will launch a tour that explains Project Honolulu. You can either follow it or skip it.
8. The home page lists the servers added to the console. By default, it includes the server Honolulu is installed on.
 
hon3
 
9. To add a new server to the list, click on the Add button.
 
hon4
 
10. It then shows the list of connection types available. In this demo, I am going to add a single server, so I will choose “Add Server Connection”.
 
hon5
 
11. In the next window, it asks for the name of the server. Provide the server name and click Submit.
 
hon6
 
You need an administrator account to add a server to the console. In my demo, I am using a domain admin account to install and configure Honolulu. If you are in a workgroup environment, it will give you the option to define the admin account user name and credentials.
 
hon7
 
We can also import multiple servers using a .txt file.
 
hon8
 
Once it is added, it will show up on the home page.
 
hon9
 
12. To manage a server, click on the server name on the home page. It will bring up the server overview page.
 
hon10
 
This page gives real-time information about server performance and provides data about server resources. Not only that, it gives options to restart or shut down the server, access settings, and edit the computer name.
 
hon11
 
13. Using the Devices tab, we can view details about the server hardware resources.
 
hon12
 
14. The Certificates tab allows you to view all the certificates on the server. More importantly, it shows the certificates for the local machine and the current user in the same window. With the traditional method, we had to open these using separate MMCs.
 
hon13
 
15. The Events tab shows all the events generated on the server.
 
hon14
 
16. The Files tab works similarly to File Explorer. You can create folders, rename folders, and upload files using it. Unfortunately, you can’t change folder permissions at the moment.
 
hon15
 
17. The Firewall tab is one of my favorites. Now it is easy to see what each rule does, and it also allows you to modify rules if needed.
 
hon16
 
hon17
 
18. The Registry tab is also very useful. Using the same console, we can now add or modify registry entries.
 
hon18
 
19. The Roles and Features tab allows you to install or remove roles and features.
 
hon19
 
hon20
 
20. The Services tab works similarly to the traditional Services MMC. It can be used to check the status of services, start or stop services, or change the startup mode.
 
hon21
 
21. The Storage tab helps to manage the storage allocated to the server.
 
hon22
 
In this blog post, I tried to go through each option, but I encourage you to go and check its capabilities in detail. It is easy to implement yet powerful. This marks the end of the blog post; hope it was useful. If you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.

Step-by-Step guide to manage Azure Storage using Azure CLI 2.0 – Part 02

This is the last part of my blog post series covering Azure CLI 2.0 functions. If you didn’t read part 01 yet, please read it before starting on this one. You can find it at http://www.rebeladmin.com/2017/10/step-step-guide-manage-azure-storage-using-azure-cli-2-0-part-01/

In my demo setup, I have two VMs running. One was created using Azure Managed Disks. In part 01, I explained how to add an additional disk; the VM currently has a 100 GB additional disk attached.

Expand Disks

Let’s see how we can expand disks using Azure CLI. Before doing this, make sure you are logged in to Azure CLI using az login.

Let’s start with expanding Azure managed disks. First, we can verify the VM’s storage configuration using,

az vm show --resource-group rebeladminrg01 --name REBLEVM101

clistore1

There I have two disks: one for the OS called osdisk_6469626e28 and a data disk called DataDisk01.

We can’t increase a disk on a running VM, not even a data disk. So first we need to deallocate the VM. We can do that using,

az vm deallocate --resource-group rebeladminrg01 --name REBLEVM101

In the above command, --resource-group defines the resource group the VM belongs to, and --name defines the VM name.

clistore2

Once it is completed, we can increase the disk sizes.

I need to expand the OS disk size to 150 GB. I can do it using,

az disk update --resource-group rebeladminrg01 --name osdisk_6469626e28 --size-gb 150

clistore3

I would also like to expand the data disk to 150 GB. I can do it using,

az disk update --resource-group rebeladminrg01 --name DataDisk01 --size-gb 150

clistore4

In the above commands, --resource-group defines the resource group the disks belong to, and --name defines the disk name.

After finishing, we can start the VM using,

az vm start --resource-group rebeladminrg01 --name REBLEVM101

Once the VM is up, we can go in and expand the disk at the OS level.

clistore5
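
For a Windows VM, the OS-level expansion can also be scripted with the Storage module instead of using Disk Management. A minimal sketch for extending the C: drive to use the newly allocated space:

# Run inside the VM after the Azure-side resize
$size = Get-PartitionSupportedSize -DriveLetter C
Resize-Partition -DriveLetter C -Size $size.SizeMax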

If you are looking to expand unmanaged disks, it can be done via the portal or Azure CLI 1.0. More info can be found at https://docs.microsoft.com/en-us/azure/virtual-machines/linux/expand-disks-nodejs

The document itself is for a Linux VM, but the expand part works the same way.

Snapshots

We can also take snapshots of a disk as a quick recovery option. A snapshot is a full copy of a disk at the time it is taken. It can be kept as a backup or attached to another machine for troubleshooting.

In my demo, I am going to take a snapshot of an Azure managed OS disk. Before that, I need to find the disk ID. It can be done using,

az vm show --resource-group rebeladminrg01 --name REBLEVM101

clistore6

Then we can take the snapshot using,

az snapshot create -g rebeladminrg01 --source "/subscriptions/xxxxx/resourceGroups/REBELADMINRG01/providers/Microsoft.Compute/disks/osdisk_6469626e28" --name vm101osDisk-backup

clistore7

In the above, --source defines the disk ID and --name defines the snapshot name.
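
If the snapshot is ever needed, a new managed disk can be created from it with az disk create. This is only a sketch; the new disk name is chosen for illustration:

az disk create --resource-group rebeladminrg01 --name osdisk-restored --source vm101osDisk-backup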

If it is an unmanaged disk, snapshots work in a different way. You can read more about it at https://docs.microsoft.com/en-us/azure/virtual-machines/linux/incremental-snapshots

Convert to Managed Disks

If required, we can convert a VM with unmanaged disks to managed disks. To do that, first we need to deallocate the VM.

az vm deallocate --resource-group rebeladminrg01 --name REBELVM102

Then we can start the conversion process using,

az vm convert --resource-group rebeladminrg01 --name REBELVM102

Once the process is finished, it will start the VM.

clistore8

clistore9

Manage blobs

We can also create, manage, and delete blobs using Azure CLI.

To create a container, we can simply use,

az storage container create --name datastorage01 --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=xxxxxxx/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

In the above, --name defines the container name, AccountName specifies the storage account name, and AccountKey specifies the auth key for the storage account.

By default, container data is set to private. If needed, it can be set to public read access for blobs (blob) or public read and list access for the whole container (container). This can be defined using --public-access.

Once the container is created, we can upload a blob using,

az storage blob upload --file C:\myzip1.zip --container-name datastorage01 --name myzip1.zip --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=xxxxxx/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

In the above, --file defines the local file path, --container-name defines the container it is uploading to, and --name defines the blob name once it is uploaded.

clistore10

To verify, we can list the files in the container using,

az storage blob list --container-name datastorage01 --output table --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=xxxxxx/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

clistore11

We can download a blob to local storage using,

az storage blob download --container-name datastorage01 --name myzip1.zip --file C:\myzip2.zip --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=xxxx/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

In the above, --file defines the path and name the blob will have when downloaded to local storage.

clistore12

We can delete a blob using a command similar to,

az storage blob delete --container-name datastorage01 --name myzip1.zip --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=1WzgTd/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

clistore13

This marks the end of the blog post; hope it was useful. If you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.


Step-by-Step guide to manage Azure Storage using Azure CLI 2.0 – Part 01

This is another part of my blog post series covering Azure CLI 2.0 functions. If you have not read the earlier posts yet, you can find them at the following links.

Step-by-step guide to start with Azure CLI 2.0 – http://www.rebeladmin.com/2017/08/step-step-guide-start-azure-cli-2-0/

Step-by-step guide to create an Azure VM using Azure CLI 2.0 – http://www.rebeladmin.com/2017/08/step-step-guide-create-azure-vm-using-azure-cli-2-0/

In part 01 of this blog post, we are going to look into managing disks using Azure CLI.

First things first, I am going to log in to Azure CLI with a privileged account. This can be done using az login.

I have a Windows VM set up under my subscription. I can view its details using az vm show --resource-group rebeladminrg01 --name REBLEVM101

In the above, --resource-group defines the resource group name and --name defines the VM name.

sto1

In this VM, I have a disk of 128 GB. It is an Azure managed disk.

sto2

I would like to add a couple of disks to this VM. Adding an “Azure Managed” disk is the simplest way, as it simplifies the disk management process. The only things you need to worry about are the disk type and size.

az vm disk attach -g rebeladminrg01 --vm-name REBLEVM101 --disk DataDisk01 --new --size-gb 100

The above creates a managed disk called DataDisk01 under the rebeladminrg01 resource group. It is 100 GB in size and is attached to the REBLEVM101 VM.

We can verify it by running,

az disk show --name DataDisk01 --resource-group rebeladminrg01

sto4
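
Instead of reading the full JSON output, the --query option can return just the data disks as a compact table. This is optional, but handy:

az vm show --resource-group rebeladminrg01 --name REBLEVM101 --query "storageProfile.dataDisks" --output table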

If needed, we can also use “unmanaged” disks. First, I am going to create a new storage account for it.

az storage account create --location westus --name rebelstorage01 --resource-group rebeladminrg01 --sku Standard_LRS

sto5

The above creates a storage account called rebelstorage01 in the westus region, under the rebeladminrg01 resource group, using Standard_LRS storage.

Before configuring the storage, we first need to grab the storage account connection string so it can be used with the storage commands.

To do that, type

az storage account show-connection-string --name rebelstorage01 --resource-group rebeladminrg01

Then copy the connection string value and use it with

az storage container create --name data --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=oJOjFskwKlDBisEiGREBEsMRWnDbOA+q6stySqXKT1MsBiPZeJPThnfnkGgG9AgudKmJ/5CCl65cGcMIAZGQhg=="

The above will create a container called data under the storage account.
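
As a side note, instead of pasting the connection string into every command, it can be exported once as an environment variable, which the az storage commands pick up automatically. In a PowerShell session that looks like this (key truncated):

$Env:AZURE_STORAGE_CONNECTION_STRING = "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=xxxx"
az storage container list --output table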

Let’s go ahead and add a new unmanaged disk to a VM. 

Note – you cannot add an unmanaged disk to a VM created with managed disks.

az vm unmanaged-disk attach -g rebeladminrg01 --vm-name REBELVM3 --new -n DataDisk6 --vhd-uri https://rebelstorage01.blob.core.windows.net/data/2.vhd --size-gb 100

In the above, rebeladminrg01 is the resource group where the Azure VM is located, and REBELVM3 is the VM name. I am creating a new disk called DataDisk6 on the data/2.vhd path. Its size is 100 GB.

sto6

To detach a disk from a VM, we can use the following commands.

If it is an unmanaged disk, we can use,

az vm unmanaged-disk detach --name DataDisk6 --resource-group rebeladminrg01 --vm-name REBELVM3

The above command will detach the unmanaged disk called DataDisk6 from the REBELVM3 VM.

sto7

If it is a managed disk, we can use,

az vm disk detach -g rebeladminrg01 --vm-name REBLEVM101 -n DataDisk02

The above will remove the data disk called DataDisk02 from the REBLEVM101 VM.

sto8

This is the end of part 01 of this post. Hope this was useful, and if you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.


Azure AD Connect Staging Mode

Azure AD Connect is the tool used to connect an on-premises directory service with Azure AD. It allows users to use the same on-premises IDs and passwords to authenticate to Azure AD, Office 365, or other applications hosted in Azure. Azure AD Connect can be installed on any server if it meets the following requirements:

The AD forest functional level must be Windows Server 2003 or later. 

If you plan to use the feature password writeback, then the Domain Controllers must be on Windows Server 2008 (with latest SP) or later. If your DCs are on 2008 (pre-R2), then you must also apply hotfix KB2386717.

The domain controller used by Azure AD must be writable. It is not supported to use a RODC (read-only domain controller) and Azure AD Connect does not follow any write redirects.

It is not supported to use on-premises forests/domains using SLDs (Single Label Domains).

It is not supported to use on-premises forests/domains using "dotted" (name contains a period ".") NetBios names.

Azure AD Connect cannot be installed on Small Business Server or Windows Server Essentials. The server must be using Windows Server standard or better.

The Azure AD Connect server must have a full GUI installed. It is not supported to install on server core.

Azure AD Connect must be installed on Windows Server 2008 or later. This server may be a domain controller or a member server when using express settings. If you use custom settings, then the server can also be stand-alone and does not have to be joined to a domain.

If you install Azure AD Connect on Windows Server 2008 or Windows Server 2008 R2, then make sure to apply the latest hotfixes from Windows Update. The installation is not able to start with an unpatched server.

If you plan to use the feature password synchronization, then the Azure AD Connect server must be on Windows Server 2008 R2 SP1 or later.

If you plan to use a group managed service account, then the Azure AD Connect server must be on Windows Server 2012 or later.

The Azure AD Connect server must have .NET Framework 4.5.1 or later and Microsoft PowerShell 3.0 or later installed.

If Active Directory Federation Services is being deployed, the servers where AD FS or Web Application Proxy are installed must be Windows Server 2012 R2 or later. Windows remote management must be enabled on these servers for remote installation.

If Active Directory Federation Services is being deployed, you need SSL Certificates.

If Active Directory Federation Services is being deployed, then you need to configure name resolution.

If your global administrators have MFA enabled, then the URL https://secure.aadcdn.microsoftonline-p.com must be in the trusted sites list. You are prompted to add this site to the trusted sites list when you are prompted for an MFA challenge and it has not added before. You can use Internet Explorer to add it to your trusted sites.

Azure AD Connect requires a SQL Server database to store identity data. By default a SQL Server 2012 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, you need to point the installation wizard to a different installation of SQL Server.

What is staging mode? 
 
At any given time, only one Azure AD Connect instance can be involved in the sync process for a directory. But this brings a few challenges.
 
Disaster recovery – if the server running Azure AD Connect is involved in a disaster, it impacts the sync process. This can be worse if you use features such as password pass-through, single sign-on, or password writeback through AD Connect.
Upgrades – if the system running Azure AD Connect needs an upgrade, or if Azure AD Connect itself needs an upgrade, it impacts the sync process. Again, the affordable downtime depends on the features used and the organization’s dependency on Azure AD Connect and its operations.
Testing new features – Microsoft keeps adding new features to Azure AD Connect. Before introducing them to production, it is always good to simulate them and see their impact. But with only one instance, it is not possible to do so. Even if you have a demo environment, on some occasions it may not simulate the same impact as production.
 
Microsoft introduced the staging mode of Azure AD Connect to overcome the above challenges. With staging mode, you can maintain another Azure AD Connect instance on another server, with the same config as the primary server. It connects to Azure AD, receives changes, and keeps a latest copy to make the switchover as seamless as possible. However, it will not sync the Azure AD Connect configuration from the primary server; it is the engineer’s responsibility to update the staging server’s AD Connect configuration if the primary server’s config is modified.
 
Installation
 
Let’s see how we can configure Azure AD Connect in staging mode.
 
1) Prepare a server according to the guidelines given in the prerequisites section to install Azure AD Connect. 
2) Review the current configuration of Azure AD Connect running on the primary server. You can check this via Azure AD Connect | View current configuration.

sta1
 
sta2
 
3) Log in to the server as a Domain Administrator and download the latest Azure AD Connect from https://www.microsoft.com/en-us/download/details.aspx?id=47594
4) During the installation, select the Customize option.
 
sta3
 
5) Then proceed with the configuration according to the settings used on the primary server. 
6) At the last step of the configuration, select Enable staging mode: When selected, synchronization will not export any data to AD or Azure AD, and then click Install.
 
sta4
 
7) Once the installation is completed, we can confirm in the Synchronization Service (Azure AD Connect | Synchronization Service) that there are no sync jobs.
 
sta5
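
Staging mode can also be confirmed from PowerShell on the staging server, using the ADSync module that ships with Azure AD Connect:

# StagingModeEnabled should read True on the staging server
Get-ADSyncScheduler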
 
Verify data
 
As I mentioned before, the staging server allows you to simulate an export before making it primary. This is important if you are implementing new configuration changes.
 
To prepare a staged copy of the export,
 
1) Go to Start | Azure AD Connect | Synchronization Service | Connectors 
 
sta6
 
2) Select the Active Directory Domain Services connector and click Run in the right-hand panel.
 
sta7
 
3) Then, in the next window, select Full Import and click OK.
 
sta8
 
4) Repeat the same for the Windows Azure Active Directory (Microsoft) connector. 
5) Once both jobs are completed, select the Active Directory Domain Services connector and click Run in the right-hand panel again. But this time select Delta Synchronization and click OK.
 
sta9
 
6) Repeat the same for the Windows Azure Active Directory (Microsoft) connector.
7) Once both jobs are finished, go to the Operations tab and verify that the jobs completed successfully.
 
sta10
 
Now we have the staging copy; the next step is to verify that the data is presented as expected. To do that, we need the help of a PowerShell script.

 
Param(
    [Parameter(Mandatory=$true, HelpMessage="Must be a file generated using csexport 'Name of Connector' export.xml /f:x)")]
    [string]$xmltoimport="%temp%\exportedStage1a.xml",
    [Parameter(Mandatory=$false, HelpMessage="Maximum number of users per output file")][int]$batchsize=1000,
    [Parameter(Mandatory=$false, HelpMessage="Show console output")][bool]$showOutput=$false
)

#LINQ isn't loaded automatically, so force it
[Reflection.Assembly]::Load("System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089") | Out-Null

[int]$count=1
[int]$outputfilecount=1
[array]$objOutputUsers=@()

#XML must be generated using "csexport "Name of Connector" export.xml /f:x"
write-host "Importing XML" -ForegroundColor Yellow

#XmlReader.Create won't properly resolve the file location,
#so expand and then resolve it
$resolvedXMLtoimport=Resolve-Path -Path ([Environment]::ExpandEnvironmentVariables($xmltoimport))

#use an XmlReader to deal with even large files
$result=$reader = [System.Xml.XmlReader]::Create($resolvedXMLtoimport) 
$result=$reader.ReadToDescendant('cs-object')
do 
{
    #create the object placeholder
    #adding them up here means we can enforce consistency
    $objOutputUser=New-Object psobject
    Add-Member -InputObject $objOutputUser -MemberType NoteProperty -Name ID -Value ""
    Add-Member -InputObject $objOutputUser -MemberType NoteProperty -Name Type -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name DN -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name operation -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name UPN -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name displayName -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name sourceAnchor -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name alias -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name primarySMTP -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name onPremisesSamAccountName -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name mail -Value ""

    $user = [System.Xml.Linq.XElement]::ReadFrom($reader)
    if ($showOutput) {Write-Host Found an exported object... -ForegroundColor Green}

    #object id
    $outID=$user.Attribute('id').Value
    if ($showOutput) {Write-Host ID: $outID}
    $objOutputUser.ID=$outID

    #object type
    $outType=$user.Attribute('object-type').Value
    if ($showOutput) {Write-Host Type: $outType}
    $objOutputUser.Type=$outType

    #dn
    $outDN= $user.Element('unapplied-export').Element('delta').Attribute('dn').Value
    if ($showOutput) {Write-Host DN: $outDN}
    $objOutputUser.DN=$outDN

    #operation
    $outOperation= $user.Element('unapplied-export').Element('delta').Attribute('operation').Value
    if ($showOutput) {Write-Host Operation: $outOperation}
    $objOutputUser.operation=$outOperation

    #now that we have the basics, go get the details

    foreach ($attr in $user.Element('unapplied-export-hologram').Element('entry').Elements("attr"))
    {
        $attrvalue=$attr.Attribute('name').Value
        $internalvalue= $attr.Element('value').Value

        switch ($attrvalue)
        {
            "userPrincipalName"
            {
                if ($showOutput) {Write-Host UPN: $internalvalue}
                $objOutputUser.UPN=$internalvalue
            }
            "displayName"
            {
                if ($showOutput) {Write-Host displayName: $internalvalue}
                $objOutputUser.displayName=$internalvalue
            }
            "sourceAnchor"
            {
                if ($showOutput) {Write-Host sourceAnchor: $internalvalue}
                $objOutputUser.sourceAnchor=$internalvalue
            }
            "alias"
            {
                if ($showOutput) {Write-Host alias: $internalvalue}
                $objOutputUser.alias=$internalvalue
            }
            "proxyAddresses"
            {
                if ($showOutput) {Write-Host primarySMTP: ($internalvalue -replace "SMTP:","")}
                $objOutputUser.primarySMTP=$internalvalue -replace "SMTP:",""
            }
        }
    }

    $objOutputUsers += $objOutputUser

    Write-Progress -activity "Processing ${xmltoimport} in batches of ${batchsize}" -status "Batch ${outputfilecount}: " -percentComplete (($objOutputUsers.Count / $batchsize) * 100)

    #every so often, dump the processed users in case we blow up somewhere
    if ($count % $batchsize -eq 0)
    {
        Write-Host Hit the maximum users processed without completion... -ForegroundColor Yellow

        #export the collection of users as a CSV
        Write-Host Writing processedusers${outputfilecount}.csv -ForegroundColor Yellow
        $objOutputUsers | Export-Csv -path processedusers${outputfilecount}.csv -NoTypeInformation

        #increment the output file counter
        $outputfilecount+=1

        #reset the collection and the user counter
        $objOutputUsers = $null
        $count=0
    }

    $count+=1

    #need to bail out of the loop if no more users to process
    if ($reader.NodeType -eq [System.Xml.XmlNodeType]::EndElement)
    {
        break
    }

} while ($reader.Read)

#need to write out any users that didn't get picked up in a batch of 1000
#export the collection of users as a CSV
Write-Host Writing processedusers${outputfilecount}.csv -ForegroundColor Yellow
$objOutputUsers | Export-Csv -path processedusers${outputfilecount}.csv -NoTypeInformation

 
Save this as a .ps1 file on the C drive.
 
1) Open PowerShell and type cd "C:\Program Files\Microsoft Azure AD Sync\Bin" (if your install path is different, use the relevant path).
2) Then run .\csexport "myrebeladmin.onmicrosoft.com – AAD" C:\export.xml /f:x where myrebeladmin.onmicrosoft.com – AAD should be replaced with your Azure AD connector name. This will export the config to C:\export.xml.
3) Then type .\analyze.ps1 -xmltoimport C:\export.xml where analyze.ps1 is the script we saved at the beginning of this section. 
4) It will then create a CSV file called processedusers1.csv containing all the changes which will sync to Azure AD.
 
However, this step is not always required. The staging server can be made primary without the import and verify process.
 
How to make it the primary server?
 
To make the staging server the primary server,
 
1) Go to Start | Azure AD Connect | Azure AD Connect.
2) Then click Configure on the next page. 
3) On the next page, select the option Configure staging mode and click Next.
 
 
sta11
 
4) On the next page, provide the Azure AD login credentials for the directory sync account. 
5) In the next window, untick Enable staging mode and click Next.
 
sta12
 
6) In the next window, select Start the synchronization process… and click Configure.
 
sta13
 
This completes the process of promoting the staging server to primary. Hope this was useful, and if you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.

What is Content Freshness protection in DFSR?

Healthy replication is a must for an Active Directory environment. The SYSVOL folder on domain controllers contains policies and logon scripts, and it is replicated between domain controllers to maintain an up-to-date config (consistency). Before Windows Server 2008, FRS (File Replication Service) was used to replicate SYSVOL content among domain controllers. With Windows Server 2008, FRS was deprecated and Distributed File System Replication (DFSR) was introduced for replication.

Healthy replication requires healthy communication between domain controllers. Sometimes communication can be interrupted due to a domain controller failure or a link failure. Depending on the impact, it is still possible that communication is re-established after a period of time. Then it will try to resume replication and catch up with SYSVOL changes. In such a scenario, we may see event 4012 in Event Viewer.

The DFS Replication service stopped replication on the replicated folder at local path c:\xxx. It has been disconnected from other partners for 70 days, which is longer than the MaxOfflineTimeInDays parameter. Because of this, DFS Replication considers this data to be stale, and will replace it with data from other members of the replication group during the next replication. DFS Replication will move the stale files to the local Conflict folder. No user action is required.

With Windows Server 2008, Microsoft introduced a setting called content freshness protection to protect DFS shares from stale data. DFS uses a multi-master database similar to Active Directory's, and it also has a tombstone time limit similar to Active Directory's. The default value for this is 60 days. If there is no replication for longer than that and replication resumes later, stale data can be reintroduced; it is similar to lingering objects in AD. To protect against this, we can define a value for MaxOfflineTimeInDays: if the number of days since the last successful DFS replication is larger than MaxOfflineTimeInDays, replication is prevented.

We can review this value by running,

For /f %m IN ('dsquery server -o rdn') do @echo %m && @wmic /node:"%m" /namespace:\\root\microsoftdfs path DfsrMachineConfig get MaxOfflineTimeInDays

cf1

There are two ways to recover from this. The first method is to increase the value of MaxOfflineTimeInDays. It can be done using,

wmic.exe /namespace:\\root\microsoftdfs path DfsrMachineConfig set MaxOfflineTimeInDays=120

cf2

It is recommended to run this on all domain controllers to maintain the same config.
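
On Windows Server 2012 R2 or later, the same setting can also be read and changed through the DFSR PowerShell module, which is easier to loop over multiple DCs. A minimal sketch (the DC name is a placeholder):

# Check the current value on a specific domain controller
Get-DfsrServiceConfiguration -ComputerName "DC01" | Select-Object ComputerName, MaxOfflineTimeInDays

# Raise the limit to 120 days on that DC; repeat for each domain controller
Set-DfsrServiceConfiguration -ComputerName "DC01" -MaxOfflineTimeInDays 120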

If you are not willing to change this value, it is still possible to recover using a non-authoritative restore. It will remove all conflicting values and take an updated copy.

I have already written an article about the non-authoritative restore of SYSVOL; it can be found at http://www.rebeladmin.com/2017/08/non-authoritative-authoritative-sysvol-restore-dfs-replication/

This is not only for SYSVOL replication; it is valid for DFS replication in general.

Hope this was useful, and if you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.


Step-by-Step guide to setup Fine-Grained Password Policies

In an AD environment, we can use password policy to define password security requirements. These settings are located under Computer Configuration | Policies | Windows Settings | Security Settings | Account Policies.

fine1

Before Windows Server 2008, only one password policy could be applied to users. But in an environment, some user roles may require additional protection. As an example, an 8-character complex password may be sufficient for sales users, but not for a domain admin account. With Windows Server 2008, Microsoft introduced Fine-Grained Password Policies, which allow different password policies to be applied to users and groups. To use this feature,

1) Your domain functional level should be at least Windows Server 2008.

2) You need a Domain/Enterprise Admin account to create policies.

Similar to group policies, objects may sometimes end up with multiple password policies applied to them, but at any given time an object can only have one effective password policy. Each fine-grained password policy has a precedence value. This integer value can be defined during policy setup, and a lower precedence value means higher priority. If multiple policies are applied to an object, the policy with the lowest precedence value wins. Also, a policy linked directly to a user object always wins.

We can create the policies using Active Directory Administrative Center or PowerShell. In this demo, I am going to use the PowerShell method.

New-ADFineGrainedPasswordPolicy -Name "Tech Admin Password Policy" -Precedence 1 `
-MinPasswordLength 12 -MaxPasswordAge "30" -MinPasswordAge "7" `
-PasswordHistoryCount 50 -ComplexityEnabled:$true `
-LockoutDuration "8:00" `
-LockoutObservationWindow "8:00" -LockoutThreshold 3 `
-ReversibleEncryptionEnabled:$false

In the above sample, I am creating a new fine-grained password policy called “Tech Admin Password Policy”. New-ADFineGrainedPasswordPolicy is the cmdlet to create a new policy, and Precedence defines the precedence. The LockoutDuration and LockoutObservationWindow values are defined in hours, and the LockoutThreshold value defines the number of failed login attempts allowed.

More info about the syntax can be found using,

Get-Help New-ADFineGrainedPasswordPolicy

You can also view examples using,

Get-Help New-ADFineGrainedPasswordPolicy -Examples

fine2

Once the policy is set up, we can verify its settings using,

Get-ADFineGrainedPasswordPolicy -Identity "Tech Admin Password Policy"

fine3

Now we have the policy in place. The next step is to attach it to groups or users. In my demo, I am going to apply it to a group called “IT Admins”.

Add-ADFineGrainedPasswordPolicySubject -Identity "Tech Admin Password Policy" -Subjects "IT Admins"

I am also going to attach it to a user account, R143869.

Add-ADFineGrainedPasswordPolicySubject -Identity "Tech Admin Password Policy" -Subjects "R143869"

We can verify who the policy applies to using the following,

Get-ADFineGrainedPasswordPolicy -Identity "Tech Admin Password Policy" | Format-Table AppliesTo –AutoSize

fine4
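
When several policies overlap, we can also check which fine-grained policy actually wins for a particular user:

Get-ADUserResultantPasswordPolicy -Identity R143869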

This confirms the configuration. Hope this was useful, and if you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.


When will an AD password expire?

In an Active Directory environment, users have to update their passwords when they expire. On some occasions, it is important to know when a user's password will expire.

For a user account, the value for the next password change is saved under the attribute msDS-UserPasswordExpiryTimeComputed.

We can view this value for a user account using a PowerShell command like the following,

Get-ADuser R564441 -Properties msDS-UserPasswordExpiryTimeComputed | select Name, msDS-UserPasswordExpiryTimeComputed 

In the above command, I am retrieving the msDS-UserPasswordExpiryTimeComputed attribute for the user R564441. In the output, I am listing the value of the Name attribute and msDS-UserPasswordExpiryTimeComputed.

ex1

In my example, it returned 131412469385705537, but that doesn't mean anything by itself. We need to convert it to a readable format.

I can do it using,

Get-ADuser R564441 -Properties msDS-UserPasswordExpiryTimeComputed | select Name, {[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed")}

In the above, the value was converted to datetime format and now it gives a readable value.

ex2

We can develop this further to produce a report or send automatic reminders to users. I wrote the following PowerShell script to generate a report covering all the users in AD.

$passwordexpired = $null
$dc = (Get-ADDomain | Select DNSRoot).DNSRoot
$Report= "C:\report.html"
$HTML=@"
<title>Password Validity Period For $dc</title>
<style>
BODY{background-color :LightBlue}
</style>
"@

$passwordexpired = Get-ADUser -Filter * -Properties "SamAccountName","pwdLastSet","msDS-UserPasswordExpiryTimeComputed" | Select-Object -Property "SamAccountName",@{Name="Last Password Change";Expression={[datetime]::FromFileTime($_."pwdLastSet")}},@{Name="Next Password Change";Expression={[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed")}}

$passwordexpired | ConvertTo-Html -Property "SamAccountName","Last Password Change","Next Password Change" -Head $HTML -Body "<H2> Password Validity Period For $dc</H2>" | Out-File $Report

# Open the generated report in the default browser
Invoke-Item $Report

This creates an HTML report like the following. It contains the user name, the last password change date and time, and the date and time the password is going to expire.

ex3

The attributes I used here are SamAccountName, pwdLastSet, and msDS-UserPasswordExpiryTimeComputed. The pwdLastSet attribute holds the value for the last password reset date and time.
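
As a starting point for a reminder script, the following sketch filters the same data down to enabled accounts whose passwords expire within the next 7 days:

$limit = (Get-Date).AddDays(7)
# Enabled accounts whose passwords can actually expire
Get-ADUser -Filter {Enabled -eq $true -and PasswordNeverExpires -eq $false} -Properties "msDS-UserPasswordExpiryTimeComputed" |
Where-Object { $_."msDS-UserPasswordExpiryTimeComputed" -gt 0 } |
Where-Object { [datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed") -lt $limit } |
Select-Object SamAccountName, @{Name="Next Password Change";Expression={[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed")}}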

Hope this was useful, and if you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.


Step-by-Step guide to configure Azure MFA with ADFS 2016

Multi-factor authentication (MFA) is commonly used to protect applications and web services which are published to the internet. It helps to verify the authenticity of authentication requests. There are many multi-factor service providers: some are cloud-based and some require on-premises installations.

Azure MFA was first introduced for use with Azure services and was later developed further to support the protection of on-premises workloads too. It is possible to configure Azure MFA with ADFS 2.0 and ADFS 3.0; however, that configuration requires an additional MFA server. With ADFS 4.0 (Windows Server 2016) this has been made simple, and we can integrate Azure MFA without the need for an additional server.

In this post, I am going to walk you through the integration of Azure MFA with ADFS 2016. 

Before we start, we need to look into the prerequisites.

1. A valid Azure subscription.

2. An Azure Global Administrator account.

3. An existing federated Azure AD setup. More info about this configuration can be found at https://docs.microsoft.com/en-gb/azure/active-directory/connect/active-directory-aadconnect-get-started-custom#configuring-federation-with-ad-fs

4. Windows Server 2016 AD FS installed on-premises.

5. An Enterprise Administrator account to configure MFA.

6. Users with Azure MFA enabled – http://www.rebeladmin.com/2016/01/step-by-step-guide-to-configure-mfa-multi-factor-authentication-for-azure-users/

7. The Windows Azure Active Directory module for Windows PowerShell installed on the ADFS server.

Create a certificate on each ADFS server to use with Azure MFA

The first step of the configuration is to generate a certificate for Azure MFA. This needs to be performed on every ADFS server in the farm. To generate the certificate, you can use the following in PowerShell.

$certbase64 = New-AdfsAzureMfaTenantCertificate -TenantID "Your Tenant ID"

Please replace "Your Tenant ID" with the actual Azure tenant ID. You can find the tenant ID by running Login-AzureRmAccount in Azure PowerShell.

Once it is generated, the certificate will be under the local computer certificates.

cert1
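
The generated certificate can be verified from PowerShell as well. Its subject contains the tenant ID, so a filter like the following (tenant ID is a placeholder) should locate it:

# Lists matching certificates in the local machine store
Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*Your Tenant ID*" }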

Add new credentials to connect with Auth Client SPN

Now we have the certificate, but we need to tell the Azure Multi-Factor Auth Client to use it as a credential to connect with AD FS.

Before that, we need to connect to Azure AD using Azure PowerShell. We can do that using:

Connect-MsolService

It will then prompt for a login; make sure to use an Azure Global Administrator account to connect.

After that, execute the command,

New-MsolServicePrincipalCredential -AppPrincipalId 981f26a1-7f43-403b-a875-f8b09b8cd720 -Type asymmetric -Usage verify -Value $certbase64

In the above command, AppPrincipalId defines the GUID for the Azure Multi-Factor Auth Client.

Configure ADFS farm to use Azure MFA

Now we have the components ready; the next step is to configure the ADFS farm to use Azure MFA. To do that, run the following PowerShell command.

Set-AdfsAzureMfaTenant -TenantId "Your Tenant ID" -ClientId 981f26a1-7f43-403b-a875-f8b09b8cd720

In the above command, replace "Your Tenant ID" with your Azure tenant ID. The ClientId in the command represents the GUID for the Azure Multi-Factor Auth Client.

cert2

Once it is completed, restart the ADFS service.

Enable Azure MFA globally

The last step of the configuration is to enable Azure MFA for authentication. To do that, log in to the ADFS server and go to Server Manager > Tools > AD FS Management. Then, in the MMC, go to Service > Authentication Methods, and in the Actions panel, click Edit Primary Authentication Method.

cert3

This opens the window to configure global authentication methods. It has two tabs, and we can see Azure MFA on both.

cert4

By ticking each box, you can enable MFA for the intranet and extranet.
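
The same change can be scripted. As a sketch (assuming Azure MFA should be offered as a primary method for extranet requests alongside forms authentication):

# Adds Azure MFA as a primary authentication provider for extranet requests
Set-AdfsGlobalAuthenticationPolicy -PrimaryExtranetAuthenticationProvider @("AzureMfaAuthentication", "FormsAuthentication")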

This completes the configuration; now you can use Azure MFA with your ADFS farm. Hope this was useful, and if you have any questions, feel free to contact me at rebeladm@live.com.
