
Project “Honolulu” for a better Windows Server management experience

If you worked with Windows Server 2003 or earlier, I am sure you know how painful it was to install and manage roles. We had to go through “Add or Remove Windows Components” and many MMC consoles. It was also recommended to run the “Security Configuration Wizard” before installing roles, as security settings did not come by default with role installation. To address these difficulties, Microsoft introduced “Server Manager” with Windows Server 2008. It replaced many of the wizards Server 2003 had and made role and feature management easy. It was further developed and shipped with every server operating system released after Windows Server 2008.

Project “Honolulu” from Microsoft takes server management to the next level. It is a simple but powerful web-based interface which can be installed on Windows 10 or Windows Server 2016. It can be used to configure and troubleshoot servers locally or remotely.

Why is it good?

1. A single, simplified web console for server management – Instead of using multiple MMC consoles to manage resources, Honolulu gives a simple web-based interface to do it. It also allows you to go from a simple role installation to advanced troubleshooting using the same console.

2. Better interconnection – With Honolulu we can connect Windows Server 2016, Windows Server 2012 R2, Windows Server 2012 and Hyper-V Server 2016/2012 R2/2012 into one console. It also allows you to manage failover clusters and hyper-converged environments from the same console. Microsoft is also working with partners to refine the SDK and extension model.

3. No agents or additional configuration – Connecting servers to the console does not require agents or any other additional configuration. The only requirement is connectivity between the gateway server and the member servers.

4. Familiar tools packaged together – The web-based console allows you to access familiar tools from one place. For example, you can access Server Manager, Registry Editor and firewall tools via the console. Before, we had to use different methods to open those MMC consoles. This also makes it easy to adopt without additional training.

5. Flexible for integration – The design itself welcomes third parties to create modules and integrate them with Honolulu so those applications or services can also be managed via the same console.

6. Can be used to manage resources via the internet – The Honolulu web console (web server component) can be published to remote networks, allowing engineers to manage servers without using traditional management methods such as VPN or RDP.

Will it replace other management tools? 

At the moment, System Center and Operations Management Suite (OMS) provide advanced infrastructure management capabilities. Project Honolulu will be complementary to those existing tools, but it is not meant to replace them in any way.

Does it have anything to do with Azure?

No, it does not. It can be used with Azure VMs too. The console doesn’t even need internet access to operate.

When will it be released?

At the moment, it is in the technical preview stage. It is expected to be released in 2018. However, this should not prevent you from testing it and providing feedback to improve it further.

Does it support all operating systems?

It can only be installed on Windows Server 2016 or Windows 10.

| Version | Install Honolulu | Managed node | Managed HCI cluster |
|---|---|---|---|
| Windows Server 2016 | Yes | Yes | Future |
| Windows Server version 1709 | Yes | Yes | Yes, under insider program |
| Windows 10 | Yes | N/A | N/A |
| Windows Server 2012 R2 | No | Yes | N/A |
| Windows Server 2012 | No | Yes | N/A |

However, if you need to manage Windows Server 2012 R2 or 2012, you need to install Windows Management Framework (WMF) version 5.0 or higher first, as the required PowerShell features are not available in earlier versions.
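As a quick sanity check before adding a 2012 R2 or 2012 node, you can confirm the installed WMF/PowerShell version on that server. A minimal check:

# Run on the 2012 R2 / 2012 server; Major should be 5 or higher
$PSVersionTable.PSVersion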

Architecture 
 
Honolulu has two components.
 
Gateway – The gateway manages connected servers via remote PowerShell and WMI over WinRM.
Web Server – This is the UI for Honolulu, and users access it via HTTPS. It can also be published to remote networks to allow users to connect via a web browser.
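Since the gateway talks to managed nodes over WinRM, a quick connectivity test from the gateway server helps rule out connectivity issues before adding a node. This is a minimal sketch; SRV2016-02 is a placeholder for your own member server name:

# Verify WinRM is reachable from the gateway
Test-WSMan -ComputerName SRV2016-02

# If WinRM is not enabled on the member server, run this on that server
Enable-PSRemoting -Force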
 
Feedback is important 
 
The project is still at an early stage. We can all contribute to making it better. Feel free to submit your feedback via https://aka.ms/HonoluluFeedback
 
Let’s take a look!
 
Now it’s time to install and play with it. In my demo environment, I have two Windows Server 2016 servers under the domain therebeladmin.com. I am going to install it on one server and connect both servers to the console.
 
1. Log in to the server as an administrator.
2. Double-click the installer .exe. In the initial window, accept the license terms and click Next to continue.
3. In the next window, tick “Allow project “Honolulu” to modify this machine’s trusted hosts settings”. In the same window, you can also select “Create a desktop shortcut to launch project “Honolulu”” to create a desktop shortcut.
 
hon1
 
4. In the next window, we can define a port for the management site. For demo purposes, we can use a self-signed certificate to allow HTTPS requests. Once the selections are made, click Install to proceed.
 
hon2
 
5. Once the installation is completed, double-click the shortcut to launch the console.
 
Note: The console is recommended for use with the Edge or Chrome browsers. If you are using IE, it will give an error asking you to use a recommended browser.
 
6. In the initial window, it launches a tour explaining Project Honolulu. You can either follow it or skip it.
7. The home page lists the servers added to the console. By default, it includes the server Honolulu is installed on.
 
hon3
 
8. To add a new server to the list, click the Add button.
 
hon4
 
9. It then shows the list of connection types available. In this demo I am adding a single server, so I choose “Add Server Connection”.
 
hon5
 
10. In the next window, it asks for the name of the server. Provide the server name and click Submit.
 
hon6
 
You need an administrator account to add a server to the console. In my demo, I am using a domain admin account to install and configure Honolulu. If you are in a workgroup environment, it gives the option to define the admin account user name and credentials.
 
hon7
 
We can also import multiple servers using a .txt file.
 
hon8
 
Once it’s added, it shows up on the home page.
 
hon9
 
11. In order to manage a server, click on the server name on the home page. This brings up the server overview page.
 
hon10
 
This page gives real-time information about server performance. It also provides data about server resources. Not only that, it also gives options to restart or shut down the server, access settings and edit the computer name.
 
hon11
 
12. Using the Devices tab, we can view details about the server's hardware resources.
 
hon12
 
13. The Certificates tab allows you to view all the certificates on the server. More importantly, it shows certificates for both the local machine and the current user in the same window. With the traditional method, we had to open separate MMC snap-ins for this.
 
hon13
 
14. The Events tab shows all the events generated on the server.
 
hon14
 
15. The Files tab works similarly to File Explorer. You can create folders, rename folders and upload files to folders using it. Unfortunately, you can’t change folder permissions at the moment.
 
hon15
 
16. The Firewall tab is one of my favorites. Now it is easy to see what each rule does, and it also allows you to modify rules if needed.
 
hon16
 
hon17
 
17. The Registry tab is also very useful. Using the same console, we can now add or modify registry entries.
 
hon18
 
18. The Roles and Features tab allows you to install or remove roles and features.
 
hon19
 
hon20
 
19. The Services tab works similarly to the traditional Services MMC. It can be used to check the status of services, start or stop services, or change the startup mode.
 
hon21
 
20. The Storage tab helps to manage the storage allocated to the server.
 
hon22
 
In this blog post, I tried to go through each option, but I encourage you to go and check its capabilities in detail. It is easy to implement yet powerful. This marks the end of the blog post; hope it was useful. If you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm for updates about new blog posts.

Step-by-Step guide to manage Azure Storage using Azure CLI 2.0 – Part 02

This is the last part of my blog post series covering Azure CLI 2.0 functions. If you didn’t read part 01 yet, please read it before starting on this one. You can find it at http://www.rebeladmin.com/2017/10/step-step-guide-manage-azure-storage-using-azure-cli-2-0-part-01/

In my demo setup, I have two VMs running. One is created using Azure Managed Disks. In part 01, I explained how to add an additional disk; it currently has a 100 GB additional disk attached.

Expand Disks

Let’s see how we can expand disks using Azure CLI. Before doing this, make sure you log in to Azure CLI using az login.

Let’s start with expanding Azure managed disks. First, we can verify the VM’s storage configuration using,

az vm show --resource-group rebeladminrg01 --name REBLEVM101

clistore1

There I have two disks. One is the OS disk, called osdisk_6469626e28, and the other is a data disk called DataDisk01.

We can’t increase a disk on a running VM, not even a data disk. So first we need to deallocate the VM. We can do that using,

az vm deallocate --resource-group rebeladminrg01 --name REBLEVM101

In the above command, --resource-group defines the resource group the VM belongs to and --name defines the VM name.

clistore2

Once it is completed, we can increase the disk sizes.

I need to expand the OS disk size to 150 GB. I can do that using,

az disk update --resource-group rebeladminrg01 --name osdisk_6469626e28 --size-gb 150

clistore3

I would also like to expand the data disk to 150 GB. I can do that using,

az disk update --resource-group rebeladminrg01 --name DataDisk01 --size-gb 150

clistore4

In the above commands, --resource-group defines the resource group the disks belong to and --name defines the disk name.

After finishing, we can start the VM using,

az vm start --resource-group rebeladminrg01 --name REBLEVM101

Once the VM is up, we can go in and expand the disk at the OS level (for a Windows VM, see the sketch after the screenshot below).

clistore5
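For example, on a Windows VM the extra space can be claimed with the Storage PowerShell cmdlets. This is a minimal sketch assuming the data disk is mounted as drive F:

# Extend the F: partition to use all newly available space
$size = Get-PartitionSupportedSize -DriveLetter F
Resize-Partition -DriveLetter F -Size $size.SizeMax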

If you are looking to expand unmanaged disks, it can be done via the portal interface or Azure CLI 1.0. More info can be found at https://docs.microsoft.com/en-us/azure/virtual-machines/linux/expand-disks-nodejs

The document itself is for Linux VMs, but the expand part works the same way.

Snapshots

We can also take snapshots of a disk as a quick recovery option. A snapshot is a full copy of a disk at the time it’s taken. It can be kept as a backup or attached to another machine for troubleshooting.

In my demo, I am going to take a snapshot of an Azure managed OS disk. Before doing that, I need to find the disk ID. It can be done using,

az vm show --resource-group rebeladminrg01 --name REBLEVM101

clistore6

Then we can take the snapshot using,

az snapshot create -g rebeladminrg01 --source "/subscriptions/xxxxx/resourceGroups/REBELADMINRG01/providers/Microsoft.Compute/disks/osdisk_6469626e28" --name vm101osDisk-backup

clistore7

In the above, --source defines the disk ID and --name defines the snapshot name.
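If the snapshot ever needs to be used, a new managed disk can be created from it and attached to a VM. Something along these lines should work; restoredOSdisk is just an example name:

az disk create --resource-group rebeladminrg01 --name restoredOSdisk --source vm101osDisk-backup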

If it is an unmanaged disk, snapshots work in a different way. You can read more about it at https://docs.microsoft.com/en-us/azure/virtual-machines/linux/incremental-snapshots

Convert to Managed Disks

If required, we can convert a VM with unmanaged disks to managed disks. To do that, first we need to deallocate the VM.

az vm deallocate --resource-group rebeladminrg01 --name REBELVM102

Then we can start the conversion process using,

az vm convert --resource-group rebeladminrg01 --name REBELVM102

Once the process is finished, it will start the VM.

clistore8

clistore9

Manage blobs

We can also create, manage and delete blobs using Azure CLI.

To create a container we can simply use,

az storage container create --name datastorage01 --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=xxxxxxx/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

In the above, --name defines the container name. AccountName specifies the storage account name and AccountKey specifies the auth key for the storage account.

By default, container data is set to private. If needed, it can be set to public read access for blobs (blob) or public read and list access to the whole container (container). This can be defined using --public-access, as in the example below.
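As an example, a container intended for anonymous blob downloads could be created like this (connection string shortened; publicdata01 is an example name):

az storage container create --name publicdata01 --public-access blob --connection-string "<connection string>"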

Once the container is created, we can upload a blob using,

az storage blob upload --file C:\myzip1.zip --container-name datastorage01 --name myzip1.zip --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=xxxxxx/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

In the above, --file defines the local file path, --container-name defines the container it is uploading to, and --name defines the blob name once it is uploaded.

clistore10

To verify, we can list the blobs in the container using,

az storage blob list --container-name datastorage01 --output table --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=xxxxxx/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

clistore11

We can download a blob to local storage using,

az storage blob download --container-name datastorage01 --name myzip1.zip --file C:\myzip2.zip --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=xxxx/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

In the above, --file defines the path and name the blob will have when downloaded to local storage.

clistore12

We can delete a blob using a command similar to,

az storage blob delete --container-name datastorage01 --name myzip1.zip --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=1WzgTd/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

clistore13

This marks the end of the blog post; hope it was useful. If you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm for updates about new blog posts.

Step-by-Step guide to manage Azure Storage using Azure CLI 2.0 – Part 01

This is another part of my blog post series covering Azure CLI 2.0 functions. If you haven’t read the earlier posts yet, you can find them via the following links.

Step-by-step guide to start with Azure CLI 2.0 – http://www.rebeladmin.com/2017/08/step-step-guide-start-azure-cli-2-0/

Step-by-step guide to create an Azure VM using Azure CLI 2.0 – http://www.rebeladmin.com/2017/08/step-step-guide-create-azure-vm-using-azure-cli-2-0/

In part 01 of this blog post, we are going to look into managing disks using Azure CLI.

First things first, I am going to log in to Azure CLI with a privileged account. This can be done using az login.

I have a Windows VM set up under my subscription. I can view its details using az vm show --resource-group rebeladminrg01 --name REBLEVM101

In the above, --resource-group defines the resource group name and --name defines the VM name.

sto1

In this VM, I have a disk with a size of 128 GB. It is an Azure managed disk.

sto2

I would like to add a couple of disks to this VM. Adding an “Azure Managed” disk is the simplest way, as it simplifies the disk management process. The only things you need to worry about are the disk type and size.

az vm disk attach -g rebeladminrg01 --vm-name REBLEVM101 --disk DataDisk01 --new --size-gb 100

The above creates a managed disk called DataDisk01 under the rebeladminrg01 resource group. It is 100 GB in size and is attached to the REBLEVM101 VM.

We can verify it by running,

az disk show --name DataDisk01 --resource-group rebeladminrg01

sto4

If needed, we can also use “unmanaged” disks. First, I am going to create a new storage account for it.

az storage account create --location westus --name rebelstorage01 --resource-group rebeladminrg01 --sku Standard_LRS

sto5

The above creates a storage account called rebelstorage01 in the westus region, under the rebeladminrg01 resource group, using Standard_LRS storage.

Before configuring the storage, we first need to retrieve the storage account connection string so it can be used with the storage commands.

To do that, type

az storage account show-connection-string --name rebelstorage01 --resource-group rebeladminrg01

Then copy the connection string value and use it with

az storage container create --name data --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=oJOjFskwKlDBisEiGREBEsMRWnDbOA+q6stySqXKT1MsBiPZeJPThnfnkGgG9AgudKmJ/5CCl65cGcMIAZGQhg=="

The above creates a container called data under the storage account.
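As a side note, instead of pasting the connection string into every command, Azure CLI can also read it from the AZURE_STORAGE_CONNECTION_STRING environment variable. A small sketch (PowerShell syntax):

# Set once per session; later az storage commands pick it up automatically
$env:AZURE_STORAGE_CONNECTION_STRING = "<connection string from show-connection-string>"
az storage container create --name data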

Let’s go ahead and add a new unmanaged disk to a VM. 

Note – You cannot add an unmanaged disk to a VM created with managed disks.

az vm unmanaged-disk attach -g rebeladminrg01 --vm-name REBELVM3 --new -n DataDisk6 --vhd-uri https://rebelstorage01.blob.core.windows.net/data/2.vhd --size-gb 100

In the above, rebeladminrg01 is the resource group where the Azure VM is located and REBELVM3 is the VM name. I am creating a new disk called DataDisk6 at the data/2.vhd path, with a size of 100 GB.

sto6

In order to detach a disk from a VM, we can use the following commands.

If it's an unmanaged disk, we can use,

az vm unmanaged-disk detach --name DataDisk6 --resource-group rebeladminrg01 --vm-name REBELVM3

The above command detaches the unmanaged disk DataDisk6 from the REBELVM3 VM.

sto7

If it's a managed disk, we can use,

az vm disk detach -g rebeladminrg01 --vm-name REBLEVM101 -n DataDisk02

The above removes the data disk DataDisk02 from the REBLEVM101 VM.

sto8

This is the end of part 01 of this post. Hope this was useful, and if you have any questions feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm for updates about new blog posts.

Azure AD Connect Staging Mode

Azure AD Connect is the tool used to connect an on-premises directory service with Azure AD. It allows users to use their same on-premises IDs and passwords to authenticate to Azure AD, Office 365 or other applications hosted in Azure. Azure AD Connect can be installed on any server if it meets the following requirements:

The AD forest functional level must be Windows Server 2003 or later. 

If you plan to use the feature password writeback, then the Domain Controllers must be on Windows Server 2008 (with latest SP) or later. If your DCs are on 2008 (pre-R2), then you must also apply hotfix KB2386717.

The domain controller used by Azure AD must be writable. It is not supported to use a RODC (read-only domain controller) and Azure AD Connect does not follow any write redirects.

It is not supported to use on-premises forests/domains using SLDs (Single Label Domains).

It is not supported to use on-premises forests/domains using "dotted" (name contains a period ".") NetBios names.

Azure AD Connect cannot be installed on Small Business Server or Windows Server Essentials. The server must be using Windows Server standard or better.

The Azure AD Connect server must have a full GUI installed. It is not supported to install on server core.

Azure AD Connect must be installed on Windows Server 2008 or later. This server may be a domain controller or a member server when using express settings. If you use custom settings, then the server can also be stand-alone and does not have to be joined to a domain.

If you install Azure AD Connect on Windows Server 2008 or Windows Server 2008 R2, then make sure to apply the latest hotfixes from Windows Update. The installation is not able to start with an unpatched server.

If you plan to use the feature password synchronization, then the Azure AD Connect server must be on Windows Server 2008 R2 SP1 or later.

If you plan to use a group managed service account, then the Azure AD Connect server must be on Windows Server 2012 or later.

The Azure AD Connect server must have .NET Framework 4.5.1 or later and Microsoft PowerShell 3.0 or later installed.

If Active Directory Federation Services is being deployed, the servers where AD FS or Web Application Proxy are installed must be Windows Server 2012 R2 or later. Windows remote management must be enabled on these servers for remote installation.

If Active Directory Federation Services is being deployed, you need SSL Certificates.

If Active Directory Federation Services is being deployed, then you need to configure name resolution.

If your global administrators have MFA enabled, then the URL https://secure.aadcdn.microsoftonline-p.com must be in the trusted sites list. You are prompted to add this site to the trusted sites list when you are prompted for an MFA challenge and it has not been added before. You can use Internet Explorer to add it to your trusted sites.

Azure AD Connect requires a SQL Server database to store identity data. By default a SQL Server 2012 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, you need to point the installation wizard to a different installation of SQL Server.

What is staging mode? 
 
At any given time, only one Azure AD Connect instance can be involved in the sync process for a directory. But this brings a few challenges.
 
Disaster Recovery – If the server running Azure AD Connect is involved in a disaster, it impacts the sync process. This can be worse if you are using features such as password pass-through, single sign-on or password writeback through AD Connect.
Upgrades – If the system running Azure AD Connect needs an upgrade, or if Azure AD Connect itself needs an upgrade, it impacts the sync process. Again, the affordable downtime depends on the features in use and the organization's dependency on Azure AD Connect and its operations.
Testing New Features – Microsoft keeps adding new features to Azure AD Connect. Before introducing those to production, it is always good to simulate them and see their impact. With only one instance, that is not possible, and even a demo environment may not simulate the same impact as production on some occasions.
 
Microsoft introduced the staging mode of Azure AD Connect to overcome the above challenges. Staging mode allows you to maintain another copy of the Azure AD Connect instance on another server, with the same configuration as the primary server. It connects to Azure AD and receives changes, keeping a latest copy to make the switchover as seamless as possible. However, it does not sync the Azure AD Connect configuration from the primary server; it is the engineer’s responsibility to update the staging server's AD Connect configuration whenever the primary server's configuration is modified.
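As a side note, you can confirm whether a given Azure AD Connect server is running in staging mode using the ADSync PowerShell module on that server. A quick sketch:

# StagingModeEnabled is True on a staging server
Get-ADSyncScheduler | Select-Object StagingModeEnabled, SyncCycleEnabled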
 
Installation
 
Let’s see how we can configure Azure AD connect in staging mode.
 
1) Prepare a server according to the guidelines given in the prerequisites section to install Azure AD Connect.
2) Review the current configuration of Azure AD Connect running on the primary server. You can check this via Azure AD Connect | View current configuration.

sta1
 
sta2
 
3) Log in to the server as a Domain Administrator and download the latest Azure AD Connect from https://www.microsoft.com/en-us/download/details.aspx?id=47594
4) During the installation, select the Customize option.
 
sta3
 
5) Then proceed with the configuration according to the settings used on the primary server.
6) At the last step of the configuration, select Enable staging mode: When selected, synchronization will not export any data to AD or Azure AD, and then click Install.
 
sta4
 
7) Once the installation is completed, in the Synchronization Service (Azure AD Connect | Synchronization Service) we can confirm there are no sync jobs.
 
sta5
 
Verify data
 
As I mentioned before, the staging server allows you to simulate an export before making it the primary. This is important if you are implementing new configuration changes.
 
In order to prepare a staged copy of the export,
 
1) Go to Start | Azure AD Connect | Synchronization Service | Connectors 
 
sta6
 
2) Select the Active Directory Domain Services connector and click on Run in the right-hand panel.
 
sta7
 
3) Then in the next window, select Full Import and click OK.
 
sta8
 
4) Repeat the same for the Windows Azure Active Directory (Microsoft) connector.
5) Once both jobs are completed, select the Active Directory Domain Services connector and click on Run in the right-hand panel again. But this time select Delta Synchronization and click OK.
 
sta9
 
6) Repeat the same for the Windows Azure Active Directory (Microsoft) connector.
7) Once both jobs are finished, go to the Operations tab and verify that the jobs completed successfully.
 
sta10
 
Now we have the staging copy; the next step is to verify that the data is presented as expected. To do that, we need the help of a PowerShell script.

 
Param(
    [Parameter(Mandatory=$true, HelpMessage="Must be a file generated using csexport 'Name of Connector' export.xml /f:x)")]
    [string]$xmltoimport="%temp%\exportedStage1a.xml",
    [Parameter(Mandatory=$false, HelpMessage="Maximum number of users per output file")][int]$batchsize=1000,
    [Parameter(Mandatory=$false, HelpMessage="Show console output")][bool]$showOutput=$false
)

#LINQ isn't loaded automatically, so force it
[Reflection.Assembly]::Load("System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089") | Out-Null

[int]$count=1
[int]$outputfilecount=1
[array]$objOutputUsers=@()

#XML must be generated using "csexport "Name of Connector" export.xml /f:x"
write-host "Importing XML" -ForegroundColor Yellow

#XmlReader.Create won't properly resolve the file location,
#so expand and then resolve it
$resolvedXMLtoimport=Resolve-Path -Path ([Environment]::ExpandEnvironmentVariables($xmltoimport))

#use an XmlReader to deal with even large files
$result=$reader = [System.Xml.XmlReader]::Create($resolvedXMLtoimport) 
$result=$reader.ReadToDescendant('cs-object')
do 
{
    #create the object placeholder
    #adding them up here means we can enforce consistency
    $objOutputUser=New-Object psobject
    Add-Member -InputObject $objOutputUser -MemberType NoteProperty -Name ID -Value ""
    Add-Member -InputObject $objOutputUser -MemberType NoteProperty -Name Type -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name DN -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name operation -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name UPN -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name displayName -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name sourceAnchor -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name alias -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name primarySMTP -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name onPremisesSamAccountName -Value ""
    Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name mail -Value ""

    $user = [System.Xml.Linq.XElement]::ReadFrom($reader)
    if ($showOutput) {Write-Host Found an exported object... -ForegroundColor Green}

    #object id
    $outID=$user.Attribute('id').Value
    if ($showOutput) {Write-Host ID: $outID}
    $objOutputUser.ID=$outID

    #object type
    $outType=$user.Attribute('object-type').Value
    if ($showOutput) {Write-Host Type: $outType}
    $objOutputUser.Type=$outType

    #dn
    $outDN= $user.Element('unapplied-export').Element('delta').Attribute('dn').Value
    if ($showOutput) {Write-Host DN: $outDN}
    $objOutputUser.DN=$outDN

    #operation
    $outOperation= $user.Element('unapplied-export').Element('delta').Attribute('operation').Value
    if ($showOutput) {Write-Host Operation: $outOperation}
    $objOutputUser.operation=$outOperation

    #now that we have the basics, go get the details

    foreach ($attr in $user.Element('unapplied-export-hologram').Element('entry').Elements("attr"))
    {
        $attrvalue=$attr.Attribute('name').Value
        $internalvalue= $attr.Element('value').Value

        switch ($attrvalue)
        {
            "userPrincipalName"
            {
                if ($showOutput) {Write-Host UPN: $internalvalue}
                $objOutputUser.UPN=$internalvalue
            }
            "displayName"
            {
                if ($showOutput) {Write-Host displayName: $internalvalue}
                $objOutputUser.displayName=$internalvalue
            }
            "sourceAnchor"
            {
                if ($showOutput) {Write-Host sourceAnchor: $internalvalue}
                $objOutputUser.sourceAnchor=$internalvalue
            }
            "alias"
            {
                if ($showOutput) {Write-Host alias: $internalvalue}
                $objOutputUser.alias=$internalvalue
            }
            "proxyAddresses"
            {
                if ($showOutput) {Write-Host primarySMTP: ($internalvalue -replace "SMTP:","")}
                $objOutputUser.primarySMTP=$internalvalue -replace "SMTP:",""
            }
        }
    }

    $objOutputUsers += $objOutputUser

    Write-Progress -activity "Processing ${xmltoimport} in batches of ${batchsize}" -status "Batch ${outputfilecount}: " -percentComplete (($objOutputUsers.Count / $batchsize) * 100)

    #every so often, dump the processed users in case we blow up somewhere
    if ($count % $batchsize -eq 0)
    {
        Write-Host Hit the maximum users processed without completion... -ForegroundColor Yellow

        #export the collection of users as as CSV
        Write-Host Writing processedusers${outputfilecount}.csv -ForegroundColor Yellow
        $objOutputUsers | Export-Csv -path processedusers${outputfilecount}.csv -NoTypeInformation

        #increment the output file counter
        $outputfilecount+=1

        #reset the collection and the user counter
        $objOutputUsers = $null
        $count=0
    }

    $count+=1

    #need to bail out of the loop if no more users to process
    if ($reader.NodeType -eq [System.Xml.XmlNodeType]::EndElement)
    {
        break
    }

} while ($reader.Read)

#need to write out any users that didn't get picked up in a batch of 1000
#export the collection of users as as CSV
Write-Host Writing processedusers${outputfilecount}.csv -ForegroundColor Yellow
$objOutputUsers | Export-Csv -path processedusers${outputfilecount}.csv -NoTypeInformation

 
Save this as a .ps1 file on the C: drive.
 
1) Open PowerShell and type cd "C:\Program Files\Microsoft Azure AD Sync\Bin" (if your install path is different, use the relevant path).
2) Then run .\csexport "myrebeladmin.onmicrosoft.com – AAD" C:\export.xml /f:x, where "myrebeladmin.onmicrosoft.com – AAD" should be replaced with your Azure AD connector name. This exports the configuration to C:\export.xml.
3) Then type .\analyze.ps1 -xmltoimport C:\export.xml, where analyze.ps1 is the script we saved at the beginning of this section.
4) It then creates a CSV file called processedusers1.csv, which contains all the changes that will sync to Azure AD.
 
However, this step is not always required; a staging server can be made primary without the import and verify process.
 
How to make it the primary server?
 
In order to make the staging server the primary server,
 
1) Go to Start | Azure AD Connect | Azure AD Connect
2) Then click on Configure on the next page.
3) On the next page, select the option Configure staging mode and click Next.
 
 
sta11
 
4) On the next page, provide the Azure AD login credentials for the directory sync account.
5) In the next window, untick Enable staging mode and click Next.
 
sta12
 
6) In the next window, select start the synchronization process… and click Configure.
 
sta13
 
This completes the process of promoting the staging server to primary. Hope this was useful, and if you have any questions feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm for updates about new blog posts.

What is Content Freshness protection in DFSR?

Healthy replication is a must for an Active Directory environment. The SYSVOL folder on domain controllers contains policies and logon scripts, and it is replicated between domain controllers to maintain a consistent, up-to-date configuration. Before Windows Server 2008, FRS (File Replication Service) was used to replicate SYSVOL content among domain controllers. With Windows Server 2008, FRS was deprecated and Distributed File System Replication (DFSR) was introduced for replication.

Healthy replication requires healthy communication between domain controllers. Sometimes communication can be interrupted by a domain controller failure or a link failure. Depending on the impact, communication may be re-established after a period of time, and replication will then try to resume and catch up with the SYSVOL changes. In such a scenario, we may see event 4012 in Event Viewer.

The DFS Replication service stopped replication on the replicated folder at local path c:\xxx. It has been disconnected from other partners for 70 days, which is longer than the MaxOfflineTimeInDays parameter. Because of this, DFS Replication considers this data to be stale, and will replace it with data from other members of the replication group during the next replication. DFS Replication will move the stale files to the local Conflict folder. No user action is required.

With Windows Server 2008, Microsoft introduced a setting called content freshness protection to protect DFS shares from stale data. DFSR uses a multi-master database similar to Active Directory's, and it also has a tombstone time limit similar to Active Directory's; the default value is 60 days. If there was no replication for more than that time and replication resumes later, it can introduce stale data, similar to lingering objects in AD. To protect against this, we can define a value for MaxOfflineTimeInDays: if the number of days since the last successful DFS replication is larger than MaxOfflineTimeInDays, replication will be prevented.

We can review this value by running,

For /f %m IN ('dsquery server -o rdn') do @echo %m && @wmic /node:"%m" /namespace:\\root\microsoftdfs path DfsrMachineConfig get MaxOfflineTimeInDays

cf1

There are two ways to recover from this. The first method is to increase the value of MaxOfflineTimeInDays. It can be done using,

wmic.exe /namespace:\\root\microsoftdfs path DfsrMachineConfig set MaxOfflineTimeInDays=120

cf2

It is recommended to run this on all domain controllers to maintain the same configuration.
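If you prefer PowerShell over wmic, a rough equivalent that applies the value to every domain controller could look like the following. This is only a sketch, assuming the RSAT AD module is available and WinRM access to each DC is allowed:

# Set MaxOfflineTimeInDays to 120 on all domain controllers
foreach ($dc in (Get-ADDomainController -Filter *).HostName) {
    Get-CimInstance -ComputerName $dc -Namespace root\microsoftdfs -ClassName DfsrMachineConfig |
        Set-CimInstance -Property @{MaxOfflineTimeInDays = 120}
}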

If you are not willing to change this value, you can still recover using a non-authoritative restore. It removes all conflicting values and takes an updated copy.

I have already written an article about non-authoritative restore of SYSVOL; it can be found at http://www.rebeladmin.com/2017/08/non-authoritative-authoritative-sysvol-restore-dfs-replication/

This is not only for SYSVOL replication; it is valid for DFS Replication in general.

Hope this was useful, and if you have any questions feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm for updates about new blog posts.

Step-by-Step guide to setup Fine-Grained Password Policies

In an AD environment, we can use password policy to define password security requirements. These settings are located under Computer Configuration | Policies | Windows Settings | Security Settings | Account Policies.

fine1

Before Windows Server 2008, only one password policy could apply to users in a domain. But in an environment, based on user roles, accounts may require different levels of protection. As an example, an 8-character complex password may be overkill for sales users, but it is certainly not too much for a domain admin account. With Windows Server 2008, Microsoft introduced Fine-Grained Password Policies, which allow different password policies to be applied to specific users and groups. In order to use this feature,

1) Your domain functional level should be at least Windows Server 2008.

2) You need a Domain Admin or Enterprise Admin account to create policies.

Similar to group policies, an object may end up with multiple password policies applied to it, but at any given time an object can only have one effective password policy. Each Fine-Grained Password Policy has a precedence value. This integer value is defined during policy setup, and a lower precedence value means higher priority. If multiple policies are applied to an object, the policy with the lowest precedence value wins. Also, a policy linked directly to a user object always wins.

We can create the policies using the Active Directory Administrative Center or PowerShell. In this demo, I am going to use the PowerShell method.

New-ADFineGrainedPasswordPolicy -Name "Tech Admin Password Policy" -Precedence 1 `
-MinPasswordLength 12 -MaxPasswordAge "30" -MinPasswordAge "7" `
-PasswordHistoryCount 50 -ComplexityEnabled:$true `
-LockoutDuration "8:00" `
-LockoutObservationWindow "8:00" -LockoutThreshold 3 `
-ReversibleEncryptionEnabled:$false

In the above sample, I am creating a new fine-grained password policy called “Tech Admin Password Policy”. New-ADFineGrainedPasswordPolicy is the cmdlet to create a new policy, and Precedence defines the precedence. The LockoutDuration and LockoutObservationWindow values are defined in hours, and the LockoutThreshold value defines the number of failed login attempts allowed.

More info about the syntax can be found using,

Get-Help New-ADFineGrainedPasswordPolicy

You can also view examples using,

Get-Help New-ADFineGrainedPasswordPolicy -Examples

fine2

Once the policy is set up, we can verify its settings using,

Get-ADFineGrainedPasswordPolicy -Identity "Tech Admin Password Policy"

fine3

Now we have the policy in place. The next step is to attach it to groups or users. In my demo, I am going to apply it to a group called “IT Admins”.

Add-ADFineGrainedPasswordPolicySubject -Identity "Tech Admin Password Policy" -Subjects "IT Admins"

I am also going to attach it to a user account, R143869.

Add-ADFineGrainedPasswordPolicySubject -Identity "Tech Admin Password Policy" -Subjects "R143869"

We can verify who the policy applies to using the following,

Get-ADFineGrainedPasswordPolicy -Identity "Tech Admin Password Policy" | Format-Table AppliesTo -AutoSize

fine4
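To double-check which policy actually wins for a particular user, we can also query the resultant policy. This returns the effective fine-grained policy for the user, or nothing if only the default domain policy applies:

Get-ADUserResultantPasswordPolicy -Identity "R143869"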

This confirms the configuration. Hope this was useful, and if you have any questions feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm for updates about new blog posts.

When will an AD password expire?

In an Active Directory environment, users have to update their passwords when they expire. On some occasions, it is important to know when a user's password will expire.

For a user account, the value for the next password change is saved under the attribute msDS-UserPasswordExpiryTimeComputed.

We can view this value for a user account using a PowerShell command like the following,

Get-ADuser R564441 -Properties msDS-UserPasswordExpiryTimeComputed | select Name, msDS-UserPasswordExpiryTimeComputed 

In the above command, I am retrieving the msDS-UserPasswordExpiryTimeComputed attribute for the user R564441. In the output, I list the value of the Name attribute and msDS-UserPasswordExpiryTimeComputed.

ex1

In my example, it returned 131412469385705537, but that doesn't mean anything by itself; it is a Windows file time value, and we need to convert it to a readable format.

I can do it using,

Get-ADuser R564441 -Properties msDS-UserPasswordExpiryTimeComputed | select Name, {[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed")}

In the above, the value is converted to datetime format, which gives a readable value.

ex2

We can develop this further to provide a report or send automatic reminders to users. I wrote the following PowerShell script to generate a report covering all the users in AD.

$passwordexpired = $null
$dc = (Get-ADDomain | Select DNSRoot).DNSRoot
$Report= "C:\report.html"

$HTML=@"
<title>Password Validity Period For $dc</title>
<style>
BODY{background-color :LightBlue}
</style>
"@

$passwordexpired = Get-ADUser -filter * -Properties "SamAccountName","pwdLastSet","msDS-UserPasswordExpiryTimeComputed" | Select-Object -Property "SamAccountName",@{Name="Last Password Change";Expression={[datetime]::FromFileTime($_."pwdLastSet")}},@{Name="Next Password Change";Expression={[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed")}}

$passwordexpired | ConvertTo-Html -Property "SamAccountName","Last Password Change","Next Password Change" -head $HTML -body "<H2> Password Validity Period For $dc</H2>" | Out-File $Report

Invoke-Expression C:\report.html

This creates an HTML report like the following. It contains the user name, the last password change date and time, and the date and time the password is going to expire.

ex3

The attributes I used here are SamAccountName, pwdLastSet and msDS-UserPasswordExpiryTimeComputed. The pwdLastSet attribute holds the value for the last password reset date and time.
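Building on the same attributes, a small sketch like the following could feed a reminder process by listing enabled users whose passwords expire within the next 7 days (accounts set to never expire are excluded, since their computed expiry value cannot be converted):

# List users whose passwords expire within 7 days
$limit = (Get-Date).AddDays(7)
Get-ADUser -Filter {Enabled -eq $true -and PasswordNeverExpires -eq $false} -Properties "msDS-UserPasswordExpiryTimeComputed" |
    Select-Object SamAccountName, @{Name="ExpiryDate";Expression={[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed")}} |
    Where-Object { $_.ExpiryDate -ge (Get-Date) -and $_.ExpiryDate -le $limit }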

Hope this was useful, and if you have any questions feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm for updates about new blog posts.

Step-by-Step guide to configure Azure MFA with ADFS 2016

Multifactor authentication (MFA) is commonly used to protect applications and web services which are published to the internet. It helps to verify the authenticity of authentication requests. There are many multifactor service providers; some are cloud based and some require on-premises installations.

Azure MFA was first introduced for use with Azure services and was later developed further to support on-premises workload protection too. It is possible to configure Azure MFA with ADFS 2.0 and ADFS 3.0; however, that configuration requires an additional MFA server. With ADFS 4.0 (Windows Server 2016) this is made simple, and we can integrate Azure MFA without the need for an additional server.

In this post, I am going to walk you through the integration of Azure MFA with ADFS 2016. 

Before we start, we need to look into the prerequisites.

1. A valid Azure subscription.

2. An Azure Global Administrator account.

3. An existing federated Azure AD setup. More info about this configuration can be found at https://docs.microsoft.com/en-gb/azure/active-directory/connect/active-directory-aadconnect-get-started-custom#configuring-federation-with-ad-fs

4. Windows Server 2016 AD FS installed on-premises.

5. An Enterprise Administrator account to configure MFA.

6. Users with Azure MFA enabled – http://www.rebeladmin.com/2016/01/step-by-step-guide-to-configure-mfa-multi-factor-authentication-for-azure-users/

7. The Windows Azure Active Directory module for Windows PowerShell installed on the ADFS server.

Create a certificate on each ADFS server to use with Azure MFA

The first step of the configuration is to generate a certificate for Azure MFA. This needs to be performed on every ADFS server in the farm. In order to generate the certificate, you can use the following in PowerShell.

$certbase64 = New-AdfsAzureMfaTenantCertificate -TenantID "Your Tenant ID"

Please replace "Your Tenant ID" with the actual Azure tenant ID. You can find the tenant ID by running Login-AzureRmAccount in Azure PowerShell.

Once it is generated, the certificate will appear under the local computer certificate store.

cert1

Add new credentials to connect with Auth Client SPN

Now we have the certificate, but we need to tell the Azure Multi-Factor Auth Client to use it as a credential to connect with AD FS.

Before that, we need to connect to Azure AD using Azure AD PowerShell. We can do that using:

Connect-MsolService

It will then prompt for login; make sure to use an Azure Global Administrator account to connect.

After that, execute the command,

New-MsolServicePrincipalCredential -AppPrincipalId 981f26a1-7f43-403b-a875-f8b09b8cd720 -Type asymmetric -Usage verify -Value $certbase64

In the above command, AppPrincipalId defines the well-known GUID of the Azure Multi-Factor Auth Client.
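If you want to confirm the certificate was added successfully, the credentials registered against the Azure Multi-Factor Auth Client can be listed with the same well-known AppPrincipalId:

Get-MsolServicePrincipalCredential -AppPrincipalId 981f26a1-7f43-403b-a875-f8b09b8cd720 -ReturnKeyValues $false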

Configure ADFS farm to use Azure MFA

Now we have the components ready, and the next step is to configure the ADFS farm to use Azure MFA. In order to do that, run the following PowerShell command.

Set-AdfsAzureMfaTenant -TenantId "Your Tenant ID" -ClientId 981f26a1-7f43-403b-a875-f8b09b8cd720

In the above command, replace "Your Tenant ID" with your Azure tenant ID. ClientId in the command represents the GUID of the Azure Multi-Factor Auth Client.

cert2

Once it is completed, restart the ADFS service.

Enable Azure MFA globally

The last step of the configuration is to enable Azure MFA for authentication. In order to do that, log in to the ADFS server and go to Server Manager > Tools > AD FS Management. Then, in the MMC, go to Service > Authentication Methods, and in the Actions panel click on Edit Primary Authentication Method.

cert3

This opens the window to configure global authentication methods. It has two tabs, and we can see Azure MFA on both.

cert4

By selecting each box, you can enable MFA for the intranet and extranet.

This completes the configuration; now you can use Azure MFA with your ADFS farm. Hope this was useful, and if you have any questions feel free to contact me at rebeladm@live.com

Azure Active Directory Seamless Single Sign-On (Azure AD Seamless SSO)

I am sure most of you are aware of what single sign-on (SSO) is in an Active Directory infrastructure and how it works. When we extend identity infrastructures to Azure by using Azure AD, it also allows us to extend single sign-on capabilities to authenticate to cloud workloads. This can be done using an on-premises ADFS farm. Password Hash Synchronization or Pass-through Authentication allows users to use the same user name and password to log in to cloud applications, but this is not “seamless” access: even though they are using the same user name and password, when logging in to Azure workloads they are still prompted for the password.

In my example below, I have an Azure AD instance integrated with on-premises AD using Pass-through Authentication. In there I have a user, R272845. I logged in to a domain-joined computer with this user and tried to access an application published using Azure. When I type the URL and press enter, it redirects me to the Azure AD login page.

sso1

sso2

Azure Active Directory Seamless Single Sign-On is a feature which allows users to authenticate to Azure AD without providing their password again when logging in from a domain-joined/corporate device. It can be integrated with Password Hash Synchronization or Pass-through Authentication. This is still in preview, which means it cannot be used in production environments yet. However, if it doesn’t work in your environment, it will always fall back to the typical Azure AD authentication page, so it will not prevent you from accessing any application. This feature is not supported if you are using the ADFS option already.

According to Microsoft, the following can be listed as key features of Azure Active Directory Seamless Single Sign-On (Azure AD Seamless SSO).

Users are automatically signed into both on-premises and cloud-based applications.

Users don't have to enter their passwords repeatedly.

No additional components needed on-premises to make this work.

Works with any method of cloud authentication – Password Hash Synchronization or Pass-through Authentication.

Can be rolled out to some or all your users using Group Policy.

Register non-Windows 10 devices with Azure AD without the need for any AD FS infrastructure. This capability needs you to use version 2.1 or later of the workplace-join client.

Seamless SSO is an opportunistic feature. If it fails for any reason, the user sign-in experience goes back to its regular behavior – i.e., the user needs to enter their password on the sign-in page.

It can be enabled via Azure AD Connect.

It is a free feature, and you don't need any paid editions of Azure AD to use it.

It is supported on web browser-based clients and Office clients that support modern authentication on platforms and browsers capable of Kerberos authentication

According to Microsoft, the following environments are supported.

| OS\Browser | Internet Explorer | Edge | Google Chrome | Mozilla Firefox | Safari |
|---|---|---|---|---|---|
| Windows 10 | Yes | No | Yes | Yes, additional config required | N/A |
| Windows 8.1 | Yes | N/A | Yes | Yes, additional config required | N/A |
| Windows 8 | Yes | N/A | Yes | Yes, additional config required | N/A |
| Windows 7 | Yes | N/A | Yes | Yes, additional config required | N/A |
| Mac OS X | N/A | N/A | Yes | Yes, additional config required | Yes, additional config required |

The current release (at the time this blog post was written) does not support the Edge browser. This feature also does not work when users use private browsing mode in Firefox or when users have Enhanced Protected Mode enabled in IE.

How does it work?

Before we look into the configuration, let’s see how it really works. In the following example, a user is trying to access a cloud-based application (integrated with Azure) using his on-premises user name and password from a domain-joined device.

It is also important to know what happens in the corporate infrastructure when Seamless SSO is enabled:

The system creates an AZUREADSSOACCT computer object in on-premises AD to represent Azure AD.

AZUREADSSOACCT computer account’s Kerberos decryption key is shared with Azure AD.

Two Kerberos service principal names (SPNs) are created to represent the two URLs that are used during Azure AD sign-in, which are https://autologon.microsoftazuread-sso.com and https://aadg.windows.net.nsatc.net.

sso3

1. The user accesses the application URL using his browser, from a domain-joined device on the corporate network.

2. If the user is not signed in already, he is pointed to the Azure AD sign-in page, where he types his user name.

3. Azure AD challenges the user back via the browser, using a 401 response, to provide a Kerberos ticket.

4. The browser requests a Kerberos ticket for the AZUREADSSOACCT computer object from on-premises AD. This account is created in on-premises AD as part of the setup process in order to represent Azure AD.

5. On-premises AD locates the AZUREADSSOACCT computer object and returns a Kerberos ticket to the browser, encrypted using the computer object’s secret.

6. The browser forwards the Kerberos ticket to Azure AD.

7. Azure AD decrypts the Kerberos ticket using the Kerberos decryption key (which was shared with Azure AD when the SSO feature was enabled).

8. After evaluation, Azure AD passes the response back to the user (with additional steps such as MFA if required).

9. The user is allowed to access the application.

Prerequisites

In order to implement this feature, we need the following,

1. A Domain Admin / Enterprise Admin account to install and configure Azure AD Connect on-premises.

2. A Global Administrator account for the Azure subscription – in order to create a custom domain, configure AD Connect, etc.

3. The latest Azure AD Connect https://www.microsoft.com/en-us/download/details.aspx?id=47594 – if you have an older Azure AD Connect version installed, you need to upgrade it to the latest version before configuring this feature.

4. Azure AD Connect must be able to communicate with the *.msappproxy.net URLs over port 443. If connectivity is controlled via IP addresses, the Azure IP address ranges can be found at https://www.microsoft.com/en-us/download/details.aspx?id=41653

5. Add https://autologon.microsoftazuread-sso.com and https://aadg.windows.net.nsatc.net to the browser intranet zone. If users are using IE and Chrome, this can be done using group policy. I have written a blog post before about how to create a policy targeting IE. You can find it here.

6. Firefox needs the above URLs added to its trusted Kerberos site list to do Kerberos authentication. To do that, go to the Firefox browser > type about:config in the address bar > in the list, look for network.negotiate-auth.trusted-uris > right-click and select Modify > type "https://autologon.microsoftazuread-sso.com, https://aadg.windows.net.nsatc.net" and click OK.

7. If it's macOS, the device needs to be joined to AD. More details can be found here.

Configure Azure AD Seamless SSO
 
Configuration of this feature is straightforward; basically, it’s just ticking one check box.
 
If it's a fresh Azure AD Connect installation, select the Customize option under express settings.
 
sso4
 
Then on the User sign-in page, select the appropriate sign-in option and then select the Enable single sign-on option.
 
sso5
 
If you have an existing Azure AD Connect instance running, double-click on the Azure AD Connect shortcut. In the initial window, click on Configure.
 
sso6
 
On the Additional tasks page, click on Change user sign-in and then click Next.
 
sso7
 
In the next window, type the Azure AD sync account user name and password and click Next.
 
sso8
 
Then on the User sign-in page, select the Enable single sign-on option and click Next.
 
sso9
 
On the next page, enter the credentials for an on-premises domain admin account and click Next.
 
sso10
 
At the end, click on Configure to complete the process.
 
sso11
 
This completes the configuration; the next step is to verify that SSO is configured. The first thing to check is whether the computer object AZUREADSSOACCT was created in on-premises AD. You will be able to find it under the default Computers OU.
 
sso12
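Instead of browsing Active Directory Users and Computers, a quick PowerShell check works too (a sketch, assuming the RSAT AD module is installed):

Get-ADComputer -Identity AZUREADSSOACCT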
 
Then log in to the Azure portal and go to Azure Active Directory > Azure AD Connect; under the user sign-in options we can see that the Seamless single sign-on option is enabled.
 
sso13
 
This means it’s all good. The next step is to check it's working as expected. In order to do that, I log in to a corporate device with the same user I used earlier, R272845, and try to access the same app URL.
 
This time, all I needed to type was the user name, and it logged me in. Nice!
 
sso14
 
Note – before testing, make sure you have added the two Azure AD URLs to the intranet zone as mentioned in the prerequisites section.
 
Hope this information was useful, and if you have any questions feel free to contact me at rebeladm@live.com

Azure Active Directory Pass-through Authentication

When organizations want to use the same user names and passwords to log in to on-premises and cloud (Azure) workloads, there are two options. One is to sync user names and password hashes from on-premises Active Directory to Azure AD. The other option is to deploy an ADFS farm on-premises and use it to authenticate cloud-based logins, but that needs additional planning and resources. On-premises AD stores passwords as hash values (generated by a hash algorithm). They are NOT saved as clear text, and it is almost impossible to revert a hash value to the original password, even if someone has the hash value. There is a misunderstanding about this, as some people still think Azure AD password sync uses clear-text passwords. Every two minutes, the Azure AD Connect server retrieves password hashes from on-premises AD and syncs them to Azure AD on a per-user basis in chronological order. From a technical point of view, I do not see a reason why people should avoid password hash sync to Azure AD. However, there are company policies and compliance requirements which do not accept any form of identity sync to an external system, even in hash format. Azure Active Directory Pass-through Authentication was introduced by Microsoft to answer these requirements. It allows users to authenticate to cloud workloads using the same passwords they use on-premises, without syncing their password hash values to Azure AD. This feature is currently in preview, which means it’s still not supported in production environments, but it is not too early to try it in development environments.

According to Microsoft, the following can be listed as key features of Pass-through Authentication.

Users use the same passwords to sign into both on-premises and cloud-based applications.

Users spend less time talking to the IT helpdesk resolving password-related issues.

Users can complete self-service password management tasks in the cloud.

No need for complex on-premises deployments or network configuration.

Needs just a lightweight agent to be installed on-premises.

No management overhead. The agent automatically receives improvements and bug fixes.

On-premises passwords are never stored in the cloud in any form.

The agent only makes outbound connections from within your network. Therefore, there is no requirement to install the agent in a perimeter network, also known as a DMZ.

Protects your user accounts by working seamlessly with Azure AD Conditional Access policies, including Multi-Factor Authentication (MFA), and by filtering out brute force password attacks.

Additional agents can be installed on multiple on-premises servers to provide high availability of sign-in requests.

Multi-forest environments are supported if there are forest trusts between your AD forests and if name suffix routing is correctly configured.

It is a free feature, and you don't need any paid editions of Azure AD to use it.

It can be enabled via Azure AD Connect.

It protects your on-premises accounts against brute force password attacks in the cloud.

How does it work?

Let’s see how it really works. In the following example, a user is trying to access a cloud-based application (integrated with Azure) using his on-premises user name and password. This organization is using Pass-through Authentication.

pt1

1. The user accesses the application URL using his browser.

2. In order to authenticate to the application, the user is directed to the Azure Active Directory sign-in page. The user then types the user name and password and clicks the sign-in button.

3. Azure AD receives the data and encrypts the password using a public key, which is used to verify the data's authenticity. It then places it in a queue, where it waits until the pass-through agent retrieves it.

4. The on-premises pass-through agent retrieves the data from the Azure AD queue (using an outbound connection).

5. The agent decrypts the password using the private key available to it.

6. The agent validates the user name and password against on-premises Active Directory. It uses the same mechanism as ADFS.

7. On-premises AD evaluates the request and provides the response. It can be success, failure, password expired or account locked out.

8. The pass-through agent passes the response back to Azure AD.

9. Azure AD evaluates the response and passes it back to the user.

10. If the response was success, the user is allowed to access the application.

Prerequisites
 
In order to implement this feature, we need the following,
 
1. A Domain Admin / Enterprise Admin account to install and configure Azure AD Connect on-premises.
2. A Global Administrator account for the Azure subscription – in order to create a custom domain, configure AD Connect, etc.
3. On-premises servers running Windows Server 2012 R2 or later to install Azure AD Connect and the pass-through agent.
4. The latest Azure AD Connect https://www.microsoft.com/en-us/download/details.aspx?id=47594 – if you have an older Azure AD Connect version installed, you need to upgrade it to the latest version before configuring this feature.
5. Allow outbound communication to Azure via TCP ports 80 and 443 from the servers which will run Azure AD Connect and the authentication agents. You can find Azure datacenter IP ranges at https://www.microsoft.com/en-us/download/details.aspx?id=41653
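Before running the wizard, a simple outbound connectivity check from the Azure AD Connect server can save troubleshooting later. A minimal sketch testing port 443 to the Azure AD sign-in endpoint:

# TcpTestSucceeded should be True
Test-NetConnection -ComputerName login.microsoftonline.com -Port 443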
 

Configure Azure Active Directory Pass-through Authentication
 
Once we have all the prerequisites ready, we can look into the configuration. If you are running Azure AD Connect for the first time, make sure to use the custom method.
 
pt1-1
 
Then in the User sign-in options, select Pass-through authentication and continue.
 
pt1-2
 
If you already have it running on a server, first run Azure AD Connect as administrator, then click on Configure.
 
pt2
 
Then on the next page, select the Change user sign-in option and click Next.
 
pt3
 
In the next window, type the Azure AD sync account login details and then click Next.
 
pt4
 
In the next window, select Pass-through authentication and click Next.
 
pt5
 
Note – If you have the Azure AD App Proxy Connector installed on the same Azure AD Connect server, you will receive an error saying that pass-through authentication cannot be configured on this machine because an Azure AD Connect agent is already installed. To fix it, uninstall the Azure AD App Proxy Connector and then reconfigure AD Connect. After that, you can reinstall the Azure AD App Proxy Connector.
 
Once it finishes the configuration, click on Configure to complete the process.
 
pt6
 
Once the process is completed, log in to the Azure portal and go to Azure Active Directory > Azure AD Connect. There we can see that Pass-through authentication is enabled.
 
pt7
 
If you click on it, it shows the status of the connected agents.
 
pt8
 
At this stage, users from on-premises should be able to sign in to their cloud applications using pass-through authentication.
 
In order to add high availability, we can install the agent on multiple domain-joined servers. It can be downloaded from the Pass-through authentication page.
 
pt9
 
This completes the implementation of pass-through authentication; hope this post was useful. If you have any questions, feel free to contact me at rebeladm@live.com