Non-Authoritative and Authoritative SYSVOL Restore (DFS Replication)

Healthy SYSVOL replication is vital for every Active Directory infrastructure. When there are SYSVOL replication issues, you may notice:

1. Users and systems are not applying their group policy settings properly. 

2. New group policies are not applying to certain users and systems. 

3. Group policy object counts differ between domain controllers (inside the SYSVOL folders). 

4. Logon scripts are not processing correctly.

Also, at the same time, if you look into Event Viewer you may be able to find events such as the following.

Event 2213 – The DFS Replication service stopped replication on volume C:. This occurs when a DFSR JET database is not shut down cleanly and Auto Recovery is disabled. To resolve this issue, back up the files in the affected replicated folders, and then use the ResumeReplication WMI method to resume replication.

Recovery Steps

1. Back up the files in all replicated folders on the volume. Failure to do so may result in data loss due to unexpected conflict resolution during the recovery of the replicated folders.

2. To resume the replication for this volume, use the WMI method ResumeReplication of the DfsrVolumeConfig class. For example, from an elevated command prompt, type the following command:

wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig where volumeGuid="xxxxxxxx" call ResumeReplication

Event 5002 – The DFS Replication service encountered an error communicating with partner <FQDN> for replication group Domain System Volume.

Event 5008 – The DFS Replication service failed to communicate with partner <FQDN> for replication group Home-Replication. This error can occur if the host is unreachable, or if the DFS Replication service is not running on the server.

Event 5014 – The DFS Replication service is stopping communication with partner <FQDN> for replication group Domain System Volume due to an error. The service will retry the connection periodically.

Some of these errors can be fixed with a simple server reboot or by running the commands described in the error (e.g. the event 2213 description), but if the issue keeps recurring we need to do a Non-Authoritative or Authoritative SYSVOL restore.

Non-Authoritative Restore 

If it's only one or a few domain controllers (less than 50%) which have replication issues at a given time, we can do a non-authoritative restore. In that scenario, the system will replicate the SYSVOL from the PDC.

Authoritative Restore

If more than 50% of domain controllers have SYSVOL replication issues, it is possible that the entire SYSVOL got corrupted. In such a scenario, we need to go for an Authoritative Restore. In this process, first we need to restore SYSVOL from backup to the PDC, and then replicate it over or force all the domain controllers to update their SYSVOL copy from the copy on the PDC.

SYSVOL can replicate using FRS too. FRS is deprecated after Windows Server 2008, but if you migrated from an older Active Directory environment you may still have FRS for SYSVOL replication. It also supports Non-Authoritative and Authoritative restores, but in this demo I am going to talk only about SYSVOL with DFS replication.

Non-Authoritative DFS Replication 

In order to perform a non-authoritative replication,

1) Backup the existing SYSVOL – This can be done by copying the SYSVOL folder from the domain controller which has DFS replication issues into a secure location. 

2) Log in to Domain Controller as Domain Admin/Enterprise Admin

3) Launch ADSIEDIT.MSC tool and connect to Default Naming Context


4) Browse to DC=domain,DC=local > OU=Domain Controllers > CN=(DC NAME) > CN=DFSR-LocalSettings > Domain System Volume > SYSVOL Subscription

5) Change value of attribute msDFSR-Enabled = FALSE


6) Force the AD replication using,

repadmin /syncall /AdP

7) Run the following to install the DFS Management Tools (unless they are already installed), 

Add-WindowsFeature RSAT-DFS-Mgmt-Con

8) Run the following command to update the DFSR global state,

dfsrdiag PollAD

9) Search for the event 4114 to confirm SYSVOL replication is disabled. 

Get-EventLog -Log "DFS Replication" | where {$_.eventID -eq 4114} | fl

10) Change the attribute value back to msDFSR-Enabled=TRUE (step 5)

11) Force the AD replication as in step 6

12) Update the DFSR global state by running the command in step 8

13) Search for events 4614 and 4604 to confirm successful non-authoritative synchronization. 


All these commands should be run from the domain controllers set as non-authoritative. 

Authoritative DFS Replication 

In order to initiate an authoritative DFS replication,

1) Log in to PDC FSMO role holder as Domain Administrator or Enterprise Administrator

2) Stop the DFS Replication service (recommended on all the domain controllers)

3) Launch ADSIEDIT.MSC tool and connect to Default Naming Context

4) Browse to DC=domain,DC=local > OU=Domain Controllers > CN=(DC NAME) > CN=DFSR-LocalSettings > Domain System Volume > SYSVOL Subscription

5) Update the given attribute values as follows, 

msDFSR-Enabled=FALSE

msDFSR-options=1


6) Modify the following attribute on ALL other domain controllers.

msDFSR-Enabled=FALSE

7) Force the AD replication using,

repadmin /syncall /AdP

8) Start the DFS Replication service on the PDC

9) Search for the event 4114 to verify SYSVOL replication is disabled.

10) Change the following value, which was set in step 5,

msDFSR-Enabled=TRUE

11) Force the AD replication using,

repadmin /syncall /AdP

12) Run the following command to update the DFSR global state,

dfsrdiag PollAD

13) Search for the event 4602 and verify the successful SYSVOL replication. 

14) Start the DFS Replication service on all other domain controllers

15) Search for the event 4114 to verify SYSVOL replication is disabled.

16) Change the following value, which was set in step 6. This needs to be done on ALL domain controllers. 

msDFSR-Enabled=TRUE

17) Run the following command to update the DFSR global state,

dfsrdiag PollAD

18) Search for events 4614 and 4604 to confirm successful authoritative synchronization. 

Please note you do not need to run an authoritative restore for every DFS replication issue. It should be the last option.

Hope this was useful and if you have any questions feel free to contact me on rebeladm@live.com 


Step-by-Step Guide to Start with Azure CLI 2.0

There are many ways to create, manage, and remove resources from an Azure subscription. Users who prefer a GUI have the Azure classic portal and Azure Resource Manager. PowerShell lovers have the Azure PowerShell module. Apart from that, there are other methods such as Terraform (I have already written articles about it; if you want to know more, search for "terraform" on the blog) which simplify Azure resource management. Azure CLI is also a command-line tool introduced by Microsoft which can be used to manage Azure resources. It can be used from multiple platforms such as Linux, Mac OS and Windows. This blog post explains how we can configure a Windows system to use Azure CLI. 

There are two ways to connect to Azure CLI. 

Using Azure Portal

Azure also allows using a web-based version of Azure CLI under the name "Cloud Shell". It can easily be opened through the browser. In order to access it,

1) Log in to Azure Portal

2) Click on Cloud Shell icon on top right-hand side


3) When you do this for the first time it will ask to create an Azure file share. You can select the relevant subscription and click on "Create Storage".


4) Once the storage is created, it will load up shell access through the browser. 


Using Windows Computer

We can also use Azure CLI from a local computer. As I said, it is not only supported on Windows systems; it is also supported on Linux and Mac OS. In this demo, I am going to demonstrate how to configure it on a Windows system. 

Azure CLI uses Python, so our configuration will be based on a Python installation. 

1) Log in to computer as an administrator

2) Go to https://www.python.org/downloads/ and download python


3) Once the file is downloaded, run it as administrator to install. During the installation, make sure to select the "Add Python 3.6 to PATH" option. This will allow you to use python commands without navigating to the installation location. 


4) Once the installation is completed, open the Windows command line and type python --version. This will confirm the Python installation. (It is recommended to open the command line as administrator; otherwise it will say the PATH records are not added, as we ran the installation as Administrator.) 


5) The next step is to install the Azure CLI libraries. In order to do that, run pip install --user azure-cli


6) Once it is completed, move to C:\Users\[Admin User]\AppData\Roaming\Python\Python36\Scripts and run the command az. This will verify the Azure CLI integration. If it needs to run from anywhere, add it to the PATH. 


7) Now let's try to log in to Azure using Azure CLI. In order to do that, we can use az login -u azureusername -p password. The problem with this method is that the password needs to be typed in as clear text. Instead of that, we can use a more secure browser-based login. To do that, type az login in the command line. 

Then it gives a link and a code to use for authentication. 


8) Once it is open in the browser, it asks for the verification code. Once it is entered, click on Continue.


On the next page, it verifies the Azure login and then confirms the connection.


When we go back to Azure CLI, we can see it has successfully logged in and is showing the subscription data. 


This confirms the successful connection to Azure using Azure CLI. This is the end of this post; in the next post let's see how we can add, manage and remove Azure resources via Azure CLI. Hope this was helpful and if you have any questions feel free to contact me on rebeladm@live.com


Setting up Azure Virtual Machines with Terraform

In my previous article about Terraform, I explained what Terraform is and what it can do. I also explained how to set it up and how we can use it with Azure to simplify infrastructure configuration. If you didn't read it before, you can view it using this link  

In this post, we are going to look further in to Azure infrastructure setup using terraform.

Before that, let's look into a sample configuration of an Azure resource and see how the syntax is used.

resource "azurerm_resource_group" "test" {

  name     = "acctestrg"

  location = "West US"

}

 resource "azurerm_virtual_network" "test" {

  name                = "acctvn"

  address_space       = ["10.0.0.0/16"]

  location            = "West US"

  resource_group_name = "${azurerm_resource_group.test.name}"

}

The above code creates an Azure resource group and an Azure virtual network. In the code, azurerm_resource_group and azurerm_virtual_network define the Azure resource types. The text test defines the name of the resource instance. This is not the Azure resource group or virtual network name; it is the instance name, so if you have another resource group its instance can be test2. Actual resource names are defined using the name attribute. So, in the above code, the actual resource name for the resource group is acctestrg and for the virtual network it is acctvn.

In the above example, the new virtual network is placed under the acctestrg resource group. In the code it is defined using,

resource_group_name = "${azurerm_resource_group.test.name}"

In there, azurerm_resource_group.test refers to the related resource group instance; in our example, it is test. Then .name calls for the value of the name attribute of that particular resource group.
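This interpolation can be illustrated with a toy resolver. This is only a sketch of the concept, not how Terraform is actually implemented, and the resource table below is a made-up example:

```python
import re

# Made-up table of (resource type, instance name) -> attributes,
# mirroring the azurerm_resource_group example above.
resources = {
    ("azurerm_resource_group", "test"): {"name": "acctestrg", "location": "West US"},
}

def resolve(value, resources):
    """Replace ${type.instance.attr} tokens with the matching attribute value."""
    def repl(match):
        rtype, instance, attr = match.group(1).split(".")
        return resources[(rtype, instance)][attr]
    return re.sub(r"\$\{([^}]+)\}", repl, value)

print(resolve("${azurerm_resource_group.test.name}", resources))  # acctestrg
```

The key point is that the reference names an instance (test), not the Azure resource name (acctestrg); the attribute lookup happens through the instance.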

In the plan stage, Terraform creates the execution plan. It does not process the code top to bottom; it evaluates the code and then builds the plan logically, so it no longer considers the order of the resources in the file. Let's try it with an example, 

resource "azurerm_virtual_network" "test" {

  name                = "acctvn"

  address_space       = ["10.0.0.0/16"]

  location            = "West US"

  resource_group_name = "${azurerm_resource_group.test.name}"

}

 resource "azurerm_resource_group" "test2" {

  name     = "acctestrg2"

  location = "West US"

}

 resource "azurerm_virtual_network" "test2" {

  name                = "acctvn2"

  address_space       = ["11.0.0.0/16"]

  location            = "West US"

  resource_group_name = "${azurerm_resource_group.test2.name}"

}

 resource "azurerm_resource_group" "test" {

  name     = "acctestrg"

  location = "West US"

}

In the above example, I am creating two resource groups and two virtual networks. If you look into the highlighted sections, I placed the code related to the virtual networks before the code creating the resource groups. But when I run terraform plan, it creates the execution plan in the correct order.
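This ordering behaviour can be sketched as a dependency sort. The following is a toy illustration of the idea only (Terraform's real implementation builds a full resource graph), using the instance names from the example above:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# resource -> set of resources it references (via "${...}" interpolation)
deps = {
    "azurerm_virtual_network.test":  {"azurerm_resource_group.test"},
    "azurerm_virtual_network.test2": {"azurerm_resource_group.test2"},
    "azurerm_resource_group.test":   set(),
    "azurerm_resource_group.test2":  set(),
}

# A topological sort puts every resource after the ones it depends on,
# regardless of the order they were written in the file.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Each resource group ends up before the virtual network that references it, which is exactly what terraform plan reports.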


And once it is executed, it creates the expected resources.


As next step on demo, let’s see how we can create virtual machines in Azure using terraform.

resource "azurerm_virtual_machine" "testvm" {

  name                  = "acctvm"

  location              = "West US"

  resource_group_name   = "${azurerm_resource_group.test.name}"

  network_interface_ids = ["${azurerm_network_interface.test.id}"]

  vm_size               = "Standard_A0"

}

The above code is an example of creating a VM in Azure. In the code sample, azurerm_virtual_machine defines the resource type. testvm is the resource instance name and acctvm is the name of the virtual machine. According to the code, the resource will be deployed in the West US region. resource_group_name defines the resource group it belongs to. network_interface_ids defines the network interface ids for the VM. vm_size defines the Azure VM size. The size list for a region can be listed using the following Azure CLI command.

az vm list-sizes --location westus

This will list all the available VM sizes in the West US region.


An Azure VM also needs other components such as a virtual network, storage, an operating system and so on. Let's see how we can add these to the configuration.

Earlier in the post, I shared samples for creating a resource group and a virtual network. The next step is to add a subnet under the virtual network.

resource "azurerm_subnet" "sub1" {

  name                 = "acctsub1"

  resource_group_name  = "${azurerm_resource_group.test.name}"

  virtual_network_name = "${azurerm_virtual_network.test.name}"

  address_prefix       = "10.0.2.0/24"

}

In the above, I am creating the subnet 10.0.2.0/24 under the virtual network and resource group I already have. In the code, azurerm_subnet defines the resource type. sub1 is the instance name and acctsub1 is the subnet name. resource_group_name defines which resource group it belongs to. virtual_network_name defines which Azure virtual network it is associated with. address_prefix specifies the subnet value.

Now we have a subnet associated with the network. We also need a public IP address in order to connect to the VM from the internet. 

resource "azurerm_public_ip" "pub1" {

  name                         = "pub1"

  location                     = "West US"

  resource_group_name          = "${azurerm_resource_group.test.name}"

  public_ip_address_allocation = "dynamic"

}

According to the above, I am creating a public IP instance called pub1 under the same resource group. Its IP allocation is set to Dynamic. If needed, it can be static as well.

Next step is to create network interface for the VM.

resource "azurerm_network_interface" "ni1" {

  name                = "acctni1"

  location            = "West US"

  resource_group_name = "${azurerm_resource_group.test.name}"

ip_configuration {

    name                          = "lan1"

    subnet_id                     = "${azurerm_subnet.test.id}"

   private_ip_address_allocation = "dynamic"

   public_ip_address_id  = "${azurerm_public_ip.pub1.id}"

  }

}

In the above, azurerm_network_interface is the resource type for the network interface. The interface name we are creating is acctni1. The second part of the code, which starts with ip_configuration, defines the IP configuration for the network interface. subnet_id defines the subnet it belongs to. private_ip_address_allocation defines the IP allocation method; it can be Dynamic or Static. public_ip_address_id associates the public IP created in the previous step. If this is not done, you will not be able to connect to the VM remotely once it is deployed.    

Next thing we need for the VM is storage. Let’s start with creating a Storage Account 

resource "azurerm_storage_account" "asa1" {

  name                = "accsa"

  resource_group_name = "${azurerm_resource_group.test.name}"

  location            = "westus"

  account_type        = "Standard_LRS"

 }

azurerm_storage_account is the resource type and accsa is the name for the account. account_type defines the storage account type; it can be Standard_LRS, Standard_GRS, Standard_RAGRS, Standard_ZRS, or Premium_LRS. More info about these account types can be found at https://docs.microsoft.com/en-us/azure/storage/storage-introduction .

As the next step, we can create a new storage container under the storage account.

resource "azurerm_storage_container" "con1" {

  name                  = "vhds"

  resource_group_name   = "${azurerm_resource_group.test.name}"

  storage_account_name  = "${azurerm_storage_account.test.name}"

  container_access_type = "private"

}

In the above, azurerm_storage_container is the resource type and its name is vhds. resource_group_name defines the resource group it belongs to and storage_account_name defines the storage account it belongs to. container_access_type can be private, blob or container. More info about these container types can be found at https://docs.microsoft.com/en-us/azure/storage/storage-introduction


By now we have most of the resources ready for the VM. The next step is to define the image for the VM.

  storage_image_reference {

    publisher = "MicrosoftWindowsServer"

    offer     = "WindowsServer"

    sku       = "2016-Datacenter"

    version   = "latest"

  }

In the above, I am using Windows Server 2016 Datacenter as the image for the VM. Publisher, offer, sku and version info need to be provided in order to select the correct image. For Windows servers, you can find this info at https://docs.microsoft.com/en-us/azure/virtual-machines/windows/cli-ps-findimage. For Linux, this info is available at https://docs.microsoft.com/en-us/azure/virtual-machines/linux/cli-ps-findimage

The next step is to add the disks,

storage_os_disk {

    name          = "myosdisk1"

    vhd_uri       = "${azurerm_storage_account.test.primary_blob_endpoint}${azurerm_storage_container.test.name}/myosdisk1.vhd"

    caching       = "ReadWrite"

    create_option = "FromImage"

  }

  storage_data_disk {

    name          = "datadisk0"

    vhd_uri       = "${azurerm_storage_account.test.primary_blob_endpoint}${azurerm_storage_container.test.name}/datadisk0.vhd"

    disk_size_gb  = "60"

    create_option = "Empty"

    lun           = 0

  }

The above creates two disks: one for the OS and one for data. vhd_uri defines the path for the VHD, which is saved under the storage account created earlier.
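The vhd_uri value is simply the storage account's blob endpoint, the container name and the file name concatenated. A minimal sketch, assuming an illustrative endpoint value (the real endpoint comes from the storage account attribute):

```python
# Illustrative values; in Terraform these come from
# azurerm_storage_account.*.primary_blob_endpoint and the container name.
primary_blob_endpoint = "https://accsa.blob.core.windows.net/"
container_name = "vhds"

vhd_uri = f"{primary_blob_endpoint}{container_name}/myosdisk1.vhd"
print(vhd_uri)  # https://accsa.blob.core.windows.net/vhds/myosdisk1.vhd
```

This is why the HCL above places the two interpolations back to back with no separator: the endpoint already ends with a slash.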

Last but not least, we need to define the OS configuration data such as the hostname and administrator account details.

  os_profile {

    computer_name  = "rebelpro1"

    admin_username = "rebeladmin"

    admin_password = "Password1234!"

  }

In the above, computer_name specifies the hostname of the VM, admin_username specifies the local administrator name and admin_password specifies the local administrator password.

Now we have all the components ready to deploy a new VM. Some of the components only need to be created once; for example, virtual networks, subnets and storage accounts do not need to be created for each VM unless there is a valid requirement. Let's put all of these together into one script so it makes more sense. 

# Configure the Microsoft Azure Provider

provider "azurerm" {

  subscription_id = "d7xxxxxxxxxxxxxxxxxxxxxx"

  client_id       = "d9xxxxxxxxxxxxxxxxxxxxxx"

  client_secret   = "f1xxxxxxxxxxxxxxxxxxxxxx"

  tenant_id       = "05xxxxxxxxxxxxxxxxxxxxxx"

}

resource "azurerm_resource_group" "rg1" {

  name     = "acctestrg"

  location = "West US"

}

resource "azurerm_virtual_network" "vn1" {

  name                = "vn1"

  address_space       = ["10.0.0.0/16"]

  location            = "West US"

  resource_group_name = "${azurerm_resource_group.rg1.name}"

}

resource "azurerm_public_ip" "pub1" {

  name                         = "pub1"

  location                     = "West US"

  resource_group_name          = "${azurerm_resource_group.rg1.name}"

  public_ip_address_allocation = "dynamic"

}

resource "azurerm_subnet" "sub1" {

  name                 = "sub1"

  resource_group_name  = "${azurerm_resource_group.rg1.name}"

  virtual_network_name = "${azurerm_virtual_network.vn1.name}"

  address_prefix       = "10.0.2.0/24"

}

resource "azurerm_network_interface" "ni1" {

  name                = "ni1"

  location            = "West US"

  resource_group_name = "${azurerm_resource_group.rg1.name}"

 

  ip_configuration {

    name                          = "config1"

    subnet_id                     = "${azurerm_subnet.sub1.id}"

    private_ip_address_allocation = "dynamic"

    public_ip_address_id  = "${azurerm_public_ip.pub1.id}"

  }

}

 resource "azurerm_storage_account" "storevm123" {

  name                = "storevm123"

  resource_group_name = "${azurerm_resource_group.rg1.name}"

  location            = "westus"

  account_type        = "Standard_LRS"

 

  tags {

    environment = "demo"

  }

}

 resource "azurerm_storage_container" "cont1" {

  name                  = "vhds"

  resource_group_name   = "${azurerm_resource_group.rg1.name}"

  storage_account_name  = "${azurerm_storage_account.storevm123.name}"

  container_access_type = "private"

}

 resource "azurerm_virtual_machine" "vm1" {

  name                  = "vm1"

  location              = "West US"

  resource_group_name   = "${azurerm_resource_group.rg1.name}"

  network_interface_ids = ["${azurerm_network_interface.ni1.id}"]

  vm_size               = "Standard_DS2_v2"

 

   storage_image_reference {

    publisher = "MicrosoftWindowsServer"

    offer     = "WindowsServer"

    sku       = "2016-Datacenter"

    version   = "latest"

  }

   storage_os_disk {

    name          = "osdisk1"

    vhd_uri       = "${azurerm_storage_account.storevm123.primary_blob_endpoint}${azurerm_storage_container.cont1.name}/osdisk1.vhd"

    caching       = "ReadWrite"

    create_option = "FromImage"

  }

   storage_data_disk {

    name          = "datadisk1"

    vhd_uri       = "${azurerm_storage_account.storevm123.primary_blob_endpoint}${azurerm_storage_container.cont1.name}/datadisk1.vhd"

    disk_size_gb  = "60"

    create_option = "Empty"

    lun           = 0

  }

     os_profile {

    computer_name  = "rebelpro1"

    admin_username = "rebeladmin"

    admin_password = "Password1234!"

  }

   tags {

    environment = "demo"

  }

}

Let's verify the resources using the Azure portal.

As we can see, it has created all the expected resources under the resource group acctestrg.


Also, we can see it has created the VM as expected.


In this post, we went through the process of creating an Azure VM and related components using Terraform. Hope this was useful and if you have any questions feel free to contact me on rebeladm@live.com


Azure resource setup simplified with terraform

This week I was testing Terraform, a simple tool which can be used to automate Azure resource deployment.

It is easier to explain Terraform with a real-world example. I am developing a web application, and as my resource provider I am using Azure. My first requirement is to set up a development environment. For that I need at least one web server, one database server and connectivity between these two servers. To set up the environment, I log in to the portal and then set up a resource group, a storage account and a virtual network. After that I start to build the servers, and once they are ready I set up the web server application and the database server. So even if it looks straightforward, it takes time. Later in the development process, I also require a test platform where I can try my application with different operating systems. I also like to test the application by adding more components, such as load balancers and additional web servers, to the environment. These testing environments are temporary, so each time I need to set up the environment with all the different components and, once the testing process completes, destroy it. When I need to sell it as a solution, I face another challenge, as not everyone wants to run it on Azure. Whether it is another service provider or an on-premises environment, the application should be tested in a similar environment before being sold as a solution, and each provider has its own way of setting things up.

On this given scenario I faced few challenges,

Setting up the required resources for the application takes time, as each component needs to be configured in a certain way. 

To set up integration between components, I need to log in to different systems and adjust settings. A single mistake can cause hours of disruption to the project. 

Due to the complexity of setting up environments, I may end up keeping test environments running longer than needed, which increases my development cost. 

How Terraform can help?

Using Terraform I can deploy the whole environment by executing a single script file. This script is basically a set of instructions explaining how to set up each and every component. If I need to set up a new server in Azure from scratch, there is a procedure for it: we need to set up the relevant resource groups, network components and storage accounts before we start to build the server. Terraform itself understands these dependencies and builds the environment accordingly. This also helps to standardize the resource setup process.

Terraform can also be used to configure application settings as part of the environment setup. That means we do not need to log in to systems to make initial software configurations. This prevents human errors.

Once we set up an environment using Terraform, we can change it or destroy it using a single command. As an example, let's assume we set up a test environment with two web servers (using Terraform) and I have a new requirement to add a new web server to the same environment. To do that, all I need to do is modify the same script and add a new entry for the new web server. Once I execute it, it will automatically detect the current environment and only add the missing components. Destroying is again a single command, and it will remove each component in the proper order; for example, it understands that before removing a resource group, it needs to remove all the other components under it.

As the setup and destroy process is easy with Terraform, we do not need to keep running non-critical resources. As an example, if I need to give a POC or show a demo to a customer, all I need to do is execute the pre-created Terraform script when needed and destroy it afterwards.

Terraform supports different service providers. It is not only for cloud-based solutions; it also supports on-premises solutions. As an example, Terraform can be used with Azure Pack and Azure Stack to do the same thing in an on-premises Hyper-V environment. It also supports SaaS application configurations. The supported provider list can be found here: https://www.terraform.io/docs/providers/index.html

Terraform mainly has three functions.

Plan – Before executing the configuration, it should go through the planning stage. Here Terraform builds the execution plan based on the configuration provided by the engineer. It explains what will be created when the configuration is executed.

Apply – In this phase it executes the execution plan created in the "plan" stage. It also reports back once it has completed the resource setup. If there were errors, it explains them in detail.

Destroy – This is basically to undo the execution plan. By calling this, we can destroy all the resources created by a particular Terraform configuration file.

I think it’s enough with the theory, let’s see why it’s so cool.

In my demo, I am going to show how to set up Terraform and how to use it to create resources in Azure. 

Setup Terraform

In my demo, I am going to use Windows 10 as the system. Terraform is also supported on Linux, Mac and Solaris systems.

1) Go to this link and download the file relevant to the Windows architecture. 

2) Then create a folder and move the downloaded terraform.exe file into it. 

3) The next step is to set up the binary path for Terraform so the system knows the terraform commands. To do that, run the PowerShell console as Administrator and then type

$env:Path += ";C:\terraform"

In here C:\terraform is the folder where I saved the terraform.exe


4) As next step, we can confirm terraform setup by running terraform in the PowerShell console. 

 
This confirms the Terraform setup, and the next step is to configure the Azure side to support Terraform. 
 
Retrieve Required info from Azure

Terraform uses the Azure ARM API to connect to and manage Azure resources. To connect to Azure, Terraform needs the following Azure ARM environment variables, provided using the configuration file.

ARM_SUBSCRIPTION_ID

ARM_CLIENT_ID

ARM_CLIENT_SECRET

ARM_TENANT_ID

To get ARM_CLIENT_ID, ARM_CLIENT_SECRET and ARM_TENANT_ID we need to create a Service Principal in Azure.

To do that, we can use Azure Cloud Shell.  

1) Log in to Azure Portal ( https://portal.azure.com ) as a Global Administrator

2) Click on Cloud Shell Button. 


3) Then it will open the shell in the same window. If it's your first time using this feature, it will ask you to create a storage account. 


4) The next step is to find the Subscription ID. To do that, type the following and press Enter. 

az account list

Then it will provide an output like the following. In there, "id" represents the Subscription ID we require. 

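Since az commands return JSON, the subscription id can also be extracted programmatically rather than read off the screen. A minimal sketch, using made-up sample output in place of a real `az account list` response:

```python
import json

# Hypothetical sample of what `az account list` returns (values are made up).
sample = '''
[
  {
    "id": "d7xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "isDefault": true,
    "name": "Pay-As-You-Go",
    "state": "Enabled"
  }
]
'''

accounts = json.loads(sample)
# Pick the "id" field of the default subscription.
subscription_id = next(a["id"] for a in accounts if a["isDefault"])
print(subscription_id)
```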

5) Next step is to create the Service Principal. In order to do that use,

az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/xxxxxxxxxxxx"

In the above command, xxxxxxxxxxxx should be replaced with the Subscription ID we found in the previous step. 

Then it gives an output similar to the following.


In the above output,

appId is equal to the Client ID.

password is equal to the Client Secret.

tenant is equal to the Tenant ID.
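The mapping between the service principal output and the values Terraform needs can be sketched as follows. The values are placeholders, and the field names follow the az output described above:

```python
# Hypothetical output from `az ad sp create-for-rbac` (placeholder values)
sp_output = {
    "appId": "d9xxxxxx",
    "password": "f1xxxxxx",
    "tenant": "05xxxxxx",
}

# Map the fields to the names the Terraform azurerm provider expects
provider_config = {
    "client_id": sp_output["appId"],
    "client_secret": sp_output["password"],
    "tenant_id": sp_output["tenant"],
}
print(provider_config["client_id"])
```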

Now we have all the information we need in order to connect to Azure through Terraform. 

Create first configuration

The next step is to create the first Terraform configuration file. The file uses the extension .tf. You can use your favorite text editor to create the file. I am using Visual Studio Code, which can be downloaded from https://code.visualstudio.com/

The file does not need to be saved in the same folder as your terraform.exe file. However, you need to navigate to that folder before executing the terraform commands. In my demo, it is C:\terraform

My first configuration is following

 

# Configure the Microsoft Azure Provider

provider "azurerm" {

  subscription_id = "xxxxxxx"

  client_id       = "xxxxxxx"

  client_secret   = "xxxxxxx"

  tenant_id       = "xxxxxxx"

}

resource "azurerm_resource_group" "myterrapro1" {

  name     = "myterrapro1"

  location = "West US"

}

resource "azurerm_virtual_network" "myterrapro1network" {

  name                = "myterrapro1vn"

  address_space       = ["10.11.12.0/24"]

  location            = "West US"

  resource_group_name = "${azurerm_resource_group.myterrapro1.name}"

}

In the above code,

provider "azurerm"

defines the service provider as Azure Resource Manager (ARM).

subscription_id = "xxxxxxx"
client_id       = "xxxxxxx"
client_secret   = "xxxxxxx"
tenant_id       = "xxxxxxx"

The above values should be replaced with the values we collected from Azure.

resource "azurerm_resource_group" "myterrapro1" {
  name     = "myterrapro1"
  location = "West US"
}

This block creates a new Azure resource group called myterrapro1 in the West US region.

resource "azurerm_virtual_network" "myterrapro1network" {
  name                = "myterrapro1vn"
  address_space       = ["10.11.12.0/24"]
  location            = "West US"
  resource_group_name = "${azurerm_resource_group.myterrapro1.name}"
}

The next section creates an Azure virtual network called myterrapro1vn with the address space 10.11.12.0/24. It is created under the resource group myterrapro1, also in the West US region.
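For illustration, a subnet could be added to this virtual network with one more resource block in the same style. The subnet name and address prefix below are made up for this sketch, and the syntax follows the older 0.x azurerm provider used in this post:

```hcl
resource "azurerm_subnet" "myterrapro1subnet" {
  name                 = "myterrapro1sub1"
  resource_group_name  = "${azurerm_resource_group.myterrapro1.name}"
  virtual_network_name = "${azurerm_virtual_network.myterrapro1network.name}"
  address_prefix       = "10.11.12.0/25"
}
```

Referencing the other resources by name like this also tells Terraform the order in which the resources must be created.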

Once the script is ready, save it with the .tf extension.

Then launch PowerShell as Administrator and change the directory to where the script file is saved. In my demo, it is C:\terraform. Note that on newer Terraform releases (0.10 and later) you may first need to run terraform init so the azurerm provider plugin is downloaded. After that, type the following:

terraform plan

This is the step where Terraform builds the execution plan.

terra7

This output shows what will happen when the execution plan is applied.

To apply the plan, type the following and press Enter.

terraform apply

Once the process starts, it will also show the progress of setting up the resources.

terra8

According to the above image, it has successfully created the resources. We can confirm this using the Azure portal.

terra9

terra10

Now we know how to create resources. Let's see how to destroy the resources we created.

In order to do that, you do not need to change anything in the script. All you need to do is issue the command:

terraform destroy

Once we run the command, it will ask for confirmation. Type yes and press Enter to proceed with the destroy process.

terra11

As we can see, it removed all the resources in the configuration file. Once it is done, we can also log in to the Azure portal and confirm.

Isn’t this cool?

In this blog post I explained what Terraform is and how we can use it to simplify resource setup in Azure. In the next blog post I will share more examples to show its capabilities.

Hope this was useful and if you have any questions feel free to contact me on rebeladm@live.com


Protect cloud Identities with Azure Active Directory Identity Protection

Symantec released their latest Internet Security Threat Report in early June. This report includes data about infrastructure threats for the year 2016. It says that in 2016, nearly 1.1 billion identities were exposed. Over the last 8 years, the total number of breached identities is around 7.1 billion, which is almost equal to the total world population.

aip1

In an identity infrastructure breach, most of the time adversaries get into the system using a legitimate user name and password belonging to an identity in that infrastructure. This initial breach can be the result of malware, phishing or a pass-the-hash attack. If it's a "privileged" account, it makes it easier for adversaries to gain control over the identity infrastructure. But that is not a must; all they need is some sort of a breach. Recent reports show that after an initial breach, it takes less than 48 hours to gain full control over the identity infrastructure.

When we look into it from the identity infrastructure end, if someone provides a legitimate user name and password, the system allows access. This can be the real user or an adversary, but by default the system will not know the difference. In a local AD infrastructure, solutions like Microsoft Advanced Threat Analytics and Microsoft Identity Manager help to identify and prevent inauthentic use of identities. Azure Active Directory Identity Protection is a feature that comes with Azure AD Premium, which can be used to protect your workloads from inauthentic use of cloud identities. It mainly has the following benefits.

Detect vulnerabilities which affect the cloud identities using adaptive machine learning algorithms and heuristics.

Issue alerts and reports to detect/identify potential identity threats and allow administrators to take actions accordingly. 

Based on policies, force automated actions such as Block Access, MFA authentication or Password reset when it detects a suspicious login attempt. 

According to Microsoft (https://docs.microsoft.com/en-us/azure/active-directory/active-directory-identityprotection), Azure AD Identity Protection has the following capabilities.

Detecting vulnerabilities and risky accounts:

Providing custom recommendations to improve overall security posture by highlighting vulnerabilities

Calculating sign-in risk levels

Calculating user risk levels

Investigating risk events:

Sending notifications for risk events

Investigating risk events using relevant and contextual information

Providing basic workflows to track investigations

Providing easy access to remediation actions such as password reset

Risk-based conditional access policies:

Policy to mitigate risky sign-ins by blocking sign-ins or requiring multi-factor authentication challenges.

Policy to block or secure risky user accounts

Policy to require users to register for multi-factor authentication

Azure Active Directory Identity Protection detects and reports the following as vulnerabilities:

User logins without Multi-Factor Authentication

Use of unmanaged cloud apps – These are applications which are not managed using Azure Active Directory.

Risk events detected by Azure Privileged Identity Management – This is another service which can be used to manage and monitor privileged accounts associated with Azure Active Directory, Office 365, EMS etc.

More info about these events can be found here.

Azure Active Directory Identity Protection also reports on risk events. These are Azure Active Directory based events, and they are also used to create policies in Azure Active Directory Identity Protection. It detects six types of risk events:
 
Users with leaked credentials
Sign-ins from anonymous IP addresses
Impossible travel to atypical locations
Sign-ins from unfamiliar locations
Sign-ins from infected devices
Sign-ins from IP addresses with suspicious activity
 
More information about Azure risk events can be found here.
 
Let’s see how we can start using Azure Active Directory Identity Protection. Before we start, we need the following:
 
1) Azure Active Directory Premium P2 Subscription
2) Global Administrator account  
 
Once you have the above, log in to Azure as a Global Administrator.
 
1) Go to New | then type Azure AD Identity Protection 
 
aip2
 
2) On the Azure AD Identity Protection page, click on Create.
 
aip3
 
3) Then it loads a new page with the Azure directory details and feature details. Click on Create again to proceed.
 
aip4
 
4) Once it is done, go and search for Azure AD Identity Protection under More services. It will then show the dashboard for the feature.
 
aip5
 
5) In my demo environment, it immediately detected that I have 257 user accounts without MFA.
 
aip6
 
6) The beauty of this is that once it detects a problem, it guides you to address the issue. If I double-click on the MFA alert, I get the Multi-factor authentication registration policy settings page, where I can enforce MFA registration for users.
 
aip7
 
Under the user assignment, I can enforce it for all users or a group of users.
 
aip8
 
Under the controls, the Require Azure MFA registration option is selected by default.
 
aip9
 
Under the Review option we can evaluate the impact. 
 
aip10
 
Once the configuration is done, we can enforce the policy using the Enforce Policy option.
 
aip11
 
7) Using the Sign-in risk policy, we can define the actions the system needs to take when it detects sign-in risk for a user. In order to define the policy settings, click on Sign-in risk policy in the Azure AD Identity Protection dashboard.
 
aip12
 
8) Then it loads up the policy configuration page. 
 
aip13
 
Under Assignments we can decide which users this policy applies to. It can be either all users or a group of users. We can also exclude users from the selection.
 
aip14
 
Under Conditions, we can select the level of sign-in risk events the policy should consider. There are three options to select from: low and above, medium and above, or high.
 
aip15
 
Under the Access option we can define what the system should do once it detects risk events. It can either block access completely or allow access with additional security conditions such as requiring multi-factor authentication.
 
aip16
 
Using the Review option, we can see what kind of impact the policy will make (if there are any existing risk events).
 
Once all of this is done, use the Enforce Policy option to enforce it.
 
aip17
 
9) In a similar way, using the User risk policy, we can define policy settings to handle risky users.
 
10) We can also configure Azure AD Identity Protection to send alerts when it detects an event. To do that, go to the Alerts option in the Azure AD Identity Protection dashboard.
 
aip18
 
On the configuration page, we can define the lowest alert level to consider. Any event at that level or above will be sent out as an alert to the selected recipients.
 
aip19
 
11) Azure AD Identity Protection can also send an automated weekly email report with the events it detected. In order to configure it, go to the Weekly Digest option in the Azure AD Identity Protection dashboard.
 
aip20
 
On the configuration page, we can select which recipients it should go to.
 
aip21
 
In this article, I explained what Azure AD Identity Protection is and how we can use it to protect cloud identities. Hope this was useful and if you have any questions feel free to contact me on rebeladm@live.com


Conditional Access Policies with Azure Active Directory

When it comes to managing access to resources in an infrastructure, there are two main questions we usually ask.

  1. "Who" is the user and "what" resources?
  2. Is access allowed or denied?

Answers to the above questions are enough to define the base rules. But depending on the tools and technologies used to manage access, we will have additional questions which help us define more accurate rules. As an example, the sales manager walks up to the IT department and says "Peter needs to access the Sales folder in the file server". Based on that statement, we know the user is "Peter" and the resource is the "Sales" folder in the "File Server". We also know the user "Peter" should be "allowed" access to the folder. However, since we are going to use NTFS permissions, we know we can make the permissions more precise than that. When the sales manager said "allow" Peter to access the "Sales" folder, he didn't define whether that means "Read & Write" or "Read Only". He also didn't define whether Peter needs the same permission on all the subfolders of the "Sales" folder. Based on the answers to those questions, we can define more granular rules.

Access control to resources in an infrastructure happens at many different levels, with many different tools and technologies. The first level of control happens at the network perimeter: using firewall rules, we can handle "in" and "out" network traffic to and from the company infrastructure. If a user passes that level, access is then verified based on users and groups. After that, it comes down to applications and other resources. The problem we have as engineers is that we manage all of these separately. Let's go back to our previous example, where we only considered NTFS permissions. If "Peter" is a remote worker who connects to the internal network using Remote Desktop Services, first we need to define firewall rules to allow his connection. If multi-factor authentication is required for remote workers, I need to configure that and define rules there too. Also, when the user logs in, he will not have the same permissions he has on a company workstation, so the session host permissions need to be adjusted as well. So, even though it sounds simple, we have to deal with many different systems and rules which cannot be combined into "one".

So far, we have looked into on-premises scenarios. When it comes to the cloud, the operational model is different: we cannot apply the same tools and technologies we used to manage access on-premises. Microsoft Azure's answer for simplifying access management to workloads is "Conditional Access". This allows managing access to applications based on "conditions". When it comes to the public cloud, we are mostly allowing access to applications from networks we do not trust. Therefore, using "conditions" we can define policies which users need to comply with in order to get access to the applications.

In a Conditional Access policy, there are two main sections.

Assignments – This is where we define the conditions applying to the user environment, such as users and groups, applications, device platform, login locations etc.

Access Control – This controls access for the users and groups when they comply with the conditions specified in the Assignments section. It can either allow access or deny access.

cac1

Let’s see what conditions we can apply using conditional access policies.

Assignments 

Under the Assignments section, there are three main options which can be used to define conditions:

1) Users and Groups

2) Cloud apps

3) Conditions 

Users and Groups

Under the Users and Groups option we can define the users and groups targeted by the conditional access policy.

We can set the target to "All" or to a selected set of users and groups.

cac2

We can also explicitly select groups and exclude individuals from them.

cac3

  

Cloud Apps

  

Under the Cloud apps option we can select the applications targeted by the policy. These applications can be Azure apps or on-premises applications published via Azure Active Directory using Azure App Proxy. Similar to users and groups, we can also explicitly allow access to a large group and exclude specific entities.

cac4

Conditions

Using the options under this category, we can specify conditions related to the user's login environment. This category has four sub-categories:

1) Sign-in risk

2) Device Platforms

3) Locations

4) Client Apps

It is not required to use all of these sub-categories in every policy. By default, they are all disabled.

 

Sign-in risk

Azure Active Directory monitors user login behavior based on six types of risk events. These events are explained in detail at https://docs.microsoft.com/en-us/azure/active-directory/active-directory-reporting-risk-events#risk-level . As an example, I usually log in to Azure from IP addresses belonging to Canada. Say I log in to Azure at 8am from Toronto, and five minutes later it detects a login from Germany. In a typical scenario, that is not possible unless a remote login is used. From Azure's point of view, this will be detected as malicious activity and rated as a "Medium" risk event. In this sub-category, we can define what level of sign-in risk the policy should consider.

cac5

Note – To enable the policy, you first need to click on "Yes" under the Configure option.

cac6

Device Platforms

Device platforms are categorized based on operating system. They can be:

Android

iOS

Windows Phone

Windows

cac7

We can also explicitly allow all platforms and then exclude specific ones.

Locations

Locations are defined based on IP addresses. If the policy should only apply to "trusted" IP addresses, make sure to define the trusted IP addresses using the given option.

cac8

Client Apps

Client apps are the means by which users access the apps: web, mobile apps or desktop clients. Exchange ActiveSync is available when Exchange Online is the only cloud app selected.

cac9

Access Controls 

There are two categories which can be used to add access control conditions to the policies:

1) Grant

2) Session

Grant

In this category, we can specify whether to allow or block access. Under allow access, we can add further conditions such as:

Require multi-factor authentication

Require device to be marked as compliant

Require domain joined device

cac10

MFA

Multi-factor authentication is an additional layer of security to confirm the authenticity of the login attempt. Even when the policy is set to allow access, this option can still force the user to use MFA. This can be Azure MFA or an on-premises MFA solution (via ADFS).

 

Compliant

 

Using Microsoft Intune, we can define rules to categorize whether user devices are compliant with company standards. If this option is used, only devices which are compliant will be considered.

 

Domain Joined

 

If this option is used, only connections from Azure Active Directory domain-joined devices will be considered.

Once the options are defined, the policy can either require all of the selected controls or require only one of them.

cac11

Sessions

This is still in preview mode. It basically provides additional information about the session to the cloud app so the app can confirm the authenticity of the session. Not every cloud app supports this option yet.

cac12

Demo

By now we know what conditions we can use to define a conditional access policy. Let's see how we can configure a policy with a real-world example. Before we start, we need to look into the prerequisites for the task. In order to set up conditional access policies, we need the following.

1. Valid Azure Active Directory Premium Subscription

2. Azure Administrator Account to create policies

In my demo, I have a user called “Berngard Saller”. He is allowed to access an on-premises application which is published using Azure Application Proxy (http://www.rebeladmin.com/2017/06/azure-active-directory-application-proxy-part-02/). 

cac13
 
I need a conditional access policy to block his access to this app if he logs in from a device running Android.
So, let’s start.
 
1. Log in to the Azure portal as Administrator.
2. Click on Azure Active Directory | Conditional Access
 
cac14
 
3. On the new page, click on + New Policy.
 
cac15
 
4. In the next window, first provide a Name for the policy.
 
cac16
 
5. Then click on Users and Groups | Select Users and Groups | Select. Then from the list of users, select the appropriate user (in my demo it's user Berngard Saller) and then click on the Select button.
 
 
cac17
 
6. Then in the next window click on Done.
 
cac18
 
7. The next step is to define the app. To do that, click on Cloud Apps | Selected Apps | Select. Then from the list select the relevant app (in my demo it's webapp1) and then click on the Select button.
 
cac19
 
8. Then in the next window click on Done.
 
9. As the next step, go to Conditions | Device Platforms | click on Yes to enable the condition | Select Device Platforms | Android. Then click on the Done button.
 
cac20
 
10. Then in the next window click on Done.
 
11. The next step is to define Access Controls. To do that, click on Grant | Block Access. Then click on the Select button.
 
cac21
 
12. Now we have the conditional access policy ready. Click On under Enable Policy and click the Create button to create the policy.
 
cac22
 
Now we have the policy ready. The next step is to test it.
 
cac23
 
When I access the app from a Windows system, access is allowed.
 
cac24
 
But when I do it from an Android mobile device, access is denied as expected.
 
cac25
 
As we can see, conditional access simplifies access control to workloads in Azure.
 
This is the end of this post and if you have any questions feel free to contact me on rebeladm@live.com


Mastering Active Directory

This is my 14th year in IT. During that time, I have worked with different companies, in different positions, and with many different technologies. But there was one thing that never changed: my love for Microsoft Active Directory. From the day I heard about Active Directory and its capabilities, I spent countless hours reading and learning about it. I spent countless hours in my labs testing its strengths, and I also learned from my mistakes. When I was working with different customers, I noticed that even though people know about Active Directory, they do not use most of its features to protect their identity infrastructure or to improve its efficiency and management. This is due to two main reasons. First, they think the technology involved is too "complex". Second, they believe they do not have enough "skills" to do it.

In 2010, my website rebeladmin.com was born. The idea was to explain Active Directory's capabilities and features in a simplified way, and to provide step-by-step guides to implement and configure Active Directory related services and features. It is a non-profit website and has more than 400 articles so far. As a result of this community contribution, I was awarded the Microsoft Most Valuable Professional (MVP) award. It started a new era of my life and allowed me to bring my message to communities in a stronger way.

Today I reached another milestone in my life: my first book is going public. "Mastering Active Directory" is the fruit of my knowledge and experience with Active Directory.

Book Name: Mastering Active Directory

ISBN : 978-1-78728-935-2

Number of Pages: 730

From today, the book is available for purchase worldwide through,

Packt Publisher – https://www.packtpub.com/networking-and-servers/mastering-active-directory

Amazon –  http://amzn.to/2swlewD  

coverfront

Please send all your inquiries and feedback to rebeladm@live.com


Azure Active Directory Application Proxy – Part 02

In Part 01 of this series I explained what Azure AD Application Proxy is and how it works. If you haven't read it yet, you can find it at http://www.rebeladmin.com/2017/06/azure-active-directory-application-proxy-part-01/

In this part of the series I am going to demonstrate how we can configure Azure AD application proxy.

Demo Setup

In my demo environment I have following,

1. Azure AD Premium Subscription

2. Active Directory 2016 on-premises setup 

3. Web application running on IIS

Enable Azure AD proxy

Before we install the application proxy connector, we need to enable the application proxy. This only needs to be done when setting up the first application proxy.

1. Log in to Azure as Global Administrator

2. Then open Azure Active Directory 

adapp1

3. In the next window, click on Application proxy.

adapp2

4. In the next window, click on Enable Application Proxy. It will explain the feature; click on Yes to enable it.

adapp3

Install Application Connector

The next step in the configuration is to install the Application Connector. I am going to install this on the same application server.

1. Log in to Azure as Global Administrator

2. Then go to Azure Active Directory | Application Proxy 

3. Then in the window, click on Download connector.

adapp4

4. It will redirect to a page where you can download the connector. After accepting the terms, click Download.

adapp5

5. Once the file is downloaded, double-click on AADApplicationProxyConnectorInstaller.exe to start the connector installation.

adapp6

6. Then it will open a wizard. Agree to the license terms and click on Install to proceed.

adapp7

7. During the installation, it asks for Azure login details. Provide an account which has Azure Global Administrator privileges.

adapp8

8. After the login details are validated, it will continue with the setup. Once it completes, we are ready to publish the application.

adapp9

Publish Application

The next stage of the configuration is to publish the application.

1. Log in to Azure as Global Administrator

2. Then go to Azure Active Directory | Enterprise Applications 

adapp10

3. Then in the next window, click on New Application.

adapp11

4. On the categories page, click on All and then click on On-premises application.

adapp12

5. Then it opens a new window where we can provide the configuration data for the application.

adapp13

In this form,

Name – Unique name to identify the application

Internal Url – The internal URL of the application.

External Url – This is auto-generated by Azure, and it is the URL used to access the application via the internet. If needed, certain URL changes can be made.

All other values can be left at their defaults unless there is a specific requirement.

Once the information is added, click on the Add button to publish the application.

adapp14

6. Once the application is published, we can see it under Enterprise Applications.

adapp15

Testing

Now we have everything ready. The next step is to verify it is working as expected. By default, the application does not have any users assigned, so before we test, we need to grant access to the application.

1. Log in to Azure as Global Administrator

2. Then go to Azure Active Directory | Enterprise Applications | All Applications

3. Click on the web app that we published in the previous section.

4. Then click on Users and Groups

adapp16

5. Then click on Add User in the next window.

adapp17

6. From the list select the users and click on Select

adapp18

7. Click on Assign to complete the process. 

8. Now under the users you can see the assigned users and groups. 

adapp19

9. Now everything is ready! Type into your browser the public URL which was generated during the application publishing process. For our demo, it was https://webapp1-myrebeladmin.msappproxy.net/webapp1/ . As expected, it goes to the Azure login page.

adapp20

10. Log in using a user account assigned for the app. 

11. After successful authentication, I can see my local web app content!

adapp21

So as expected, we were able to publish a local application to the internet without any DNS, firewall or application configuration changes.

Hope this was helpful and if you have any questions feel free to contact me on rebeladm@live.com


Azure Active Directory Application Proxy – Part 01

Today I am going to explain another great feature which comes with Azure Active Directory. Rebeladmin Corp. has a CRM application which is used by its employees. This is a web-based app, hosted in the internal network, which uses Windows authentication. From the internal network, users can log in to it with SSO. Rebeladmin Corp. also uses some applications hosted in Azure, as well as Office 365. These applications are currently used by its users internally and externally. There was a recent requirement that users also want to access this CRM application externally. So, what can we do to allow access from outside?

1. Set up a VPN, so users can dial in and access the application through it.

2. Use ADFS to provide multi-factor authentication and SSO from external networks. An ADFS proxy server can be placed in the DMZ to provide a more secure connection from outside.

3. Use Remote Desktop Gateway and Remote Desktop Servers to host the application for external access.

4. If users are connecting from specific networks, allow direct access to application using edge firewall rules. 

All of the above solutions can work, but they need additional configuration and resources, such as:

1. Firewall rules to control traffic between different network segments (DMZ, LAN, WAN) 

2. Public IP addresses and DNS entries for internet facing components

3. Additional servers to host server roles and applications such as ADFS proxy, ADFS Farm, RDG etc. 

4. Additional license costs

5. Additional maintenance cost

6. Skills to configure these additional server roles, firewall rules etc. 

Azure Active Directory Application Proxy can integrate on-premises applications with Azure Active Directory and provide secure access with minimal changes to the existing infrastructure. It doesn't need a VPN, additional firewall rules or any other additional server roles. The experience is similar to accessing applications hosted in Azure or accessing Office 365. This great feature has been around for a while, but lots of people still do not use it, and some are not even aware such a feature is available. The whole point of this blog series is to make people aware of it and encourage them to use it.

Why is it good?

1. This allows organizations to use existing Azure security features to protect the on-premises workloads similar to azure SaaS workloads. 

2. All the components are hosted in the cloud, so there is less maintenance.

3. Simple to set up, with no additional skills needed to configure different server roles or applications.

4. If users are already familiar with Azure or other Microsoft-hosted solutions, the access experience will be similar. Users will not need to be trained to use different access tools (VPN, Remote Desktop etc.).

5. No requirement for public IP addresses or public DNS entries. It uses a public URL from Azure which is generated during the configuration process.

However, not every application is supported by this method. According to Microsoft, it only supports:

Any web application which uses Windows authentication or form-based authentication

Applications hosted behind RDG (Remote Desktop Gateway)

Web APIs

How does it work?

Let’s see how it really works in the real world.

post-ad app proxy

1. The user accesses the published URL for the application from the internet. This URL is similar to an application URL hosted in Azure; it is the Azure-generated public URL for the on-premises app.

2. The user is then redirected to the login page and authenticated using Azure AD.

3. After successful authentication, a token is generated and sent to the user.

4. Then the request is forwarded to the Azure AD application proxy, which extracts the user principal name (UPN) and security principal name (SPN) from the token.

5. Then the request is forwarded to the application proxy connector hosted on-premises. This acts as a broker service between the application proxy module and the web application.

6. In the next step, the application proxy connector requests a Kerberos ticket which can be used to authenticate to the web application. This request is made on behalf of the user.

7. On-premises AD issues the Kerberos ticket.

8. The Kerberos ticket is used to authenticate to the web app.

9. After successful authentication, the web app sends the response to the application proxy connector.

10. The application proxy connector sends the response to the user, and he/she can view the web application content.

Prerequisites

To implement this, we need the following:

Azure AD Basic or Premium Subscription 

Healthy Directory Sync with on-premises AD

A server to install the Azure Application Proxy Connector (this can be the same server which hosts the web application)

A supported web application (earlier I mentioned what types of applications are supported)

In the next part of this blog series, we will look into the configuration of Azure AD Application Proxy. Hope this was helpful and if you have any questions feel free to contact me on rebeladm@live.com


MANAGE AZURE ACTIVE DIRECTORY WITH POWERSHELL – PART 02

In the previous part of this blog series, I explained how we can install the Azure AD PowerShell module and how to use it to manage Azure Active Directory directly with PowerShell commands. If you have not read it yet, you can find it at http://www.rebeladmin.com/2017/02/manage-azure-active-directory-powershell-part-01/

In this post, I am going to explain another set of cmdlets and the ways to use them.

Some of the commands we use for on-premises Active Directory management work for Azure Active Directory too; often the only difference is the cmdlet name. As an example, in on-premises AD we use New-ADUser to add a user, while in Azure AD it becomes New-MsolUser. If you would like to know more about a command and its use, the easiest way to start is with the following commands.

More information about a command can be viewed using:

Get-Help New-MsolUser -Detailed

Technical information about the command can be viewed using:

Get-Help New-MsolUser -Full

Online information about the command can be viewed using:

Get-Help New-MsolUser -Online

We can also view examples for the command using:

Get-Help New-MsolUser -Examples


We can simply create a new user using:

New-MsolUser -UserPrincipalName "jeffm@therebeladmin.com" -DisplayName "Jeff Mak" -FirstName "Jeff" -LastName "Mak" -PasswordNeverExpires $true


In order to create a user, you need to connect to Azure AD with a user who has the "Global Admin" role.

In the above command, -UserPrincipalName specifies the UPN, and the user's password is set not to expire.
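Note that none of the commands in this post will work until a session to Azure AD has been established. Assuming the MSOnline module is already installed, a minimal connection sketch looks like this:

```powershell
# Load the MSOnline module and connect to Azure AD.
# Connect-MsolService prompts for credentials; sign in with an
# account that holds the "Global Admin" role.
Import-Module MSOnline
Connect-MsolService
```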

Sometimes we need to change the password of an existing account.

Set-MsolUserPassword -UserPrincipalName "jeffm@therebeladmin.com" -NewPassword 'pa$$word'

The above command resets the password for jeffm@therebeladmin.com to the given value. (Single quotes prevent PowerShell from treating the $ characters as variables.)

Instead of specifying a password, the following command will generate a random password and force the user to reset it at the next login.

Set-MsolUserPassword -UserPrincipalName "jeffm@therebeladmin.com" -ForceChangePassword $true


Azure Active Directory has predefined administrative roles with different capabilities. These allow administrators to grant users permission to perform only certain tasks.

More details about these administrative roles and their capabilities can be found at https://docs.microsoft.com/en-us/azure/active-directory/active-directory-assign-admin-roles

We can list these administrative roles using:

Get-MsolRole


Depending on requirements, we can add users to these administrative roles.

Add-MsolRoleMember -RoleName "User Account Administrator" -RoleMemberObjectId "e74c79ec-250f-4a47-80dd-78022455e383"

The above command adds the user with object ID e74c79ec-250f-4a47-80dd-78022455e383 to the User Account Administrator role.

In order to view the existing members of the different administrator roles, we can use commands similar to the following:

$RoleMembers = Get-MsolRole -RoleName "User Account Administrator"

Get-MsolRoleMember -RoleObjectId $RoleMembers.ObjectId

This lists the users who have the User Account Administrator role assigned.
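The same two cmdlets can also be combined to audit membership across all roles at once. A sketch, assuming the Name and ObjectId properties shown in the Get-MsolRole output above:

```powershell
# List the members of every administrative role that has at least one member.
Get-MsolRole | ForEach-Object {
    $members = Get-MsolRoleMember -RoleObjectId $_.ObjectId
    if ($members) {
        "{0}:" -f $_.Name
        $members | ForEach-Object { "  {0}" -f $_.DisplayName }
    }
}
```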


Apart from roles, Azure AD also has security groups.

New-MsolGroup -DisplayName "HelpDesk" -Description "Help Desk Users"

The above command creates a group called HelpDesk.


A group contains members. We can add members to a group using commands similar to the following:

Add-MsolGroupMember -GroupObjectId a53cc08c-6ffa-4bd6-8b03-807740e100f1 -GroupMemberType User -GroupMemberObjectId e74c79ec-250f-4a47-80dd-78022455e383

This adds the user with object ID e74c79ec-250f-4a47-80dd-78022455e383 to the group with object ID a53cc08c-6ffa-4bd6-8b03-807740e100f1.
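Rather than copying object IDs by hand, the IDs can also be resolved by name first. A sketch using the example user and group from above:

```powershell
# Look up the object IDs by name, then add the user to the group.
$user  = Get-MsolUser -UserPrincipalName "jeffm@therebeladmin.com"
$group = Get-MsolGroup | Where-Object { $_.DisplayName -eq "HelpDesk" }

Add-MsolGroupMember -GroupObjectId $group.ObjectId `
                    -GroupMemberType User `
                    -GroupMemberObjectId $user.ObjectId
```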

We can list the members of the group using:

Get-MsolGroupMember -GroupObjectId a53cc08c-6ffa-4bd6-8b03-807740e100f1


We can view all the groups and their group IDs using:

Get-MsolGroup


In order to remove a member from a security group, we can use the Remove-MsolGroupMember cmdlet.

Remove-MsolGroupMember -GroupObjectId a53cc08c-6ffa-4bd6-8b03-807740e100f1 -GroupMemberType User -GroupMemberObjectId e74c79ec-250f-4a47-80dd-78022455e383

In order to remove a user from an administrator role, we can use the Remove-MsolRoleMember cmdlet.

Remove-MsolRoleMember -RoleName "User Account Administrator" -RoleMemberType User -RoleMemberObjectId "e74c79ec-250f-4a47-80dd-78022455e383"

The above command removes the user with object ID e74c79ec-250f-4a47-80dd-78022455e383 from the User Account Administrator role.

This is the end of part 2 of this series. In the next part, we will look further into Azure AD management with PowerShell.

If you have any questions, feel free to contact me at rebeladm@live.com
