Category Archives: Active Directory

How Active Directory Authentication Works?

AD DS security is key for any environment, as it is the foundation of identity protection. Before looking into improving AD DS security in an environment, it is important to understand how Active Directory authentication works with Kerberos. In this post I am going to explain how AD authentication works behind the scenes.

In an infrastructure, different types of authentication protocols are in use. Active Directory uses Kerberos version 5 as its authentication protocol to provide authentication between server and client. Kerberos v5 has been the default authentication protocol for Windows domains since Windows 2000. It is an open standard, and it provides interoperability with other systems that use the same standard.

The Kerberos protocol is built to protect authentication between server and client on an open network where other systems are also connected. The basic concept behind authentication is that two parties agree on a password (secret), and both use it to identify and verify each other's authenticity.

In the above example, Dave and Server A have regular communications. They exchange confidential data quite often. In order to protect this communication, they agree on a common secret, 1234, to verify their identities before exchanging data. When Dave makes the initial communication, he passes his secret to Server A and says "I'm Dave". Server A checks the secret, and if it is correct, it identifies him as Dave and allows further communication.
Communication between Dave and Server A happens on an open network, which means there are other connected systems. Sam is a user connected to the same network as Dave. He is aware of the communication between Dave and Server A, is interested in the data they exchange, and would like to get his hands on it. He starts to listen to the traffic between the two hosts to find out the secret they use. Once he finds it, he starts to communicate with Server A, says he is Dave, and provides the secret 1234. From Server A's side, there is now no difference between a message from Dave and one from Sam, as both provide the correct secret.
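To make the weakness concrete, here is a toy sketch (not a real protocol) of the shared-secret scheme above. The function name and values are purely illustrative:

```python
# Toy illustration: why a static shared secret is replayable on an open
# network. Anyone who observes the secret can impersonate the sender.

def authenticate(message, stored_secret):
    """Server-side check: anyone presenting the right secret is accepted."""
    claimed_identity, secret = message
    return claimed_identity if secret == stored_secret else None

SECRET = "1234"                       # secret agreed by Dave and Server A

dave_msg = ("Dave", SECRET)           # Dave's legitimate request
print(authenticate(dave_msg, SECRET))    # accepted as "Dave"

# Sam sniffs the traffic, learns the secret, and replays it:
sam_msg = ("Dave", SECRET)            # identical on the wire
print(authenticate(sam_msg, SECRET))     # also accepted as "Dave" --
                                         # the server cannot tell them apart
```

The server has no way to distinguish a replayed copy from the original message, which is exactly the problem Kerberos sets out to solve.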
Kerberos solves this challenge by using a shared symmetric cryptographic key instead of a plain secret: the same key is used for encryption and decryption. The name Kerberos comes from the three-headed dog in Greek mythology. Like the three-headed dog, the Kerberos protocol has three main components.
1) A Client
2) A Server
3) A trusted authority to issue secret keys
This trusted authority is called the Key Distribution Center (KDC). Before we look into Kerberos in detail, it is better to understand how a typical key exchange works.
If we revisit our scenario, we now have a KDC in place. Instead of communicating with Server A directly, Dave goes to the KDC and says he needs to access Server A. He needs a symmetric key to start communication with Server A, and this key should only be used by Dave and Server A. The KDC accepts the request, generates a key (Key D+S) and distributes it to Dave and Server A. This looks quite straightforward, but from the server's point of view there are a few challenges. In order to accept connections from Dave, it needs to know Key D+S, so it has to keep this key stored locally. We are only considering one connection here, but if there are a hundred connections, the server needs to store all the keys involved, which costs Server A resources. The actual Kerberos protocol operation is more efficient than this.
In an Active Directory environment, the KDC is installed as part of the domain controller. The KDC is responsible for two main functions.
1) Authentication Service (AS)
2) Ticket Granting Service (TGS)
In our example, when Dave logs in to the system, he needs to prove to the KDC that he is exactly who he claims to be. When he logs in, his system sends his user name to the KDC along with his "long-term key". The long-term key is generated based on Dave's password: the Kerberos client on Dave's PC accepts his password and generates a cryptographic key from it. The KDC also maintains a copy of this key in its database. Once the KDC receives the request, it checks the user name and long-term key against its records. If all is good, the KDC responds to Dave with a session key. This response is called a Ticket Granting Ticket (TGT), and it contains two things,
1) A copy of the session key that the KDC uses to communicate with Dave. This is encrypted with the KDC's long-term key.
2) A copy of the session key that Dave can use to communicate with the KDC. This is encrypted with Dave's long-term key, so only Dave can decrypt it.
Once Dave receives this, he can use his long-term key to decrypt his copy of the session key. After that, all future communication with the KDC will be based on this session key. The session key is temporary and has a TTL (Time to Live) value.
This session key is saved in Dave's computer's volatile memory. Now it is time to request access to Server A. Dave has to contact the KDC again, but this time he uses the session key provided by the KDC. This request includes the TGT, a timestamp encrypted with the session key, and the service ID (the service running on Server A). Once the KDC receives it, it uses its long-term key to decrypt the TGT and retrieve the session key. Then, using the session key, it decrypts the timestamp. If the time difference is less than 5 minutes, this proves the request came from Dave and is not a replay of a previous request.
Once the KDC confirms it is a legitimate request, it creates another ticket, called the service ticket. It contains two keys: one for Dave and one for Server A. Both keys include the requester's name (Dave), the recipient, a timestamp, a TTL value, and a new session key (which will be shared between Dave and Server A). One key is encrypted using Dave's long-term key; the other is encrypted using Server A's long-term key. Finally, both together are encrypted using the session key shared between the KDC and Dave, and the finished ticket is sent over to Dave.
Dave decrypts the ticket using the session key. He finds "his" key and decrypts it using his long-term key, which reveals the new session key to be shared between him and Server A. He then creates a new request containing Server A's key retrieved from the service ticket, plus a timestamp encrypted using the new session key created by the KDC. Once Dave sends it over to Server A, the server decrypts its key using its long-term key and retrieves the session key. Using the session key, it can decrypt the timestamp to verify the authenticity of the request. As we can see, in this process it is not Server A's responsibility to keep track of the keys used by clients; it is each client's responsibility to keep the relevant keys.
Let's go ahead and recap what we learned about Kerberos authentication,
1) Dave sends his user name and long-term key to the KDC (domain controller).
2) The KDC checks the user name and long-term key against its database and verifies his identity. Then it generates a TGT (Ticket Granting Ticket). It includes a copy of the session key the KDC uses to communicate with Dave, encrypted with the KDC's long-term key. It also includes a copy of the session key Dave can use to communicate with the KDC, encrypted with Dave's long-term key.
3) The KDC responds to Dave with the TGT.
4) Dave decrypts his copy using his long-term key and retrieves the session key. His system creates a new request containing the TGT, a timestamp encrypted with the session key, and the service ID. Once the request is generated, it is sent to the KDC.
5) The KDC uses its long-term key to decrypt the TGT and retrieve the session key, then uses the session key to decrypt the timestamp. It then creates a service ticket. This ticket includes two keys: one for Server A and one for Dave. Dave's key is encrypted using his long-term key, and Server A's key is encrypted using Server A's long-term key. Finally, both are encrypted using the session key shared by the KDC and Dave.
6) The KDC sends the service ticket to Dave.
7) Dave decrypts the ticket using the session key and retrieves his key. He then decrypts it to get the new key, which he can use to establish a connection with Server A. Finally, his system creates another request containing Server A's key (created in step 5) and a timestamp encrypted with the new session key, and sends it to Server A.
8) Server A decrypts its key using its long-term key and retrieves the session key. Using it, Server A can decrypt the timestamp to verify the request's authenticity. Once everything checks out, the connection between Dave and Server A is allowed.
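The eight steps above can be sketched in code. This is a conceptual simulation only: the XOR "cipher" and the password-to-key derivation stand in for the real symmetric cryptography Kerberos uses (AES etc.), and all names and passwords are hypothetical:

```python
# Conceptual sketch of the Kerberos exchange described above.
# "Encryption" here is a toy XOR cipher purely for illustration.
import hashlib
import os

def derive_key(password: str) -> bytes:
    """Long-term key derived from a password (simplified)."""
    return hashlib.sha256(password.encode()).digest()

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: the same call encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Long-term keys known to the KDC
dave_lt = derive_key("DavesPassword")
server_lt = derive_key("ServerAPassword")
kdc_lt = derive_key("KdcSecret")

# Steps 1-3) KDC issues a session key plus a TGT
kdc_dave_session = os.urandom(32)
tgt = xor_crypt(kdc_lt, kdc_dave_session)         # only the KDC can read this
for_dave = xor_crypt(dave_lt, kdc_dave_session)   # only Dave can read this

# Step 4) Dave recovers the session key with his long-term key
session = xor_crypt(dave_lt, for_dave)
assert session == kdc_dave_session

# Steps 5-6) KDC decrypts the TGT, then issues a service ticket
recovered = xor_crypt(kdc_lt, tgt)
new_session = os.urandom(32)                      # Dave <-> Server A key
ticket_for_server = xor_crypt(server_lt, new_session)
ticket_for_dave = xor_crypt(recovered, new_session)

# Steps 7-8) Dave and Server A each recover the same shared session key
dave_view = xor_crypt(session, ticket_for_dave)
server_view = xor_crypt(server_lt, ticket_for_server)
print(dave_view == server_view)  # True -- and Server A never had to
                                 # store any per-client keys
```

Note how the server only ever needs its own long-term key; the per-client session key arrives inside the ticket, which matches the point made at the end of step 8.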
There are a few other requirements that need to be fulfilled in order to complete the above process.
Connectivity – Server, client and KDC need reliable connectivity between them to process requests and responses.
DNS – Clients use DNS to locate the KDC and servers. Therefore, functioning DNS with correct records is required.
Time Synchronization – As we have seen, the process uses timestamps to verify the authenticity of requests, and it only allows up to 5 minutes of time difference. Therefore, accurate time synchronization with the PDC or another time source is a must.
Service Principal Names (SPN) – Clients use SPNs to locate services in an AD environment. If there is no SPN for a service, clients and the KDC cannot locate it when required. When setting up services, make sure to set up SPNs as well.
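As a small illustration of the time-synchronization requirement, this sketch mirrors the default 5-minute skew tolerance; the real check inside the KDC is more involved, but the idea is the same:

```python
# Minimal sketch of the 5-minute clock-skew check Kerberos applies to
# encrypted timestamps (AD's default maximum tolerance).
from datetime import datetime, timedelta

MAX_SKEW = timedelta(minutes=5)

def timestamp_acceptable(client_time: datetime, kdc_time: datetime) -> bool:
    """Reject requests whose timestamp drifts more than 5 minutes."""
    return abs(kdc_time - client_time) <= MAX_SKEW

now = datetime(2024, 1, 1, 12, 0, 0)
print(timestamp_acceptable(now - timedelta(minutes=3), now))  # True
print(timestamp_acceptable(now - timedelta(minutes=8), now))  # False
```

A client whose clock drifts past the tolerance gets its requests rejected, which is why accurate time synchronization is listed as a hard requirement above.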
This marks the end of this blog post. If you have any questions, feel free to contact me, and follow me on Twitter @rebeladm to get updates about new blog posts.

Step-by-Step Guide to Setup Two-Tier PKI Environment

In previous posts in this PKI blog series, we learned about the theory behind PKI. If you haven't read those yet, please go ahead and read them before starting on the deployment part.

How PKI Works? 

Active directory certificate service components

PKI Deployment Models

In this post I am going to demonstrate how we can set up PKI using the two-tier model. I have chosen this model as it is the recommended model for mid-size and large organizations.

The figure above explains the setup I am going to do. In it I have one domain controller, one standalone root CA and one issuing CA, all running Windows Server 2016 at the latest patch level.
Setup Standalone Root CA
The first step is to set up the standalone root CA. This is not a domain member server, and it operates at workgroup level. Configuring it on a separate VLAN adds additional security, as the root CA will not be able to talk to other systems directly even while it is online.
Once the server is ready, log in as a member of the local Administrators group. The first task is to install the AD CS role service. It can be done using,
Add-WindowsFeature ADCS-Cert-Authority -IncludeManagementTools
Once role service is installed, next step is to configure the role and get the CA up and running. 
Install-ADcsCertificationAuthority -CACommonName "REBELAdmin Root CA" -CAType StandaloneRootCA -CryptoProviderName "RSA#Microsoft Software Key Storage Provider" -HashAlgorithmName SHA256 -KeyLength 2048 -ValidityPeriod Years -ValidityPeriodUnits 20
The above command configures the CA. In the command, CACommonName defines the common name for the CA. CAType defines the CA operation type; in our case, it is StandaloneRootCA. The other options are EnterpriseRootCA, EnterpriseSubordinateCA and StandaloneSubordinateCA. CryptoProviderName specifies the cryptographic service provider; in the demo, I am using the Microsoft default provider. HashAlgorithmName defines the hashing algorithm used by the CA; the available options change based on the CSP chosen. SHA1 is no longer counted as a secure algorithm, and it is recommended to use SHA256 or above. KeyLength specifies the key size for the algorithm; in this demo, I am using a 2048-bit key. ValidityPeriod defines the unit of the CA certificate's validity period, which can be hours, days, weeks, months or years. ValidityPeriodUnits follows ValidityPeriod and specifies how many of those units the certificate will be valid for. In our demo, we are using 20 years.
Now we have the root CA up and running, but before we use it, we need to make certain configuration changes.
As I mentioned earlier, this is a standalone root CA and it is not part of the domain. However, the CDP (CRL Distribution Point) and AIA (Authority Information Access) locations required by the CA will be stored in AD. Since those use distinguished names containing the domain, the root CA needs to be aware of the domain information to publish them properly. It retrieves this information via a registry key.
certutil.exe -setreg ca\DSConfigDN "CN=Configuration,DC=rebeladmin,DC=com"
CDP Location
CDP stands for CRL Distribution Point, and it defines the location from which the Certificate Revocation List (CRL) can be retrieved. This is a web-based location and should be accessible via HTTP. The list is used by the certificate validator to verify that a given certificate is not on the revocation list.
Before we do this, we need to prepare the web server for that task. This task will use the same server built for the online issuing CA.
The web server can be installed using,
Install-WindowsFeature Web-WebServer -IncludeManagementTools
The next step is to create a folder and share it so that it can be used as the virtual directory.
mkdir C:\CertEnroll 

New-SmbShare -Name CertEnroll -Path C:\CertEnroll -FullAccess SYSTEM,"rebeladmin\Domain Admins" -ChangeAccess "rebeladmin\Cert Publishers"
As part of this exercise, the share permissions are set to rebeladmin\Domain Admins (Full Access) and rebeladmin\Cert Publishers (Change Access).
After that, load IIS Manager and add a virtual directory named CertEnroll pointing to the above path.
Last but not least, we need to create a DNS record so this publication point can be accessed using an FQDN over HTTP.
Now everything is ready, and we can publish the CDP settings using,
certutil -setreg CA\CRLPublicationURLs "1:C:\Windows\system32\CertSrv\CertEnroll\%3%8%9.crl \n10:ldap:///CN=%7%8,CN=%2,CN=CDP,CN=Public Key Services,CN=Services,%6%10\n2:"
The plain numbers in the command refer to option flags (combined by addition), and the tokens prefixed with % refer to variables.





0 - No changes
1 - Publish the CRL to the given location
2 - Attach the CDP extension to issued certificates
4 - Include in CRLs, so clients can find the delta CRL locations
8 - Specify whether to publish all CRL info to AD when publishing manually
64 - Delta CRL publication location
128 - Include the IDP extension in issued CRLs
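The numeric prefix in front of each URL is a bitwise OR of these flag values; the 10 in the ldap:/// entry of the command above, for example, is 8 + 2. A small sketch of decoding such a prefix:

```python
# Decode a CRLPublicationURLs numeric prefix into its component flags.
# Flag values as listed above (the CA Extensions tab checkboxes).
CRL_FLAGS = {
    1: "Publish the CRL to the given location",
    2: "Attach the CDP extension to issued certificates",
    4: "Include in CRLs, so clients can find the delta CRL locations",
    8: "Publish all CRL info to AD when publishing manually",
    64: "Delta CRL publication location",
    128: "Include the IDP extension in issued CRLs",
}

def decode_prefix(value: int) -> list:
    """Return the option names whose bits are set in the prefix."""
    return [name for bit, name in CRL_FLAGS.items() if value & bit]

print(decode_prefix(10))  # the "10:" prefix = 8 + 2
```

This makes it easy to sanity-check a prefix before committing it to the registry.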

All these settings can also be specified using the GUI. In order to access it, go to Server Manager > Tools > Certification Authority > right-click the server and select Properties > go to the Extensions tab.
There you can add all of the above using the GUI.


Variable - GUI Reference - Meaning
%1 - ServerDNSName - DNS name of the CA server
%2 - ServerShortName - NetBIOS name of the CA server
%3 - CaName - Given name of the CA
%4 - CertificateName - Renewal extension of the CA
%6 - ConfigurationContainer - DN of the configuration container in AD
%7 - CATruncatedName - Truncated name of the CA (32 characters)
%8 - CRLNameSuffix - Inserts a name suffix at the end of the file name before publishing a CRL
%9 - DeltaCRLAllowed - When a delta CRL is published, replaces the CRLNameSuffix with a separate suffix to use for the delta CRL
%10 - CDPObjectClass - Object class identifier for CDP
%11 - CAObjectClass - Object class identifier for a CA
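To see how these variables turn into actual file names, here is a toy expansion with assumed example values; the CA performs the real substitution internally:

```python
# Toy expansion of certutil %-variables into a CRL file name.
# The example values below are hypothetical; the CA substitutes real ones.
VARS = {
    "%3": "REBELAdmin Root CA",  # CaName
    "%8": "",                    # CRLNameSuffix (empty for the initial CA key)
    "%9": "+",                   # DeltaCRLAllowed ("+" marks a delta CRL)
}

def expand(template: str, variables: dict) -> str:
    """Replace each %-token in the template with its value."""
    for token, value in variables.items():
        template = template.replace(token, value)
    return template

# The CRL file name pattern used in the CRLPublicationURLs command above:
print(expand("%3%8%9.crl", VARS))                # delta CRL file name
print(expand("%3%8%9.crl", {**VARS, "%9": ""}))  # base CRL file name
```

Running this produces "REBELAdmin Root CA+.crl" for the delta CRL and "REBELAdmin Root CA.crl" for the base CRL, which matches the files the CA drops into the CertEnroll folder.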

AIA Location 

AIA (Authority Information Access) is a certificate extension that defines the location where an application or service can retrieve the issuing CA's certificate. This is also a web-based path, and we can use the same location we used for the CDP.

This can be set using,

certutil -setreg CA\CACertPublicationURLs "1:C:\Windows\system32\CertSrv\CertEnroll\%1_%3%4.crt\n2:ldap:///CN=%7,CN=AIA,CN=Public Key Services,CN=Services,%6%11\n2:"

The option flags are very similar to the CDP ones, with a few small changes.




0 - No changes
1 - Publish the CA certificate to the given location
2 - Attach the AIA extension to issued certificates
32 - Attach the Online Certificate Status Protocol (OCSP) extension

CA Time Limits

When we set up the CA, we defined the CA validity period as 20 years, but that doesn't mean every certificate it issues will be valid for 20 years. The root CA will issue certificates only to issuing CAs. The certificate request, approval and renewal processes are manual, and therefore these certificates typically have longer validity periods. In this demo, I will set it to 10 years.

certutil -setreg ca\ValidityPeriod "Years"

certutil -setreg ca\ValidityPeriodUnits 10

CRL Time Limits

The CRL also has some time limits associated with it.

Certutil -setreg CA\CRLPeriodUnits 13

Certutil -setreg CA\CRLPeriod "Weeks"

Certutil -setreg CA\CRLDeltaPeriodUnits 0

Certutil -setreg CA\CRLOverlapPeriodUnits 6

Certutil -setreg CA\CRLOverlapPeriod "Hours"

In the above commands,
CRLPeriodUnits – Specifies the number of days, weeks, months or years the CRL will be valid.
CRLPeriod – Specifies whether the CRL validity period is measured in days, weeks, months or years.
CRLDeltaPeriodUnits – Specifies the number of days, weeks, months or years the delta CRL will be valid. For an offline CA, this should be disabled (set to 0).
CRLOverlapPeriodUnits – Specifies the number of days, weeks, months or years that CRLs can overlap.
CRLOverlapPeriod – Specifies whether the CRL overlap period is measured in days, weeks, months or years.
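Under the values above (a 13-week CRL with a 6-hour overlap), the effective timeline can be sketched as follows. This is a simplified model of how the CA schedules publication, with an assumed publication date:

```python
# Sketch of how the settings above shape the CRL lifetime: a CRL
# published "now" remains valid for CRLPeriod + CRLOverlapPeriod,
# so the previous CRL still works while the new one propagates.
from datetime import datetime, timedelta

crl_period = timedelta(weeks=13)   # CRLPeriod / CRLPeriodUnits
crl_overlap = timedelta(hours=6)   # CRLOverlapPeriod / CRLOverlapPeriodUnits

published = datetime(2024, 1, 1)                      # hypothetical publish date
next_publish = published + crl_period                 # scheduled re-publication
expires = published + crl_period + crl_overlap        # old CRL valid through overlap

print(next_publish)  # 2024-04-01 00:00:00
print(expires)       # 2024-04-01 06:00:00
```

For an offline root CA, a long CRL period like this keeps the number of times the box must be powered on to a minimum, while the overlap gives a grace window to distribute the new CRL.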

Now all the settings are submitted, and in order to apply the changes, the certificate service needs to be restarted.

Restart-Service certsvc


The next step is to create a new CRL, which can be generated using,

certutil -crl

Once it's done, there will be two files under C:\Windows\System32\CertSrv\CertEnroll.

Publish Root CA Data into Active Directory
In the above list we have two files. One ends with .crt; this is the root CA certificate. In order for other clients in the domain to trust it, it needs to be published to Active Directory. To do that, first copy this file from the root CA to the Active Directory server. Then log in to AD as a domain admin or enterprise admin and run,
certutil -f -dspublish "REBEL-CRTROOT_REBELAdmin Root CA.crt" RootCA
The other file ends with .crl; this is the root CA's CRL. This also needs to be published to AD, so everyone in the domain is aware of it too. To do that, copy the file from the root CA to the domain controller and run,
certutil -f -dspublish "REBELAdmin Root CA.crl"
Setup Issuing CA
Now we are finished with the root CA setup, and the next step is to set up the issuing CA. The issuing CA will run on a domain member server and will be AD integrated. In order to perform the installation, log in to the server as a domain admin or enterprise admin.
First task will be to install the AD CS role. 
Add-WindowsFeature ADCS-Cert-Authority -IncludeManagementTools
I am also going to use the same server for the web enrollment role service. It can be added using,
Add-WindowsFeature ADCS-web-enrollment
After that we can configure the role service using,
Install-ADcsCertificationAuthority -CACommonName "REBELAdmin IssuingCA" -CAType EnterpriseSubordinateCA -CryptoProviderName "RSA#Microsoft Software Key Storage Provider" -HashAlgorithmName SHA256 -KeyLength 2048
The web enrollment role service can then be configured using,
Install-AdcsWebEnrollment
Issue Certificate for Issuing CA
In order to get AD CS running on the issuing CA, it needs a certificate issued by its parent CA, which is the root CA we just deployed. During the role configuration process, a certificate request is automatically created under C:\, and the exact file name is listed in the output of the previous command.
The file needs to be copied from the issuing CA to the root CA, and then execute,
certreq -submit "REBEL-CA1.rebeladmin.com_REBELAdmin IssuingCA.req"
As I explained before, any request to the root CA is processed manually, and this request will also wait for manual approval. In order to approve the certificate, go to Server Manager > Tools > Certification Authority > Pending Requests, then right-click the certificate > All Tasks > Issue.

Once it is issued, it needs to be exported and imported into the issuing CA.
certreq -retrieve 2 "C:\REBEL-CA1.rebeladmin.com_REBELAdmin_IssuingCA.crt"
The above command exports the certificate; the number 2 is the "Request ID" shown in the CA MMC.
Once it is exported, move the file to the issuing CA and from there run,
certutil -installcert "C:\REBEL-CA1.rebeladmin.com_REBELAdmin_IssuingCA.crt"

start-service certsvc
Post Configuration Tasks 
Similar to the root CA, after the initial service setup, we need to define some configuration values.
CDP Location
This is similar to the root CA, and I am going to use the already created web location for it.
certutil -setreg CA\CRLPublicationURLs "1:%WINDIR%\system32\CertSrv\CertEnroll\%3%8%9.crl\n2:\n3:ldap:///CN=%7%8,CN=%2,CN=CDP,CN=Public Key Services,CN=Services,%6%10"
AIA Location
The AIA location is specified in a similar way,
certutil -setreg CA\CACertPublicationURLs "1:%WINDIR%\system32\CertSrv\CertEnroll\%1_%3%4.crt\n2:\n3:ldap:///CN=%7,CN=AIA,CN=Public Key Services,CN=Services,%6%11"
CA and CRL Time Limits
The CA and CRL time limits also need to be adjusted,
certutil -setreg CA\CRLPeriodUnits 7
certutil -setreg CA\CRLPeriod "Days"
certutil -setreg CA\CRLOverlapPeriodUnits 3
certutil -setreg CA\CRLOverlapPeriod "Days"
certutil -setreg CA\CRLDeltaPeriodUnits 0
certutil -setreg ca\ValidityPeriodUnits 3
certutil -setreg ca\ValidityPeriod "Years"
Once all of these are done, restart the certificate service to complete the configuration,
Restart-Service certsvc
Last but not least run,
certutil -crl

to generate the CRLs.
Once all is done, we can run PKIView.msc to verify the configuration.
Note – PKIView was first introduced with Windows Server 2003, and it gives visibility over the enterprise PKI configuration. It also verifies the certificates and CRLs of each CA to maintain integrity.
Certificate Templates
Now we have a working PKI, and we can turn off the standalone root CA. It should only be brought online if the issuing CA certificates expire or if the PKI is compromised and new certificates need to be generated.
Once the CA is ready, objects and services can use it for certificates. The CA comes with predefined "certificate templates". These can be used to build custom certificate templates according to the organization's requirements and publish them to AD.
Certificate Templates MMC can access using Run > MMC > File > Add/Remove Snap In > Certificate Templates
To create a custom template, right-click a template and click Duplicate Template.
It will open the Properties window, where the settings of the certificate template can be changed to match the requirements. Some common settings to change in templates are,
Template display name (General Tab) – Display name of the template
Template Name (General Tab) – Common Name of the template
Validity Period (General Tab) – Certificate Validity Period
Security – Authenticated Users or Groups must have “Enroll” permission to request certificates. 
The next step before using it is to issue the certificate template via the CA. Then members of the domain can request certificates based on it.
To do that, go to Certificate Authority MMC > Certificate Templates > Right click on it > New > Certificate Template to Issue
Then from the list select the Template to issue and click Ok

Request Certificate
Based on the published certificate templates, users can request certificates from the issuing CA. I have logged in to an end-user PC and am going to request a certificate based on the template we created in the previous step.
Go to Run > MMC > Add/Remove Snap in > Certificates and Click Add Button
From the list, select "Computer Account" to manage certificates for the computer object (which account type to use depends on the template). Once selected, in the next window select Local computer as the target.
Note – If the user is not an administrator, default permissions only allow opening the "Current User" snap-in. To open the Computer Account snap-in, MMC needs to be run as administrator.
Once MMC is loaded, go to Personal container, right click and then follow All Tasks > Request New Certificate
A new window will open; click Next until you reach the Request Certificates window. There we can see the new template. Tick the checkbox to select it, and then click the link with the yellow warning sign to provide the additional details required for the certificate.
Provide the required fields and click OK to proceed. For a computer certificate, most of the time it is the common name that is required.
Once that's done, click "Enroll" to request the certificate. The certificate request will be processed automatically and the certificate issued. Once issued, it can be found under the Personal certificate container.
As we can see, a valid certificate has been issued. At the same time, a record of this issued certificate can be found under the issuing CA's Certification Authority MMC > Issued Certificates.
In this exercise, we learned how to set up a two-tier PKI correctly. After setup, as with any other system, regular maintenance is required to keep it in good health. It is also important to have proper documentation covering the setup, the certificate templates, and the procedures to issue, renew and revoke different types of certificates.
This marks the end of this PKI blog series. If you have any questions, feel free to contact me, and follow me on Twitter @rebeladm to get updates about new blog posts.

PKI Deployment Models

In my previous posts, we learned how PKI works and what the PKI components are. You can find those articles at the following links,

How PKI Works? 

Active directory certificate service components

In this post we are going to look into different PKI deployment models. In several places in the above articles, I have mentioned the PKI hierarchy and its components, such as root CAs, intermediate CAs and issuing CAs. Based on business and operational requirements, the PKI topology will change. There are three deployment models we can use to address PKI requirements.

Single-Tier Model

This is also called the one-tier model, and it is the simplest deployment model for PKI. It is NOT recommended for use in any production network, as it makes a single point of failure of the entire PKI.


In this model, a single CA acts as both root CA and issuing CA. As I explained before, the root CA is the most trusted CA in the PKI hierarchy, and any compromise of the root CA compromises the entire PKI. In this model it is one server, so any compromise of that server easily compromises the entire PKI, as it doesn't need to spread through different hierarchy levels. This model is easy to implement and easy to manage, and because of that, even though it's not recommended, it still exists in corporate networks.

Some CA-aware applications require certificates in order to function. System Center Operations Manager (SCOM) is a good example: it uses certificates to secure web interfaces, to authenticate management servers, and more. If the organization doesn't have an internal CA, the options are to purchase certificates from a vendor or to deploy a new CA. In such situations, engineers often use this single-tier model, as it is only used for a specific application or task.



Advantages,
1) Fewer resources and lower cost, as everything runs on a single server. It also reduces license costs for the operating systems.
2) Faster deployment; it is possible to get the CA running in a short period of time.

Disadvantages,
1) High possibility of compromise, as the root CA is online and the entire PKI runs from one single server. If someone gains access to the root CA's private key, he has complete ownership of the PKI.
2) Lack of redundancy, as certificate issuing and management all depend on a single server, and its availability decides the availability of the PKI.
3) It is not scalable; the hierarchy will need to be restructured if more role servers are required.
4) All certificate issuing and management is handled by one server, and all work requests have to be handled by it. This creates a performance bottleneck.

Two-Tier Model 

This is the most commonly used PKI deployment model in corporate networks. By design, the root CA is kept offline, which prevents the private key of the root certificate from being compromised. The root CA issues certificates only to subordinate CAs, and the subordinate CAs are responsible for issuing certificates to objects and services.

In the event a subordinate CA's certificate expires, the offline root CA will need to be brought online to renew it. The root CA doesn't need to be a domain member and should operate at workgroup level (standalone CA). Therefore, certificate enrollment, approval and renewal between the tiers are manual processes. This is a scalable solution, and the number of issuing CAs can be increased based on workload. It also allows extending the CA boundaries to multiple sites. In the single-tier model, if the PKI gets compromised, all the issued certificates need to be manually removed from the devices in order to recover. In the two-tier model, all that is needed is to revoke the certificates issued by the compromised CA, publish the CRL (Certificate Revocation List), and then reissue the certificates.



Advantages,
1) Improved PKI security, as the root CA is offline and its private key is protected from compromise.
2) Flexible scalability; you can start small and expand by adding additional subordinate CAs when required.
3) Restricts the impact of an issuing CA within the CA hierarchy by controlling the certificate scope. This helps prevent issuing "rogue" certificates.
4) Improved performance, as workloads can be shared among multiple subordinate CAs.
5) Flexible maintenance capabilities, as there are fewer dependencies.

Disadvantages,
1) High maintenance; multiple systems need to be maintained, and skills are needed to process the manual certificate request/approval/renewal between the root CA and subordinate CAs.
2) Cost; the cost of resources and licenses is high compared to the single-tier model.
3) The manual certificate renewal process between the root CA and subordinate CAs adds additional risk: if administrators forget to renew on time, it can bring the whole PKI down.


Three-Tier Model 

The three-tier model is the top of the model list, operating with greater security, scalability and control. Similar to the two-tier model, it has an offline root CA and online issuing CAs. In addition, there are offline intermediate CAs operating between the root CA and the issuing CAs. The main reason for this is to operate the intermediate CAs as policy CAs. In larger organizations, different departments, sites and operational units can have different certificate requirements. For example, a certificate issued to a perimeter network may require a manual approval process, while users in the corporate network prefer auto-approval. The IT team may prefer an advanced cryptographic provider and larger keys for its certificates, while other users operate with the default RSA algorithm. All these different requirements are defined by the policy CAs, which publish the relevant templates and procedures to the other CAs.

This model adds another layer of security to the hierarchy. However, if you are not using CA policies, the intermediate tier goes unused and can be a waste of money and resources. Therefore, most organizations prefer to start with the two-tier model and expand as required.
In this model, both the root CA and the intermediate CAs operate as standalone CAs. The root CA only issues certificates to intermediate CAs, and those only issue certificates to issuing CAs.



Advantages,
1) Improved security, as it adds another layer of CAs to certificate verification.
2) Greater scalability, as each tier can span horizontally.
3) In the event an issuing CA is compromised, the intermediate CA can revoke the compromised CA with minimal impact to the existing setup.
4) High-performance setup, as workloads are distributed and administrative boundaries are well defined by the intermediate CAs.
5) Improved control over certificate policies, allowing the enterprise to have tailored certificates.
6) High availability, as dependencies are further reduced.

Disadvantages,
1) Cost; the cost of resources and licenses is high, as three layers need to be maintained. It also increases operational costs.
2) High maintenance; as the number of servers increases, so does the effort needed to maintain them. Both tiers operating standalone CAs require additional maintenance, as automatic certificate request/approval/renewal is not supported.
3) Implementation complexity is high compared to the other models.


This marks the end of this blog post. In the next post we are going to look into the deployment of AD CS. If you have any questions feel free to contact me, and follow me on Twitter @rebeladm to get updates about new blog posts.

Active Directory Certificate Service Components

In my previous post I explained what is PKI and how it works. You can find it using 

Active Directory Certificate Services is the Microsoft solution for PKI. It is a collection of role services that can be used to design a PKI for your organization. In this post we are going to look into each of these role services and their responsibilities. 

Certificate Authority (CA)

The CA role service holder is responsible for issuing, storing, managing and revoking certificates. A PKI setup can have multiple CAs. There are mainly two types of CA that can be identified in a PKI. 

Root CA

The root CA is the most trusted CA in a PKI environment. Compromise of the root CA will possibly compromise the entire PKI. Therefore the security of the root CA is critical, and most organizations only bring it online when they need to issue or renew a certificate. The root CA is also capable of issuing certificates to any object or service, but considering security and the hierarchy of the PKI, it is used to issue certificates only to subordinate CAs. 

Subordinate CA

In a PKI, subordinate CAs are responsible for issuing, storing, managing and revoking certificates for objects or services. They publish certificate templates, and users can create their certificate requests based on those templates. Once the CA receives a request, it processes it and issues the certificate. A PKI can have multiple subordinate CAs. Each subordinate server should have its own certificate from the root CA. The validity period of these certificates is normally longer than that of ordinary certificates, and a subordinate CA needs to renew its certificate from the root CA when it reaches the end of its validity period. A subordinate CA can have further subordinate CAs below it; in such a situation, it is also responsible for issuing certificates to those lower subordinate CAs. Subordinate CAs which have further subordinate CAs under them are called intermediate CAs. These are not responsible for issuing certificates to users, devices or services; that becomes the responsibility of the lower subordinate CAs. The subordinate servers which issue certificates to end entities are called issuing CAs.


Certificate Enrollment Web Service

The Certificate Enrollment Web Service allows users, computers or services to request or renew certificates via a web browser, even when they are not domain-joined or are temporarily outside the corporate network. If the device is domain-joined and in the corporate network, it can use auto-enrollment or the template-based request process to retrieve certificates. This web service removes the dependency on other enrollment mechanisms. 

Certificate Enrollment Policy Web Service

This role service works with the Certificate Enrollment Web Service and allows users, computers or services to perform policy-based certificate enrollment. Similar to the enrollment web service, the client computers can be non-domain-joined, or domain-joined devices outside the company network boundary. When a client requests policy information, the enrollment policy web service queries AD DS using LDAP for the policy information and then delivers it to the client via HTTPS. This information will be cached and used for similar requests. Once the user has the policy information, he/she can request a certificate using the Certificate Enrollment Web Service. 

Certificate Authority Web Enrollment 

This is similar to a web interface for the Certificate Authority. Users, computers or services can request certificates using the web interface. Using the interface, users can also download the root certificates and intermediate certificates in order to validate a certificate. It can also be used to request the certificate revocation list (CRL). This list includes all the certificates which are expired or revoked within the PKI. If any presented certificate matches an entry in the CRL, it will be automatically refused. 

Network Device Enrollment Service

Network devices such as routers, switches and firewalls can have device certificates to verify the authenticity of traffic passing through them. The majority of these devices are not domain-joined, and their operating systems are also very unique and do not support typical Windows computer functions. In order to request or retrieve certificates, they use the Simple Certificate Enrollment Protocol (SCEP). It allows network devices to have x.509 version 3 certificates similar to other domain-joined devices. This is important because if a device is going to use IPsec, it must have an x.509 version 3 certificate.  

Online Responder

The Online Responder is responsible for producing information about certificate status. When I explained CA Web Enrollment, I mentioned the certificate revocation list (CRL). The CRL includes the entire list of certificates which are expired or revoked within the PKI, and the list keeps growing based on the number of certificates it manages. Instead of sending this bulk data, the Online Responder answers individual requests from users to verify the status of a particular certificate. This is more efficient than the CRL method, as each request is focused on finding out the status of one certificate at a given time. 

Certificate Authority Types

Based on the installation mode, CAs can be divided into two types: Standalone CA and Enterprise CA. The best way to explain the capabilities of both types is to compare them. 


Feature | Standalone CA | Enterprise CA

AD DS Dependency | Does not depend on AD DS; can be installed on a member server or a stand-alone server in a workgroup | Can only be installed on a member server

Operate Offline | Can stay offline | Cannot be offline

Customized Certificate Templates | Not supported; only standard templates | Supported

Supported Enrollment Methods | Manual or web enrollment | Auto, manual or web enrollment

Certificate Approval Process | Manual | Manual or automatic based on the policy

User Input for the Certificate Fields | Entered manually | Retrieved from AD DS

Certificate Issuing and Managing using AD DS | Not supported | Supported

A standalone CA is mostly used as the root CA. In the previous section I explained how important root CA security is. A standalone CA supports keeping the server offline and bringing it online only when it needs to issue or renew a certificate. Since root CAs are only used to issue certificates to subordinate CAs, the manual processing and approval are manageable, as this may only need to be done every few years. This type is also valid for public CAs. Issuing CAs are involved with day-to-day certificate issuing, managing, storing, renewing and revoking. Depending on the infrastructure size, there can be hundreds or thousands of users relying on these issuing CAs. If the request and approval process is manual, it may take much manpower to maintain. Therefore, in corporate networks it is always recommended to use the enterprise CA type. Enterprise CAs allow engineers to create certificate templates with specific requirements and publish them via AD DS. End users can request certificates based on these templates. Enterprise CAs can only be installed on the Enterprise or Datacenter editions of Windows Server. 

This marks the end of this blog post. In the next post we are going to look into the deployment models of AD CS. If you have any questions feel free to contact me, and follow me on Twitter @rebeladm to get updates about new blog posts.

How PKI Works ?

When I talk to customers and engineers, most of them know SSL is “more secure” and works over TCP 443. But most of them do not really know what a certificate is or how this encryption and decryption works. It is very important to know exactly how it works; then deployment and management become easy. Most of the PKI-related issues I have worked on are related to misunderstanding of the core technologies, components and concepts involved, rather than service-level issues. 

Symmetric-key vs Asymmetric-key

There are two types of cryptographic methods used to encrypt data in the computer world. The symmetric method works exactly the same way your door lock works: you have one key to lock or open the door. This is also called a shared secret or private key. VPN connections and backup software are some examples that still use symmetric keys to encrypt data.

The asymmetric-key method, on the other hand, uses a key pair to do the encryption and decryption. It includes two keys: one is the public key and the other is the private key. The public key is always distributed publicly, and anyone can have it. The private key is unique to the object and is not distributed to others. Any message encrypted using the public key can only be decrypted using its private key, and any message encrypted using the private key can only be decrypted using the public key. PKI uses the asymmetric-key method for digital encryption and digital signatures. 

Digital Encryption 

Digital encryption means the data transferred between two parties is encrypted, and the sender can ensure it can only be opened by the expected receiver. Even if another, unauthorized party gains access to that encrypted data, they will not be able to decrypt it. The best way to explain it is with the following example. 


We have an employee in the organization called Sean. In a PKI environment, he owns two keys: a public key and a private key. These can be used for the encryption and signature processes. Now he has a requirement to receive a set of confidential data from the company account manager, Chris. He doesn’t want anyone else to have this confidential data. The best way to do this is to encrypt the data that is going to be sent from Chris to Sean. 


In order to encrypt the data, Sean sends his public key to Chris. There is no issue with providing the public key to any party. Then Chris uses this public key to encrypt the data he is sending over to Sean. This encrypted data can only be opened using Sean’s private key, and he is the only one who has it. This verifies the receiver and his authority over the data. 
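The flow above can be sketched with PowerShell’s built-in .NET cryptography classes. This is a minimal illustration only, not Sean’s actual setup; the variable names and sample message are made up, and it assumes PowerShell running on .NET Framework 4.6+ or PowerShell 7.

```powershell
# Sean generates a key pair; the private key never leaves his machine
$sean = [System.Security.Cryptography.RSA]::Create(2048)

# Sean exports only the public half for Chris
$publicOnly = $sean.ExportParameters($false)

# Chris loads Sean's public key and encrypts the confidential data
$chris = [System.Security.Cryptography.RSA]::Create()
$chris.ImportParameters($publicOnly)
$data   = [System.Text.Encoding]::UTF8.GetBytes("Confidential sales figures")
$cipher = $chris.Encrypt($data, [System.Security.Cryptography.RSAEncryptionPadding]::OaepSHA1)

# Only Sean's private key can decrypt what Chris sent
$plain = $sean.Decrypt($cipher, [System.Security.Cryptography.RSAEncryptionPadding]::OaepSHA1)
[System.Text.Encoding]::UTF8.GetString($plain)
```

Anyone intercepting `$cipher` holds at most the public key, which cannot decrypt it.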

Digital Signature 

A digital signature verifies the authenticity of a service or data. It is similar to signing a document to prove its authenticity. As an example, before purchasing anything from Amazon, we can check its digital certificate, which verifies the authenticity of the website and proves it’s not a phishing site. Let’s look into it further with a use case. In the previous scenario, Sean successfully decrypted the data he received from Chris. Now Sean wants to send some confidential data back to Chris. It could be encrypted using the same method with Chris’s public key, but the issue is that Chris is not part of the PKI setup and does not have a key pair. The only thing Chris needs is to verify that the sender is legitimate and is the same user he claims to be. If Sean can certify this using a digital signature and Chris can verify it, the problem is solved. 


Now, Sean encrypts the data using his private key. The only key that can decrypt it is Sean’s public key. Chris already has this information, and even if he doesn’t have the public key, it can be distributed to him. When Chris receives the data, he decrypts it using Sean’s public key, which confirms the sender is definitely Sean. 
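The same .NET classes can illustrate signing, again as a hedged sketch with made-up data. `SignData` hashes the message and encrypts the digest with the private key in one call; `VerifyData` does the reverse using only the public key.

```powershell
# Sean signs the data with his private key
$sean = [System.Security.Cryptography.RSA]::Create(2048)
$data = [System.Text.Encoding]::UTF8.GetBytes("Report prepared by Sean")
$signature = $sean.SignData($data,
    [System.Security.Cryptography.HashAlgorithmName]::SHA256,
    [System.Security.Cryptography.RSASignaturePadding]::Pkcs1)

# Chris verifies using nothing but Sean's public key
$verifier = [System.Security.Cryptography.RSA]::Create()
$verifier.ImportParameters($sean.ExportParameters($false))
$verifier.VerifyData($data, $signature,
    [System.Security.Cryptography.HashAlgorithmName]::SHA256,
    [System.Security.Cryptography.RSASignaturePadding]::Pkcs1)   # returns True
```

If the data or the signature were tampered with in transit, `VerifyData` would return False.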

Signing and Encryption  

In the previous two scenarios, I explained how digital encryption and digital signatures work with PKI. But both of these scenarios can be combined to provide encryption and signing at the same time. In order to do that, the system uses two additional techniques.

Symmetric key – a one-time symmetric key is used for the message encryption process, as it is faster than asymmetric-key encryption algorithms. This key needs to be available to the receiver, but to improve security it is itself encrypted using the receiver’s public key. 

Hashing – during the signing process, the system generates a one-way hash value to represent the original data. Even if someone manages to get that hash value, it is not possible to reverse-engineer it to get the original data. If any modification is made to the data, the hash value changes and the receiver knows straight away. These hashing algorithms are faster than encryption algorithms, and the hashed data is smaller than the actual data values. 
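A quick way to see the one-way hash behaviour is with the SHA256 class available in any PowerShell session; the sample strings here are arbitrary.

```powershell
$sha = [System.Security.Cryptography.SHA256]::Create()

# Hash two messages that differ by a single character
$h1 = $sha.ComputeHash([System.Text.Encoding]::UTF8.GetBytes("Transfer 100"))
$h2 = $sha.ComputeHash([System.Text.Encoding]::UTF8.GetBytes("Transfer 900"))

# Both digests are a fixed 32 bytes, but completely different values
[System.BitConverter]::ToString($h1)
[System.BitConverter]::ToString($h2)
([System.BitConverter]::ToString($h1)) -eq ([System.BitConverter]::ToString($h2))   # False
```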

Let’s look into this with a scenario. We have two employees, Simran and Brian, both using the PKI setup. Both have their private and public keys assigned. 


Simran wants to send an encrypted and signed data segment to Brian. The process can mainly be divided into two stages: data signing and data encryption. The data will go through both stages before it is sent to Brian. 


The first stage is to sign the data segment. The system receives the data from Simran, and the first step is to generate the message digest using a hashing algorithm. This ensures data integrity: if the data is altered once it leaves the sender’s system, the receiver can easily identify it during the decryption process. This is a one-way process. Once the message digest is generated, the next step is to encrypt the digest using Simran’s private key in order to digitally sign it. The message also includes Simran’s public key, so Brian will be able to decrypt and verify the authenticity of the message. Once the encryption process finishes, the signature is attached to the original data value. This process ensures the data was not altered and was sent by the exact expected sender (it is genuine). 


The next stage of the operation is to encrypt the data. The first step in the process is to generate a one-time symmetric key to encrypt the data, since asymmetric algorithms are less efficient than symmetric algorithms for long data segments. Once the symmetric key is generated, the data (including the message digest and signature) is encrypted using it. This symmetric key will be used by Brian to decrypt the message, therefore we need to ensure it is only available to Brian. The best way to do that is to encrypt the symmetric key using Brian’s public key; once he receives it, he will be able to decrypt it using his private key. This step encrypts only the symmetric key itself, and the rest of the message stays the same. Once this is completed, the data can be sent to Brian. 

The next step is to see how the decryption process happens on Brian’s side. 


The message decryption process starts with decrypting the symmetric key. Brian needs the symmetric key to go further with the decryption process, and it can only be decrypted using Brian’s private key. Once it is decrypted, the symmetric key can be used to decrypt the message digest and signature. After decryption is done, the same key cannot be used to decrypt similar messages, as it is a one-time key. 


Now we have the decrypted data, and the next step is to verify the signature. At this point we have a message digest which was encrypted using Simran’s private key. It can be decrypted using Simran’s public key, which is attached to the encrypted message. Once it is decrypted, we can retrieve the message digest. This digest value is one-way; we cannot reverse-engineer it. Therefore a digest of the retrieved original data is recalculated using exactly the same algorithm the sender used. This newly generated digest value is then compared with the digest value attached to the message. If the values are equal, it confirms the data wasn’t modified during the communication process; the signature is verified and the original data is released to Brian. If the digest values are different, the message is discarded, as it has been altered or was not signed by Simran. 
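The whole Simran-to-Brian exchange can be approximated in one script. This is a simplified sketch under stated assumptions: the names are made up, certificate handling is ignored, and a one-time AES key plays the symmetric-key role, wrapped with Brian’s RSA public key as described above.

```powershell
$simran = [System.Security.Cryptography.RSA]::Create(2048)   # sender key pair
$brian  = [System.Security.Cryptography.RSA]::Create(2048)   # receiver key pair
$data   = [System.Text.Encoding]::UTF8.GetBytes("Confidential payload")

# Stage 1 - sign: digest the data and sign it with Simran's private key
$signature = $simran.SignData($data,
    [System.Security.Cryptography.HashAlgorithmName]::SHA256,
    [System.Security.Cryptography.RSASignaturePadding]::Pkcs1)

# Stage 2 - encrypt: a one-time symmetric (AES) key encrypts the data
$aes    = [System.Security.Cryptography.Aes]::Create()
$cipher = $aes.CreateEncryptor().TransformFinalBlock($data, 0, $data.Length)

# The AES key itself travels encrypted with Brian's public key
$wrappedKey = $brian.Encrypt($aes.Key,
    [System.Security.Cryptography.RSAEncryptionPadding]::OaepSHA1)

# --- Brian's side ---
# Unwrap the one-time AES key with Brian's private key, then decrypt the data
$aesKey   = $brian.Decrypt($wrappedKey,
    [System.Security.Cryptography.RSAEncryptionPadding]::OaepSHA1)
$aes2     = [System.Security.Cryptography.Aes]::Create()
$aes2.Key = $aesKey; $aes2.IV = $aes.IV   # the IV is sent along with the message
$plain    = $aes2.CreateDecryptor().TransformFinalBlock($cipher, 0, $cipher.Length)

# Verify the signature with Simran's public key
$pub = [System.Security.Cryptography.RSA]::Create()
$pub.ImportParameters($simran.ExportParameters($false))
$pub.VerifyData($plain, $signature,
    [System.Security.Cryptography.HashAlgorithmName]::SHA256,
    [System.Security.Cryptography.RSASignaturePadding]::Pkcs1)   # True when untampered
```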

This explains how a PKI environment works with the encryption/decryption process as well as the digital signing/verification process.  

If you have any questions feel free to contact me, and follow me on Twitter @rebeladm to get updates about new blog posts.

Group Policy Security Filtering

Group Policy can be mapped to sites, domains and OUs. If a group policy is mapped to an OU, by default it applies to any object under it. But within an OU, domain or site there are lots of objects, and the security, system or application settings requirements covered by group policies do not always apply to such broad target groups. Group Policy filtering capabilities allow us to further narrow down the group policy target to security groups or individual objects. 

There are few different ways we can do the filtering in group policy.

1) Security Filtering

2) WMI Filtering

In this post we are going to look into Security Filtering. In one of my previous posts I already covered WMI filtering. It can be found under 

Before applying security filtering, the first thing to make sure of is that the group policy is mapped correctly to the site, domain or OU. The security group or the objects you are going to target should be under the correct level where the group policy is mapped. 

We can use the GPMC or PowerShell cmdlets to add security filtering to a GPO.

As you can see, by default any policy has the “Authenticated Users” group added to the security filtering. It means that by default the policy will apply to any authenticated user in that OU. When we add any group or object to the security filtering, it also creates an entry under delegation. In order to apply a group policy to an object, it needs a minimum of Read and Apply group policy permissions.
Any object added to the Security Filtering section will have both of these permissions set by default. In the same way, if an object is added directly to the delegation section with both permissions applied, it will be listed under the Security Filtering section. 
Now, before we add custom objects to the filtering, we need to change the default behavior of the security filtering with “Authenticated Users”. Otherwise, no matter what security group or object you add, the policy settings will still apply to any authenticated user. Before Microsoft released security patch MS16-072 in 2016, we could simply remove the Authenticated Users group and add the required objects. With this security patch, group policies now run within the computer’s security context; before, they were executed within the user’s security context. In order to accommodate this new security requirement, one of the following permissions must be present under the group policy delegation:
Authenticated Users – Read
Domain Computers – Read
In order to make these changes, go to the group policy, then to the Delegation tab, click on Advanced, select Authenticated Users and then remove the Apply group policy permission. 
Now we can go back to the Scope tab and add the required security group or objects to the Security Filtering section. It will automatically add the relevant Read and Apply group policy permissions. 
So far we have looked at how to apply a group policy to a specific target, but it is also possible to explicitly allow it for a large number of objects and block it for specific groups or objects. As an example, let’s assume we have an OU with a few hundred objects from different classes. Out of all these, we have about 10 computer objects to which we do not want a given group policy applied. Which is easier: to add each and every security group and object to the security filtering, or to allow everyone and block the policy for just one security group? Microsoft allows the second method of filtering too. In order to do that, the group policy should have the default security filtering, which is “Authenticated Users” with Read and Apply group policy permissions. Then go to the Delegation tab and click on the Advanced option. In the next window, click on the Add button and select the group or object that you need to block. 
Here we are denying the Read and Apply group policy permissions to an object. So, it will not be able to apply the group policy, while all other objects under that OU will still be able to read and apply it. Easy, huh?
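The Scope-tab steps can also be scripted with the GroupPolicy module (available with RSAT/GPMC). This is a hedged sketch: the GPO name "Workstation Lockdown" and the group "IT Admins" are examples, and note that the deny-based blocking described above still has to be done through the Advanced delegation GUI, as Set-GPPermission does not write deny entries.

```powershell
Import-Module GroupPolicy

# Keep Read for Authenticated Users (required after MS16-072) but drop Apply
Set-GPPermission -Name "Workstation Lockdown" -TargetName "Authenticated Users" `
    -TargetType Group -PermissionLevel GpoRead -Replace

# Grant Read + Apply group policy to the intended security group
Set-GPPermission -Name "Workstation Lockdown" -TargetName "IT Admins" `
    -TargetType Group -PermissionLevel GpoApply
```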
This marks the end of this blog post. If you have any questions feel free to contact me, and follow me on Twitter @rebeladm to get updates about new blog posts.

Azure Virtual Machine Scale Sets – Part 01 – What is it and How to set it up?

There are many different solutions available to load-balance applications. They can be based on separate hardware appliances, virtual appliances or built-in methods such as NLB (Network Load Balancing). However, there are a few common challenges in these environments. 

If it is a third-party solution, additional cost is involved for licenses, configuration and maintenance. 

Applications or services do not always use all of the allocated resources; usage depends on demand and time. With a fixed number of instances, infrastructure resources are wasted during off-peak times. If it is a cloud service, that wastes money!

When the number of server instances increases, it becomes harder to manage the systems. Too many manual tasks!

Azure virtual machine scale sets answer all the above challenges. They can automatically increase and decrease the number of VM instances running based on demand or a schedule. No extra virtual appliances or licenses are involved. They also allow you to centrally manage and configure a large number of instances. The following points are recognized as key benefits of Azure virtual machine scale sets.

It supports Azure Load Balancer (Layer 4) and Azure Application Gateway (Layer 7) traffic distribution.

It allows you to maintain the same VM configuration across the instances, including VM size, network, disk, OS image and application installs. 

Using Azure Availability Zones, if required, we can configure VM instances in a scale set to be distributed across different datacenters. This adds additional availability. 

It can automatically increase and decrease the number of VM instances running based on application demand. It saves money!

It can grow up to 1,000 VM instances; with custom images, it supports up to 300 VM instances. 

It supports Azure Managed Disks and Premium Storage. 

Let’s see how we can set up an Azure virtual machine scale set. In my demo I am going to use Azure PowerShell. 

1) Log in to Azure Portal as Global Administrator
2) Open Cloud shell (right hand corner)
3) Make sure you are using PowerShell Option
4) In my demo, the scale set configuration is as follows:
New-AzureRmVmss `
  -ResourceGroupName "rebelResourceGroup" `
  -Location "canadacentral" `
  -VMScaleSetName "rebelScaleSet" `
  -VirtualNetworkName "rebelVnet" `
  -SubnetName "rebelSubnet" `
  -PublicIpAddressName "rebelPublicIPAddress" `
  -LoadBalancerName "rebelLoadBalancer" `
  -BackendPort "80" `
  -VmSize "Standard_DS3_v2" `
  -ImageName "Win2012Datacenter" `
  -InstanceCount "4" `
  -UpgradePolicy "Automatic"
In the above,

New-AzureRmVmss – This is the command used to create an Azure virtual machine scale set.

-ResourceGroupName – This defines the resource group name; here it is a new one.

-Location – This defines the resource region. In my demo it is Canada Central.

-VMScaleSetName – This defines the name for the scale set.

-VirtualNetworkName – This defines the virtual network name.

-SubnetName – This defines the subnet name. If you do not define a subnet prefix, it will use the default.

-PublicIpAddressName – This defines the name for the public IP address. If the allocation method is not defined using -AllocationMethod, it will use dynamic by default.

-LoadBalancerName – This defines the load balancer name.

-BackendPort – This creates the relevant rules in the load balancer and load-balances the traffic. In my demo I am using TCP port 80.

-VmSize – This defines the VM size. If it is not defined, by default it uses Standard_DS2_v2.

-ImageName – This defines the VM image details. If no value is given, it will use the default, which is Windows Server 2016 Datacenter.

-InstanceCount – This defines the initial number of instances running in the scale set.

-UpgradePolicy – This defines the upgrade policy for VM instances in the scale set.

When this is run, it will ask you to define login details for the instances. After it completes, it will create the scale set.
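To confirm the result from the same Cloud Shell session, the matching Get- cmdlets from the AzureRM module can be used (the names assume the demo values above):

```powershell
# Review the scale set configuration
Get-AzureRmVmss -ResourceGroupName "rebelResourceGroup" -VMScaleSetName "rebelScaleSet"

# List the individual VM instances created in the scale set
Get-AzureRmVmssVM -ResourceGroupName "rebelResourceGroup" -VMScaleSetName "rebelScaleSet"
```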


This can also be done using the portal. In order to use the GUI, 

1) Log in to Azure Portal as Global Administrator

2) Go to All Services | Virtual Machine Scale Set


3) On the new page, click on Add.


4) It will open up the form; once you fill in the relevant info, click on Create.


5) We can also review the existing scale set properties using the Virtual machine scale sets page. On the page, click on the scale set name to view its properties. If we click on Instances, we can see the number of instances running.


6) Scaling shows the number of instances used. If needed, it can also be adjusted here. 


7) Size defines the size of the VMs; again, if needed, the values can be changed on the same page. 


8) Also, if we go to Azure Portal | Load Balancers, we can review the settings for the load balancer used in the scale set.


9) In my demo I used TCP port 80 for load balancing. That info can be found under Load balancing rules.


10) The relevant public IP info for the scale set can be found under Inbound NAT rules.



This marks the end of this blog post. In the next post we will look into further configuration of the scale set. If you have any questions feel free to contact me, and follow me on Twitter @rebeladm to get updates about new blog posts.

Active Directory Replication Status Review Using PowerShell

Data replication is crucial for a healthy Active Directory environment. There are different ways to check the status of replication. In this article I am going to explain how you can check the status of domain replication using PowerShell.

For a given domain controller, we can find its inbound replication partners using, 

Get-ADReplicationPartnerMetadata -Target

The above command provides a detailed description for the given domain controller, including the last successful replication, replication partition, server, etc. 

We can list all the inbound replication partners for a given domain using, 

Get-ADReplicationPartnerMetadata -Target "" -Scope Domain

In the above command, the scope is defined as the domain. This can be changed to the forest to get the list of inbound partners in the forest. The output is for the default partition. If needed, the partition can be changed using -Partition to the Configuration or Schema partition. It will list the relevant inbound partners for the given partition. 
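As an example, assuming the rebeladmin.com domain used elsewhere in this blog, the Configuration partition partners could be listed domain-wide with:

```powershell
# Inbound replication partners for the Configuration partition, domain-wide
Get-ADReplicationPartnerMetadata -Target "rebeladmin.com" -Scope Domain -Partition Configuration
```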

Associated replication failures for a site, forest, domain or domain controller can be found using the Get-ADReplicationFailure cmdlet. 

Get-ADReplicationFailure -Target

The above command will list the replication failures for the given domain controller. 

Replication failures for a domain can be found using, 

Get-ADReplicationFailure -Target -Scope Domain

Replication failures for a forest can be found using, 

Get-ADReplicationFailure -Target -Scope Forest

Replication failures for a site can be found using, 

Get-ADReplicationFailure -Target LondonSite -Scope Site

In the command, LondonSite can be replaced with the relevant site name. 

Using both Get-ADReplicationPartnerMetadata and Get-ADReplicationFailure, the following PowerShell script can generate a report against a specified domain controller. 

## Active Directory Domain Controller Replication Status##

$domaincontroller = Read-Host 'What is your Domain Controller?'

## Define Objects ##

$report = New-Object PSObject -Property @{

ReplicationPartners = $null

LastReplication = $null

FailureCount = $null

FailureType = $null

FirstFailure = $null

}

## Replication Partners ##

$report.ReplicationPartners = (Get-ADReplicationPartnerMetadata -Target $domaincontroller).Partner

$report.LastReplication = (Get-ADReplicationPartnerMetadata -Target $domaincontroller).LastReplicationSuccess

## Replication Failures ##

$report.FailureCount  = (Get-ADReplicationFailure -Target $domaincontroller).FailureCount

$report.FailureType = (Get-ADReplicationFailure -Target $domaincontroller).FailureType

$report.FirstFailure = (Get-ADReplicationFailure -Target $domaincontroller).FirstFailureTime

## Format Output ##

$report | select ReplicationPartners,LastReplication,FirstFailure,FailureCount,FailureType | Out-GridView

In this script, the engineer is given the option to specify the domain controller name. 

$domaincontroller = Read-Host 'What is your Domain Controller?'

Then it creates an object and maps its properties to the results of the PowerShell command outputs. Last but not least, it displays a report including, 

Replication Partner (ReplicationPartners)

Last Successful Replication (LastReplication)

AD Replication Failure Count (FailureCount)

AD Replication Failure Type (FailureType)

AD Replication Failure First Recorded Time (FirstFailure)


With regard to Active Directory replication topology, there are two types of replication.

1) Intra-Site – Replications between domain controllers in same Active Directory Site

2) Inter-Site – Replication between domain controllers in different Active Directory Site

We can review AD replication site objects using Get-ADReplicationSite cmdlet. 

Get-ADReplicationSite -Filter *

The above command returns all the AD replication sites in the AD forest. 


We can review AD replication site links on the AD forest using, 

Get-ADReplicationSiteLink -Filter *

For site links, the most important information is the site cost and the replication schedule. It allows us to understand the replication topology and the expected delays in replication. 

Get-ADReplicationSiteLink -Filter {SitesIncluded -eq "CanadaSite"} | Format-Table Name,Cost,ReplicationFrequencyInMinutes -A

The above command lists all the replication site links which include the CanadaSite AD site, along with the site link name, link cost and replication frequency. 

A site link bridge can be used to bundle two or more site links and enable transitivity between them.

Site link bridge information can be retrieved using, 

Get-ADReplicationSiteLinkBridge -Filter *

Active Directory sites may use multiple IP address segments for their operations. It is important to associate those with the AD site configuration so domain controllers know which computer is related to which site. 

Get-ADReplicationSubnet -Filter * | Format-Table Name,Site -A

The above command will list all the subnets in the forest in a table with the subnet name and AD site.


Bridgehead servers operate as the primary communication points that handle the replication data coming into and going out of an AD site. 

We can list all the preferred bridgehead servers in a domain using, 

$BHservers = ([adsi]"LDAP://CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=rebeladmin,DC=com").bridgeheadServerListBL

$BHservers | Out-GridView

In the above command, the attribute value bridgeheadServerListBL is retrieved via an ADSI connection. 

We can list all of these findings using one script. 

## Script to gather information about Replication Topology ##

## Define Objects ##

$replreport = New-Object PSObject -Property @{

Domain = $null

}

## Find Domain Information ##

$replreport.Domain = (Get-ADDomain).DNSroot

## List down the AD sites in the Domain ##

$a = (Get-ADReplicationSite -Filter *)

Write-Host "########" $replreport.Domain "Domain AD Sites" "########"

$a | Format-Table Description,Name -AutoSize

## List down Replication Site link Information ##

$b = (Get-ADReplicationSiteLink -Filter *)

Write-Host "########" $replreport.Domain "Domain AD Replication SiteLink Information" "########"

$b | Format-Table Name,Cost,ReplicationFrequencyInMinutes -AutoSize

## List down SiteLink Bridge Information ##

$c = (Get-ADReplicationSiteLinkBridge -Filter *)

Write-Host "########" $replreport.Domain "Domain AD SiteLink Bridge Information" "########"

$c | select Name,SiteLinksIncluded | Format-List

## List down Subnet Information ##

$d = (Get-ADReplicationSubnet -Filter * | select Name,Site)

Write-Host "########" $replreport.Domain "Domain Subnet Information" "########"

$d | Format-Table Name,Site -AutoSize

## List down Preferred BridgeHead Servers ##

$e = ([adsi]"LDAP://CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=rebeladmin,DC=com").bridgeheadServerListBL

Write-Host "########" $replreport.Domain "Domain Preferred BridgeHead Servers" "########"

$e

## End of the Script ##

The only thing we need to change is the ADSI connection to use the relevant domain DN. 

$e = ([adsi]"LDAP://CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=rebeladmin,DC=com")

This marks the end of this blog post. If you have any questions feel free to contact me, and follow me on Twitter @rebeladm to get updates about new blog posts.

Manage Active Directory Permissions with Delegate Control method

In one of my previous posts I explained how we can manage AD administration privileges using ACLs. If you didn't read it yet, it is worth reviewing it first.

The Delegate Control method works similarly to ACLs, but it simplifies the process because it:

Provides the Delegation of Control Wizard, which can be used to apply delegated permissions. 

Allows the use of predefined tasks, and the assignment of permissions to those

The wizard contains the following predefined tasks which can be used to assign permissions. 

Create, delete, and manage user accounts

Reset user passwords and force password change at next logon

Read all user information

Create, delete and manage groups

Modify the membership of a group

Manage Group Policy links

Generate Resultant Set of Policy (Planning)

Generate Resultant Set of Policy (Logging)

Create, delete, and manage inetOrgPerson accounts

Reset inetOrgPerson passwords and force password change at next logon

Read all inetOrgPerson information

It also allows the creation of a custom task to delegate permissions, if the requirement is not covered by the common task list. 

Similar to ACLs, permissions can be applied at:

1) Site – Delegated permissions will be valid for all the objects under the given Active Directory site. 

2) Domain – Delegated permissions will be valid for all the objects under the given Active Directory domain. 

3) OU – Delegated permissions will be valid for all the objects under the given Active Directory OU.
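After delegation is applied (whether via ACLs or the wizard), the resulting entries can be reviewed from PowerShell using the AD: drive provided by the ActiveDirectory module. A minimal sketch, assuming the example rebeladmin.com domain used in this post:

```powershell
## Review non-inherited (delegated) permissions on an OU ##
Import-Module ActiveDirectory
$acl = Get-Acl "AD:\OU=Europe,DC=rebeladmin,DC=com"
$acl.Access | Where-Object { -not $_.IsInherited } |
    Select-Object IdentityReference,ActiveDirectoryRights,AccessControlType |
    Format-Table -AutoSize
```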

As an example, I have a security group called Second Line Engineers and Scott is a member of it. I would like to allow members of this group to reset passwords for objects in OU=Users,OU=Europe,DC=rebeladmin,DC=com and nothing else. 

1) Log in to Domain Controller as Domain Admin/Enterprise Admin

2) Review the group membership using 

Get-ADGroupMember "Second Line Engineers"


3) Go to ADUC, right-click on the Europe OU, then from the list click on Delegate Control…

4) This will open a new wizard. On the initial page, click Next to proceed. 

5) On the next page, click the Add button and add the Second Line Engineers group. Then click Next to proceed.


6) In the tasks to delegate window, select the Delegate the following common tasks option and from the list select Reset user passwords and force password change at next logon. On this page, we can select multiple tasks. If none of those work, we can still create a custom task to delegate. Once the selection is complete, click Next to proceed. 


7) This completes the wizard. Click Finish to exit. 

8) Now it's time for testing. I log in to a Windows 10 computer which has the RSAT tools installed, as user Scott. 

According to the permissions, I should be able to reset the password of an object under OU=Users,OU=Europe,DC=rebeladmin,DC=com

Set-ADAccountPassword -Identity dfrancis

This changes the password successfully. 


However, it should not allow Scott to delete any objects. We can test it using,

Remove-ADUser -Identity "CN=Dishan Francis,OU=Users,OU=Europe,DC=rebeladmin,DC=com"

And as expected, it returns an access denied error. 
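The same test can be scripted end to end. A minimal sketch, assuming standard ActiveDirectory module cmdlets; the new password value is a placeholder:

```powershell
## Reset the password non-interactively and force a change at next logon ##
$newPass = ConvertTo-SecureString "N3wP@ssw0rd!" -AsPlainText -Force   # placeholder value
Set-ADAccountPassword -Identity dfrancis -Reset -NewPassword $newPass
Set-ADUser -Identity dfrancis -ChangePasswordAtLogon $true

## Confirm the delegation boundary - the delete attempt should be denied ##
try {
    Remove-ADUser -Identity "CN=Dishan Francis,OU=Users,OU=Europe,DC=rebeladmin,DC=com" -Confirm:$false -ErrorAction Stop
} catch {
    Write-Host "Delete blocked as expected:" $_.Exception.Message
}
```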


This marks the end of this blog post. If you have any questions, feel free to contact me; also follow me on Twitter @rebeladm to get updates about new blog posts.

Integrity check to Detect Low Level Active Directory Database Corruption

Active Directory maintains a multi-master database. Like any other database, it can suffer data corruption, crashes, data loss, etc. In my entire career, I have not yet come across a situation where a full database recovery was required in a production environment. The reason is that the AD DS database keeps replicating to the other available Domain Controllers, and it is very rare that all the available Domain Controllers crash at the same time and lose data.

By running an integrity check, we can identify binary-level AD database corruption. The check comes as part of the Ntdsutil tool, which is used for Active Directory database maintenance. It goes through every byte of the database file. The integrity command also checks whether correct headers exist in the database itself and whether all of the tables are functioning and consistent. This process also runs as part of Directory Services Restore Mode (DSRM).

This check needs to run with the NTDS service stopped. 

In order to run integrity check,

1) Log in to the Domain Controller as a Domain/Enterprise Administrator
2) Open PowerShell as Administrator
3) Stop the NTDS service using net stop ntds
4) Type ntdsutil to launch the utility, then type
activate instance ntds
files
integrity
5) In order to exit from the utility, type quit (twice, to leave the file maintenance prompt and then the utility itself).
6) It is also recommended to run a semantic database analysis to confirm the consistency of the Active Directory database contents. 
7) In order to do it, relaunch ntdsutil and type
activate instance ntds
semantic database analysis
go
8) If it detects any integrity issues, type go fixup to fix the errors. 
9) After the process is completed, type net start ntds to start the NTDS service.
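The interactive steps above can also be run non-interactively, since ntdsutil accepts its commands as quoted arguments. A minimal sketch:

```powershell
## Integrity check - the NTDS service must be stopped first ##
net stop ntds
ntdsutil "activate instance ntds" files integrity quit quit

## Semantic database analysis - check only; use "go fixup" instead of "go" to repair ##
ntdsutil "activate instance ntds" "semantic database analysis" go quit quit

net start ntds
```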
This marks the end of this blog post. If you have any questions, feel free to contact me; also follow me on Twitter @rebeladm to get updates about new blog posts.