Author Archives: Dishan M. Francis

Active Directory Certificate Service Components

In my previous post I explained what PKI is and how it works. You can find it at http://www.rebeladmin.com/2018/05/how-pki-works/ 

Active Directory Certificate Services is the Microsoft solution for PKI. It is a collection of role services which can be used to design a PKI for your organization. In this post we are going to look into each of these role services and their responsibilities. 

Certificate Authority (CA)

The CA role service holder is responsible for issuing, storing, managing and revoking certificates. A PKI setup can have multiple CAs. There are mainly two types of CA which can be identified in a PKI. 

Root CA

The root CA is the most trusted CA in a PKI environment. Compromise of the root CA will possibly compromise the entire PKI. Therefore, the security of the root CA is critical, and most organizations only bring it online when they need to issue or renew a certificate. A root CA is also capable of issuing certificates to any object or service, but considering security and the hierarchy of the PKI, it is normally used to issue certificates only to subordinate CAs. 

Subordinate CA

In a PKI, subordinate CAs are responsible for issuing, storing, managing and revoking certificates for objects or services. They publish certificate templates, and users can create their certificate requests based on those templates. Once the CA receives a request, it will process it and issue the certificate. A PKI can have multiple subordinate CAs. Each subordinate CA should have its own certificate from the root CA. The validity period of these certificates is normally longer than that of ordinary certificates, and a subordinate CA needs to renew its certificate from the root CA when it reaches the end of its validity period. A subordinate CA can have more subordinate CAs below it. In such a situation, the subordinate CA is also responsible for issuing certificates to its own subordinate CAs. Subordinate CAs which have further subordinate CAs below them are called intermediate CAs. These will not be responsible for issuing certificates to users, devices or services; that becomes the responsibility of the CAs below them. The subordinate CAs at the bottom of the hierarchy, which issue certificates to end entities, are called issuing CAs.

comp1

Certificate Enrollment Web Service

The Certificate Enrollment Web Service allows users, computers or services to request or renew certificates via a web browser, even when the device is not domain-joined or is temporarily outside the corporate network. If it is domain-joined and inside the corporate network, it can use auto-enrollment or the template-based request process to retrieve certificates. This web service removes the dependency on other enrollment mechanisms. 

Certificate Enrollment Policy Web Service

This role service works with the Certificate Enrollment Web Service and allows users, computers or services to perform policy-based certificate enrollment. Similar to the enrollment web service, the client computers can be non-domain-joined, or domain-joined devices outside the company network boundaries. When a client requests policy information, the enrollment policy web service queries AD DS using LDAP for the policy information and then delivers it to the client via HTTPS. This information will be cached and used for similar requests. Once the user has the policy information, he/she can request a certificate using the Certificate Enrollment Web Service. 

Certificate Authority Web Enrollment 

This provides a web interface for the Certificate Authority. Users, computers or services can request certificates using this web interface. Using the interface, users can also download the root certificate and intermediate certificates in order to validate certificates. It can also be used to request the certificate revocation list (CRL). This list includes all the certificates which have expired or been revoked within the PKI. If a presented certificate matches an entry in the CRL, it will automatically be refused. 

Network Device Enrollment Service

Network devices such as routers, switches and firewalls can have device certificates to verify the authenticity of traffic passing through them. The majority of these devices are not domain-joined, and their operating systems are also very unique and do not support typical Windows computer functions. In order to request or retrieve certificates, they use the Simple Certificate Enrollment Protocol (SCEP). It allows network devices to have X.509 version 3 certificates similar to other domain-joined devices. This is important because if a device is going to use IPsec, it must have an X.509 version 3 certificate.  

Online Responder

The online responder is responsible for producing information about certificate status. When I explained CA Web Enrollment, I mentioned the certificate revocation list (CRL). The CRL includes the entire list of certificates which have expired or been revoked within the PKI, and the list keeps growing based on the number of certificates the PKI manages. Instead of serving that bulk data, the online responder responds to individual requests from users to verify the status of a particular certificate. This is more efficient than the CRL method, as each request is focused on the status of one certificate at a given time. 

Certificate Authority Types

Based on the installation mode, CAs can be divided into two types: standalone CAs and enterprise CAs. The best way to explain the capabilities of both types is to compare them. 

| Feature | Standalone CA | Enterprise CA |
|---|---|---|
| AD DS dependency | Does not depend on AD DS; can be installed on a member server or a stand-alone server in a workgroup | Can only be installed on a member server |
| Operate offline | Can stay offline | Cannot be offline |
| Customized certificate templates | Not supported; only standard templates | Supported |
| Supported enrollment methods | Manual or web enrollment | Auto, manual or web enrollment |
| Certificate approval process | Manual | Manual or automatic, based on the policy |
| User input for the certificate fields | Manual | Retrieved from AD DS |
| Certificate issuing and managing using AD DS | N/A | Supported |

Standalone CAs are mostly used as root CAs. In the previous section I explained how important root CA security is. A standalone CA supports keeping the server offline and bringing it online only when it needs to issue or renew a certificate. Since a root CA is only used to issue certificates to subordinate CAs, the manual processing and approval are manageable, as they may only need to be done every few years. This type is also valid for public CAs. Issuing CAs are involved with the day-to-day certificate issuing, managing, storing, renewing and revoking processes. Depending on the infrastructure size, there can be hundreds or thousands of users relying on these issuing CAs. If the request and approval process is manual, it may take a lot of manpower to maintain. Therefore, in corporate networks it is always recommended to use the enterprise CA type. Enterprise CAs allow engineers to create certificate templates with specific requirements and publish them via AD DS. End users can request certificates based on these templates. Enterprise CAs can only be installed on the Enterprise or Datacenter editions of Windows Server. 

This marks the end of this blog post. In the next post we are going to look into the deployment models of AD CS. If you have any questions feel free to contact me on rebeladm@live.com also follow me on twitter @rebeladm to get updates about new blog posts.

How PKI Works?

When I talk to customers and engineers, most of them know SSL is “more secure” and works on TCP 443. But most of them do not really know what a certificate is or how encryption and decryption actually work. It is very important to know exactly how it works; then the deployment and management become easy. Most of the PKI-related issues I have worked on were related to misunderstandings of the core technologies, components and concepts involved, rather than service-level issues. 

Symmetric-key vs Asymmetric-key

There are two types of cryptographic methods used to encrypt data in the computer world. The symmetric method works exactly the way your door lock works: you have one key to lock or open the door. This key is also called a shared secret or private key. VPN connections and backup software are some examples which still use symmetric keys to encrypt data.

The asymmetric-key method, on the other hand, uses a key pair to do the encryption and decryption. It includes two keys: one is the public key and the other is the private key. The public key is distributed publicly, and anyone can have it. The private key is unique to the object, and it is not distributed to others. Any message encrypted using the public key can only be decrypted using its private key. Any message encrypted using the private key can only be decrypted using the public key. PKI uses the asymmetric-key method for digital encryption and digital signatures. 
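To make the key-pair behavior concrete, here is a minimal PowerShell sketch using the .NET RSA implementation. It is my own illustrative example, not part of any PKI deployment; in a real PKI the keys live inside certificates.

# Minimal sketch: asymmetric encryption with a .NET RSA key pair (illustrative only)
$rsa = New-Object System.Security.Cryptography.RSACryptoServiceProvider 2048
$data = [System.Text.Encoding]::UTF8.GetBytes("Confidential data")
# Encrypt with the public half of the key pair (OAEP padding)
$cipher = $rsa.Encrypt($data, $true)
# Only the matching private key can decrypt the result
$plain = $rsa.Decrypt($cipher, $true)
[System.Text.Encoding]::UTF8.GetString($plain)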

Digital Encryption 

Digital encryption means the data transferred between two parties is encrypted, and the sender can ensure it can only be opened by the expected receiver. Even if an unauthorized party gains access to that encrypted data, they will not be able to decrypt it. The best way to explain it is with the following example, 

pki1

We have an employee in the organization called Sean. In a PKI environment, he owns two keys: a public key and a private key. They can be used for the encryption and signature processes. Now he has a requirement to receive a set of confidential data from the company account manager, Chris. He doesn’t want anyone else to have this confidential data. The best way to do this is to encrypt the data which is going to be sent from Chris to Sean. 

pki2

In order to encrypt the data, Sean sends his public key to Chris. There is no issue with providing the public key to any party. Then Chris uses this public key to encrypt the data he is sending over to Sean. This encrypted data can only be opened using Sean’s private key, and he is the only one who has it. This verifies the receiver and his authority over the data. 

Digital Signature 

A digital signature verifies the authenticity of a service or data. It is similar to signing a document to prove its authenticity. As an example, before purchasing anything from Amazon, we can check its digital certificate, which verifies the authenticity of the website and proves it’s not a phishing site. Let’s look into it further with a use case. In the previous scenario, Sean successfully decrypted the data he received from Chris. Now Sean wants to send some confidential data back to Chris. It could be encrypted using the same method with Chris’s public key, but the issue is that Chris is not part of the PKI setup and does not have a key pair. The only thing Chris needs is to verify that the sender is legitimate and is the same user he claims to be. If Sean can certify the data using a digital signature, and Chris can verify it, the problem is solved. 

pki3

Here, Sean encrypts the data using his private key. Now the only key that can decrypt it is Sean’s public key. Chris already has this information, and even if he doesn’t have the public key, it can be distributed to him. When Chris receives the data, he decrypts it using Sean’s public key, which confirms the sender is definitely Sean. 
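The sign/verify flow can be sketched the same way in PowerShell. Again, this is a simplified illustration I am adding for clarity, not how certificates are actually exchanged:

# Simplified sign/verify sketch (illustrative only)
$rsa = New-Object System.Security.Cryptography.RSACryptoServiceProvider 2048
$data = [System.Text.Encoding]::UTF8.GetBytes("Message from Sean")
# Sign with the private key; the data is hashed with SHA-256 first
$signature = $rsa.SignData($data, "SHA256")
# Anyone holding the public key can verify the signature
$rsa.VerifyData($data, "SHA256", $signature)   # returns True if the data is untampered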

Signing and Encryption  

In the previous two scenarios, I explained how digital encryption and digital signatures work with PKI. But these two scenarios can be combined to provide encryption and signing at the same time. In order to do that, the system uses two additional techniques.

Symmetric key – A one-time symmetric key is used for the message encryption process, as it is faster than asymmetric-key encryption algorithms. This key needs to be available to the receiver, but to improve security it is itself encrypted using the receiver’s public key. 

Hashing – During the signing process, the system generates a one-way hash value to represent the original data. Even if someone manages to get that hash value, it is not possible to reverse-engineer it to get the original data. If any modification is made to the data, the hash value changes and the receiver will know straight away. Hashing algorithms are faster than encryption algorithms, and the hashed value is also smaller than the actual data. 
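As a quick illustration of the one-way hash, the following sketch computes a SHA-256 digest in PowerShell; changing even one character of the input produces a completely different digest value.

# One-way hash sketch: a fixed-length digest represents the original data
$sha256 = [System.Security.Cryptography.SHA256]::Create()
$digest = $sha256.ComputeHash([System.Text.Encoding]::UTF8.GetBytes("original data"))
[System.BitConverter]::ToString($digest)   # 32-byte digest shown as hex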

Let’s look into this based on a scenario. We have two employees, Simran and Brian, both using the PKI setup. Both have their private and public keys assigned. 

pki4

Simran wants to send an encrypted and signed data segment to Brian. The process can mainly be divided into two stages: data signing and data encryption. The data will go through both stages before it is sent to Brian. 

pki5

The first stage is to sign the data segment. The system receives the data from Simran, and the first step is to generate the message digest using a hashing algorithm. This ensures data integrity: if the data is altered once it leaves the sender’s system, the receiver can easily identify it during the decryption process. This is a one-way process. Once the message digest is generated, the next step is to encrypt the digest using Simran’s private key in order to digitally sign it. The message also includes Simran’s public key, so Brian will be able to decrypt and verify the authenticity of the message. Once the encryption process finishes, the signature is attached to the original data value. This process ensures the data was not altered and was sent by the exact expected sender (it is genuine). 

pki6

The next stage of the operation is to encrypt the data. The first step in the process is to generate a one-time symmetric key to encrypt the data, since asymmetric algorithms are less efficient than symmetric algorithms for long data segments. Once the symmetric key is generated, the data (including the message digest and signature) is encrypted with it. This symmetric key will be used by Brian to decrypt the message; therefore, we need to ensure it is only available to Brian. The best way to do that is to encrypt the symmetric key using Brian’s public key, so once he receives it, he will be able to decrypt it using his private key. This step encrypts only the symmetric key itself; the rest of the message stays the same. Once it is completed, the data can be sent to Brian. 

The next step of the process is to see how the decryption happens on Brian’s side. 

pki7

The message decryption process starts with decrypting the symmetric key. Brian needs the symmetric key to go further with the decryption process, and it can only be decrypted using Brian’s private key. Once it is decrypted, the symmetric key can be used to decrypt the message digest and signature. After decryption, the same key cannot be used to decrypt similar messages, as it is a one-time key. 

pki8

Now we have the decrypted data, and the next step is to verify the signature. At this point we have the message digest, which was encrypted using Simran’s private key. It can be decrypted using Simran’s public key, which is attached to the encrypted message. Once it is decrypted, we can retrieve the message digest. This digest value is one-way; we cannot reverse-engineer it. Therefore, a digest of the received original data is recalculated using exactly the same algorithm used by the sender. After that, this newly generated digest value is compared with the digest value attached to the message. If the values are equal, it confirms the data wasn’t modified during the communication process. When the values are equal, the signature is verified and the original data is released to Brian. If the digest values are different, the message is discarded, as it has been altered or was not signed by Simran. 

This explains how a PKI environment works with the encryption/decryption process as well as the digital signing/verification process.  

If you have any questions feel free to contact me on rebeladm@live.com also follow me on twitter @rebeladm to get updates about new blog posts.

Step-by-Step Guide to protect Azure VM using Azure Backup

Azure Backup is capable of replacing typical on-premises backup solutions. It is a cloud-based, secure and reliable solution. It has four components which can be used to back up different types of data.

| Component | Protected data | Can use with on-premises? | Can use with Azure? |
|---|---|---|---|
| Azure Backup (MARS) agent | Files, folders, system state | Yes | Yes |
| System Center DPM | Files, folders, volumes, VMs, applications, workloads, system state | Yes | Yes |
| Azure Backup Server | Files, folders, volumes, VMs, applications, workloads, system state | Yes | Yes |
| Azure IaaS VM Backup | VMs, all disks (using PowerShell) | No | Yes |

More details about Azure Backup and component limitations can be found at https://docs.microsoft.com/en-us/azure/backup/backup-introduction-to-azure-backup 

In this article we are going to look into Azure VM backup (Azure IaaS VM Backup). 

How Does Azure VM Backup Work? 

Azure VM backup doesn’t need any special agent installed in the VM. It also does not need any additional component (backup server) installed to enable backup. When the very first backup job is triggered, it installs a backup extension inside the VM: if it’s a Windows VM, it installs the VMSnapshot extension, and if it’s a Linux VM, it installs the VMSnapshotLinux extension. The VM must be in a running state in order to install the extension. With the extension in place, it takes a point-in-time snapshot of the VM. If the VM is not running during the backup window, it takes a snapshot of the VM storage. For Windows VMs, the backup service uses the Volume Shadow Copy Service (VSS) to get a consistent snapshot of the VM disks. For Linux VMs, users can create custom scripts to run before and after the backup job to maintain application consistency. Once the snapshot is taken, it is transferred to the backup vault. The service can identify recent changes and only transfer the blocks of data which changed since the last backup. Once the data transfer completes, the snapshot is removed and a recovery point is created. 

vmbackup-architecture

Image Source: https://docs.microsoft.com/en-us/azure/backup/media/backup-azure-vms-introduction/vmbackup-architecture.png 

Performance of backup depends on,

1) Storage account limitations 

2) Number of disks in VM

3) Backup schedule – if all jobs run at the same time, it can create a traffic jam

According to Microsoft, the following are recommended when you use Azure Backup for Azure VMs. Reference: https://docs.microsoft.com/en-us/azure/backup/backup-azure-vms-introduction 

1) Do not schedule more than 40 VMs to back up at the same time.

2) Schedule VM backups for when minimum IOPS are being used in your environment (in the relevant storage accounts). 

3) It is better not to back up more than 20 disks in a single storage account. If you have more than 20 disks in a single storage account, spread those VMs across multiple policies to maintain the required IOPS. 

4) Do not restore a VM running on Premium Storage to the same storage account. Also try to avoid a restore while a backup process is running on the same storage account.

5) For Premium VM backup, ensure that the storage account that hosts the premium disks has at least 50% free space for staging the snapshot, for a successful backup.

6) Linux VMs need Python 2.7 enabled for backup.

Next step is to see this in action.

1) Log in to Azure Portal as Global Administrator

2) The first step is to create an Azure Recovery Services vault. In order to do that, go to All Services and click on Recovery Services vaults under the Storage section. 

bk1

3) Then click on Add in the new window

bk2

4) It will open up a wizard; there, provide the vault name, subscription, resource group and location. Once done, click on Create.

bk3
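If you prefer scripting over the portal, the same vault can be created with the AzureRM PowerShell module. This is a minimal sketch; the vault, resource group and region names below are examples.

# Sketch: create a Recovery Services vault with PowerShell (names are examples)
New-AzureRmRecoveryServicesVault -Name "rebelVault" -ResourceGroupName "rebelRG" -Location "canadacentral"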

5) Now that we have the vault created, the next step is to create a backup policy. To do that, click on the vault we just created in the Recovery Services vaults window.

bk4

6) Then click on Backup Policies 

bk5

7) There is a default policy for Azure VM backup. It backs up VMs daily and keeps the data for 30 days.

bk6

8) I am going to create a new policy to back up every day at 01:00 AM and keep the data for 7 days. To do that, click on the Add option in the policy window. 

bk7

9) Then select the policy type. For VMs, it should be Azure Virtual Machine.

bk8

10) In the next window we can define the schedule time and the retention period for the data. Once done with the details, click on Create.

bk9

11) The next step of the configuration is to enable backup. In order to do that, go to the VM you would like to back up. Then click on the Backup option. 

bk10

12) Then in the new window, select the vault and policy we created before, and then click on Enable Backup.

bk11
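The same protection can also be enabled from PowerShell with the AzureRM module. This sketch assumes example vault, policy and VM names; adjust them for your environment.

# Sketch: enable VM backup using PowerShell (vault/policy/VM names are examples)
$vault = Get-AzureRmRecoveryServicesVault -Name "rebelVault"
Set-AzureRmRecoveryServicesVaultContext -Vault $vault
$policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name "DailyPolicy"
Enable-AzureRmRecoveryServicesBackupProtection -Policy $policy -Name "rebelVM01" -ResourceGroupName "rebelRG"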

13) Once that is done, we can run a backup by going into the same backup window. If you would like to take an ad-hoc backup, click on Backup Now.

bk12

14) We can see the progress of the backup job by clicking View All Jobs

bk13

bk14

15) Once the backup job completes, we can see its status in the same backup window.

bk15

16) To test the restore, I installed Acrobat Reader on this server and created a test folder on the desktop. 

bk16

17) Now I am going to do a restore to an earlier day. To do that, go to the VM backup page, then click on Restore VM.

bk17

18) In the next window it asks which recovery point to restore. I am selecting a backup from 3 days ago.

bk18

19) The next window allows me to restore it as a new VM or as disks. Here I am going to restore it as a new VM.

bk19

20) Once the selection is done, click on Restore to begin the process.

21) We can also check the status of the job using the backup jobs window.

bk20

22) Once the restore is completed, I can see a new VM. 

bk21

23) Once logged in to the VM, I can't see the folder or the application I installed, as expected. 

bk22

This marks the end of this blog post. If you have any questions feel free to contact me on rebeladm@live.com also follow me on twitter @rebeladm to get updates about new blog posts.

Azure virtual machine scale sets – part 02 – Deploy Application to scale set

In my previous post, Azure virtual machine scale sets – part 01, we learned what a VM scale set is and how we can create a scale set in Azure. If you have not read it yet, please go through it before starting on this post, as the rest of the steps here depend on it: http://www.rebeladmin.com/2018/04/azure-virtual-machine-scale-sets-part-01/ 

In this post we are going to deploy a sample application to the scale set. In my previous post I created a new scale set using,

New-AzureRmVmss `
  -ResourceGroupName "rebelResourceGroup" `
  -Location "canadacentral" `
  -VMScaleSetName "rebelScaleSet" `
  -VirtualNetworkName "rebelVnet" `
  -SubnetName "rebelSubnet" `
  -PublicIpAddressName "rebelPublicIPAddress" `
  -LoadBalancerName "rebelLoadBalancer" `
  -BackendPort "80" `
  -VmSize "Standard_DS3_v2" `
  -ImageName "Win2012Datacenter" `
  -InstanceCount "4" `
  -UpgradePolicy "Automatic"

The above created an Azure load balancer, and TCP port 80 is load-balanced among the 4 instances. Under Azure Load Balancer | Inbound NAT rules there are default rules for ports 3389 and 5985. Those ports are mapped to custom TCP ports in order to give external access. 

scaleapp1

As an example, in the above sample, I can RDP to instance 0 using 52.237.8.186:50000. Likewise, we can connect to each instance and install apps if needed. Instead of that, we can use centralized remote deployment, so the configuration is the same across the instances. 

In my config I didn't use a static IP address. You can find the public IP address by running the following Azure PowerShell command,

Get-AzureRmPublicIpAddress -ResourceGroupName rebelResourceGroup | Select IpAddress

scaleapp2

In order to push the application, we first need to prepare the app config. In my demo I have a file in a GitHub repository. 

$customConfig = @{
  "fileUris" = (,"https://raw.githubusercontent.com/rebeladm/rebeladm/master/simplewebapp.ps1");
  "commandToExecute" = "powershell -ExecutionPolicy Unrestricted -File simplewebapp.ps1"
}

My config is a very simple one. The PowerShell script contains the following,

Add-WindowsFeature Web-Server
Set-Content -Path "C:\inetpub\wwwroot\Default.htm" -Value "Test webapp running on host $($env:computername) !"

It will install IIS and then create an HTML file which prints text including the instance name. 

scaleapp3

As the next step, let's go and retrieve info about the scale set,

$vmss = Get-AzureRmVmss `
  -ResourceGroupName "rebelResourceGroup" `
  -VMScaleSetName "rebelScaleSet"

scaleapp4

After that, let's create the custom script extension,

$vmss = Add-AzureRmVmssExtension `
  -VirtualMachineScaleSet $vmss `
  -Name "customScript" `
  -Publisher "Microsoft.Compute" `
  -Type "CustomScriptExtension" `
  -TypeHandlerVersion 1.8 `
  -Setting $customConfig

In the above,

-Publisher specifies the name of the extension publisher. This can be found using Get-AzureRmVMImagePublisher. 

-Type specifies the extension type. We can use Get-AzureRmVMExtensionImageType to find the extension type. 

-TypeHandlerVersion specifies the extension version. It can be viewed using Get-AzureRmVMExtensionImage.

scaleapp5

The next step of the configuration is to update the scale set with the custom extension,

Update-AzureRmVmss `
  -ResourceGroupName "rebelResourceGroup" `
  -Name "rebelScaleSet" `
  -VirtualMachineScaleSet $vmss

scaleapp6

Now it is time for testing. Let's go to the public IP address and see if it has the app we deployed. 

As I refresh, we can see the instance name getting updated. That means the script ran successfully across the scale set as expected. 
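You can also run a quick check from PowerShell instead of a browser; the IP address below is the public IP from this demo.

# Quick check of the load-balanced web app (the IP is the demo address)
(Invoke-WebRequest -Uri "http://52.237.8.186" -UseBasicParsing).Content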

scaleapp7

scaleapp8

scaleapp9

This marks the end of this blog post. If you have any questions feel free to contact me on rebeladm@live.com also follow me on twitter @rebeladm to get updates about new blog posts.  

Windows Admin Center – Rich Server Management Experience!

At the last Ignite (2017), Microsoft released a technical preview of “Project Honolulu”, which aimed to provide a lightweight but powerful server management experience for Windows users. I already covered it with a detailed blog post: http://www.rebeladmin.com/2017/10/project-honolulu-better-windows-server-management-experience/ . Now the waiting is over, and it is generally available as Windows Admin Center.

As Windows users, we use many different MMC consoles to manage roles and features. We also use them to troubleshoot issues. For remote computers, most of the time we use RDP or other methods to dial in. With Windows Admin Center, we can now access all these consoles in one web-based interface in a secure, easy, well-integrated way. It can connect to other remote computers as well. 

Windows Admin Center features can be listed as follows,

Easy to deploy – It can be installed on Windows 10 or Windows Server 2016, and you can start managing devices within a few minutes. 

Manage from internal or external networks – This solution is web based. It can be accessed from the internal network, and the same can be published to external networks with minimal configuration changes. 

Better access control – Windows Admin Center supports role-based access control, and gateway authentication options include local groups, Windows Active Directory and Azure Active Directory. 

Support for hyper-converged clusters – Windows Admin Center is well capable of managing hyper-converged clusters, including: 

Single console to manage compute, storage and networking

Create and manage Storage Spaces Direct features

Monitoring and alerting 

Extensibility – Microsoft will offer an SDK which allows third-party vendors to develop solutions that integrate with Windows Admin Center to manage their products. 

How Does it Work?
 
Windows Admin Center has two components.
 
Web server – It is the UI for Windows Admin Center, and users access it via HTTPS requests. It can also be published to remote networks to allow users to connect via a web browser.
Gateway – The gateway manages connected servers via remote PowerShell and WMI over WinRM. 
 
wac1
 
Image Source – https://docs.microsoft.com/en-us/windows-server/manage/windows-admin-center/media/architecture.png 
 
Which Systems Are Supported?
 
WAC will ship by default with the upcoming Windows Server 2019. At the moment it can be installed on Windows 10 in desktop mode, which connects to the WAC gateway from the same computer where it is installed. It can also be installed on Windows Server 2016 in gateway mode, which allows connections to the WAC gateway from a client browser on a remote machine. 
WAC can manage any system from Windows Server 2012 onwards. 
 
What About System Center and OMS? 
 
This is not a replacement for high-end infrastructure management solutions such as SCCM and OMS. WAC adds an additional management experience if you already have those solutions in place. 
 
Azure Integration? 
 
Yes, WAC supports Azure integration. Azure AD can be used for WAC gateway authentication. By providing the gateway access to an Azure VNet, WAC can manage Azure VMs. WAC can also manage Azure Site Recovery activities. 
 
Let's see how we can get it running.
In my demo I am going to install WAC on Windows Server 2016. 
 
To install WAC,
 
1) Log in to the server as Administrator
2) Download the WAC installation from http://aka.ms/WindowsAdminCenter
3) Double-click on the .msi file to begin the installation.
4) In the initial window, accept the license terms and click Next.
 
wac2
 
5) Then it asks how you would like to update it; select the default and click Next to proceed. 
 
wac3
 
6) In the next window, select the option to allow the installer to modify trusted hosts settings. In the same window we can also choose to create a desktop shortcut if needed. 
 
wac4
 
7) In the next window we can define the port and certificate for the management site. The default port is 443. In the demo I am going to use a self-signed certificate. 
 
wac5
 
8) Once the installation completes, we can launch WAC using the desktop icon or https://serverip (replace "serverip" with the IP address or hostname of the server)
 
Note – WAC is not supported on IE. So you need to use Edge or another browser to access it. 
 
wac6
 
9) By default, it shows the server it is installed on under “Server Manager”. In order to add another server, click on the Windows Admin Center drop-down and select Server Manager.
 
wac7
 
10) Then click on Add
 
wac8
 
11) Then type the FQDN of the server that you would like to add. It should be resolvable from the server. Then click on Submit.
 
wac9
 
12) We can also add Windows 10 computers to WAC. To do that, click on the Windows Admin Center drop-down and select Computer Management.
 
 
wac10
 
 
13) Then click on Add
 
wac11
 
14) Then type the FQDN of the PC that you would like to add. It should be resolvable from the server. Then click on Submit.
 
wac12
 
Note – Windows 10 does not have PowerShell (WinRM) remoting enabled by default. To enable it, you must run Enable-PSRemoting from a PowerShell window running as admin.
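For example, from an elevated PowerShell window (the -Force switch just suppresses the confirmation prompts):

Enable-PSRemoting -Force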
 
wac13
 
wac14
 
15) Once servers/PCs are added, you can connect to one by just clicking on it in the list. 
 
wac15
 
16) For remote devices, it will ask who you would like to log in as. Provide the relevant admin login details and click on Continue.
 
wac16
 
17) Then it loads the related info for the server/PC.
 
wac17
 
Now we have the basic setup of WAC. In the next posts we are going to look into different features of WAC. This marks the end of this blog post; hope it was useful. If you have any questions feel free to contact me on rebeladm@live.com also follow me on twitter @rebeladm to get updates about new blog posts.

Group Policy Security Filtering

Group Policy can be mapped to sites, domains and OUs. If a group policy is mapped to an OU, by default it will apply to any object under it. But within an OU, domain or site there are lots of objects. The security, system or application settings requirements covered by group policies do not always apply to such broad target groups. Group Policy filtering capabilities allow us to narrow down the group policy target to security groups or individual objects. 

There are few different ways we can do the filtering in group policy.

1) Security Filtering

2) WMI Filtering

In this post we are going to look into security filtering. In one of my previous posts I already covered WMI filtering. It can be found at http://www.rebeladmin.com/2018/02/group-policy-wmi-filters-nutshell/ 

Before applying security filtering, the first thing to make sure of is that the group policy is mapped correctly to the site, domain or OU. The security group or objects you are going to target should be under the correct level where the group policy is mapped. 

We can use the GPMC or PowerShell cmdlets to add security filtering to a GPO.

gsec1
As you can see, by default any policy has the “Authenticated Users” group added to the security filtering. It means that by default the policy will apply to any authenticated user in that OU. When we add any group or object to security filtering, it also creates an entry under delegation. In order to apply a group policy to an object, it needs a minimum of,
 
1) READ
2) APPLY GROUP POLICY
 
Any object added to the Security Filtering section will have both of these permissions set by default. In the same way, if an object is added directly to the delegation section with both permissions applied, it will be listed under the Security Filtering section. 
Now, before we add custom objects to the filtering, we need to change the default behavior of the security filtering for “Authenticated Users”. Otherwise, it doesn’t matter what security group or object you add; the policy will still apply its settings to any authenticated user. Before Microsoft released security patch MS16-072 in 2016, we could simply remove the Authenticated Users group and add the required objects. With this security patch’s changes, group policies now run within the computer’s security context; before, they were executed within the user’s security context. In order to accommodate this new security requirement, one of the following permissions must be present under the group policy delegation. 
 
Authenticated Users – READ
Domain Computers – READ
 
To edit these changes, go to the group policy, then to the Delegation tab, click on Advanced, select Authenticated Users and then remove the Apply group policy permission. 
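The same permission changes can be scripted with the GroupPolicy PowerShell module. This is a sketch; the GPO and group names are examples.

# Keep READ for Authenticated Users (required after MS16-072) but stop applying the GPO to them
Set-GPPermission -Name "Demo GPO" -TargetName "Authenticated Users" -TargetType Group -PermissionLevel GpoRead -Replace
# Target the GPO at a specific security group (grants READ + APPLY GROUP POLICY)
Set-GPPermission -Name "Demo GPO" -TargetName "IT Admins" -TargetType Group -PermissionLevel GpoApply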
 
gsec2
 
Now we can go back to the Scope tab and add the required security group or objects to the security filtering section. It will automatically add the relevant Read and Apply group policy permissions. 
 
gsec3
 
So far we have looked at how to apply a group policy to a specific target, but filtering also allows us to explicitly allow a policy for a large number of objects and block it for specific groups or objects. As an example, let's assume we have an OU with a few hundred objects from different classes. Out of all these, we have around 10 computer objects which should not apply a given group policy. Which is easier: to go and add each and every security group and object to security filtering, or to allow everyone and block the policy only for one security group? Microsoft allows the second method in filtering too. In order to do that, the group policy should have the default security filtering, which is “Authenticated Users” with READ and APPLY GROUP POLICY permissions. Then go to the Delegation tab and click on the Advanced option. In the next window, click on the Add button and select the group or object that you need to block access for. 
 
gsec4
 
Here we are denying the READ and APPLY GROUP POLICY permissions to an object. So it will not be able to apply the group policy, while all other objects under that OU will still be able to read and apply the group policy. Easy, huh?
 
This marks the end of this blog post. If you have any questions feel free to contact me on rebeladm@live.com also follow me on twitter @rebeladm to get updates about new blog posts.

Azure Virtual Machine Scale Sets – Part 01 – What is it and How to set it up?

There are many different solutions available to load-balance applications. They can be based on separate hardware appliances, virtual appliances or an inbuilt OS method such as NLB (Network Load Balancing). However, there are a few common challenges in these environments. 

If it's a third-party solution, additional cost is involved for licenses, configuration and maintenance. 

Applications or services do not always use all of the allocated resources; usage may depend on demand and time. Since it is a fixed number of instances, infrastructure resources will be wasted at non-peak times. If it's a cloud service, that wastes money!

When the number of server instances increases, it becomes harder to manage the systems. Too many manual tasks!

Azure virtual machine scale sets answer all the above challenges. A scale set can automatically increase and decrease the number of VM instances running based on demand or a schedule. No extra virtual appliances or licenses are involved. It also allows you to centrally manage and configure a large number of instances. The following points are recognized as key benefits of Azure virtual machine scale sets.

It supports Azure load balancer (Layer-4) and Azure Application Gateway (Layer-7) traffic distribution.

It allows you to maintain the same VM configuration across the instances, including VM size, network, disk, OS image and application installs. 

Using Azure Availability Zones, if required, we can configure VM instances in a scale set to be distributed across different datacenters. This adds additional availability. 

It can automatically increase and decrease the number of VM instances running based on application demand. It saves money!

It can grow up to 1,000 VM instances; with your own custom images, it supports up to 300 VM instances. 

It supports Azure Managed Disks and Premium Storage. 

Let's see how we can set up an Azure virtual machine scale set. In my demo I am going to use Azure PowerShell. 

1) Log in to Azure Portal as Global Administrator
 
2) Open Cloud Shell (right-hand corner)
 
ss1
 
3) Make sure you are using the PowerShell option
 
ss2
 
4) In my demo, the scale set configuration is as follows,
 
New-AzureRmVmss `
  -ResourceGroupName "rebelResourceGroup" `
  -Location "canadacentral" `
  -VMScaleSetName "rebelScaleSet" `
  -VirtualNetworkName "rebelVnet" `
  -SubnetName "rebelSubnet" `
  -PublicIpAddressName "rebelPublicIPAddress" `
  -LoadBalancerName "rebelLoadBalancer" `
  -BackendPort "80" `
  -VmSize "Standard_DS3_v2" `
  -ImageName "Win2012Datacenter" `
  -InstanceCount "4" `
  -UpgradePolicy "Automatic"
 
In the above,

| Parameter | Description |
|---|---|
| New-AzureRmVmss | The command used to create an Azure virtual machine scale set |
| -ResourceGroupName | Defines the resource group name; in this demo it is a new one |
| -Location | Defines the resource region. In my demo it is Canada Central |
| -VMScaleSetName | Defines the name for the scale set |
| -VirtualNetworkName | Defines the virtual network name |
| -SubnetName | Defines the subnet name. If you do not define a subnet prefix, it will use the default 192.168.1.0/24 |
| -PublicIpAddressName | Defines the name for the public IP address. If the allocation method is not defined using -AllocationMethod, it will use dynamic by default |
| -LoadBalancerName | Defines the load balancer name |
| -BackendPort | Creates the relevant rules in the load balancer and load-balances the traffic. In my demo I am using TCP port 80 |
| -VmSize | Defines the VM size. If this is not defined, it uses Standard_DS2_v2 by default |
| -ImageName | Defines the VM image details. If no value is given, it uses the default, which is Windows Server 2016 Datacenter |
| -InstanceCount | Defines the initial number of instances running in the scale set |
| -UpgradePolicy | Defines the upgrade policy for VM instances in the scale set |

Once this is run, it will ask you to define login details for the instances. After it completes, it will create the scale set.

ss3
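As an optional check, we can list the VM instances that were created in the scale set,

Get-AzureRmVmssVM -ResourceGroupName "rebelResourceGroup" -VMScaleSetName "rebelScaleSet"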

This can also be done using the portal. In order to use the GUI, 

1) Log in to Azure Portal as Global Administrator

2) Go to All Services | Virtual Machine Scale Set

ss4

3) In the new page, click on Add

ss5

4) Then it will open up the form. Once you fill in the relevant info, click on Create. 

ss6

5) We can also review the existing scale set properties using the Virtual machine scale sets page. On the page, click on the scale set name to view the properties. If we click on Instances, we can see the number of instances running.

ss7

6) Scaling shows the number of instances used. If needed, this can also be adjusted here. 

ss8

7) Size defines the size of the VMs; again, if needed, the values can be changed on the same page. 

ss9

8) Also, if we go to Azure Portal | Load Balancers, we can review the settings for the load balancer used in the scale set.

ss10

9) In my demo I used TCP port 80 for load balancing. That info can be found under Load Balancing rules.

ss11

10) The relevant public IP info for the scale set can be found under Inbound NAT rules.

ss12

 

This marks the end of this blog post. In next post we will look in to further configuration of scale set. If you have any questions feel free to contact me on rebeladm@live.com also follow me on twitter @rebeladm to get updates about new blog posts.

Step-by-Step guide to Azure Policy (Preview)

Every business has different regulations and compliances that it needs to comply with. These regulations and compliances differ from one industry to another. As an example, a financial institute will need to comply with PCI DSS (Payment Card Industry Data Security Standard), and a healthcare service will need to comply with HIPAA (Health Insurance Portability and Accountability Act). Some of these compliances are mandatory, and some just add extra value to the business; ISO certifications are a good example of the latter. Some of these regulations and compliances apply directly to computer infrastructures as well, especially those related to data protection and data governance. 

Apart from that, most businesses have their own “policies” to protect data and workloads in their infrastructures. Most of the time, the end goal of these policies is to make sure the IT department has done its part to support the business's compliance requirements. There are two great tools available from Microsoft to make it easier for enterprises to reach their corporate compliance requirements within Azure environments. 

1. Compliance Manager – This service can scan your Azure environment and provide a report on your compliance level against the most common industry standards such as GDPR, ISO 27000, etc. I have already written a detailed article about it: http://www.rebeladmin.com/2017/11/microsoft-compliance-manager-makes-easy-deal-compliance-challenges/ 

2. Azure Policy – This is more for reviewing continuous compliance with corporate infrastructure policies. As an example, a corporation may need to make sure all its Azure resources are deployed in the West US region. With the help of Azure Policy, we can continuously monitor resources and make sure they stay compliant with that policy. In the event of a breach, it will flag it up as well. 

In this post we are going to look into Azure Policy and how it can help. 

Azure Policy has 34 inbuilt policy definitions (at the time this article was written). These cover the most common infrastructure management, audit and security requirements. Users can use these inbuilt policies or build their own. 
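If you want to browse the built-in definitions from PowerShell rather than the portal, a sketch like the following should list them (using the AzureRM module; I am assuming the policyType property distinguishes built-in from custom definitions):

# Sketch: list the display names of built-in Azure Policy definitions
Get-AzureRmPolicyDefinition |
    Where-Object { $_.Properties.policyType -eq "BuiltIn" } |
    ForEach-Object { $_.Properties.displayName }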

Azure Policy definitions are JSON based. Each policy has the following elements. 

mode

parameters

display name

description

policy rule

               – logical evaluation

               – effect

Mode 

This defines the resource types considered in the policy. There are two modes that can be used in a policy.

All – All resource types. This is the recommended mode for policies.

Indexed – Resource types that support tags and locations. 

Parameters 

If you work with a programming language or PowerShell, I am sure you already know what a parameter is. Here it has the same meaning: a parameter is a special kind of variable which refers to a piece of data. It simplifies the policy by reducing code. The following is extracted from a policy to show parameter usage. 

"parameters": {

            "publisher": {

                "type": "String",

                "metadata": {

                    "description": "The publisher of the extension",

                    "strongType": "type",

                    "displayName": "Extension Publisher"

                }

Display name & Description

These just identify the policy; the description can be used to add more meaning. 

{
    "type": "Microsoft.Authorization/policyDefinitions",
    "name": "allowed-custom-images",
    "properties": {
        "displayName": "Approved VM images",
        "description": "This policy governs the approved VM images",
        "parameters": {
            "imageIds": {
                "type": "array",
                "metadata": {
                    "description": "The list of approved VM images",
                    "displayName": "Approved VM images"
                }

In the above example, “Approved VM images” is the policy display name and “This policy governs the approved VM images” is the policy description.

Policy Rule

It's the heart of the policy. It is where the policy is described using logical operators, conditions and effects.

Under the policy rule, the following logical operators are supported: 

"not"

"allOf"

"anyOf"

It also accepts the following condition types:

"equals"

"notEquals"

"like"

"notLike"

"match"

"notMatch"

"contains"

"notContains"

"in"

"notIn"

"containsKey"

"notContainsKey"

"exists"

Under a policy rule, the following effects can be used:

Deny – Generates an event in the audit log and fails the request

Audit – Only for auditing purposes; no request decision is made

Append – Adds additional fields to the request

AuditIfNotExists – Enables auditing if the resource does not exist

DeployIfNotExists – Deploys if the resource does not exist (at the moment this is only supported in built-in policies)  

  "policyRule": {

            "if": {

                "allOf": [

                    {

                        "field": "type",

                        "equals": "Microsoft.Compute/virtualMachines"

                    },

                    {

                        "not": {

                            "field": "Microsoft.Compute/imageId",

                            "in": "[parameters('imageIds')]"

                        }

                    }

                ]

            },

            "then": {

                "effect": "deny"

            }

The above example uses if, not and then policy blocks. It checks the image IDs of virtual machines, and if there is no match it will deny the request, based on the effect. 

More info about policy templates can be found under

https://docs.microsoft.com/en-us/azure/azure-policy/policy-definition

https://docs.microsoft.com/en-us/azure/azure-policy/json-samples

Policy Initiatives 

Azure Policy also allows you to group policies together and apply them to one scope. This is called a policy initiative. It reduces the complexity of policy assignments. As an example, we can create a policy initiative called “Infrastructure Security” and include all infrastructure-security-related policies in it.  

Using Azure Policy

Let's see how we can use the Azure Policy feature. 

1. Log in to Azure Portal as Global Administrator

2. Go to All Services, type Policy, and then click on the Policy tile. 

policy

3. Then it will open up the feature page.

policy2

4. In my demo I am going to assign a pre-built policy to restrict resource regions. In order to assign a policy, click on Assignments. 

policy3

5. Then click on Assign Policy

policy4

6. It will open up the Assign Policy wizard. Click on Policy to list and select the relevant policy. In my demo I am using the policy called “Allowed Locations”. Select the policy from the list and then click on Select to complete the action. 

policy5

7. Under the Name and Description fields, define a policy name and a description which explains its characteristics. 

8. Under Pricing Tier, select the pricing tier for evaluation. 

9. The Scope field defines the scope of the policy; it will be the subscription in use. 

10. Using exclusions, we can exclude resource groups from the policy. In this demo I am not going to exclude any.

policy6

11. Then under Parameters, I select the region I would like to use for my resources (in my demo I am using Canada Central as the region). 

policy7

12. At the end click on Assign to complete the policy assignment process. 

policy8

13. Now it is time for testing. In my demo I am trying to create a storage account in the West US region. When I do that, it gives me an error saying “There were validation errors. Click here to view details”.

policy9

When I click on it, it says the deployment failed due to a policy violation. (Hopefully we will see more details once it is GA.)

policy10

Cool, huh? It's doing the job it is supposed to do. Since the policy effect is “deny”, it should deny my request to create resources in other regions. 

Initiative Assignment

Assigning an initiative is the same process as policy assignment. But before doing that, you need an initiative in place. There is only one in-built initiative available currently. 

In order to create initiative,

1. Log in to Azure Portal as Global Administrator

2. Go to All Services, type Policy, and then click on the Policy tile. 

3. Then in the Policy feature window, click on Definitions.

policy11

4. After that, click on the Initiative definition option. 

policy12

5. In the new window, to start with, select the Definition location. This is basically the targeted subscription. 

6. Under Name, define a name for the policy initiative. 

7. Under Category, you can either create a new category or select an existing one. 

policy13

8. After that, click on an available policy in the left-hand panel and click on Add to add it to the initiative. 

policy14

9. Once it's all done, click on Save to complete the process. 

policy15

10. Once that's done, we can assign the initiative using the Assignments | Assign Initiative option.

policy16

Create New Policy

Creating a new policy is similar to the initiative creation process. It can be done using the Definitions | Policy definition option. 

policy17

policy18

Apart from that, using the Compliance option we can see the overall policy and initiative compliance status. It also allows you to assign policies and initiatives. 

policy19

This marks the end of this blog post. Hope it was useful for you. If you have any questions feel free to contact me on rebeladm@live.com also follow me on twitter @rebeladm to get updates about new blog posts.

Step-by-Step guide to configure Azure File Sync (preview)

In one of my previous blog posts I explained what an Azure file share is and how it can be used to replace a traditional on-premises file server. If you have not read it yet, please check it before we go further in this post, as this feature depends on Azure file shares. You can access the article at http://www.rebeladmin.com/2018/03/step-step-guide-create-azure-file-share-map-windows-10/ 

With Azure File Sync we can make an on-premises Windows server act as a cache for your Azure file share. It allows users to access files locally using protocols such as SMB, NFS and FTPS. In this blog we are going to look into the Azure File Sync implementation.

Before we start the configuration, we need to familiarize ourselves with some terms associated with this feature. 

Azure File Sync Agent

This is an agent which we need to install on the on-premises Windows server in order to enable sync with the Azure file share. It includes three components, 

1. FileSyncSvc.exe – This is the service responsible for monitoring changes on the local server and initiating sync with the Azure file share. 

2. StorageSync.sys – This component is responsible for tiering files to Azure Files. Cloud tiering is an additional feature of Azure File Sync. It can be used with infrequently used files greater than 64 KB. When this is enabled, the local file is replaced with a URL to the file in the Azure file share. When a user accesses it, the file is recalled from the Azure file share in the background. The end user will not notice any difference, as it all happens in the back end. 

3. PowerShell cmdlets – These help manage the Microsoft.StorageSync Azure resource provider using PowerShell commands. The cmdlet files are located in

C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.PowerShell.Cmdlets.dll

C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll

This agent is only supported on the Windows Server 2012 R2 / 2016 Standard and Datacenter editions. It is not supported on the Core versions either. 

Storage Sync Service 

According to Microsoft “The Storage Sync Service is the top-level Azure resource for Azure File Sync. The Storage Sync Service resource is a peer of the storage account resource, and can similarly be deployed to Azure resource groups. A distinct top-level resource from the storage account resource is required because the Storage Sync Service can create sync relationships with multiple storage accounts via multiple sync groups. A subscription can have multiple Storage Sync Service resources deployed.”

Sync group 

A sync group defines the boundaries of a sync job. A sync group includes a cloud endpoint and server endpoints. A Storage Sync Service can have multiple sync groups. 

Cloud endpoint

A cloud endpoint represents an Azure file share. One cloud endpoint can only have one file share, which means one Azure file share is responsible for one sync group. 

Server endpoint

A server endpoint represents the local server directory which will cache files from the Azure file share. One server can hold multiple server endpoints, but one endpoint can't be part of multiple sync groups. If it is still added, its contents will merge with the files belonging to the other endpoints in the same sync group. 

Registered Server 

A registered server represents the trust relationship between an on-premises server and the Storage Sync Service. It is a one-to-one connection. However, one Storage Sync Service can have many servers registered with it. 

Now we know the components and how each of them is involved in the sync operation between the Azure file share and the on-premises server. The next step is to get it configured. 

Setup Azure File Share

As the first step of the demo, I am going to create an Azure file share. The steps for this task are already explained in one of my previous blog posts: http://www.rebeladmin.com/2018/03/step-step-guide-create-azure-file-share-map-windows-10/

The Azure File Sync preview feature is only supported in the Australia East, Canada Central, East US, Southeast Asia, UK South, West Europe and West US regions. Therefore, the Azure file share also needs to be in one of these regions. 

For this demo I have created a file share called “rebelshare”. It is associated with the West US region. 

async1

Create Storage Sync Service
 
1) Log in to Azure Portal as global administrator
2) Go to New | Create a resource | Azure File Sync (Preview) | Create
 
asyncnew1
 
3) In the new window, type a name for the sync service and select the relevant resource group. If required, you can create a new resource group. Once you fill in the info, click on Create.
 
asyncnew2
 
Install Azure File Sync Agent
 
The next step in the configuration is to install the Azure File Sync agent on the on-premises server. In this demo I am using a server running the Windows Server 2016 Datacenter edition. 
 
Before installing the agent,
 
Log in to the server and disable Internet Explorer Enhanced Security Configuration for administrators and users. This can be re-enabled after installation. 
 
async2
 
Verify the PowerShell version the server is running; it needs to be at least version 5.1.
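A quick way to check the version,

# Check the PowerShell version on the server (needs to be 5.1 or later)
$PSVersionTable.PSVersion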
 
Install the Azure PowerShell module – a guide is available at https://docs.microsoft.com/powershell/azure/install-azurerm-ps 
 
async3
 
Once the above is in place, go and download the file sync agent from https://www.microsoft.com/en-us/download/details.aspx?id=55988
 
Once the download is completed, double-click it to start the installation. On the initial page, click Next to continue.
 
async4
 
On the next page, accept the license agreement and click on Next.
 
After that, in the next window we can select the installation path.
 
async5
 
In the next window it asks how you would like to update the agent version in future. This can be done using Windows Update. 
 
async6
 
In the next window, keep the default settings and click on Install to begin the installation. 
 
Once the installation is completed, it opens the Azure File Sync agent wizard. The first step is to register the server. In the window, click on Sign in to start the process. 
 
async7
 
Then sign in using your Azure global administrator account. 
 
async8
 
In the next window, select the Azure subscription, resource group and Storage Sync Service, and click on Register.
 
async9
 
Then it will ask for login again; once that is done, it will complete the registration process. 
 
async10
 
Create Sync Group
 
The next step of the process is to create a sync group. To do that,
 
1) Log in to Azure Portal as global administrator
2) Go to All Services and search for Storage Sync Services
3) On the Storage Sync Services page, click on the Storage Sync Service we created in the earlier step. 
 
async11
 
4) In the new window, click on the Sync group icon.
 
async12
 
5) In the next window, define a name for the sync group and select the subscription. Then select the storage account and Azure file share. At the end, click on Create.
 
async13
 
6) Once the group is added, click on the new group.
 
async14
 
7) In the new window, click on the Add server endpoint option. 
 
async15
 
8) Then in the new window, select the registered server from the list and define the folder path for the local cache copy. In my demo I am using the E:\share path. I have also enabled the cloud tiering feature. Once the info is in, click on Create.
 
async16
 
9) After the initial sync, we can see the same files in both endpoints.
 
async17
async18
 
10) You can also review the status of endpoint sync under Storage Sync Services | Sync_Account | Sync_group.

async19
 
This marks the end of this blog post. If you have any questions, feel free to contact me at rebeladm@live.com. Also follow me on Twitter @rebeladm to get updates about new blog posts.

Active Directory Replication Status Review Using PowerShell

Data replication is crucial for a healthy Active Directory environment. There are different ways to check the status of replication. In this article, I am going to explain how you can check the status of domain replication using PowerShell.

For a given domain controller, we can find its inbound replication partners using,

Get-ADReplicationPartnerMetadata -Target REBEL-SRV01.rebeladmin.com

The above command provides a detailed description for the given domain controller, including the last successful replication, the replication partition, the partner server, etc.

We can list down all the inbound replication partners for given domain using, 

Get-ADReplicationPartnerMetadata -Target "rebeladmin.com" -Scope Domain

In the above command, the scope is defined as the domain. This can be changed to Forest to get the list of inbound partners in the forest. The output is for the default partition. If needed, the partition can be changed using -Partition to the Configuration or Schema partition. It will then list the relevant inbound partners for the given partition.
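As an example, the following lists the inbound replication partners for the Configuration partition across the forest:

Get-ADReplicationPartnerMetadata -Target "rebeladmin.com" -Scope Forest -Partition Configuration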

Associated replication failures for a site, forest, domain or domain controller can be found using the Get-ADReplicationFailure cmdlet.

Get-ADReplicationFailure -Target REBEL-SRV01.rebeladmin.com

The above command will list the replication failures for the given domain controller.

Replication failures for a domain can be found using,

Get-ADReplicationFailure -Target rebeladmin.com -Scope Domain

Replication failures for a forest can be found using,

Get-ADReplicationFailure -Target rebeladmin.com -Scope Forest

Replication failures for a site can be found using,

Get-ADReplicationFailure -Target LondonSite -Scope Site

In the command, LondonSite can be replaced with the relevant site name.
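For a quick overview, the failure data can also be piped into a table using the properties the cmdlet returns:

Get-ADReplicationFailure -Target rebeladmin.com -Scope Domain | Format-Table Server,Partner,FailureCount,FirstFailureTime -AutoSize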

Using both Get-ADReplicationPartnerMetadata and Get-ADReplicationFailure, the following PowerShell script can provide a report for a specified domain controller.

## Active Directory Domain Controller Replication Status ##

$domaincontroller = Read-Host 'What is your Domain Controller?'

## Define Objects ##

$report = New-Object PSObject -Property @{
    ReplicationPartners = $null
    LastReplication = $null
    FailureCount = $null
    FailureType = $null
    FirstFailure = $null
}

## Replication Partners ##

$partnermetadata = Get-ADReplicationPartnerMetadata -Target $domaincontroller
$report.ReplicationPartners = $partnermetadata.Partner
$report.LastReplication = $partnermetadata.LastReplicationSuccess

## Replication Failures ##

$failuredata = Get-ADReplicationFailure -Target $domaincontroller
$report.FailureCount = $failuredata.FailureCount
$report.FailureType = $failuredata.FailureType
$report.FirstFailure = $failuredata.FirstFailureTime

## Format Output ##

$report | select ReplicationPartners,LastReplication,FirstFailure,FailureCount,FailureType | Out-GridView

In this script, the engineer is given the option to specify the domain controller name.

$domaincontroller = Read-Host 'What is your Domain Controller?'

Then it creates an object and maps its properties to the outputs of the PowerShell commands. Last but not least, it displays a report including,

Replication Partner (ReplicationPartners)

Last Successful Replication (LastReplication)

AD Replication Failure Count (FailureCount)

AD Replication Failure Type (FailureType)

AD Replication Failure First Recorded Time (FirstFailure)

repower1
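If a file is preferred over the grid view, the last line of the script can be swapped for Export-Csv (the path here is just an example; the multi-valued partner field is joined so it exports cleanly):

$report | select @{n='ReplicationPartners';e={$_.ReplicationPartners -join '; '}},LastReplication,FirstFailure,FailureCount,FailureType | Export-Csv -Path C:\Reports\replstatus.csv -NoTypeInformation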

With regard to Active Directory replication topology, there are two types of replication:

1) Intra-Site – Replication between domain controllers in the same Active Directory site

2) Inter-Site – Replication between domain controllers in different Active Directory sites

We can review AD replication site objects using the Get-ADReplicationSite cmdlet.

Get-ADReplicationSite -Filter *

The above command returns all the AD replication sites in the AD forest.

repower2
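A single site can also be checked by name, for example:

Get-ADReplicationSite -Identity "LondonSite"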

We can review the AD replication site links in the AD forest using,

Get-ADReplicationSiteLink -Filter *

For site links, the most important information is the site link cost and the replication schedule. It allows us to understand the replication topology and the expected replication delays.

Get-ADReplicationSiteLink -Filter {SitesIncluded -eq "CanadaSite"} | Format-Table Name,Cost,ReplicationFrequencyInMinutes -A

The above command lists all the replication site links which include the CanadaSite AD site, along with the site link name, link cost and replication frequency.

A site link bridge can be used to bundle two or more site links and enable transitivity between site links.

Site link bridge information can be retrieved using,

Get-ADReplicationSiteLinkBridge -Filter *

Active Directory sites may use multiple IP address segments for their operations. It is important to associate those with the AD site configuration so domain controllers know which computer belongs to which site.

Get-ADReplicationSubnet -Filter * | Format-Table Name,Site -A

The above command will list all the subnets in the forest in a table with the subnet name and AD site.

repower3
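If a segment is missing from this list, it can be associated with a site using New-ADReplicationSubnet (the subnet value here is just an example):

New-ADReplicationSubnet -Name "192.168.10.0/24" -Site "LondonSite"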

Bridgehead servers operate as the primary communication points which handle the replication data coming in to and going out of an AD site.

We can list all the preferred bridgehead servers in a domain using,

$BHservers = ([adsi]"LDAP://CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=rebeladmin,DC=com").bridgeheadServerListBL

$BHservers | Out-GridView

In the above command, the bridgeheadServerListBL attribute value is retrieved via an ADSI connection.

We can gather all of these findings using one script.

## Script to gather information about Replication Topology ##

## Define Objects ##

$replreport = New-Object PSObject -Property @{
    Domain = $null
}

## Find Domain Information ##

$replreport.Domain = (Get-ADDomain).DNSRoot

## List the AD sites in the Domain ##

$a = (Get-ADReplicationSite -Filter *)
Write-Host "########" $replreport.Domain "Domain AD Sites" "########"
$a | Format-Table Description,Name -AutoSize

## List Replication Site Link Information ##

$b = (Get-ADReplicationSiteLink -Filter *)
Write-Host "########" $replreport.Domain "Domain AD Replication SiteLink Information" "########"
$b | Format-Table Name,Cost,ReplicationFrequencyInMinutes -AutoSize

## List SiteLink Bridge Information ##

$c = (Get-ADReplicationSiteLinkBridge -Filter *)
Write-Host "########" $replreport.Domain "Domain AD SiteLink Bridge Information" "########"
$c | select Name,SiteLinksIncluded | Format-List

## List Subnet Information ##

$d = (Get-ADReplicationSubnet -Filter * | select Name,Site)
Write-Host "########" $replreport.Domain "Domain Subnet Information" "########"
$d | Format-Table Name,Site -AutoSize

## List Preferred BridgeHead Servers ##

$e = ([adsi]"LDAP://CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=rebeladmin,DC=com").bridgeheadServerListBL
Write-Host "########" $replreport.Domain "Domain Preferred BridgeHead Servers" "########"
$e

## End of the Script ##

The only thing we need to change is the ADSI connection, to use the relevant domain DN.

$e = ([adsi]"LDAP://CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=rebeladmin,DC=com")
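To avoid editing the DN by hand, the configuration naming context can also be read from the directory itself; a small sketch of that approach:

## Build the ADSI path from the forest configuration naming context ##
$confdn = (Get-ADRootDSE).configurationNamingContext
$e = ([adsi]"LDAP://CN=IP,CN=Inter-Site Transports,CN=Sites,$confdn").bridgeheadServerListBL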

This marks the end of this blog post. If you have any questions, feel free to contact me at rebeladm@live.com. Also follow me on Twitter @rebeladm to get updates about new blog posts.