Step-by-Step Guide: Azure Key Vault

People use safes and security boxes to protect their valuables. In the digital world, data is the most valuable thing. Passwords, connection strings, secrets, and data encryption/decryption keys protect access to different data sets. Whoever has access to those also has access to the data behind them (of course, they need to know how to use them 😊). So how can we protect this valuable information? People use different methods. Some use third-party software installed on a PC. In large environments, some use a web application so multiple people have access. Different vendors use different methods to protect these types of valuable data. Microsoft Azure Key Vault is a service we can use to protect passwords, connection strings, secrets, and data encryption/decryption keys used by cloud applications and services. Keys stored in vaults are protected by hardware security modules (HSMs). It is also possible to import or generate keys using HSMs. Any key processed that way is handled according to FIPS 140-2 Level 2 guidelines. More details about FIPS 140-2 Level 2 can be found in the NIST documentation.

Benefits of using Key Vault

Keys saved in the vault are served via URLs. Developers and engineers do not need to worry about securing keys. The application or service never sees the keys, as the vault service processes requests on its behalf.

Customers do not have to disclose their keys to vendors or service providers. They can manage their own keys and allow vendor or service provider applications to access those keys via URLs. The vendor or service provider will not see the keys.

By design, Microsoft cannot extract or see customer keys. So keys are further protected at the vendor level too.

HSMs are FIPS 140-2 Level 2 validated. So any industry required to comply with this standard is covered by default.

Key usage details are logged, so you know what is happening with your keys.

An Azure administrator is allowed to do the following using Azure Key Vault:

Create or import a key or secret

Revoke or delete a key or secret

Authorize users or applications to access the key vault, which allows them to manage or use its keys and secrets

Configure key usage 

Record key usage

More info about Azure Key Vault can be found in the Microsoft documentation.

Let's go ahead and see how we can set up and use the Azure Key Vault service.

Create Azure Key Vault Instance  
1) Log in to the Azure Portal as a global administrator.
2) Click the Cloud Shell icon in the top right-hand corner. (You can also set this up using the portal, Azure CLI, or a locally installed Azure PowerShell. In this demo I am using Azure PowerShell directly from the portal.)
3) Then select PowerShell as the command type.
4) Then type Get-AzureRmResourceGroup to list the resource groups, so we can select the resource group to associate with the new key vault.
5) If you wish to create the key vault under a new resource group, you can do it using
New-AzureRmResourceGroup -Name RGName -Location WestUS
In the above command, RGName specifies the resource group name and WestUS defines the region. You can find the available locations using Get-AzureRmLocation.
6) Now it’s time to create the vault. We can create it using, 
New-AzureRmKeyVault -VaultName 'Rebel-KVault1' -ResourceGroupName 'therebeladmin' -Location 'North Central US'
In the above, VaultName defines the key vault name, ResourceGroupName defines the resource group it is associated with, and Location defines the region of the resource.
7) We can view the properties of an existing key vault using,
Get-AzureRmKeyVault "Rebel-KVault1"
In the above, Rebel-KVault1 is the key vault name.
The Vault URI shows the URL which applications and services can use to access the key vault.
8) The next step is to create an access policy for the key vault. Using an access policy we can define who has control over the key vault, what they can do inside it, and also what an application or service can do with it.
Set-AzureRmKeyVaultAccessPolicy -VaultName 'Rebel-KVault1' -UserPrincipalName '' -PermissionsToKeys create,delete,list -PermissionsToSecrets set,list,delete -PassThru
In the above command, the specified user can create, delete, and list keys in Rebel-KVault1. They can also set, list, and delete secrets under the same vault.
We can also set permissions for an application to retrieve secrets or keys.
Set-AzureRmKeyVaultAccessPolicy -VaultName 'Rebel-KVault1' -ServicePrincipalName '' -PermissionsToSecrets Get
In the above, the service running under the specified service principal will have permission to retrieve secrets from the vault.
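The commands above omit the principal names. As a sketch, the same user policy with a hypothetical account filled in would look like this (user@rebeladmin.com is a placeholder, not a real account):

```powershell
# Hypothetical example: grant a user access to the vault.
# 'user@rebeladmin.com' is a placeholder - replace it with a real
# user principal name from your Azure AD tenant.
Set-AzureRmKeyVaultAccessPolicy -VaultName 'Rebel-KVault1' `
    -UserPrincipalName 'user@rebeladmin.com' `
    -PermissionsToKeys create,delete,list `
    -PermissionsToSecrets set,list,delete -PassThru
```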
Key Management
Now we have a vault up and running. The next step is to see how to manage valuable data using it. In this demo I am going to do this using the Azure Portal. The same tasks can also be done using Azure CLI or Azure PowerShell.
1) To access Key vault feature in portal, go to Azure Portal > All Services > Key vaults
2) Then click on the relevant key vault from the list. In my demo it is Rebel-KVault1, which we created in the previous section.
3) It will load a new window. Let's go ahead and add a secret. To do that, click on the Secrets option.
4) Then click on Generate/Import
5) Then fill in the relevant info in the form. Value defines the secret. After putting in the relevant info, click Create.
6) If you need to delete a secret, click on the relevant secret from the list.
7) Then click on Delete
8) We can also generate/import certificates for use. To do so, click Certificates from the list.
9) Then click on Generate/Import 
10) In the form, the Generate option lets us create a self-signed certificate.
11) Using the Import option, we can import certificates in .PFX format. In the form, Upload Certificate File is the path to the .PFX file. You can use the browse option to define the path. We can provide the PFX password under the Password field. Once the form is done, click Create.
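The same secret management tasks can also be scripted. Below is a minimal sketch using the AzureRM-era PowerShell cmdlets (note that the secret cmdlets use the Azure prefix rather than AzureRm; the secret name 'DBPassword' and its value are made-up examples):

```powershell
# The secret value must be provided as a SecureString
$secret = ConvertTo-SecureString -String 'P@ssw0rd123' -AsPlainText -Force

# Store the secret in the vault under the name 'DBPassword' (hypothetical name)
Set-AzureKeyVaultSecret -VaultName 'Rebel-KVault1' -Name 'DBPassword' -SecretValue $secret

# Retrieve the secret later by name
Get-AzureKeyVaultSecret -VaultName 'Rebel-KVault1' -Name 'DBPassword'
```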
I hope you now have an understanding of Azure Key Vault and how to use it. This marks the end of this blog post. If you have any questions, feel free to contact me; also follow me on Twitter @rebeladm to get updates about new blog posts.


Step-by-Step Guide to Setup Two-Tier PKI Environment

In previous posts in this PKI blog series, we learned about the theory behind PKI. If you haven't read those yet, please go ahead and read them before starting on the deployment part.

How PKI Works? 

Active directory certificate service components

PKI Deployment Models

In this post I am going to demonstrate how we can set up PKI using the two-tier model. I have chosen this model as it is the recommended model for mid-size and large organizations.

The above figure explains the setup I am going to do. In it I have one domain controller, one standalone root CA, and one issuing CA. All are running Windows Server 2016 with the latest patch level.
Setup Standalone Root CA
The first step is to set up the standalone root CA. This is not a domain member server, and it operates at workgroup level. Configuring it on a separate VLAN adds additional security, as the root CA will not be able to talk to the other servers directly even while it is online.
Once the server is ready, log in to it as a member of the local administrators group. The first task is to install the AD CS role service. It can be done using,
Add-WindowsFeature ADCS-Cert-Authority -IncludeManagementTools
Once the role service is installed, the next step is to configure the role and get the CA up and running.
Install-ADcsCertificationAuthority -CACommonName "REBELAdmin Root CA" -CAType StandaloneRootCA -CryptoProviderName "RSA#Microsoft Software Key Storage Provider" -HashAlgorithmName SHA256 -KeyLength 2048 -ValidityPeriod Years -ValidityPeriodUnits 20
The above command will configure the CA. In the command, CACommonName defines the common name for the CA. CAType defines the CA operation type; in our case it is StandaloneRootCA. The other options are EnterpriseRootCA, EnterpriseSubordinateCA, and StandaloneSubordinateCA. CryptoProviderName specifies the cryptographic service provider; in this demo, I am using the Microsoft default provider. HashAlgorithmName defines the hashing algorithm used by the CA; the options for it change based on the CSP we choose. SHA1 is no longer counted as a secure algorithm, and it is recommended to use SHA256 or above. KeyLength specifies the key size for the algorithm; in this demo, I am using a 2048-bit key. ValidityPeriod defines the unit for the validity period of the CA certificate; it can be hours, days, weeks, months, or years. ValidityPeriodUnits follows ValidityPeriod and specifies how many hours, days, weeks, months, or years it will be valid. In our demo, we are using 20 years.
Now we have the root CA up and running. But before we use it, we need to make certain configuration changes.
As I mentioned earlier, this is a standalone root CA and it is not part of the domain. However, the CDP (Certificate Revocation List Distribution Point) and AIA (Authority Information Access) locations required by the CA will be stored in AD. Since those use distinguished names that include the domain, the root CA needs to be aware of the domain information to publish them properly. It will retrieve this information via a registry key.
certutil.exe -setreg ca\DSConfigDN CN=Configuration,DC=rebeladmin,DC=com
CDP Location
CDP stands for Certificate Revocation List Distribution Point, and it defines the location from which the CRL can be retrieved. This is a web-based location and should be accessible via HTTP. The list will be used by certificate validators to verify that a given certificate is not on the revocation list.
Before we do this, we need to prepare the web server for that task. This task will use the same server built for the online issuing CA.
The web server can be installed using,
Install-WindowsFeature Web-WebServer -IncludeManagementTools
The next step is to create a folder and share it so that it can be used as the virtual directory.
mkdir C:\CertEnroll 

New-smbshare -name CertEnroll C:\CertEnroll -FullAccess SYSTEM,"rebeladmin\Domain Admins" -ChangeAccess "rebeladmin\Cert Publishers"
As part of the exercise it will set share permissions for rebeladmin\Domain Admins (Full Access) and rebeladmin\Cert Publishers (Change Access).
After that, load IIS Manager and add a virtual directory CertEnroll with the above path.
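If you prefer to script the IIS step as well, here is a sketch using the WebAdministration module that ships with the web server role (it assumes the publication point lives under the Default Web Site):

```powershell
# The WebAdministration module is installed with the Web-WebServer role
Import-Module WebAdministration

# Create the CertEnroll virtual directory under the Default Web Site,
# pointing at the folder shared in the previous step
New-WebVirtualDirectory -Site 'Default Web Site' -Name 'CertEnroll' -PhysicalPath 'C:\CertEnroll'
```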
Last but not least, we need to create a DNS record so this publication point can be reached via an FQDN. In this demo, I am using a DNS record that allows the new distribution point to be accessed over HTTP.
Now everything is ready, and we can publish the CDP settings using,
certutil -setreg CA\CRLPublicationURLs "1:C:\Windows\system32\CertSrv\CertEnroll\%3%8%9.crl \n10:ldap:///CN=%7%8,CN=%2,CN=CDP,CN=Public Key Services,CN=Services,%6%10\n2:"
The single-digit numbers in the command refer to options, and the numbers prefixed with % refer to variables. The option values are:

0 – No changes
1 – Publish the CRL to the given location
2 – Attach CDP extensions to issued certificates
4 – Include in the CRL to find the delta CRL locations
8 – Specify whether to publish all CRL info to AD when publishing manually
64 – Delta CRL location
128 – Include the IDP extension in the issued CRL
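These option values are bit flags that are added together; for example, the 10: prefix on the LDAP location in the command above is the sum of options 2 and 8. A quick illustration:

```powershell
# The numeric prefix of each CRLPublicationURLs entry is a sum of bit-flag options
$attachCdpExtension = 2   # attach this URL to the CDP extension of issued certificates
$publishToAd        = 8   # publish CRL information to Active Directory
$attachCdpExtension + $publishToAd   # = 10, the prefix used for the LDAP location
```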

All these settings can also be specified using the GUI. To access it, go to Server Manager > Tools > Certification Authority > right-click the server and select Properties > go to the Extensions tab.

There you can add all the above using GUI.


The variables map to the GUI references as follows:

%1 – <ServerDNSName> – DNS name of the CA server
%2 – <ServerShortName> – NetBIOS name of the CA server
%3 – <CaName> – Given name of the CA
%4 – <CertificateName> – Renewal extension of the CA
%6 – <ConfigurationContainer> – DN of the configuration container in AD
%7 – <CATruncatedName> – Truncated name of the CA (32 characters)
%8 – <CRLNameSuffix> – Inserts a name suffix at the end of the file name when publishing a CRL
%9 – <DeltaCRLAllowed> – When called, replaces the CRLNameSuffix with a separate suffix to use for the delta CRL
%10 – <CDPObjectClass> – Object class identifier for a CDP
%11 – <CAObjectClass> – Object class identifier for a CA

AIA Location 

AIA (Authority Information Access) is a certificate extension that defines the location from which an application or service can retrieve the issuing CA's certificate. This is also a web-based path, and we can use the same location we used for the CDP.

This can be set using,

certutil -setreg CA\CACertPublicationURLs "1:C:\Windows\system32\CertSrv\CertEnroll\%1_%3%4.crt\n2:ldap:///CN=%7,CN=AIA,CN=Public Key Services,CN=Services,%6%11\n2:"

The options are very similar to the CDP ones, with a few small changes:




0 – No changes
1 – Publish the CA certificate to the given location
2 – Attach AIA extensions to issued certificates
32 – Attach Online Certificate Status Protocol (OCSP) extensions

CA Time Limits

When we set up the CA, we defined the CA validity period as 20 years. But that doesn't mean every certificate it issues will have a 20-year validity period. The root CA will issue certificates only to issuing CAs. The certificate request, approval, and renewal processes are manual, so these certificates typically have longer validity periods. In the demo, I will set it to 10 years.

certutil -setreg ca\ValidityPeriod "Years"

certutil -setreg ca\ValidityPeriodUnits 10

CRL Time Limits

The CRL also has some time limits associated with it.

Certutil -setreg CA\CRLPeriodUnits 13

Certutil -setreg CA\CRLPeriod "Weeks"

Certutil -setreg CA\CRLDeltaPeriodUnits 0

Certutil -setreg CA\CRLOverlapPeriodUnits 6

Certutil -setreg CA\CRLOverlapPeriod "Hours"

In above commands, 

CRLPeriodUnits – Specifies the number of days, weeks, months, or years the CRL will be valid.

CRLPeriod – Specifies whether the CRL validity period is measured in days, weeks, months, or years.

CRLDeltaPeriodUnits – Specifies the number of days, weeks, months, or years the delta CRL will be valid. For an offline CA this should be disabled (set to 0).

CRLOverlapPeriodUnits – Specifies the number of days, weeks, months, or years that CRLs can overlap.

CRLOverlapPeriod – Specifies whether the CRL overlap period is measured in days, weeks, months, or years.
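Before restarting the service, we can read the values back to confirm they were applied; a quick check:

```powershell
# Read back the CRL-related registry values set above
certutil -getreg CA\CRLPeriodUnits
certutil -getreg CA\CRLPeriod
certutil -getreg CA\CRLOverlapPeriodUnits
certutil -getreg CA\CRLOverlapPeriod
```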

Now we have all the settings submitted, and in order to apply the changes the certificate service needs to be restarted.

Restart-Service certsvc


The next step is to create a new CRL, which can be generated using,

certutil -crl

Once it's done, there will be two files under C:\Windows\System32\CertSrv\CertEnroll.
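To confirm, we can list the folder contents; the listing should show the CA certificate (.crt) and the CRL (.crl):

```powershell
# List the files generated by the role configuration and 'certutil -crl'
Get-ChildItem C:\Windows\System32\CertSrv\CertEnroll
```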

Publish Root CA Data into Active Directory
In the above list we have two files. One ends with .crt; this is the root CA certificate. In order for other clients in the domain to trust it, it needs to be published to Active Directory. To do that, first copy the file from the root CA to the Active Directory server. Then log in to AD as a domain admin or enterprise admin and run,
certutil -f -dspublish "REBEL-CRTROOT_REBELAdmin Root CA.crt" RootCA
The other file ends with .crl. This is the root CA's CRL. This also needs to be published to AD so that everyone in the domain is aware of it too. To do that, copy the file from the root CA to the domain controller and run,
certutil -f -dspublish "REBELAdmin Root CA.crl"
Setup Issuing CA
Now we are finished with the root CA setup, and the next step is to set up the issuing CA. The issuing CA will run on a domain member server and will be AD integrated. In order to perform the installation, log in to the server as a domain admin or enterprise admin.
The first task is to install the AD CS role.
Add-WindowsFeature ADCS-Cert-Authority -IncludeManagementTools
I am also going to use the same server for the web enrollment role service. It can be added using,
Add-WindowsFeature ADCS-web-enrollment
After that we can configure the role service using,
Install-ADcsCertificationAuthority -CACommonName "REBELAdmin IssuingCA" -CAType EnterpriseSubordinateCA -CryptoProviderName "RSA#Microsoft Software Key Storage Provider" -HashAlgorithmName SHA256 -KeyLength 2048
In order to configure the web enrollment role service, run Install-AdcsWebEnrollment.
Issue Certificate for Issuing CA
In order to get AD CS running on the issuing CA, it needs a certificate issued from the parent CA, which is the root CA we just deployed. During the role configuration process, it automatically creates the certificate request under C:\, and the exact file name will be listed in the output of the previous command.
The file needs to be copied from the issuing CA to the root CA, and then execute the command,
certreq -submit "REBEL-CA1.rebeladmin.com_REBELAdmin IssuingCA.req"
As I explained before, any request to the root CA is processed manually, and this request will also be waiting for manual approval. In order to approve the certificate, go to Server Manager > Tools > Certification Authority > Pending Requests, then right-click the certificate > All Tasks > Issue.

Once it is issued, it needs to be exported and imported into the issuing CA.
certreq -retrieve 2 "C:\REBEL-CA1.rebeladmin.com_REBELAdmin_IssuingCA.crt"
The above command will export the certificate. The number 2 is the "Request ID" shown in the CA MMC.
Once it is exported, move the file to the issuing CA and from there run,
certutil -installcert "C:\REBEL-CA1.rebeladmin.com_REBELAdmin_IssuingCA.crt"

start-service certsvc
Post Configuration Tasks 
Similar to the root CA, after the initial service setup we need to define some configuration values.
CDP Location
This is similar to the root CA, and I am going to use the already created web location for it.
certutil -setreg CA\CRLPublicationURLs "1:%WINDIR%\system32\CertSrv\CertEnroll\%3%8%9.crl\n2:\n3:ldap:///CN=%7%8,CN=%2,CN=CDP,CN=Public Key Services,CN=Services,%6%10"
AIA Location
In a similar way, the AIA location is also specified using,
certutil -setreg CA\CACertPublicationURLs "1:%WINDIR%\system32\CertSrv\CertEnroll\%1_%3%4.crt\n2:\n3:ldap:///CN=%7,CN=AIA,CN=Public Key Services,CN=Services,%6%11"
CA and CRL Time Limits
The CA and CRL time limits also need to be adjusted:
certutil -setreg CA\CRLPeriodUnits 7
certutil -setreg CA\CRLPeriod "Days"
certutil -setreg CA\CRLOverlapPeriodUnits 3
certutil -setreg CA\CRLOverlapPeriod "Days"
certutil -setreg CA\CRLDeltaPeriodUnits 0
certutil -setreg ca\ValidityPeriodUnits 3
certutil -setreg ca\ValidityPeriod "Years"
Once all of this is done, restart the certificate service to complete the configuration:
Restart-Service certsvc
Last but not least, run
certutil -crl
to generate the CRLs.
Once everything is done, we can run PKIView.msc to verify the configuration.
Note – PKIView was first introduced with Windows Server 2003, and it gives visibility over the enterprise PKI configuration. It also verifies the certificates and CRLs for each CA to maintain integrity.
Certificate Templates
Now we have a working PKI, and we can turn off the standalone root CA. It should only be brought online if the issuing CA certificates expire or if the PKI is compromised and new certificates need to be generated.
Once the CA is ready, objects and services can use it for certificates. The CA comes with predefined "Certificate Templates". These can be used to build custom certificate templates according to organizational requirements and publish them to AD.
The Certificate Templates MMC can be accessed using Run > MMC > File > Add/Remove Snap-in > Certificate Templates.
To create a custom template, right-click a template and click Duplicate Template.
It will open the properties window, where the settings of the certificate template can be changed to match the requirements. Some common settings to change in templates are:
Template display name (General Tab) – Display name of the template
Template Name (General Tab) – Common Name of the template
Validity Period (General Tab) – Certificate Validity Period
Security – Authenticated users or groups must have the "Enroll" permission to request certificates.
The next step before using it is to issue the certificate template via the CA. Then the members of the domain can request certificates based on it.
To do that, go to the Certification Authority MMC > Certificate Templates > right-click it > New > Certificate Template to Issue.
Then select the template to issue from the list and click OK.
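The same task can also be done from the command line; as a sketch, assuming a template whose common name is RebelComputerCert (a placeholder for the Template Name you set on the General tab):

```powershell
# Publish the template on the CA so it becomes available for enrollment.
# 'RebelComputerCert' is a hypothetical Template Name (not the display name).
certutil -SetCAtemplates +RebelComputerCert
```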

Request Certificate
Based on the published certificate templates, users can request certificates from the issuing CA. I have logged in to an end-user PC and am going to request a certificate based on the template we created in the previous step.
Go to Run > MMC > File > Add/Remove Snap-in > Certificates and click the Add button.
From the list, select "Computer Account" to manage certificates for the computer object. This depends on the template. Once it is selected, in the next window select Local computer as the target.
Note – If the user is not an administrator, the default permissions only allow opening the "Current User" snap-in. To open the Computer Account snap-in, the MMC needs to be run as administrator.
Once the MMC is loaded, go to the Personal container, right-click it, and then follow All Tasks > Request New Certificate.
It will open a new window; click Next until you reach the request certificates window. There we can see the new template. Tick the checkbox to select it, and then click the link with the yellow warning sign to provide the additional details required for the certificate.
Provide the required fields and click OK to proceed. Most of the time it is the common name that is required for a computer certificate.
Once that's done, click "Enroll" to request the certificate. It will automatically process the certificate request and issue the certificate. Once issued, it can be found under the Personal certificate container.
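On Windows 8/Server 2012 or later, the same enrollment can also be performed from PowerShell instead of the MMC; a sketch, again assuming the hypothetical template name RebelComputerCert:

```powershell
# Request a computer certificate based on a published template.
# 'RebelComputerCert' is a placeholder for your template's common name.
Get-Certificate -Template 'RebelComputerCert' -CertStoreLocation Cert:\LocalMachine\My
```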
As we can see, a valid certificate has been issued. At the same time, a record of this issued certificate can be found under the issuing CA's Certification Authority MMC > Issued Certificates.
In this exercise, we learned how to set up a two-tier PKI correctly. After setup, as with any other system, regular maintenance is required to keep it in good health. It is also important to have proper documentation about the setup, the certificate templates, and the procedures to issue, renew, and revoke different types of certificates.
This marks the end of this PKI blog series. If you have any questions, feel free to contact me; also follow me on Twitter @rebeladm to get updates about new blog posts.

PKI Deployment Models

In my previous posts, we learned how PKI works and what the PKI components are. You can find those articles at the following links:

How PKI Works? 

Active directory certificate service components

In this post we are going to look into different PKI deployment models. In several places in the above articles, I have mentioned the PKI hierarchy and its components, such as root CAs, intermediate CAs, and issuing CAs. Based on business and operational requirements, the PKI topology will change. There are three deployment models we can use to address PKI requirements.

Single-Tier Model

This is also called the one-tier model, and it is the simplest deployment model for PKI. It is NOT recommended for use in any production network, as it is a single point of failure for the entire PKI.


In this model, a single CA acts as both root CA and issuing CA. As I explained before, the root CA is the most trusted CA in the PKI hierarchy. Any compromise of the root CA will compromise the entire PKI. In this model it is one server, so any compromise of that server easily compromises the entire PKI, as it doesn't need to spread through different hierarchy levels. This model is easy to implement and easy to manage. Because of that, even though it is not recommended, this model still exists in corporate networks.

Some CA-aware applications require certificates in order to function. System Center Operations Manager (SCOM) is a good example. It uses certificates to secure web interfaces, to authenticate management servers, and more. If the organization doesn't have an internal CA, the options are to purchase certificates from a vendor or to deploy a new CA. In such situations engineers usually use this single-tier model, as it is only used for a specific application or task.



Advantages:
- Fewer resources and low cost to manage, as everything runs from a single server. It also reduces the license cost for operating systems.
- Faster deployment; it is possible to get the CA running in a short period of time.

Disadvantages:
- High possibility of compromise, as the root CA is online and running all PKI-related roles from one single server. If someone gets access to the root CA's private key, they have complete ownership of the PKI.
- Lack of redundancy, as certificate issuing and management all depend on a single server; its availability decides the availability of the PKI.
- It is not scalable; the hierarchy will need to be restructured if more role servers need to be added.
- All certificate issuing and management is done by one server, which has to handle all requests. This creates a performance bottleneck.

Two-Tier Model 

This is the most commonly used PKI deployment model in corporate networks. By design, the root CA must be kept offline, which prevents the root certificate's private key from being compromised. The root CA issues certificates for subordinate CAs, and the subordinate CAs are responsible for issuing certificates for objects and services.

In the event that a subordinate CA's certificate expires, the offline root CA will need to be brought online to renew it. The root CA doesn't need to be a domain member, and it should operate at workgroup level (standalone CA). Therefore, certificate enrollment, approval, and renewal will be manual processes. This is a scalable solution, and the number of issuing CAs can be increased based on workloads. It also allows the CA boundaries to be extended to multiple sites. In the single-tier model, if the PKI is compromised, all issued certificates need to be manually removed from the devices in order to recover. In the two-tier model, all you need to do is revoke the certificates issued by the CA, publish the CRL (Certificate Revocation List), and then reissue the certificates.



Advantages:
- Improved PKI security, as the root CA is offline and its private key is protected from compromise.
- Flexible scalability – you can start small and expand by adding additional subordinate CAs when required.
- Restricts the impact of an issuing CA in the CA hierarchy by controlling certificate scope. It helps prevent the issuing of "rogue" certificates.
- Improved performance, as workloads can be shared among multiple subordinate CAs.
- Flexible maintenance capabilities, as there are fewer dependencies.

Disadvantages:
- High maintenance – you need to maintain multiple systems, and the skills to process the manual certificate request/approval/renewal between the root CA and subordinate CAs.
- Cost – the cost of resources and licenses is high compared to the single-tier model.
- The manual certificate renewal process between the root CA and subordinate CAs adds additional risk: if administrators forget to renew on time, it can bring the whole PKI down.


Three-Tier Model 

The three-tier model is the highest in the model list and operates with greater security, scalability, and control. Similar to the two-tier model, it has an offline root CA and online issuing CAs. In addition, there are offline intermediate CAs which operate between the root CA and the subordinate CAs. The main reason for them is to operate the intermediate CAs as policy CAs. In larger organizations, different departments, sites, and operation units can have different certificate requirements. As an example, a certificate issued to a perimeter network may require a manual approval process, while users in the corporate network prefer auto-approval. The IT team may prefer an advanced cryptography provider and large keys for its certificates, while other users operate with the default RSA algorithm. All these different requirements are defined by the policy CAs, which publish the relevant templates and procedures to the other CAs.

This model adds another layer of security to the hierarchy. However, if you are not using CA policies, the intermediate tier goes unused; it can just be a waste of money and resources. Therefore, most organizations prefer to start with the two-tier model and then expand as required.
In this model, both the root CA and the intermediate CAs operate as standalone CAs. The root CA only issues certificates to intermediate CAs, and those only issue certificates to issuing CAs.



Advantages:
- Improved security, as it adds another layer of CAs to the certificate verification.
- Greater scalability, as each tier can span horizontally.
- In the event of a compromise of an issuing CA, the intermediate CA can revoke the compromised CA with minimal impact to the existing setup.
- High-performance setup, as workloads are distributed and administrative boundaries are well defined by the intermediate CAs.
- Improved control over certificate policies, allowing the enterprise to have tailored certificates.
- High availability, as dependencies are further reduced.

Disadvantages:
- Cost – the cost of resources and licenses is high, as three layers need to be maintained. It also increases the operational cost.
- High maintenance – as the number of servers increases, the effort needed to maintain them also increases. Both tiers that operate standalone CAs require additional maintenance, as automatic certificate request/approval/renewal is not supported.
- Implementation complexity is high compared to the other models.


This marks the end of this blog post. In the next post we are going to look into the deployment of AD CS. If you have any questions, feel free to contact me; also follow me on Twitter @rebeladm to get updates about new blog posts.


Active Directory Certificate Service Components

In my previous post I explained what PKI is and how it works.

Active Directory Certificate Services is the Microsoft solution for PKI. It is a collection of role services, and those can be used to design the PKI for your organization. In this post we are going to look into each of these role services and their responsibilities.

Certificate Authority (CA)

The CA role service holders are responsible for issuing, storing, managing, and revoking certificates. A PKI setup can have multiple CAs. There are mainly two types of CA that can be identified in a PKI.

Root CA

The root CA is the most trusted CA in a PKI environment. Compromise of the root CA will likely compromise the entire PKI. Therefore, the security of the root CA is critical, and most organizations only bring it online when they need to issue or renew a certificate. The root CA is also capable of issuing certificates to any object or service, but considering the security and hierarchy of the PKI, it is used to issue certificates only to subordinate CAs.

Subordinate CA

In a PKI, subordinate CAs are responsible for issuing, storing, managing, and revoking certificates for objects or services. They publish certificate templates, and users can create their certificate requests based on those templates. Once the CA receives a request, it processes it and issues the certificate. A PKI can have multiple subordinate CAs. Each subordinate server should have its own certificate from the root CA. The validity period of these certificates is normally longer than that of ordinary certificates, and a subordinate CA needs to renew its certificate from the root CA when it reaches the end of its validity period. A subordinate CA can have further subordinate CAs below it; in such a situation, the subordinate CA is also responsible for issuing certificates to its own subordinate CAs. Subordinate CAs which have further subordinate CAs are called intermediate CAs. These are not responsible for issuing certificates to users, devices, or services; that becomes the responsibility of their subordinate CAs. The subordinate CAs which issue certificates are called issuing CAs.


Certificate Enrollment Web Service

The certificate enrollment web service allows users, computers, or services to request or renew certificates via a web browser, even when they are not domain-joined or are temporarily outside the corporate network. If they are domain-joined and in the corporate network, they can use auto-enrollment or a template-based request process to retrieve certificates. This web service removes the dependency on other enrollment mechanisms.

Certificate Enrollment Policy Web Service

This role service works with the certificate enrollment web service and allows users, computers, or services to perform policy-based certificate enrollment. Similar to the enrollment web service, the client computers can be non-domain-joined computers, or domain-joined devices outside the company network boundaries. When a client requests policy information, the enrollment policy web service queries AD DS using LDAP for the policy information and then delivers it to the client via HTTPS. This information will be cached and used for similar requests. Once the user has the policy information, he/she can request a certificate using the certificate enrollment web service.

Certificate Authority Web Enrollment 

This is essentially a web interface for the Certificate Authority. Users, computers or services can request certificates through it, and users can also download the root and intermediate certificates needed to validate certificates. It can also be used to retrieve the certificate revocation list (CRL), which includes all the certificates that have expired or been revoked within the PKI. If a presented certificate matches an entry in the CRL, it is automatically refused.

Network Device Enrollment Service

Network devices such as routers, switches and firewalls can use device certificates to verify the authenticity of traffic passing through them. The majority of these devices are not domain-joined, and their operating systems are proprietary and do not support typical Windows computer functions. To request and retrieve certificates, this role service uses the Simple Certificate Enrollment Protocol (SCEP). It allows network devices to obtain X.509 version 3 certificates similar to other domain-joined devices. This is important because a device that is going to use IPsec must have an X.509 version 3 certificate.

Online Responder

The Online Responder is responsible for producing information about certificate status. In the CA Web Enrollment section I explained the certificate revocation list (CRL). The CRL contains the entire list of certificates that have expired or been revoked within the PKI, and the list keeps growing with the number of certificates the PKI manages. Instead of serving that bulk data, the Online Responder answers individual requests from users to verify the status of a particular certificate. This is more efficient than the CRL method, as each request is focused on the status of a single certificate at a given time.
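As a quick way to see revocation checking from the client side, the PKI module in Windows PowerShell can validate an installed certificate, consulting the CRL distribution points or Online Responder (OCSP) URLs published in the certificate. A minimal sketch (the store path and certificate selection are examples):

```powershell
# Pick a certificate from the local machine's Personal store (example selection)
$cert = Get-ChildItem Cert:\LocalMachine\My | Select-Object -First 1

# Test-Certificate builds the chain and checks revocation status using the
# CRL / OCSP endpoints embedded in the certificate's extensions
Test-Certificate -Cert $cert -Policy SSL
```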

Certificate Authority Types

Based on the installation mode, CAs can be divided into two types: standalone CAs and enterprise CAs. The best way to explain the capabilities of both types is to compare them.


| | Standalone CA | Enterprise CA |
|---|---|---|
| AD DS dependency | Does not depend on AD DS; can be installed on a member server or a stand-alone server in a workgroup | Can only be installed on a member server |
| Operate offline | Can stay offline | Cannot be offline |
| Customized certificate templates | Not supported; only standard templates | Supported |
| Supported enrollment methods | Manual or web enrollment | Auto, manual or web enrollment |
| Certificate approval process | Manual | Manual or automatic, based on the policy |
| User input for the certificate fields | Entered manually by the requester | Retrieved from AD DS |
| Certificate issuing and managing using AD DS | Not supported | Supported |



A standalone CA is mostly used as the root CA. In the previous section I explained how important root CA security is. A standalone CA can be kept offline and brought online only when it needs to issue or renew a certificate. Since the root CA is only used to issue certificates to subordinate CAs, the manual processing and approval are manageable, as they may only need to be done every few years. This type is also what public CAs use. Issuing CAs are involved in day-to-day certificate issuing, managing, storing, renewing and revoking. Depending on the infrastructure size, hundreds or thousands of users may rely on these issuing CAs; if the request and approval process were manual, it could take significant manpower to maintain. Therefore, in corporate networks it is always recommended to use the enterprise CA type. Enterprise CAs allow engineers to create certificate templates with specific requirements and publish them via AD DS, and end users can request certificates based on those templates. (In older Windows Server releases, enterprise CAs could only be installed on the Enterprise or Datacenter editions.)
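For reference, the CA type discussed above is chosen at installation time. A sketch using the ADCSDeployment PowerShell module (the CA names and validity values below are examples, not from this post):

```powershell
# Install the AD CS role binaries first
Install-WindowsFeature ADCS-Cert-Authority -IncludeManagementTools

# A standalone root CA (typically taken offline after issuing subordinate CA certs)
Install-AdcsCertificationAuthority -CAType StandaloneRootCA `
    -CACommonName "Rebel-Root-CA" `
    -ValidityPeriod Years -ValidityPeriodUnits 20

# On a domain-joined server, an enterprise issuing CA would instead use:
# Install-AdcsCertificationAuthority -CAType EnterpriseSubordinateCA `
#     -CACommonName "Rebel-Issuing-CA"
```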

This marks the end of this blog post. In the next post we are going to look into the deployment models of AD CS. If you have any questions feel free to contact me; also follow me on twitter @rebeladm to get updates about new blog posts.


How PKI Works?

When I talk to customers and engineers, most of them know SSL is "more secure" and works on TCP 443. But most of them do not really know what a certificate is or how the encryption and decryption work. It is very important to know exactly how it works; then deployment and management become easy. Most of the PKI-related issues I have worked on were caused by misunderstanding of the core technologies, components and concepts involved, rather than by service-level issues.

Symmetric-key vs Asymmetric-key

There are two types of cryptographic methods used to encrypt data in the computer world. The symmetric method works exactly the way your door lock works: you have one key to lock or open the door. This key is also called a shared secret or secret key. VPN connections and backup software are some examples that still use symmetric keys to encrypt data.

The asymmetric-key method, on the other hand, uses a key pair for encryption and decryption. It includes two keys: a public key and a private key. The public key is distributed openly, and anyone can have it. The private key is unique to the object and is not distributed to others. Any message encrypted with the public key can only be decrypted with its private key, and any message encrypted with the private key can only be decrypted with the public key. PKI uses the asymmetric-key method for digital encryption and digital signatures.

Digital Encryption 

Digital encryption means the data transferred between two parties is encrypted, and the sender can be sure it can only be opened by the intended receiver. Even if an unauthorized party gains access to the encrypted data, they will not be able to decrypt it. The best way to explain it is with the following example.


We have an employee in the organization called Sean. In the PKI environment, he owns two keys: a public key and a private key. They can be used for encryption and signing. Now he needs to receive a set of confidential data from the company account manager, Chris, and he doesn't want anyone else to have this confidential data. The best way to achieve this is to encrypt the data that Chris sends to Sean.


In order to encrypt the data, Sean sends his public key to Chris. There is no issue with providing the public key to any party. Chris then uses this public key to encrypt the data he is sending over to Sean. This encrypted data can only be opened using Sean's private key, and he is the only one who has it. This verifies the receiver and his authority over the data.

Digital Signature 

A digital signature verifies the authenticity of a service or data. It is similar to signing a document to prove its authenticity. As an example, before purchasing anything from Amazon, we can check its digital certificate, which verifies the authenticity of the website and proves it is not a phishing site. Let's look into it further with a use case. In the previous scenario, Sean successfully decrypted the data he received from Chris. Now Sean wants to send some confidential data back to Chris. It could be encrypted the same way using Chris's public key, but the issue is that Chris is not part of the PKI setup and does not have a key pair. All Chris needs is to verify that the sender is legitimate and is the same user he claims to be. If Sean can certify this using a digital signature, and Chris can verify it, the problem is solved.


Here, Sean encrypts the data using his private key. Now the only key that can decrypt it is Sean's public key, which Chris already has (and even if he didn't, it could simply be distributed to him). When Chris receives the data, he decrypts it using Sean's public key, which confirms the sender is definitely Sean.

Signing and Encryption  

In the previous two scenarios, I explained how digital encryption and digital signatures work with PKI. These two scenarios can also be combined to provide encryption and signing at the same time. To do that, the system uses two additional techniques.

Symmetric key – a one-time symmetric key is used for the message encryption process, as it is faster than asymmetric-key encryption algorithms. This key needs to be available to the receiver, but to improve security it is itself encrypted using the receiver's public key.

Hashing – during the signing process, the system generates a one-way hash value to represent the original data. Even if someone manages to get that hash value, it is not possible to reverse-engineer it to obtain the original data. If any modification is made to the data, the hash value changes and the receiver knows straight away. Hashing algorithms are faster than encryption algorithms, and the hashed value is smaller than the actual data.
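The one-way hash behaviour is easy to demonstrate with the .NET SHA-256 class available from PowerShell; a one-character change in the input produces a completely different, fixed-length digest:

```powershell
$sha = [System.Security.Cryptography.SHA256]::Create()

# Two inputs differing by a single character (example strings)
$digest1 = $sha.ComputeHash([System.Text.Encoding]::UTF8.GetBytes("Pay 100 to Chris"))
$digest2 = $sha.ComputeHash([System.Text.Encoding]::UTF8.GetBytes("Pay 900 to Chris"))

# Both digests are 32 bytes regardless of input size, and they share nothing
[System.BitConverter]::ToString($digest1)
[System.BitConverter]::ToString($digest2)
```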

Let's look into this based on a scenario. We have two employees, Simran and Brian, both using the PKI setup. Both have their private and public keys assigned.


Simran wants to send an encrypted and signed data segment to Brian. The process can be divided into two stages: data signing and data encryption. The data goes through both stages before it is sent to Brian.


The first stage is to sign the data segment. The system receives the data from Simran, and the first step is to generate the message digest using a hashing algorithm. This ensures data integrity: if the data is altered once it leaves the sender's system, the receiver can easily identify it during the decryption process. This is a one-way process. Once the message digest is generated, the next step is to encrypt the digest using Simran's private key, which digitally signs it. Simran's public key is also included, so Brian will be able to decrypt it and verify the authenticity of the message. Once the encryption finishes, the result is attached to the original data. This process ensures the data was not altered and was sent by the exact expected sender (it is genuine).


The next stage of the operation is to encrypt the data. The first step in this process is to generate a one-time symmetric key to encrypt the data, because asymmetric algorithms are less efficient than symmetric algorithms for long data segments. Once the symmetric key is generated, the data (including the message digest and signature) is encrypted with it. This symmetric key will be used by Brian to decrypt the message, so we need to ensure it is only available to Brian. The best way to do that is to encrypt the symmetric key using Brian's public key; once he receives it, he can decrypt it using his private key. This step encrypts only the symmetric key itself, and the rest of the message stays the same. Once it is completed, the data can be sent to Brian.

The next step is to see how the decryption process happens on Brian's side.


The message decryption process starts with decrypting the symmetric key. Brian needs the symmetric key to go further with the decryption process, and it can only be decrypted using Brian's private key. Once it is decrypted, the symmetric key can be used to decrypt the message digest and signature. After that, the same key cannot be used to decrypt other messages, as it is a one-time key.


Now we have the decrypted data, and the next step is to verify the signature. At this point we have the message digest, which was encrypted using Simran's private key. It can be decrypted using Simran's public key, which is attached to the encrypted message. Once it is decrypted, we can retrieve the message digest. This digest value is one-way; we cannot reverse-engineer it. Therefore, a digest of the received data is recalculated using exactly the same algorithm the sender used. This newly generated digest is then compared with the digest attached to the message. If the values are equal, it confirms the data wasn't modified during the communication process; the signature is verified and the original data is released to Brian. If the digest values are different, the message is discarded, as it was either altered or not signed by Simran.
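The whole sign-then-encrypt flow above can be sketched with the .NET crypto classes from PowerShell (a recent runtime such as PowerShell 7 is assumed). This illustrates the technique only; it is not the exact wire format a PKI product uses:

```powershell
# Key pairs for sender (Simran) and receiver (Brian)
$simran = [System.Security.Cryptography.RSA]::Create(2048)
$brian  = [System.Security.Cryptography.RSA]::Create(2048)

$data = [System.Text.Encoding]::UTF8.GetBytes("Confidential payload")

# Stage 1 - sign: hash the data and encrypt the digest with Simran's private key
$signature = $simran.SignData($data,
    [System.Security.Cryptography.HashAlgorithmName]::SHA256,
    [System.Security.Cryptography.RSASignaturePadding]::Pkcs1)

# Stage 2 - encrypt: a one-time AES key encrypts the data...
$aes = [System.Security.Cryptography.Aes]::Create()
$cipherText = $aes.CreateEncryptor().TransformFinalBlock($data, 0, $data.Length)

# ...and that AES key itself is wrapped with Brian's public key
$wrappedKey = $brian.Encrypt($aes.Key,
    [System.Security.Cryptography.RSAEncryptionPadding]::OaepSHA256)

# --- Brian's side: unwrap the AES key, decrypt the data ---
$aesKey = $brian.Decrypt($wrappedKey,
    [System.Security.Cryptography.RSAEncryptionPadding]::OaepSHA256)
$aes2 = [System.Security.Cryptography.Aes]::Create()
$aes2.Key = $aesKey; $aes2.IV = $aes.IV   # in practice the IV travels with the message
$plain = $aes2.CreateDecryptor().TransformFinalBlock($cipherText, 0, $cipherText.Length)

# Verify the signature with Simran's public key; $true means untouched and genuine
$simran.VerifyData($plain, $signature,
    [System.Security.Cryptography.HashAlgorithmName]::SHA256,
    [System.Security.Cryptography.RSASignaturePadding]::Pkcs1)
```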

This explains how a PKI environment handles the encryption/decryption process as well as digital signing and verification.

If you have any questions feel free to contact me; also follow me on twitter @rebeladm to get updates about new blog posts.


Step-by-Step Guide to protect Azure VM using Azure Backup

Azure Backup is capable of replacing typical on-premises backup solutions. It is a cloud-based, secure and reliable solution. It has four components, which can be used to back up different types of data.


| | Protected data | Can use with on-premises? | Can use with Azure? |
|---|---|---|---|
| Azure Backup (MARS) agent | Files, folders, system state | Yes | Yes |
| System Center DPM | Files, folders, volumes, VMs, applications, workloads, system state | Yes | Yes |
| Azure Backup Server | Files, folders, volumes, VMs, applications, workloads, system state | Yes | Yes |
| Azure IaaS VM Backup | VMs, all disks (using PowerShell) | No | Yes |

More details about Azure Backup and the limitations of its components can be found in Microsoft's documentation.

In this article we are going to look into Azure VM backup (Azure IaaS VM Backup).

How does Azure VM Backup work?

Azure VM backup doesn't need any special agent installed in the VM, nor any additional component (backup server) to enable backup. When the very first backup job is triggered, it installs a backup extension inside the VM: for a Windows VM the VMSnapshot extension, and for a Linux VM the VMSnapshotLinux extension. The VM must be in a running state in order to install the extension. With the extension in place, the service takes a point-in-time snapshot of the VM. If the VM is not running during the backup window, it takes a snapshot of the VM storage instead. For Windows VMs, the backup service uses the Volume Shadow Copy Service (VSS) to get a consistent snapshot of the VM disks. For Linux VMs, users can create custom scripts that run before and after the backup job to keep applications consistent. Once the snapshot is taken, it is transferred to the backup vault. The service identifies recent changes and only transfers the blocks of data that have changed since the last backup. Once the data transfer completes, the snapshot is removed and a recovery point is created.


Image Source: 

Performance of backup depends on,

1) Storage account limitations 

2) Number of disks in VM

3) Backup schedule – if all jobs run at the same time it can create a traffic jam

According to Microsoft, the following are recommended when you use Azure Backup for Azure VMs.

1) Do not schedule more than 40 VMs to back up at the same time.

2) Schedule VM backups for when the minimum IOPS are being used in your environment (in the relevant storage accounts).

3) It is better not to back up more than 20 disks in a single storage account. If you have more than 20 disks in a single storage account, spread those VMs across multiple policies to maintain the required IOPS.

4) Do not restore a VM running on Premium Storage to the same storage account. Also try to avoid restoring while a backup process is running in the same storage account.

5) For Premium VM backup, ensure the storage account that hosts the premium disks has at least 50% free space for staging the snapshot, so the backup succeeds.

6) Linux VMs need Python 2.7 enabled for backup.

Next step is to see this in action.

1) Log in to Azure Portal as Global Administrator

2) The first step is to create an Azure Recovery Services vault. In order to do that, go to All Services and click on Recovery Services vaults under the Storage section.


3) Then click on Add in the new window.


4) It will open a wizard; there, provide the vault name, subscription, resource group and location. Once done, click on Create.


5) Now we have the vault created; the next step is to create a backup policy. To do that, click on the vault we just created in the Recovery Services vaults window.


6) Then click on Backup Policies 


7) There is a default policy for Azure VM backup. It backs up VMs daily and keeps the backups for 30 days.


8) I am going to create a new policy that runs the backup every day at 01:00 and keeps it for 7 days. To do that, click on the Add option in the policy window.


9) Then select the policy type. For VMs, it should be Azure Virtual Machine.


10) In the next window we can define the time and the retention period of the data. Once done with the details, click on Create.


11) The next step of the configuration is to enable backup. In order to do that, go to the VM you would like to back up, then click on the Backup option.


12) Then, in the new window, select the vault and policy we created before and click on Enable backup.


13) Once that is done, we can run a backup from the same backup window. If you would like to take an ad-hoc backup, click on Backup Now.


14) We can see the progress of the backup job by clicking View All Jobs.



15) Once the backup job completes, we can see its status in the same backup window.


16) To test the restore, I installed Acrobat Reader on this server and created a test folder on the desktop.


17) Now I am going to restore to an earlier day. To do that, go to the VM backup page, then click on Restore VM.


18) The next window asks which backup to restore. I am selecting a backup from 3 days ago.


19) The next window allows me to restore it as a new VM or as disks. Here I am going to restore it as a new VM.


20) Once the selection is done, click on Restore to begin the process.

21) We can also check the status of the job using the backup jobs window.


22) Once the restore completes, I can see a new VM.


23) Once logged in to the VM, I can't see the folder or the application I installed, as expected.
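The vault, policy and protection steps from the portal walkthrough above can also be scripted. A sketch using the AzureRm modules of that era (resource names and the schedule are examples, not the exact values from the walkthrough):

```powershell
# Create a Recovery Services vault and make it the current context
$vault = New-AzureRmRecoveryServicesVault -Name "rebelVault" `
    -ResourceGroupName "rebelResourceGroup" -Location "canadacentral"
Set-AzureRmRecoveryServicesVaultContext -Vault $vault

# Build a policy: daily backup at 01:00 UTC, retained for 7 days
$schedule = Get-AzureRmRecoveryServicesBackupSchedulePolicyObject -WorkloadType AzureVM
$schedule.ScheduleRunTimes[0] = (Get-Date -Hour 1 -Minute 0 -Second 0).ToUniversalTime()
$retention = Get-AzureRmRecoveryServicesBackupRetentionPolicyObject -WorkloadType AzureVM
$retention.DailySchedule.DurationCountInDays = 7
$policy = New-AzureRmRecoveryServicesBackupProtectionPolicy -Name "rebelDaily" `
    -WorkloadType AzureVM -SchedulePolicy $schedule -RetentionPolicy $retention

# Enable protection for a VM, then trigger an ad-hoc backup
Enable-AzureRmRecoveryServicesBackupProtection -Policy $policy `
    -Name "rebelVM01" -ResourceGroupName "rebelResourceGroup"
$container = Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureVM `
    -FriendlyName "rebelVM01"
$item = Get-AzureRmRecoveryServicesBackupItem -Container $container -WorkloadType AzureVM
Backup-AzureRmRecoveryServicesBackupItem -Item $item
```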


This marks the end of this blog post. If you have any questions feel free to contact me; also follow me on twitter @rebeladm to get updates about new blog posts.


Azure virtual machine scale sets – part 02 – Deploy Application to scale set

In my previous post, Azure virtual machine scale sets – part 01, we learned what a VM scale set is and how we can create one in Azure. If you have not read it yet, please go through it before starting on this post, as the rest of the steps here depend on it.

In this post we are going to deploy a sample application to the scale set. In my previous post I created a new scale set using:

New-AzureRmVmss `
  -ResourceGroupName "rebelResourceGroup" `
  -Location "canadacentral" `
  -VMScaleSetName "rebelScaleSet" `
  -VirtualNetworkName "rebelVnet" `
  -SubnetName "rebelSubnet" `
  -PublicIpAddressName "rebelPublicIPAddress" `
  -LoadBalancerName "rebelLoadBalancer" `
  -BackendPort "80" `
  -VmSize "Standard_DS3_v2" `
  -ImageName "Win2012Datacenter" `
  -InstanceCount "4" `
  -UpgradePolicy "Automatic"

The above created an Azure load balancer, and TCP port 80 is load-balanced among the 4 instances. Under Azure Load Balancer | Inbound NAT rules there are default rules for ports 3389 and 5985. Those ports are mapped to custom TCP ports in order to give external access.


As an example, in the above sample, I can RDP to instance 0 using its mapped NAT port. Likewise, we can connect to each instance and install apps if needed. Instead of that, we can use centralized remote deployment, so the configuration is the same across the instances.

In my config I didn't use a static IP address. You can find the public IP address by running the following Azure PowerShell command:

Get-AzureRmPublicIpAddress -ResourceGroupName rebelResourceGroup | Select IpAddress


In order to push the application, we first need to prepare the app config. In my demo I have a script file in a GitHub repository.

$customConfig = @{
  "fileUris" = (,"");
  "commandToExecute" = "powershell -ExecutionPolicy Unrestricted -File simplewebapp.ps1"
}


My config is a very simple one. In the PowerShell script I have the following:

Add-WindowsFeature Web-Server
Set-Content -Path "C:\inetpub\wwwroot\Default.htm" -Value "Test webapp running on host $($env:computername) !"

It will install IIS and then create an HTML file which prints text including the instance name.


As the next step, let's retrieve info about the scale set:

$vmss = Get-AzureRmVmss `
  -ResourceGroupName "rebelResourceGroup" `
  -VMScaleSetName "rebelScaleSet"


After that, let's create the custom script extension:

$vmss = Add-AzureRmVmssExtension `
  -VirtualMachineScaleSet $vmss `
  -Name "customScript" `
  -Publisher "Microsoft.Compute" `
  -Type "CustomScriptExtension" `
  -TypeHandlerVersion 1.8 `
  -Setting $customConfig

In above,

-Publisher specifies the name of the extension publisher. It can be found using Get-AzureRmVMImagePublisher.

-Type specifies the extension type. We can use Get-AzureRmVMExtensionImageType to find it.

-TypeHandlerVersion specifies the extension version. It can be viewed using Get-AzureRmVMExtensionImage.


The next step of the configuration is to update the scale set with the custom extension:

Update-AzureRmVmss `
  -ResourceGroupName "rebelResourceGroup" `
  -Name "rebelScaleSet" `
  -VirtualMachineScaleSet $vmss


Now it is time to test. Let's browse to the public IP address and see if it has the app we submitted.

As I refresh, we can see the instance number getting updated. That means the script ran successfully on the scale set as expected.
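The same check can be done from PowerShell instead of a browser by requesting the page repeatedly; the instance name in the response should vary across requests (replace the placeholder with your load balancer's public IP):

```powershell
# Each request may land on a different instance behind the load balancer
1..5 | ForEach-Object {
    (Invoke-WebRequest -Uri "http://<public-ip>" -UseBasicParsing).Content
}
```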




This marks the end of this blog post. If you have any questions feel free to contact me; also follow me on twitter @rebeladm to get updates about new blog posts.


Windows Admin Center – Rich Server Management Experience!

At last Ignite (2017), Microsoft released a technical preview of "Project Honolulu", which aimed to provide a lightweight but powerful server management experience for Windows users. I already covered it in a detailed blog post. Now the waiting is over, and it is generally available as Windows Admin Center.

As Windows users we use many different MMC consoles to manage roles and features, and we also use them to troubleshoot issues. For remote computers, most of the time we use RDP or other methods to dial in. With Windows Admin Center, we can now access all these consoles in one web-based interface in a secure, easy, well-integrated way. It can connect to remote computers as well.

Windows Admin Center features can be listed as follows.

Easy to deploy – it can be installed on Windows 10 or Windows Server 2016, and you can start managing devices within a few minutes.

Manage from internal or external networks – this solution is web-based. It can be accessed from the internal network, and the same can be published to external networks with minimal configuration changes.

Better access control – Windows Admin Center supports role-based access control, and the gateway authentication options include local groups, Windows Active Directory and Azure Active Directory.

Support for hyper-converged clusters – Windows Admin Center is well capable of managing hyper-converged clusters, including:

Single console to manage compute, storage and networking

Create and manage Storage Spaces Direct features

Monitoring and Alerting 

Extensibility – Microsoft will offer an SDK which will allow third-party vendors to develop solutions that integrate with Windows Admin Center to manage their products.

How it Works?
Windows Admin Center has two components.
Web server – this is the UI for Windows Admin Center, and users access it via HTTPS requests. It can also be published to remote networks to allow users to connect via a web browser.
Gateway – the gateway manages connected servers via Remote PowerShell and WMI over WinRM.
Image Source – 
Which Systems Are Supported?
WAC will ship by default with the upcoming Windows Server 2019. At the moment it can be installed on Windows 10 in desktop mode, which connects to the WAC gateway on the same computer where it is installed. It can also be installed on Windows Server 2016 in gateway mode, which allows connecting to the WAC gateway from a client browser on a remote machine.
WAC can manage any system running Windows Server 2012 or later.
What about System Center and OMS?
This is not a replacement for high-end infrastructure management solutions such as SCCM and OMS. WAC adds an additional management experience if you already have those solutions in place.
Azure Integration?
Yes, WAC supports Azure integration. Azure AD can be used for WAC gateway authentication. By giving the gateway access to an Azure VNet, WAC can manage Azure VMs. WAC can also manage Azure Site Recovery activities.
Let’s see how we can get it running,
In my demo I am going to install WAC on Windows Server 2016.
To install WAC,
1) Log in to the server as Administrator
2) Download the WAC installation package.
3) Double click on the .msi file to begin the installation.
4) In initial window accept the license terms and click Next
5) Then it asks how you would like to update it; select the default and click Next to proceed.
6) In the next window, select the option to allow the installer to modify trusted hosts settings. In the same window we can also choose to create a desktop shortcut if needed.
7) In the next window we can define the port and certificate for the management site. The default port is 443. In the demo I am going to use a self-signed certificate.
8) Once installation completes, we can launch WAC using the desktop icon or https://serverip (replace serverip with the IP address or hostname of the server).
Note – WAC is not supported on IE, so you need to use Edge or another browser to access it.
9) By default, it shows the server it is installed on under "Server Manager". In order to add another server, click on the Windows Admin Center drop-down and select Server Manager.
10) Then click on Add.
11) Then type the FQDN of the server that you would like to add. It should be resolvable from the server. Then click on Submit.
12) We can also add Windows 10 computers to WAC. To do that, click on the Windows Admin Center drop-down and select Computer Management.
13) Then click on Add.
14) Then type the FQDN of the PC that you would like to add. It should be resolvable from the server. Then click on Submit.
Note – Windows 10 does not have PowerShell (WinRM) remoting enabled by default. To enable it, you must run Enable-PSRemoting from a PowerShell window running as admin.
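For example, in an elevated PowerShell window on the Windows 10 machine you want WAC to manage:

```powershell
# Run as administrator: enables the WinRM service and creates the listener
Enable-PSRemoting -Force

# Optional: confirm the WinRM listener is responding
Test-WSMan -ComputerName localhost
```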
15) Once servers/PCs are added, you can connect to one by just clicking on it in the list.
16) For remote devices, it asks how you would like to log in. Provide the relevant admin login details and click on Continue.
17) Then it loads the related info for the server/PC.
Now we have a basic setup of WAC. In upcoming posts we are going to look into different features of WAC. This marks the end of the blog post; hope it was useful. If you have any questions feel free to contact me; also follow me on twitter @rebeladm to get updates about new blog posts.

Group Policy Security Filtering

Group policy can be mapped to sites, domains and OUs. If a group policy is mapped to an OU, by default it applies to every object under it. But within an OU, domain or site there are many objects, and the security, system or application settings requirements covered by group policies do not always apply to such broad target groups. Group policy filtering capabilities allow us to narrow the group policy target down further, to security groups or individual objects.

There are a few different ways we can do filtering in group policy.

1) Security Filtering

2) WMI Filtering

In this post we are going to look into security filtering. I have already covered WMI filtering in one of my previous posts.

Before applying security filtering, the first thing to make sure of is that the group policy is mapped correctly to the site, domain or OU. The security group or objects you are going to target should be under the correct level where the group policy is mapped.

We can use the GPMC or PowerShell cmdlets to add security filtering to a GPO.

As you can see, by default every policy has the "Authenticated Users" group added to its security filtering, which means by default the policy applies to any authenticated user in that OU. When we add a group or object to security filtering, it also creates an entry under Delegation. In order to apply a group policy to an object, it needs a minimum of READ and APPLY GROUP POLICY permissions.
Any object added to the Security Filtering section has both of these permissions set by default. In the same way, if an object is added directly to the Delegation section with both permissions applied, it is listed under the Security Filtering section.
Now, before we add custom objects to the filtering, we need to change the default security filtering behavior for "Authenticated Users". Otherwise, no matter what security group or object you add, the policy will still apply to any authenticated user. Before Microsoft released security patch MS16-072 in 2016, we could simply remove the Authenticated Users group and add the required objects. With this security patch, group policies now run within the computer's security context (previously they executed within the user's security context). To accommodate this new security requirement, one of the following permissions must be present under the group policy delegation:
Authenticated Users – READ
Domain Computers – READ
To make this change, go to the group policy, then the Delegation tab, click on Advanced, select Authenticated Users and remove the Apply group policy permission.
Now we can go back to the Scope tab and add the required security groups or objects to the Security Filtering section. It automatically adds the relevant Read and Apply group policy permissions.
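The same scoping can be done with the GroupPolicy PowerShell module; the GPO and group names below are examples:

```powershell
Import-Module GroupPolicy

# Drop Authenticated Users back to read-only (READ is kept so the policy can
# still be processed in the computer's security context per MS16-072)
Set-GPPermission -Name "Rebel-Firewall-Policy" -TargetName "Authenticated Users" `
    -TargetType Group -PermissionLevel GpoRead -Replace

# Grant read + apply to the intended target group only
Set-GPPermission -Name "Rebel-Firewall-Policy" -TargetName "Sales-Computers" `
    -TargetType Group -PermissionLevel GpoApply
```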
Here we are looking at how to apply a group policy to a specific target, but filtering also allows us to explicitly apply a policy to a large number of objects while blocking it for particular groups or objects. As an example, let's assume we have an OU with a few hundred objects of different classes, and among them are 10 computer objects that should not receive a given group policy. Which is easier: adding every other security group and object to security filtering, or allowing everyone and blocking the policy only for one security group? Microsoft allows the second method in filtering too. To do that, the group policy should keep its default security filtering, which is "Authenticated Users" with READ and APPLY GROUP POLICY permissions. Then go to the Delegation tab and click on the Advanced option. In the next window, click on the Add button and select the group or object that you need to block.
Here we are denying the READ and APPLY GROUP POLICY permissions to an object, so it will not be able to apply the group policy, while all other objects under that OU can still read and apply it. Easy, huh?
This marks the end of this blog post. If you have any questions feel free to contact me; also follow me on twitter @rebeladm to get updates about new blog posts.

Azure Virtual Machine Scale Sets – Part 01 – What is it and How to set it up?

There are many different solutions available to load-balance applications. They can be based on separate hardware appliances, virtual appliances, or a built-in system method such as NLB (Network Load Balancing). However, there are a few common challenges in these environments.

If its third-party solution, additional cost involves for licenses, configuration and maintenance 

Applications or services not always use all of the allocated resources. It may depend on demand and time. Since its fixed number of instance, infrastructure resource will be wasted in non-peak time. if its cloud service, it going to waste money!

When the number of server instances increase, it makes it harder to manage systems. Too many manual tasks!

Azure virtual machine scale sets answer all of the above challenges. A scale set can automatically increase and decrease the number of VM instances running based on demand or a schedule. No extra virtual appliances or licenses are involved. It also allows you to centrally manage and configure a large number of instances. The following points are recognized as key benefits of Azure virtual machine scale sets.

It supports traffic distribution via Azure Load Balancer (Layer-4) and Azure Application Gateway (Layer-7).

It maintains the same VM configuration across all instances, including VM size, network, disk, OS image, and installed applications.

Using Azure Availability Zones, VM instances in a scale set can be distributed across different datacenters if required, which adds additional availability.

It can automatically increase and decrease the number of VM instances based on application demand. It saves money!

A scale set can grow up to 1,000 VM instances; when using custom images, it supports up to 300 VM instances.

It supports Azure Managed Disks and Premium Storage. 

Let's see how we can set up an Azure virtual machine scale set. In my demo I am going to use Azure PowerShell.

1) Log in to the Azure Portal as a Global Administrator
2) Open Cloud Shell (top right-hand corner)
3) Make sure you are using the PowerShell option
4) In my demo, the scale set configuration is as follows:
New-AzureRmVmss `
  -ResourceGroupName "rebelResourceGroup" `
  -Location "canadacentral" `
  -VMScaleSetName "rebelScaleSet" `
  -VirtualNetworkName "rebelVnet" `
  -SubnetName "rebelSubnet" `
  -PublicIpAddressName "rebelPublicIPAddress" `
  -LoadBalancerName "rebelLoadBalancer" `
  -BackendPort "80" `
  -VmSize "Standard_DS3_v2" `
  -ImageName "Win2012Datacenter" `
  -InstanceCount "4" `
  -UpgradePolicy "Automatic"
In the above,

New-AzureRmVmss – This is the command used to create an Azure virtual machine scale set.

-ResourceGroupName – This defines the resource group name; in this demo it is a new one.

-Location – This defines the resource region. In my demo it is Canada Central.

-VMScaleSetName – This defines the name for the scale set.

-VirtualNetworkName – This defines the virtual network name.

-SubnetName – This defines the subnet name. If you do not define a subnet prefix, it will use the default.

-PublicIpAddressName – This defines the name for the public IP address. If the allocation method is not defined using -AllocationMethod, it will use Dynamic by default.

-LoadBalancerName – This defines the load balancer name.

-BackendPort – This creates the relevant rules in the load balancer and load balances the traffic. In my demo I am using TCP port 80.

-VmSize – This defines the VM size. If it is not defined, Standard_DS2_v2 is used by default.

-ImageName – This defines the VM image. If no value is given, the default is used, which is Windows Server 2016 Datacenter.

-InstanceCount – This defines the initial number of instances running in the scale set.

-UpgradePolicy – This defines the upgrade policy for VM instances in the scale set.

Once this is run, it will ask you to define login details for the instances. After it completes, it will create the scale set.
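Once the scale set is created, we can verify it from the same Cloud Shell. As a quick sketch (assuming the resource group and scale set names from my demo), the following AzureRM cmdlets show the scale set definition and the individual VM instances it created:

```powershell
# Retrieve the scale set definition (names match the demo above)
Get-AzureRmVmss -ResourceGroupName "rebelResourceGroup" -VMScaleSetName "rebelScaleSet"

# List the individual VM instances created by the scale set
Get-AzureRmVmssVM -ResourceGroupName "rebelResourceGroup" -VMScaleSetName "rebelScaleSet"
```

These commands require an active Azure session, so the output will reflect whatever is deployed in your subscription.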


This can also be done using the Portal. To use the GUI,

1) Log in to the Azure Portal as a Global Administrator

2) Go to All Services | Virtual machine scale sets


3) On the new page, click on Add


4) This will open up a form; once you have filled in the relevant info, click on Create


5) We can also review the properties of an existing scale set using the Virtual machine scale sets page. On the page, click on the scale set name to view its properties. If we click on Instances, we can see the number of instances running


6) Scaling shows the number of instances in use. If needed, it can also be adjusted here.
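The same instance-count adjustment can be done from PowerShell. As a sketch only (assuming the demo names used earlier), the capacity is changed by updating the scale set's SKU:

```powershell
# Retrieve the current scale set configuration
$vmss = Get-AzureRmVmss -ResourceGroupName "rebelResourceGroup" -VMScaleSetName "rebelScaleSet"

# Change the instance count, e.g. scale out from 4 to 6 instances
$vmss.Sku.Capacity = 6

# Push the updated configuration back to Azure
Update-AzureRmVmss -ResourceGroupName "rebelResourceGroup" -VMScaleSetName "rebelScaleSet" -VirtualMachineScaleSet $vmss
```

The update runs against the live scale set, so new instances will be provisioned (or removed) to match the new capacity.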


7) Size defines the size of the VMs; again, the value can be changed on the same page if needed.


8) Also, if we go to Azure Portal | Load balancers, we can review the settings for the load balancer used by the scale set.


9) In my demo I used TCP port 80 for load balancing. That info can be found under Load balancing rules
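The load balancing rules can also be checked from PowerShell. As a hedged sketch (assuming the load balancer name from my demo), the AzureRM.Network cmdlets below retrieve the load balancer and list its rules:

```powershell
# Retrieve the demo load balancer created by the scale set deployment
$lb = Get-AzureRmLoadBalancer -ResourceGroupName "rebelResourceGroup" -Name "rebelLoadBalancer"

# List the load balancing rules (should show the TCP port 80 rule)
Get-AzureRmLoadBalancerRuleConfig -LoadBalancer $lb
```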


10) The relevant public IP info for the scale set can be found under Inbound NAT rules



This marks the end of this blog post. In the next post we will look into further configuration of scale sets. If you have any questions, feel free to contact me, and follow me on Twitter @rebeladm to get updates about new blog posts.
