Windows Admin Center – Rich Server Management Experience!

At Ignite 2017, Microsoft released a technical preview of “Project Honolulu”, which aimed to provide a lightweight but powerful server management experience for Windows users. I already covered it in a detailed blog post: http://www.rebeladmin.com/2017/10/project-honolulu-better-windows-server-management-experience/ . Now the wait is over and it is generally available as Windows Admin Center.

As Windows users we use many different MMC consoles to manage roles and features. We also use them to troubleshoot issues. For remote computers, most of the time we keep an RDP session open or use other methods to dial in. With Windows Admin Center we can now access all these consoles in one secure, easy, well-integrated web-based interface. It can connect to other remote computers as well.

Windows Admin Center features can be listed as follows:

Easy to Deploy – It can be installed on Windows 10 or Windows Server 2016, and you can start managing devices within a few minutes.

Manage from internal or external networks – This solution is web based. It can be accessed from the internal network, and the same can be published to external networks with minimal configuration changes.

Better Access Control – Windows Admin Center supports role-based access control, and gateway authentication options include local groups, Windows Active Directory and Azure Active Directory.

Support for hyper-converged clusters – Windows Admin Center is well capable of managing hyper-converged clusters, including:

Single console to manage compute, storage and networking

Create and manage Storage Spaces Direct features

Monitoring and Alerting 

Extensibility – Microsoft will offer an SDK which will allow third-party vendors to develop solutions that integrate with Windows Admin Center to manage their products.

How Does it Work?
 
Windows Admin Center has two components.
 
Web Server – This is the UI for Windows Admin Center and users access it via HTTPS requests. It can also be published to remote networks to allow users to connect via a web browser.
Gateway – The gateway manages connected servers via Remote PowerShell and WMI over WinRM.
 
wac1
 
Image Source – https://docs.microsoft.com/en-us/windows-server/manage/windows-admin-center/media/architecture.png 
 
Which Systems Are Supported?
 
WAC will ship by default with the upcoming Windows Server 2019. At the moment it can be installed on Windows 10 in desktop mode, which connects to the WAC gateway from the same computer where it is installed. It can also be installed on Windows Server 2016 in gateway mode, which allows connections to the WAC gateway from a client browser on a remote machine.
WAC can manage any system from Windows Server 2012 onwards.
 
What about System Center and OMS? 
 
This is not a replacement for high-end infrastructure management solutions such as SCCM and OMS. WAC adds an additional management experience if you already have those solutions in place.
 
Azure Integration? 
 
Yes, WAC supports Azure integration. Azure AD can be used for WAC gateway authentication. By giving the gateway access to an Azure VNet, WAC can manage Azure VMs. WAC can also manage Azure Site Recovery activities.
 
Let’s see how we can get it running.
In my demo I am going to install WAC on Windows Server 2016.
 
To install WAC,
 
1) Log in to the server as Administrator
2) Download the WAC installer from http://aka.ms/WindowsAdminCenter
3) Double-click the .msi file to begin the installation.
4) In the initial window, accept the license terms and click Next
 
wac2
 
5) Then it asks how you would like to update it; select the default and click Next to proceed.
 
wac3
 
6) In the next window, select the option to allow the installer to modify the trusted hosts settings. In the same window we can also choose to create a desktop shortcut if needed.
 
wac4
 
7) In the next window we can define the port and certificate for the management site. The default port is 443. In this demo I am going to use a self-signed certificate.
 
wac5
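As a side note, the installer can also be run unattended for scripted deployments. The line below is only a sketch based on the documented MSI properties for the port and certificate options; treat the property names (SME_PORT, SSL_CERTIFICATE_OPTION) as assumptions and verify them against the current Windows Admin Center documentation before use.

# Hedged sketch of an unattended WAC install (property names assumed from the docs)
msiexec /i WindowsAdminCenter.msi /qn /L*v wac-install.log SME_PORT=443 SSL_CERTIFICATE_OPTION=generate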
 
8) Once the installation completes, we can launch WAC using the desktop icon or https://serverip (replace serverip with the IP address or hostname of the server).
 
Note – WAC is not supported on Internet Explorer, so you need to use Edge or another browser to access it.
 
wac6
 
9) By default, it shows the server it is installed on under “Server Manager”. In order to add another server, click on the Windows Admin Center drop-down and select Server Manager.
 
wac7
 
10) Then click on Add
 
wac8
 
11) Then type the FQDN for the server that you would like to add. It should be resolvable from the server. Then click on Submit.
 
wac9
 
12) We can also add Windows 10 computers to WAC. To do that, click on the Windows Admin Center drop-down and select Computer Management.
 
 
wac10
 
 
13) Then click on Add
 
wac11
 
14) Then type the FQDN for the PC that you would like to add. It should be resolvable from the server. Then click on Submit.
 
wac12
 
Note – Windows 10 does not have PowerShell remoting (WinRM) enabled by default. To enable it, you must run Enable-PSRemoting from a PowerShell window running as admin.
 
wac13
 
wac14
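For reference, this is a minimal sketch of what needs to run on the Windows 10 machine before WAC can manage it:

# Run in an elevated PowerShell window on the Windows 10 machine
Enable-PSRemoting -Force
# Quick check that the WinRM listener is responding
Test-WSMan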
 
15) Once servers/PCs are added, you can connect to one by simply clicking on the server/PC in the list.
 
wac15
 
16) For remote devices, it will ask how you would like to log in. Provide the relevant admin login details and click on Continue.
 
wac16
 
17) Then it loads the related information for the server/PC.
 
wac17
 
Now we have a basic setup of WAC. In upcoming posts we are going to look into the different features of WAC. This marks the end of the blog post and I hope it was useful. If you have any questions feel free to contact me on rebeladm@live.com, and follow me on twitter @rebeladm to get updates about new blog posts.

Group Policy Security Filtering

Group Policy can be mapped to sites, domains and OUs. If a group policy is mapped to an OU, by default it will apply to any object under it. But within an OU, domain or site there are lots of objects. The security, system or application settings requirements covered by group policies do not always apply to such broad target groups. Group Policy filtering capabilities allow us to further narrow down the group policy target to security groups or individual objects.

There are a few different ways we can do filtering in group policy.

1) Security Filtering

2) WMI Filtering

In this post we are going to look into Security Filtering. I already covered WMI filtering in one of my previous posts. It can be found at http://www.rebeladmin.com/2018/02/group-policy-wmi-filters-nutshell/

Before applying security filtering, the first thing to make sure is that the group policy is mapped correctly to the site, domain or OU. The security group or the objects you are going to target should be under the correct level where the group policy is mapped.

We can use the GPMC or PowerShell cmdlets to add security filtering to a GPO.

gsec1
As you can see, by default every policy has the “Authenticated Users” group added to the security filtering. It means that by default the policy will apply to any authenticated user in that OU. When we add any group or object to security filtering, it also creates an entry under Delegation. In order to apply a group policy to an object, it needs a minimum of:
 
1) READ
2) APPLY GROUP POLICY
 
Any object added to the Security Filtering section will have both of these permissions set by default. In the same way, if an object is added directly to the Delegation section with both permissions applied, it will be listed under the Security Filtering section.
Now, before we add custom objects to the filtering, we need to change the default behavior of the security filtering with “Authenticated Users”. Otherwise, it doesn’t matter what security group or object you add, the policy will still apply its settings to any authenticated user. Before Microsoft released security patch MS16-072 in 2016, we could simply remove the Authenticated Users group and add the required objects. With the changes this security patch introduced, group policies now run within the computer’s security context; before, they were executed within the user’s security context. In order to accommodate this new security requirement, one of the following permissions must be present under the group policy delegation:
 
Authenticated Users – READ
Domain Computers – READ
 
In order to make this change, go to the group policy, then to the Delegation tab, click on Advanced, select Authenticated Users and then remove the Apply group policy permission.
 
gsec2
 
Now we can go back to the Scope tab and add the required security group or objects into the Security Filtering section. It will automatically add the relevant Read and Apply group policy permissions.
 
gsec3
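The same change can be made with the GroupPolicy PowerShell module. The following is a minimal sketch; the GPO name “Desktop Lockdown Policy” and the group “IT Admins” are only illustrative placeholders.

# Keep Authenticated Users with READ only, so the post-MS16-072 computer-context processing still works
Set-GPPermission -Name "Desktop Lockdown Policy" -TargetName "Authenticated Users" -TargetType Group -PermissionLevel GpoRead -Replace
# Grant READ + APPLY GROUP POLICY to the group that should actually receive the policy
Set-GPPermission -Name "Desktop Lockdown Policy" -TargetName "IT Admins" -TargetType Group -PermissionLevel GpoApply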
 
Here we are looking at how to apply a group policy to a specific target, but filtering also allows us to apply a policy to a large number of objects while explicitly blocking certain groups or objects. As an example, let’s assume we have an OU with a few hundred objects from different classes. Out of all of these, there are around 10 computer objects which should not receive a given group policy. Which is easier: adding each and every security group and object to the security filtering, or allowing everyone and blocking the policy only for one security group? Microsoft allows the second method in filtering too. To do that, the group policy should have the default security filtering, which is “Authenticated Users” with READ and APPLY GROUP POLICY permissions. Then go to the Delegation tab and click on the Advanced option. In the next window click on the Add button and select the group or object that you need to block access for.
 
gsec4
 
Here we are denying the READ and APPLY GROUP POLICY permissions to an object. So it will not be able to apply the group policy, while all the other objects under that OU will still be able to read and apply the group policy. Easy, huh?
 
This marks the end of this blog post. If you have any questions feel free to contact me on rebeladm@live.com, and follow me on twitter @rebeladm to get updates about new blog posts.

Azure Virtual Machine Scale Sets – Part 01 – What is it and How to set it up?

There are many different solutions available to load balance applications. They can be based on separate hardware appliances, virtual appliances or built-in methods such as NLB (Network Load Balancing). However, there are a few common challenges in these environments.

If it is a third-party solution, additional cost is involved for licenses, configuration and maintenance.

Applications or services do not always use all of the allocated resources; usage depends on demand and time. Since there is a fixed number of instances, infrastructure resources are wasted during non-peak time. If it is a cloud service, that wastes money!

When the number of server instances increases, it becomes harder to manage the systems. Too many manual tasks!

Azure virtual machine scale sets answer all the above challenges. They can automatically increase and decrease the number of VM instances running based on demand or a schedule. No extra virtual appliances or licenses are involved. They also allow you to centrally manage and configure a large number of instances. The following points are recognized as key benefits of Azure virtual machine scale sets.

It supports Azure load balancer (Layer-4) and Azure Application Gateway (Layer-7) traffic distribution.

It allows you to maintain the same VM configuration across the instances, including VM size, network, disk, OS image and application installs.

Using Azure Availability Zones, if required, we can configure VM instances in a scale set to be distributed across different datacenters. This adds additional availability.

It can automatically increase and decrease the number of VM instances running based on application demand. It saves money!

It can grow up to 1,000 VM instances; if it uses your own custom images, it supports up to 300 VM instances.

It supports Azure Managed Disks and Premium Storage. 

Let’s see how we can set up an Azure virtual machine scale set. In my demo I am going to use Azure PowerShell.

1) Log in to Azure Portal as Global Administrator
 
2) Open Cloud Shell (top right-hand corner)
 
ss1
 
3) Make sure you are using the PowerShell option
 
ss2
 
4) In my demo the scale set configuration is as follows:
 
New-AzureRmVmss `
  -ResourceGroupName "rebelResourceGroup" `
  -Location "canadacentral" `
  -VMScaleSetName "rebelScaleSet" `
  -VirtualNetworkName "rebelVnet" `
  -SubnetName "rebelSubnet" `
  -PublicIpAddressName "rebelPublicIPAddress" `
  -LoadBalancerName "rebelLoadBalancer" `
  -BackendPort "80" `
  -VmSize "Standard_DS3_v2" `
  -ImageName "Win2012Datacenter" `
  -InstanceCount "4" `
  -UpgradePolicy "Automatic"
 
In the above:

New-AzureRmVmss – This is the command used to create an Azure virtual machine scale set.

-ResourceGroupName – This defines the resource group name; in this case it is a new one.

-Location – This defines the resource region. In my demo it is Canada Central.

-VMScaleSetName – This defines the name for the scale set.

-VirtualNetworkName – This defines the virtual network name.

-SubnetName – This defines the subnet name. If you do not define a subnet prefix, it will use the default 192.168.1.0/24.

-PublicIpAddressName – This defines the name for the public IP address. If the allocation method is not defined using -AllocationMethod, it will use dynamic by default.

-LoadBalancerName – This defines the load balancer name.

-BackendPort – This creates the relevant rules in the load balancer and load balances the traffic. In my demo I am using TCP port 80.

-VmSize – This defines the VM size. If it is not defined, by default it uses Standard_DS2_v2.

-ImageName – This defines the VM image details. If no value is given, it will use the default, which is Windows Server 2016 Datacenter.

-InstanceCount – This defines the initial number of instances running in the scale set.

-UpgradePolicy – This defines the upgrade policy for VM instances in the scale set.

Once this is run, it will ask you to define login details for the instances. After it completes, it will create the scale set.

ss3

This can also be done using the portal. In order to use the GUI:

1) Log in to Azure Portal as Global Administrator

2) Go to All Services | Virtual Machine Scale Set

ss4

3) On the new page, click on Add

ss5

4) Then it will open up the form; once you fill in the relevant info, click on Create.

ss6

5) We can also review the existing scale set properties using the Virtual machine scale sets page. On the page, click on the scale set name to view its properties. If we click on Instances, we can see the number of instances running.

ss7

6) Scaling shows the number of instances in use. If needed, it can also be adjusted here.

ss8
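The same adjustment can be made from PowerShell. The following is a minimal sketch using the resource names from the demo above:

# Retrieve the scale set, change the instance count and push the update
$vmss = Get-AzureRmVmss -ResourceGroupName "rebelResourceGroup" -VMScaleSetName "rebelScaleSet"
$vmss.Sku.Capacity = 6
Update-AzureRmVmss -ResourceGroupName "rebelResourceGroup" -VMScaleSetName "rebelScaleSet" -VirtualMachineScaleSet $vmss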

7) Size defines the size of the VMs; again, if needed, the values can be changed on the same page.

ss9

8) Also, if we go to Azure Portal | Load balancers, we can review the settings for the load balancer used in the scale set.

ss10

9) In my demo I used TCP port 80 for load balancing. That info can be found under Load balancing rules.

ss11

10) The relevant public IP info for the scale set can be found under Inbound NAT rules.

ss12

 

This marks the end of this blog post. In the next post we will look into further configuration of scale sets. If you have any questions feel free to contact me on rebeladm@live.com, and follow me on twitter @rebeladm to get updates about new blog posts.


Step-by-Step guide to Azure Policy (Preview)

Every business has different regulations and compliance standards that it needs to comply with, and these differ from one industry to another. As an example, a financial institute will need to comply with PCI DSS (Payment Card Industry Data Security Standard), and a healthcare service will need to comply with HIPAA (Health Insurance Portability and Accountability Act). Some of these are mandatory, and some just add extra value to the business; ISO certifications are a good example of the latter. Some of these regulations and standards also apply directly to computer infrastructures, especially those related to data protection and data governance.

Apart from that, most businesses have their own “policies” to protect data and workloads in their infrastructures. Most of the time the end goal of these policies is to make sure the IT department has done its part to support the business compliance requirements. There are two great tools available from Microsoft to make it easier for enterprises to reach their corporate compliance requirements within Azure environments.

1. Compliance Manager – This service can scan your Azure environment and provide a report of your compliance level against the most common industry standards such as GDPR, ISO 27000 etc. I already wrote a detailed article about it: http://www.rebeladmin.com/2017/11/microsoft-compliance-manager-makes-easy-deal-compliance-challenges/

2. Azure Policy – This is more about reviewing continuous compliance with corporate infrastructure policies. As an example, a corporation may need to make sure all its Azure resources are deployed in the West US region. With the help of Azure Policy, we can continuously monitor resources and make sure they stay compliant with that policy. In the event of a breach it will flag it up as well.

In this post we are going to look into Azure Policy and how it can help.

Azure Policy has 34 built-in policy definitions (at the time this article was written). These cover most infrastructure management, audit and security requirements. Users can use these built-in policies or build their own.

Azure Policy definitions are JSON based. Each policy has the following elements.

mode

parameters

display name

description

policy rule

               – logical evaluation

               – effect

Mode 

This defines the resource types considered by the policy. There are two modes that can be used in a policy.

All – All resource types. This is the recommended mode for policies

Indexed –  Resource types that support tags and locations 

Parameters 

If you have worked with a programming language or PowerShell, I am sure you already know what a parameter is. Here it has the same meaning. A parameter is a special kind of variable which refers to a piece of data. It simplifies the policy by reducing code. The following is extracted from a policy to show the parameter usage.

"parameters": {
    "publisher": {
        "type": "String",
        "metadata": {
            "description": "The publisher of the extension",
            "strongType": "type",
            "displayName": "Extension Publisher"
        }
    }
}

Display name & Description

These are just there to identify the policy. The description can also be used to add more meaning.

{
    "type": "Microsoft.Authorization/policyDefinitions",
    "name": "allowed-custom-images",
    "properties": {
        "displayName": "Approved VM images",
        "description": "This policy governs the approved VM images",
        "parameters": {
            "imageIds": {
                "type": "array",
                "metadata": {
                    "description": "The list of approved VM images",
                    "displayName": "Approved VM images"
                }
            }
        }
    }
}

In the above example, "Approved VM images" is the policy display name and "This policy governs the approved VM images" is the policy description.

Policy Rule

This is the heart of the policy. It is where the policy is described using logical operators, conditions and an effect.

Under the policy rule, the following logical operators are supported.

"not"

"allOf"

"anyOf"

It also accepts the following condition types.

"equals"

"notEquals"

"like"

"notLike"

"match"

"notMatch"

"contains"

"notContains"

"in"

"notIn"

"containsKey"

"notContainsKey"

"exists"

Under a policy rule, the following effects can be used.

Deny – Generate event in audit log and fail the request

Audit – Only for auditing purposes; no request decision is made

Append – Add additional fields to the request

AuditIfNotExists – Enable auditing if the resource does not exist

DeployIfNotExists – Deploy if the resource does not exist (at the moment this is only supported in built-in policies)

"policyRule": {
    "if": {
        "allOf": [
            {
                "field": "type",
                "equals": "Microsoft.Compute/virtualMachines"
            },
            {
                "not": {
                    "field": "Microsoft.Compute/imageId",
                    "in": "[parameters('imageIds')]"
                }
            }
        ]
    },
    "then": {
        "effect": "deny"
    }
}

In the above example, the if, not and then policy blocks are used. It checks the image ID of virtual machines, and if it is not in the approved list it will deny the request based on the effect.

More info about policy templates can be found at:

https://docs.microsoft.com/en-us/azure/azure-policy/policy-definition

https://docs.microsoft.com/en-us/azure/azure-policy/json-samples

Policy Initiatives 

Azure Policy also allows you to group policies together and apply them to one scope. This is called a policy initiative. It reduces the complexity of policy assignment. As an example, we can create a policy initiative called “Infrastructure Security” and include all infrastructure security related policies in it.

Using Azure Policy

Let’s see how we can use the Azure Policy feature.

1. Log in to Azure Portal as Global Administrator

2. Go to All Services, type Policy and then click on the Policy tile.

policy

3. Then it will open up the Policy feature page.

policy2

4. In my demo I am going to assign a pre-built policy to restrict the resource region. In order to assign a policy, click on Assignments.

policy3

5. Then click on Assign Policy

policy4

6. This will open up the Assign Policy wizard. Click on Policy to list and select the relevant policy; in my demo I am using a policy called “Allowed Locations”. Select the policy from the list and then click on Select to complete the action.

policy5

7. Under the Name and Description fields, define a policy name and a description which explains its characteristics.

8. Under the Pricing Tier select the pricing tier for evaluation. 

9. The Scope field defines the scope of the policy. It will be the subscription in use.

10. Using exclusions, we can exclude resource groups from the policy. In this demo I am not going to exclude any.

policy6

11. Then under Parameters I select the region I would like to use for my resources (in my demo I am using Canada Central as the region).

policy7

12. At the end click on Assign to complete the policy assignment process. 

policy8
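If you prefer PowerShell, the same assignment can be scripted. This is only a sketch; the assignment name, the <subscription-id> placeholder and the parameter name listOfAllowedLocations are assumptions based on the built-in “Allowed locations” definition, so verify them in your environment before use.

# Find the built-in "Allowed locations" definition and assign it at subscription scope
$definition = Get-AzureRmPolicyDefinition | Where-Object { $_.Properties.displayName -eq "Allowed locations" }
$scope = "/subscriptions/<subscription-id>"   # replace with your subscription ID
New-AzureRmPolicyAssignment -Name "allowed-locations-canada" -PolicyDefinition $definition -Scope $scope -PolicyParameterObject @{ listOfAllowedLocations = @("canadacentral") }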

13. Now it is time for testing. In my demo I am trying to create a storage account in the West US region. When I do that, it gives me an error saying “There were validation errors. Click here to view details”.

policy9

When I click on it, it says it wasn’t deployed because of a policy violation. (I hope we will see more details once it is GA.)

policy10

Cool, huh? It is doing the job it is supposed to do. Since the policy effect is “deny”, it should deny my request to create resources in other regions.

Initiative Assignment

Assigning an initiative is the same process as a policy assignment, but before doing that you need an initiative in place. There is currently only one built-in initiative.

In order to create an initiative:

1. Log in to Azure Portal as Global Administrator

2. Go to All Services, type Policy and then click on the Policy tile.

3. Then, in the Policy feature window, click on Definitions.

policy11

4. After that, click on the Initiative definition option.

policy12

5. In the new window, start by selecting the Definition location. This is basically the targeted subscription.

6. Under Name, define a name for the policy initiative.

7. Under Category, you can either create a new category or select an existing one.

policy13

8. After that, click on an available policy in the left-hand panel and click on Add it to the initiative.

policy14

9. Once it is all done, click on Save to complete the process.

policy15

10. Once it is done, we can assign the initiative using the Assignments | Assign Initiative option.

policy16

Create New Policy

Creating a new policy is similar to the initiative creation process. It can be done using the Definitions | Policy Definition option.

policy17

policy18

Apart from that, using the Compliance option we can see the overall policy and initiative compliance status. It also allows us to assign policies and initiatives.

policy19

This marks the end of this blog post. I hope it was useful for you. If you have any questions feel free to contact me on rebeladm@live.com, and follow me on twitter @rebeladm to get updates about new blog posts.


Step-by-Step guide to configure Azure File Sync (preview)

In one of my previous blog posts I explained what Azure File Share is and how it can be used to replace a traditional on-premises file server. If you have not read it yet, please check it before going further in this post, as this feature depends on Azure File Share. You can access the article at http://www.rebeladmin.com/2018/03/step-step-guide-create-azure-file-share-map-windows-10/

With Azure File Sync we can make an on-premises Windows server act as a cache for your Azure file share. It allows users to access files locally using protocols such as SMB, NFS and FTPS. In this blog post we are going to look into an Azure File Sync implementation.

Before we start the configuration, we need to familiarize ourselves with some terms associated with this feature.

Azure File Sync Agent

This is an agent which we need to install on the on-premises Windows server in order to enable sync with the Azure file share. It includes three components:

1. FileSyncSvc.exe – This is the service responsible for monitoring changes on the local server and initiating sync with the Azure file share.

2. StorageSync.sys – This component is responsible for tiering files to Azure Files. Cloud tiering is an additional feature of Azure File Sync. It can be used for infrequently used files greater than 64 KB. When it is enabled, the local file is replaced with a pointer (URL) to the file in the Azure file share. When a user accesses it, the file is recalled from the Azure file share in the background. End users will not notice any difference, as it all happens in the background.

3. PowerShell cmdlets – These help to manage the Microsoft.StorageSync Azure resource provider using PowerShell commands. The cmdlet files are located at:

C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.PowerShell.Cmdlets.dll

C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll

This agent is only supported on Windows Server 2012 R2 / 2016 Standard and Datacenter editions. It is not supported on the Core installation option either.

Storage Sync Service 

According to Microsoft “The Storage Sync Service is the top-level Azure resource for Azure File Sync. The Storage Sync Service resource is a peer of the storage account resource, and can similarly be deployed to Azure resource groups. A distinct top-level resource from the storage account resource is required because the Storage Sync Service can create sync relationships with multiple storage accounts via multiple sync groups. A subscription can have multiple Storage Sync Service resources deployed.”

Sync group 

A sync group defines the boundaries of a sync job. A sync group includes a cloud endpoint and server endpoints. A Storage Sync Service can have multiple sync groups.

Cloud endpoint

A cloud endpoint represents an Azure file share. One cloud endpoint can only have one file share, which means one Azure file share corresponds to one sync group.

Server endpoint

A server endpoint represents the local server directory which will cache files from the Azure file share. One server can hold multiple server endpoints, but one endpoint can’t be part of multiple sync groups. If it is still added, it will merge with the files belonging to other endpoints in the same sync group.

Registered Server 

A registered server represents the trust relationship between the on-premises server and the Storage Sync Service. It is a one-to-one connection. However, one Storage Sync Service can have many servers registered with it.

Now we know the components and how each one is involved in the sync operation between the Azure file share and the on-premises server. The next step is to get it configured.

Setup Azure File Share

As the first step of the demo I am going to create an Azure file share. The steps for this task are already explained in one of my previous blog posts: http://www.rebeladmin.com/2018/03/step-step-guide-create-azure-file-share-map-windows-10/

The Azure File Sync preview feature is only supported in the Australia East, Canada Central, East US, Southeast Asia, UK South, West Europe and West US regions. Therefore, the Azure file share also needs to be in one of those regions.

For this demo I have created a file share called “rebelshare”. It is associated with the West US region.

async1

Create Storage Sync Service
 
1) Log in to Azure Portal as global administrator
2) Go to New | Create a resource | Azure File Sync (Preview) | Create
 
asyncnew1
 
3) In the new window, type a name for the sync service and select the relevant resource group for it. If required, you can create a new resource group. Once you fill in the info, click on Create.
 
asyncnew2
 
Install Azure File Sync Agent
 
The next step in the configuration is to install the Azure File Sync agent on the on-premises server. In this demo I am using a server running Windows Server 2016 Datacenter edition.
 
Before installing the agent:
 
Log in to the server and disable Internet Explorer Enhanced Security Configuration for administrators and users. This can be re-enabled after installation.
 
async2
 
Verify the PowerShell version the server is running. It needs to be at least version 5.1.
 
Install the Azure PowerShell module – a guide for it is available at https://docs.microsoft.com/powershell/azure/install-azurerm-ps
 
async3
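As a quick reference, both pre-checks can be done from an elevated PowerShell window (a minimal sketch):

$PSVersionTable.PSVersion        # should report 5.1 or later
Install-Module -Name AzureRM     # installs the Azure PowerShell module if it is not already present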
 
Once the above is in place, go and download the File Sync agent from https://www.microsoft.com/en-us/download/details.aspx?id=55988
 
Once the download is completed, double-click it to start the installation. On the initial page, click Next to continue.
 
async4
 
On the next page, accept the license agreement and click on Next.
 
After that, in the next window, we can select the installation path.
 
async5
 
In the next window it asks how you want to update the agent in the future. This can be done using Windows Update.
 
async6
 
In the next window, keep the default settings and click on Install to begin the installation.
 
Once the installation is completed, it opens the Azure File Sync agent wizard. The first step is to register the server. In the window, click on Sign in to start the process.
 
async7
 
Then sign in using your Azure global administrator account. 
 
async8
 
In the next window select the Azure subscription, resource group and Storage Sync Service, and click on Register.
 
async9
 
Then it will ask for a login again; once that is done it will complete the registration process.
 
async10
 
Create Sync Group
 
The next step of the process is to create a sync group. To do that:
 
1) Log in to Azure Portal as global administrator
2) Go to All Services and search for Storage Sync Services
3) On the Storage Sync Services page, click on the Storage Sync Service we created in the earlier step.
 
async11
 
4) In the new window, click on the Sync group icon.
 
async12
 
5) In the next window, define a name for the sync group and select the subscription. Then select the storage account and Azure file share. At the end, click on Create.
 
async13
 
6) Once the group is added, click on the new group.
 
async14
 
7) In the new window, click on the Add server endpoint option.
 
async15
 
8) Then, in the new window, select the registered server from the list and define the folder path for the local cache copy. In my demo I am using the E:\share path. I also enabled the cloud tiering feature. Once the info is in, click on Create.
 
async16
 
9) After the initial sync we can see the same files on both endpoints.
 
async17
async18
 
10) You can also review the status of endpoint sync using Storage Sync Services | Sync_Account | Sync_group.

async19
 
This marks the end of this blog post. If you have any questions feel free to contact me on rebeladm@live.com, and follow me on twitter @rebeladm to get updates about new blog posts.

Active Directory Replication Status Review Using PowerShell

Data replication is crucial for a healthy Active Directory environment. There are different ways to check the status of replication. In this article I am going to explain how you can check the status of domain replication using PowerShell.

For a given domain controller, we can find its inbound replication partners using:

Get-ADReplicationPartnerMetadata -Target REBEL-SRV01.rebeladmin.com

The above command provides a detailed description for the given domain controller, including the last successful replication, replication partition, server etc.

We can list all the inbound replication partners for a given domain using:

Get-ADReplicationPartnerMetadata -Target "rebeladmin.com" -Scope Domain

In the above command the scope is defined as the domain. This can be changed to the forest to get the list of inbound partners in the forest. The output is for the default partition. If needed, the partition can be changed using -Partition to the Configuration or Schema partition. It will then list the relevant inbound partners for the given partition.
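As an example, the following sketch lists the inbound partners for the Configuration partition across the whole forest (the domain name is the demo environment used above):

Get-ADReplicationPartnerMetadata -Target "rebeladmin.com" -Scope Forest -Partition "Configuration"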

The replication failures associated with a site, forest, domain or domain controller can be found using the Get-ADReplicationFailure cmdlet.

Get-ADReplicationFailure -Target REBEL-SRV01.rebeladmin.com

The above command lists the replication failures for the given domain controller.

Replication failures for a domain can be found using:

Get-ADReplicationFailure -Target rebeladmin.com -Scope Domain

Replication failures for the forest can be found using:

Get-ADReplicationFailure -Target rebeladmin.com -Scope Forest

Replication failures for a site can be found using:

Get-ADReplicationFailure -Target LondonSite -Scope Site

In the command, LondonSite can be replaced with the relevant site name.

Using both Get-ADReplicationPartnerMetadata and Get-ADReplicationFailure, the following PowerShell script can produce a report for a specified domain controller.

## Active Directory Domain Controller Replication Status ##
$domaincontroller = Read-Host 'What is your Domain Controller?'
## Define Objects ##
$report = New-Object PSObject -Property @{
    ReplicationPartners = $null
    LastReplication = $null
    FailureCount = $null
    FailureType = $null
    FirstFailure = $null
}
## Replication Partners ##
$report.ReplicationPartners = (Get-ADReplicationPartnerMetadata -Target $domaincontroller).Partner
$report.LastReplication = (Get-ADReplicationPartnerMetadata -Target $domaincontroller).LastReplicationSuccess
## Replication Failures ##
$report.FailureCount = (Get-ADReplicationFailure -Target $domaincontroller).FailureCount
$report.FailureType = (Get-ADReplicationFailure -Target $domaincontroller).FailureType
$report.FirstFailure = (Get-ADReplicationFailure -Target $domaincontroller).FirstFailureTime
## Format Output ##
$report | select ReplicationPartners,LastReplication,FirstFailure,FailureCount,FailureType | Out-GridView

In this script, the engineer is given the option to specify the domain controller name.

$domaincontroller = Read-Host 'What is your Domain Controller?'

Then it creates an object and maps its properties to the results of the PowerShell command output. Last but not least, it displays a report including:

Replication Partner (ReplicationPartners)

Last Successful Replication (LastReplication)

AD Replication Failure Count (FailureCount)

AD Replication Failure Type (FailureType)

AD Replication Failure First Recorded Time (FirstFailure)

repower1

Regarding the Active Directory replication topology, there are two types of replication.

1) Intra-site – Replication between domain controllers in the same Active Directory site

2) Inter-site – Replication between domain controllers in different Active Directory sites

We can review AD replication site objects using the Get-ADReplicationSite cmdlet.

Get-ADReplicationSite -Filter *

The above command returns all the AD replication sites in the AD forest.

repower2

We can review the AD replication site links in the AD forest using:

Get-ADReplicationSiteLink -Filter *

In site links, the most important information is the site cost and the replication schedule. It allows us to understand the replication topology and the expected delays in replication.

Get-ADReplicationSiteLink -Filter {SitesIncluded -eq "CanadaSite"} | Format-Table Name,Cost,ReplicationFrequencyInMinutes -A

The above command lists all the replication site links that include the CanadaSite AD site, along with the site link name, link cost and replication frequency.

A site link bridge can be used to bundle two or more site links and enable transitivity between them.

Site link bridge information can be retrieved using:

Get-ADReplicationSiteLinkBridge -Filter *

Active Directory sites may use multiple IP address segments for their operations. It is important to associate those with the AD site configuration so domain controllers know which computer is related to which site.

Get-ADReplicationSubnet -Filter * | Format-Table Name,Site -A

The above command lists all the subnets in the forest in a table with the subnet name and AD site.

repower3

Bridgehead servers operate as the primary communication points that handle the replication data coming in and going out of an AD site.

We can list all the preferred bridgehead servers in a domain using:

$BHservers = ([adsi]"LDAP://CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=rebeladmin,DC=com").bridgeheadServerListBL

$BHservers | Out-GridView

In the above command, the attribute value bridgeheadServerListBL is retrieved via an ADSI connection.

We can gather all of these findings using one script.

## Script to gather information about Replication Topology ##
## Define Objects ##
$replreport = New-Object PSObject -Property @{
    Domain = $null
}
## Find Domain Information ##
$replreport.Domain = (Get-ADDomain).DNSroot
## List down the AD sites in the Domain ##
$a = (Get-ADReplicationSite -Filter *)
Write-Host "########" $replreport.Domain "Domain AD Sites" "########"
$a | Format-Table Description,Name -AutoSize
## List down Replication Site link Information ##
$b = (Get-ADReplicationSiteLink -Filter *)
Write-Host "########" $replreport.Domain "Domain AD Replication SiteLink Information" "########"
$b | Format-Table Name,Cost,ReplicationFrequencyInMinutes -AutoSize
## List down SiteLink Bridge Information ##
$c = (Get-ADReplicationSiteLinkBridge -Filter *)
Write-Host "########" $replreport.Domain "Domain AD SiteLink Bridge Information" "########"
$c | select Name,SiteLinksIncluded | Format-List
## List down Subnet Information ##
$d = (Get-ADReplicationSubnet -Filter * | select Name,Site)
Write-Host "########" $replreport.Domain "Domain Subnet Information" "########"
$d | Format-Table Name,Site -AutoSize
## List down Prefered BridgeHead Servers ##
$e = ([adsi]"LDAP://CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=rebeladmin,DC=com").bridgeheadServerListBL
Write-Host "########" $replreport.Domain "Domain Prefered BridgeHead Servers" "########"
$e
## End of the Script ##

The only thing we need to change is the ADSI connection, using the relevant domain DN.

$e = ([adsi]"LDAP://CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=rebeladmin,DC=com")

This marks the end of this blog post. If you have any questions feel free to contact me on rebeladm@live.com, and follow me on twitter @rebeladm to get updates about new blog posts.


Manage Active Directory Permissions with Delegate Control method

In one of my previous posts I explained how we can manage AD administration privileges using ACLs. If you haven’t read it yet, you can find it at http://www.rebeladmin.com/2018/02/step-step-guide-manage-active-directory-permissions-using-object-acls/

This Delegate Control method works similarly to ACLs, but it simplifies the process because it uses:

The Delegation of Control Wizard, which can be used to apply delegated permissions

Predefined tasks that permissions can be assigned to

The wizard contains the following predefined tasks, which can be used to assign permissions:

Create, delete, and manage user accounts

Reset user passwords and force password change at next logon

Read all user information

Create, delete and manage groups

Modify the membership of a group

Manage Group Policy links

Generate Resultant Set of Policy (Planning)

Generate Resultant Set of Policy (Logging)

Create, delete, and manage inetOrgPerson accounts

Reset inetOrgPerson passwords and force password change at next logon

Read all inetOrgPerson information

It also allows you to create a custom task to delegate permissions if what you need is not covered by the common task list.

Similar to ACLs, permissions can be applied at:

1) Site – The delegated permission will be valid for all the objects under the given Active Directory site.

2) Domain – The delegated permission will be valid for all the objects under the given Active Directory domain.

3) OU – The delegated permission will be valid for all the objects under the given Active Directory OU.

As an example, I have a security group called Second Line Engineers, and Scott is a member of it. I would like to allow members of this group to reset passwords for objects in OU=Users,OU=Europe,DC=rebeladmin,DC=com and nothing else.

1) Log in to Domain Controller as Domain Admin/Enterprise Admin

2) Review the group membership using:

Get-ADGroupMember "Second Line Engineers"

dele1

3) Go to ADUC, right-click on the Europe OU, then from the list click on "Delegate Control"

4) This will open a new wizard; on the initial page click Next to proceed.

5) On the next page, click on the Add button and add the Second Line Engineers group to it. Then click Next to proceed.

dele2

6) In the Tasks to Delegate window, select the Delegate the following common tasks option and from the list select Reset user passwords and force password change at next logon. On this page we can select multiple tasks. If none of them fits, we can still create a custom task to delegate. Once the selection is complete, click Next to proceed.

dele3

7) This completes the wizard; click on Finish to close it.

8) Now it’s time for testing. I logged in to a Windows 10 computer which has the RSAT tools installed as the user Scott.

According to the permissions, I should be able to reset the password of an object under OU=Users,OU=Europe,DC=rebeladmin,DC=com

Set-ADAccountPassword -Identity dfrancis

This allows the password to be changed successfully.

dele4
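The same delegated reset can also be done non-interactively; the password value below is purely illustrative, so treat this as a sketch only:

Set-ADAccountPassword -Identity dfrancis -Reset -NewPassword (ConvertTo-SecureString "N3wP@ssw0rd!" -AsPlainText -Force)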

However, it should not allow me to delete any objects. We can test that using:

Remove-ADUser -Identity "CN=Dishan Francis,OU=Users,OU=Europe,DC=rebeladmin,DC=com"

And as expected, it returns an access denied error.

dele5

This marks the end of this blog post. If you have any questions feel free to contact me on rebeladm@live.com, and follow me on twitter @rebeladm to get updates about new blog posts.


Integrity check to Detect Low Level Active Directory Database Corruption

Active Directory maintains a multi-master database. Like any other database, it can suffer data corruption, crashes, data loss etc. In my entire career I have still not come across a situation where a full database recovery was required in a production environment. The reason is that the AD DS database keeps replicating to the other available domain controllers, and it is very rare that all the available domain controllers crash at the same time and lose data.

By running an integrity check, we can identify binary-level AD database corruption. This check comes as part of the Ntdsutil tool, which is used for Active Directory database maintenance. It goes through every byte of the database file. The integrity command also checks whether correct headers exist in the database itself and whether all of the tables are functioning and consistent. This process also runs as part of Directory Services Restore Mode (DSRM).

This check needs to run with the NTDS service stopped.

In order to run the integrity check:

1) Log in to Domain Controller as Domain/Enterprise Administrator
2) Open PowerShell as Administrator
3) Stop NTDS service using net stop ntds
4) Type 
 
ntdsutil
activate instance ntds
files
integrity
 
ntds1
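The same sequence can also be passed to ntdsutil as a single line, which is handy for scripting. This is a sketch and assumes the NTDS service has already been stopped:

ntdsutil "activate instance ntds" files integrity quit quit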
 
5) In order to exit from the utility, type quit.
6) It is also recommended to run a semantic database analysis to confirm the consistency of the Active Directory database contents.
7) In order to do that, type:
 
ntdsutil
activate instance ntds
semantic database analysis
go
 
ntds2
 
8) If it detects any integrity issues, you can type go fixup to fix the errors.
9) After the process is completed, type net start ntds to start the NTDS service.
 
This marks the end of this blog post. If you have any questions feel free to contact me on rebeladm@live.com, and follow me on twitter @rebeladm to get updates about new blog posts.

Step-by-Step guide to create Azure file share and Map it in Windows 10

Azure Files is a managed, cloud-based file share that can be accessed via the SMB protocol. Once you create an Azure file share, it can be accessed from anywhere using Windows, Linux or macOS. It can also be mapped as a shared drive on the system.

Azure Files has the following benefits:

Simple – Easy to set up and easy to manage. It can also be used with Azure Backup and Azure File Sync. It has everything needed to be used as a replacement for an on-premises file server.

Future proof – When people move on-premises workloads to Azure, sometimes applications need access to file shares. Azure Files allows you to facilitate that requirement easily. Also, if you are maintaining on-premises file servers, when Windows versions change, you need to upgrade those as well. Azure Files is a fully managed service, which means you do not need to worry about versions.

Reliable – High availability of an on-premises file share depends on many things such as power, file sync between servers, bandwidth etc. With Azure Files you do not need to worry about that, as it was designed and is operated as a highly available service. You do not need to worry about keeping servers in different geographical locations in sync either.

Integration – Azure Files uses the industry-standard SMB protocol. It can be managed using the Azure CLI, PowerShell, file system I/O APIs, the Azure Storage client libraries and the Azure Storage REST API. Therefore, it allows developers to integrate it with existing or new systems easily.

Let’s see how we can create an Azure file share and map it on a Windows 10 PC.

In my demo I am going to use PowerShell for the setup. This is also fully supported via the Azure Portal.

Setup Storage Account

1) Log in to Azure Portal using Global Admin Account

2) Click on Cloud Shell in the top right-hand corner

file1

3) Make sure the PowerShell console is loaded. The same thing can be done by connecting to Azure directly using the Azure PowerShell module: https://docs.microsoft.com/en-us/powershell/azure/install-azurerm-ps?view=azurermps-5.4.0

file2

4) Before creating the storage account, I need to find info about the resource group that I am going to use. To do that, run Get-AzureRmResourceGroup; it will list the group details along with the location.

file3

5) Once we retrieve the info, we can create a new storage account using:

New-AzureRmStorageAccount -ResourceGroupName therebeladmin `
  -Name rebelsa1 `
  -Location northcentralus `
  -SkuName Standard_LRS

In the above, -ResourceGroupName specifies the resource group that the storage account will belong to. -Name defines the name of the storage account. -Location defines the location of the storage account. -SkuName defines the storage type:

Standard_LRS – Locally-redundant storage.

Standard_ZRS – Zone-redundant storage.

Standard_GRS – Geo-redundant storage.

Standard_RAGRS – Read access geo-redundant storage.

Premium_LRS – Premium locally-redundant storage.

file4

Setup Azure File Share

1) Now we have a storage account. Before we create the share, we need to find out the storage access key for the account. To do that we can use:

Get-AzureRmStorageAccountKey -ResourceGroupName "therebeladmin" -AccountName "rebelsa1"

file5

2) Now we can create a file share called "rebelshare" using:

$SAContext = New-AzureStorageContext "rebelsa1" "<storage key>"
New-AzureStorageShare rebelshare -Context $SAContext

In the above, rebelsa1 is the storage account name and <storage key> needs to be replaced with the storage account key found in the previous step.

file6

Here it uses the default quota, which is 5 TB.

Map it to Windows 10 

To map the share on a Windows PC, we can use the following command:

net use R: \\rebelsa1.file.core.windows.net\rebelshare <storage key> /user:Azure\rebelsa1

In the above, it will map the Azure file share we created as the R: drive. <storage key> needs to be replaced with the Azure storage key.

file7

In the above, I successfully mapped the share and copied a file from my local C: drive.
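To make the mapping survive reboots without re-entering the key, the credentials can be stored first and the drive mapped as persistent. This is a sketch; <storage key> needs to be replaced with your own key:

cmdkey /add:rebelsa1.file.core.windows.net /user:Azure\rebelsa1 /pass:<storage key>
New-PSDrive -Name R -PSProvider FileSystem -Root "\\rebelsa1.file.core.windows.net\rebelshare" -Persist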

Note – In order to map this share, you need connectivity to Azure over the SMB port (TCP 445). If your firewall blocks it, you will not be able to map the drive. This is a bit of an issue if you are using the mapped drive on most public Wi-Fi networks. However, you can still access the share using the portal.

This marks the end of this blog post. If you have any questions feel free to contact me on rebeladm@live.com, and follow me on twitter @rebeladm to get updates about new blog posts.


Active Directory Right Management Service (AD RMS) – Part 04 – AD RMS Configuration

So far in this series we have learned what RMS is and how it works. You can access the earlier parts using:

Part 01 – What is AD RMS?

Part 02 – AD RMS Components

Part 03 – How AD RMS Works?

This is the last part of the series, and here I am going to demonstrate how to install and configure AD RMS.

Setup AD RMS Root Cluster

AD RMS can only be installed on a domain member server. I have a demo server set up and it is already a member server of the domain. The first AD RMS server added to the forest creates the AD RMS cluster.

Install AD RMS Role

1) Log in to the server as Enterprise Administrator. 

2) Install the AD RMS role and related management tools using, 

Install-WindowsFeature ADRMS -IncludeManagementTools

rms04-1

Configure AD RMS Role

1) Launch Server Manager > Notifications > under “Configuration required for Active Directory Rights Management Services” > Perform Additional Configuration. This will open the AD RMS Configuration Wizard. Click Next to start the configuration.

rms04-2

2) On the next screen, it gives the option to create a new AD RMS root cluster or join an existing AD RMS cluster. Since this is a new cluster, select the option Create a new AD RMS root cluster and click Next.

3) The next screen is to define the AD RMS database configuration. If it is going to use MS SQL Server, you need to specify the database server and the instance. Otherwise it can use the Windows Internal Database (WID). Please note that if WID is used, the cluster cannot have any more AD RMS servers and cannot have the AD RMS mobile extension either. Since this is a demo, I am going to use WID. Once the selection is made, click Next to move to the next step.

rms04-3

4) In the next window, we need to define the service account. It is used to communicate with other services and computers. This doesn’t need to have Domain or Enterprise Admin rights. Click on Specify and provide the username and password for the account. Then click Next to proceed to the next window.

rms04-4

5) In the next window, we need to select the cryptographic mode. This defines the strength of the hashes. It supports two modes: SHA-1 and SHA-256. It is highly recommended to use Mode 2, which is SHA-256, for stronger hashing. However, this needs to match the other RMS clusters it deals with. In our setup, I am going to use the default SHA-256. Once the selection is made, click Next to proceed.

rms04-5

6) AD RMS uses the cluster key to sign the certificates and licenses it issues. This is also required when AD RMS is restored or when a new AD RMS server is added to the same cluster. It can be saved in two places. The default method is to use AD RMS centrally managed key storage, so it doesn’t need any additional configuration. It also supports using a cryptographic service provider (CSP) as storage, but this requires manual distribution of the key when adding another AD RMS server to the cluster. Here we will use the option “Use AD RMS centrally managed key storage”. Once the selection is made, click Next to proceed.

7) AD RMS also uses a password to encrypt the cluster key described above. This needs to be provided when adding another AD RMS server to the cluster or when restoring AD RMS from backup. This key cannot be reset, therefore it is recommended to keep it recorded in a secure place. Once you define the AD RMS cluster key password, click Next to proceed.

8) In the next window, we need to define the IIS virtual directory for the AD RMS web site. Unless there is a specific requirement, always use the default and click Next.

rms04-6

9) In the next step, we need to define the AD RMS cluster URL. This will be used by AD RMS clients to communicate with the AD RMS cluster. It is highly recommended to use SSL for this, even though it is possible to use the HTTP-only method. The related DNS records and firewall rules need to be adjusted in order to provide connectivity between AD RMS clients and this URL (internally or externally). Once the configuration values are provided, click Next to proceed. One thing to note is that once this URL is specified, it cannot be changed. In this demo, the RMS URL is https://rms.rebeladmin.com.

rms04-7

10) In the next step, we need to define the server authentication certificate. This certificate will be used to encrypt the network traffic between RMS clients and the AD RMS cluster. For testing it can use a self-signed certificate, but that is not recommended for production. If it uses an internal CA, client computers need to trust the root certificate. The wizard automatically lists the SSL certificates installed on the computer, and we can select the certificate from there. It is also possible to configure this setting at a later time. Once the settings are defined, click Next to proceed.

rms04-8

11) In the next window, it asks you to provide a name for the Server Licensor Certificate (SLC). This certificate defines the identity of the AD RMS cluster and it is used in the data protection process between clients to encrypt/decrypt symmetric keys. Once you define a meaningful name, click Next to proceed.

12) The last step of the configuration is to register the AD RMS Service Connection Point (SCP) with AD DS. If needed, this can be configured later too. This requires Enterprise Administrator privileges to register it with AD DS. In this demo I am already logged in as an Enterprise Administrator, so I am using “Register the SCP now”. Once the option is selected, click Next.

rms04-9

13) After the confirmation, the installation will begin; wait for the result. If it is all successful, log off and log back in to the AD RMS server.

14) Once logged back in, go to Server Manager > Tools > Active Directory Rights Management Services to access the AD RMS cluster.

rms04-10

Test Protecting Data using AD RMS Cluster

The next step of the demo is to test the AD RMS cluster by protecting data. For that I am using two user accounts.

User – Email Address – Role

Peter – peter@rebeladmin.com – Author

Adam – adam@rebeladmin.com – Recipient

The email address field is a must; if a user doesn’t have an email address defined, they will not be allowed to protect documents.

The end-user computers must have https://rms.rebeladmin.com added to Internet Explorer’s Local intranet trusted sites list. This can be done via GPO. If it is not added, when users go to protect a document they will get the following error:

rms04-11
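For a quick lab workaround, the URL can be added to the Local intranet zone for the current user with the sketch below. It uses the standard ZoneMap registry layout (1 = Local intranet zone); in production this would normally be delivered via the “Site to Zone Assignment List” GPO setting rather than per-user registry edits.

# Add https://rms.rebeladmin.com to the Local intranet zone for the current user
$key = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\rebeladmin.com\rms"
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name "https" -Value 1 -PropertyType DWord -Force | Out-Null   # 1 = Local intranet zone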

In this demo, the user Peter is going to create a protected document using Word 2013. The only recipient will be the user Adam, and he will only have read permission.

To Protect the Document

1) Log in to the Windows 10 (Domain member) computer as user Peter

2) Open Word 2013 and type some text

3) Then go to File > Protect Document > Restrict Access > Connect to Digital Rights Management Servers and get templates

rms04-12

4) Once it successfully retrieves the templates, go back to the same option and select Restricted Access.

rms04-13

5) Then it will open up a new window. There, for the read permissions, type adam@rebeladmin.com to give read-only permission to the user Adam. Then click OK.

rms04-14

6) After that, save the document. In the demo, I used a network share which the user Adam also has access to.

7) Now I log in to another Windows 10 computer as the user Adam.

8) Then browse to the path where the document was saved and open it using Word 2013.

9) During the opening process, it asks to authenticate to RMS to retrieve the licenses. After that it opens the document. At the top of the document it says the document has limited access. Clicking “View Permission” lists the allowed permissions, and they match what we set on the author side.

rms04-15

10) Going further into testing, I logged in to the system as another user (Liam), and when I access the file I get:

rms04-16

This ends the configuration and testing of the AD RMS cluster. In this demo, I explained how we can set up an AD RMS cluster with minimal resources and configuration. I only used the default configuration of the AD RMS cluster and no custom policies were applied. Understanding the core functions allows you to customize it to meet your organization’s requirements.

This marks the end of this blog post. If you have any questions feel free to contact me on rebeladm@live.com, and follow me on twitter @rebeladm to get updates about new blog posts.
