Azure AD now supports macOS Conditional Access – let's see it in action!

Azure AD conditional access policies allow us to provide condition-based access to cloud workloads. 

In one of my previous blog posts I explained in detail what a conditional access policy is and how to configure one. You can find it at http://www.rebeladmin.com/2017/07/conditional-access-policies-azure-active-directory/ . I highly recommend reading it before continuing with this post. 

In a conditional access policy, there are two main sections.

Assignments – This is where we define the conditions that apply to the user environment, such as users and groups, applications, device platforms, login locations, etc.

Access controls – This controls access for the users and groups that comply with the conditions specified in the "Assignments" section. Access can either be allowed or blocked. 

Under the Assignments section we can define the device platforms involved in the condition. When I wrote my previous post, only the following platforms were supported:

• Android

• iOS

• Windows Phone

• Windows

On November 14th, 2017, Azure AD added macOS to the list. With this update, the following OS versions, applications, and browsers are supported on macOS for conditional access:

Operating systems

• macOS 10.11+

Applications

• Microsoft Office 2016 for macOS v15.34 and later

• Microsoft Teams

• Web applications (via Application Proxy)

Browsers

• Safari

• Chrome

The original documentation didn't say anything about web apps, but in this demo I am going to use conditional access with an on-premises web app which is published to the internet using Azure Application Proxy. I wrote an article about Application Proxy a while ago, and it can be accessed via http://www.rebeladmin.com/2017/06/azure-active-directory-application-proxy-part-02/ 

Before starting the configuration, let me explain a little bit about my environment. I have an on-premises domain environment, therebeladmin.com. I integrated it with Azure AD Premium and I have a healthy sync. I have an on-premises web app, and I have published it to the internet using Azure Application Proxy so I can use Azure AD authentication with it. The web app can be accessed via https://webapp-myrebeladmin.msappproxy.net/webapp/ 

I have a Mac running Sierra. In this demo, I am going to set up a conditional access policy to block access to the web app if the request comes from a macOS environment. 

mac01

In order to configure this: 

1) Log in to Azure as a global administrator.
2) Click on Azure Active Directory in the left menu.
 
mac2
 
3) Then, in the Azure Active Directory panel, click on Conditional access under the Security section. 
 
mac3
 
4) It will load up the conditional access window. Click on + New policy to create a new policy. 
 
mac4
 
5) It will open up the policy window, where we can define the policy settings. First things first, provide a name for the policy. In my case I will use "Block access from macOS".
 
mac5
 
6) Then click on Users and groups to define the target users for the policy. In this demo, I am going to target All users. Once the selection is done, click on Done.
 
mac6
 
7) Then click on Cloud apps to select the application for the policy. In my policy, I am going to target rebelwebapp. Once the selection is done, click on Select and then Done to complete the process. 
 
mac7
 
8) The next step is to define the conditions. In order to do that, click on the Conditions option. Here I am only worried about device platforms. To select platforms, click on the Device platforms option. Then, to enable the condition, click on Yes under Configure, and under the Include tab select macOS. After that, click on Done in both windows to complete the process. 
 
mac8
 
9) The next step is to define the access control rules. To do that, click on Grant under the Access controls section. In my demo, I am going to block access to the app, so I am selecting the Block access option. Once the selection is done, click on Select to complete the action. 
 
mac9
 
10) Now the policy is ready. To enable it, click the On tab under the Enable policy option. 
 
mac10
 
11) Then, to create the policy, click on the Create button. 
 
mac11
 
12) Now the policy is ready, and the next step is to test it. In order to do that, I am browsing to the web app URL from the Mac. As soon as I access the URL, it asks for a login.
 
mac12
 
13) As soon as I type the username and password, I get the following response saying access is not allowed. 
 
mac13
 
14) If we click on More details, it gives more info about the error. As expected, it was due to the conditional access policy we set up. Nice, huh!
 
mac14
 
So, as expected, conditional access with macOS is working fine. This is another good step forward. Well done, Microsoft! This marks the end of this blog post. If you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.  

Azure AD Password Synchronization

Azure AD Connect allows engineers to sync on-premises AD data to Azure AD. If you use express settings for the Azure AD Connect setup, it enables password synchronization by default. This allows users to use the same Active Directory password to authenticate to cloud-based workloads, so they keep a single set of login details without maintaining different passwords. It simplifies the user's login experience and reduces helpdesk involvement. 

Windows Active Directory stores passwords as hash values generated by a hash algorithm. A password is not saved as clear text, and it is impossible to revert the hash back to the clear text password. There is a misunderstanding about this, as some people think Azure AD password sync uses clear text passwords. At 2-minute intervals, the Azure AD Connect server retrieves password hashes from the on-premises AD and syncs them to Azure AD on a per-user basis in chronological order. The process also involves encryption and decryption to add extra security to the password sync. When a password is changed, it syncs to Azure AD in the next password sync interval; in a healthy environment, the maximum delay for a password update is 2 minutes. 

If the password was changed while the user had an open session, it will affect the next Azure authentication attempt; it will not log the user out of the existing session. Also, password synchronization doesn't mean SSO. Users always have to use their corporate login details to authenticate to Azure services. You can find more information about SSO at https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect-sso 

Enable synchronization of NTLM and Kerberos credential hashes to Azure AD

However, Azure AD Connect does not synchronize NTLM and Kerberos credential hashes to Azure AD by default. So, if you already had an Azure AD directory set up and only enabled Azure AD Domain Services recently, make sure you check the following:

pass1
1. If there is an existing Azure AD Connect server, upgrade Azure AD Connect to the latest version.
2. If there is an existing Azure AD Connect server, confirm that password synchronization is enabled in Azure AD Connect. 
 
In order to do that, open Azure AD Connect, select the "View current configuration" option, and check whether password synchronization is enabled. 
 
pass2
 
If it's not, go back to the initial page, select the "Customize synchronization options" option, and under optional features select password synchronization.
 
pass3
 
Run the following PowerShell script on the Azure AD Connect server to force a full password synchronization and enable all on-premises users' credential hashes to sync to Azure AD. 

$adConnector = "<CASE SENSITIVE AD CONNECTOR NAME>"
$azureadConnector = "<CASE SENSITIVE AZURE AD CONNECTOR NAME>"
Import-Module "C:\Program Files\Microsoft Azure AD Sync\Bin\ADSync\ADSync.psd1"
$c = Get-ADSyncConnector -Name $adConnector
# Set the ForceFullPasswordSync configuration parameter on the AD connector
$p = New-Object Microsoft.IdentityManagement.PowerShell.ObjectModel.ConfigurationParameter "Microsoft.Synchronize.ForceFullPasswordSync", String, ConnectorGlobal, $null, $null, $null
$p.Value = 1
$c.GlobalParameters.Remove($p.Name)
$c.GlobalParameters.Add($p)
$c = Add-ADSyncConnector -Connector $c
# Disable and re-enable password sync to trigger the full synchronization
Set-ADSyncAADPasswordSyncConfiguration -SourceConnector $adConnector -TargetConnector $azureadConnector -Enable $false
Set-ADSyncAADPasswordSyncConfiguration -SourceConnector $adConnector -TargetConnector $azureadConnector -Enable $true
 
You can find the AD connector and Azure AD connector names via Start > Synchronization Service > Connections.
 
pass4
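If you prefer PowerShell, the same connector names can also be listed with the ADSync module that ships with Azure AD Connect; a minimal sketch (run on the Azure AD Connect server):
 
# List the connector names as they appear in Synchronization Service Manager
Import-Module "C:\Program Files\Microsoft Azure AD Sync\Bin\ADSync\ADSync.psd1"
Get-ADSyncConnector | Select-Object -ExpandProperty Name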
 
After that, you can try to log in to Azure as a user from the on-premises AD. If sync is working properly, it should accept your corporate login. 
 
This marks the end of this blog post. If you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.  


Step-by-Step guide to create custom Active Directory Attributes

The Active Directory schema allows us to add custom attributes. There are situations in organizations where this option is useful, most of the time related to application integration requirements with the Active Directory infrastructure. In modern infrastructures, applications are decentralizing identity management. An organization's identities can sit in Active Directory as well as in applications; some may be in in-house infrastructures, and some may even be in the public cloud. If these applications are integrated with Active Directory, it still provides central identity management, but that is not always the case. Some applications have their own way of handling user accounts and privileges. Similar to Active Directory attributes, these applications can also have their own attributes, defined by their database systems, to store the data. Most of the time, these application attributes will not match the attributes in Active Directory. As an example, an HR system uses an employee ID to uniquely identify an employee record, while Active Directory uses the username to identify a unique record. Each system's attributes hold some data about the objects, even if they refer to the same user or device. If another application needs to retrieve data from both systems' attributes, how can we facilitate that without data duplication?

Once, a customer was talking to me about a similar requirement. They have an Active Directory infrastructure in place. They also maintain an HR system which is not integrated with Active Directory. They got a new requirement for an employee collaboration application which requires data input in a specific way: it has defined its fields in the database, and we need to match the data in that order. Some of the required user data can be retrieved from Active Directory, and some from the HR system. Instead of keeping two data feeds to the system, we decided to treat Active Directory as the authoritative data source for the new system. If Active Directory needs to hold all the required data, it somehow needs to store the data coming from the HR system as well. The final solution was to add custom attributes to the Active Directory schema and associate them with the user class. Instead of both systems operating as data feeds, the HR system now passes the filtered values to Active Directory, which exports all the required data in CSV format to the application.  

In order to create custom attributes, open the Active Directory schema snap-in, right-click on the Attributes container, and select Create Attribute.

Tip – In order to open the Active Directory schema snap-in, you first need to run the command regsvr32 schmmgmt.dll on the domain controller. After that, you can open MMC and add Active Directory Schema as a snap-in. 

The system will then give a warning about schema object creation; click OK to continue. 

It will open up a form, and this is where we need to define the details of the custom attribute. 

1) Common Name – This is the name of the object. Only letters, numbers, and hyphens are allowed in the CN. 

2) LDAP Display Name – When the object is referred to in a script, program, or command-line utility, it needs to be called using the LDAP display name instead of the common name. When you define the CN, the LDAP display name is created automatically. 

3) X500 Object ID – Each and every attribute in the Active Directory schema has a unique OID value. There is a script developed by Microsoft to generate these unique OID values. It can be found at https://gallery.technet.microsoft.com/scriptcenter/Generate-an-Object-4c9be66a#content and it can also be run directly using the following PowerShell commands. 

 

#---
# Generates a unique OID under the prefix reserved for this purpose
$Prefix="1.2.840.113556.1.8000.2554"
$GUID=[System.Guid]::NewGuid().ToString()
$Parts=@()
$Parts+=[UInt64]::Parse($GUID.SubString(0,4),"AllowHexSpecifier")
$Parts+=[UInt64]::Parse($GUID.SubString(4,4),"AllowHexSpecifier")
$Parts+=[UInt64]::Parse($GUID.SubString(9,4),"AllowHexSpecifier")
$Parts+=[UInt64]::Parse($GUID.SubString(14,4),"AllowHexSpecifier")
$Parts+=[UInt64]::Parse($GUID.SubString(19,4),"AllowHexSpecifier")
$Parts+=[UInt64]::Parse($GUID.SubString(24,6),"AllowHexSpecifier")
$Parts+=[UInt64]::Parse($GUID.SubString(30,6),"AllowHexSpecifier")
$OID=[String]::Format("{0}.{1}.{2}.{3}.{4}.{5}.{6}.{7}",$Prefix,$Parts[0],$Parts[1],$Parts[2],$Parts[3],$Parts[4],$Parts[5],$Parts[6])
$OID
#---

 

4) Syntax – This defines the storage representation for the object. Only syntaxes defined by Microsoft are allowed, and one attribute can only be associated with one syntax. Below, I have listed a few commonly used syntaxes. 

 

• Boolean – True or false

• Unicode String – A large string

• Numeric String – A string of digits

• Integer – A 32-bit numeric value

• Large Integer – A 64-bit numeric value

• SID – A security identifier value

• Distinguished Name – A string value that uniquely identifies an object in AD

Along with the syntax, we can also define minimum and maximum values. If they are not defined, the attribute takes the default values. 

In the following demo, I am going to add a new attribute called NI-Number and add it to the user class.

attri1

As the next step, we need to add it to the user class. In order to do that, go to the Classes container, double-click on the user class, and click on the Attributes tab. There, by clicking the Add button, you can browse and select the newly added attribute from the list. 

attri2
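For reference, both steps can also be scripted. The following is a minimal sketch, assuming it is run with Schema Admin rights against the schema master; the OID below is a made-up placeholder, so replace it with one generated by the script above.

# Sketch – create a single-valued Unicode String attribute (attributeSyntax 2.5.5.12, oMSyntax 64)
Import-Module ActiveDirectory
$schemaPath = (Get-ADRootDSE).schemaNamingContext
New-ADObject -Name "NI-Number" -Type attributeSchema -Path $schemaPath -OtherAttributes @{
    lDAPDisplayName = "nINumber"
    attributeId     = "1.2.840.113556.1.8000.2554.0.0.0.0.0.0.0"  # hypothetical OID, generate your own
    attributeSyntax = "2.5.5.12"
    oMSyntax        = 64
    isSingleValued  = $true
}
# Add the new attribute to the user class as an optional (mayContain) attribute
$userClass = Get-ADObject -SearchBase $schemaPath -Filter { lDAPDisplayName -eq "user" }
$userClass | Set-ADObject -Add @{ mayContain = "nINumber" }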

Now, when we open a user account, we can see the new attribute and add data to it. 

attri3

Once the data has been added, we can filter the information as required. 

Get-ADUser "tuser4" -Properties nINumber | ft nINumber

attri4
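The attribute can be populated with PowerShell as well; for example (the NI number here is just a made-up value):

Set-ADUser "tuser4" -Add @{nINumber="AB123456C"}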

Note – To add attributes to the schema, you need Schema Admin or Enterprise Admin privileges. 

This marks the end of this blog post. If you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.  


Review Active Directory Domain Service Events with PowerShell

There are different ways to review Active Directory service-related logs on a domain controller. The most common way is to review events in the Event Viewer MMC. 

event1

We can review events using Server Manager too. 

event2

We can also use PowerShell commands to review event logs or filter events from local and remote computers without any additional service configuration. Get-EventLog is the primary cmdlet we can use for this task. 

Get-EventLog -List

The above command lists details about the log files on your local system, including the log file name, maximum log file size, and number of entries.

Get-EventLog -LogName 'Directory Service' | fl

The above command lists all the events in the Directory Service log file.

We can also limit the number of events listed. As an example, if we only need the latest 5 events from the Directory Service log file, we can use:

Get-EventLog -Newest 5 -LogName 'Directory Service'

We can filter further by listing events according to entry type: 

Get-EventLog -Newest 5 -LogName 'Directory Service' -EntryType Error

The above command lists the latest five errors in the Directory Service log file.

We can also add a time limit to filter events further: 

Get-EventLog -Newest 5 -LogName 'Directory Service' -EntryType Error -After (Get-Date).AddDays(-1)

The above command lists errors logged in the Directory Service log within the last 24 hours.

We can also get events from remote computers: 

Get-EventLog -Newest 5 -LogName 'Directory Service' -ComputerName 'REBEL-SRV01' | fl -Property *

The above command lists the latest five log entries in the Directory Service log file from the remote computer REBEL-SRV01. 

event3

We can also extract events from several computers at the same time: 

Get-EventLog -Newest 5 -LogName 'Directory Service' -ComputerName "localhost","REBEL-SRV01"

The above command lists the log entries from the local computer and the remote computer REBEL-SRV01. 

When it comes to filtering, we can further filter events using the event source: 

Get-EventLog -LogName 'Directory Service' -Source "NTDS KCC"

The above command lists the events with the source NTDS KCC.

We can also search for specific event IDs: 

Get-EventLog -LogName 'Directory Service' | where {$_.EventID -eq 1000}

The above command lists the events with event ID 1000. 
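Putting these filters together, here is a small sketch that collects the last week's Directory Service errors from several domain controllers and exports them to a CSV file for review (the computer names and output path are just examples):

# Collect Directory Service errors from the last 7 days across several DCs
$dcs = "localhost","REBEL-SRV01"
$events = foreach ($dc in $dcs) {
    Get-EventLog -LogName 'Directory Service' -EntryType Error -After (Get-Date).AddDays(-7) -ComputerName $dc |
        Select-Object MachineName, TimeGenerated, EventID, Source, Message
}
$events | Export-Csv C:\Reports\ds-errors.csv -NoTypeInformation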

Note – There is a recommended list of events which should be audited periodically to identify potential issues in an Active Directory environment. The complete list is available for review at https://docs.microsoft.com/en-gb/windows-server/identity/ad-ds/plan/appendix-l–events-to-monitor

This marks the end of this blog post. If you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.  


Active Directory Health Monitoring with OMS (Operation Management Suite)

System Center Operations Manager (SCOM) is the Microsoft solution for monitoring application and system health in detail, and it applies to Active Directory monitoring as well. Using the relevant management packs, it can monitor the health of Active Directory services and their activities. Microsoft introduced Operations Management Suite (OMS) to bring monitoring to the next level with advanced analytics technologies. SCOM was more about monitoring applications, services, and devices running on-premises, but OMS works with on-premises, cloud-only, or hybrid cloud environments. 

OMS Benefits 

Minimal configuration and maintenance – If you have worked with SCOM before, you may know how many different components need to be configured, such as management servers, SQL servers, gateway servers, a certificate authority, etc. With OMS, all we need is a subscription and the initial configuration of the monitoring agents or gateway. No more complex maintenance routines either. 

Scalable – The latest figures from Microsoft show OMS is already used by more than 50k customers, with more than 20 PB of data collected and more than 188 million queries run in a week. With a cloud-based solution, we no longer need to worry about resources when expanding. The subscription is based on the features and the amount of data you upload; you do not need to pay for the compute power. I am sure Microsoft is nowhere near running out of resources! 

Integration with SCOM – OMS is fully supported to integrate with SCOM. It allows engineers to specify which systems and data should be analyzed by OMS. It also allows a smooth, staged migration from SCOM to OMS. In an integrated environment, SCOM works similarly to a gateway, and OMS runs its queries through SCOM. OMS and SCOM both use the same monitoring agent (Microsoft Monitoring Agent), and therefore client-side configuration is minimal. 

Note – Some OMS components, such as Network Performance Monitoring, WireData 2.0, and Service Map, require additional agent files, system changes, and a direct connection to OMS. 

Frequent feature updates – Microsoft releases a new System Center version roughly every four years, but OMS updates and new services arrive much more often. This allows Microsoft to address industry requirements quickly. 

OMS in Hybrid Environment 

In a hybrid environment, we can integrate on-premises systems with OMS using three methods: 

Microsoft Monitoring Agent – The monitoring agent is installed on each and every system, and it connects directly to OMS to upload data and run queries. Every system needs a connection to OMS via port 443. 

SCOM – If you already have SCOM installed and configured in your infrastructure, OMS can integrate with it. The data upload to OMS is done from the SCOM management servers, and OMS runs its queries against the systems via SCOM. However, some OMS features still need a direct connection to the system to collect specific data. 

OMS gateway – OMS now supports collecting data and running queries via its own gateway, which works similarly to SCOM gateways. The systems do not all need a direct connection to OMS; the OMS gateway collects and uploads the relevant data from its infrastructure. 

What is in there for AD Monitoring? 

In a SCOM environment, we can monitor Active Directory components and services using the relevant management packs, and they collect a great amount of insight. However, to identify potential issues, engineers need to analyze the collected data themselves. OMS provides two solution packs which collect data from the Active Directory environment and analyze it for you. After analyzing, it visualizes the results in a user-friendly way. It also provides insight into how to fix the detected problems, as well as guidelines to improve the environment's performance, security, and high availability. 

AD Assessment – This solution analyzes the risk and health of AD environments at regular intervals. It provides a list of recommendations to improve your existing AD infrastructure. 

AD Replication Status – This solution analyzes the replication status of your Active Directory environment. 

In this section I am going to demonstrate how we can monitor an AD environment using OMS. Before we start, we need: 

1) A valid OMS subscription – OMS has different levels of subscription, depending on the OMS services you use and the amount of data uploaded daily. There is a free tier which provides a 500 MB daily upload and 7-day data retention. 

2) A direct connection to OMS – In this demo, I am going to use direct OMS integration via the Microsoft Monitoring Agent. 

3) A domain administrator account – In order to install the agent on the domain controllers, we need Domain Administrator privileges. 

Enable OMS AD Solutions 

1) Log in to OMS at https://login.mms.microsoft.com/signin.aspx?ref=ms_mms as an OMS administrator.

2) Click on Solution Gallery

oms1

3) By default, the AD Assessment solution is enabled. In order to enable the AD Replication Status solution, click on its tile in the solution list and then click on Add.

oms2

Install OMS Agents 
 
The next step of the configuration is to install the monitoring agent on the domain controllers and get them connected to OMS. 
 
1) Log in to the domain controller as a domain administrator.
2) Log in to the OMS portal. 
3) Go to Settings > Connected Sources > Windows Servers and click on Download Windows Agent (64bit). It will download the monitoring agent to the system. 
 
oms3
 
4) Once it is downloaded, double-click on the setup to start the installation process. 
5) In the first window of the wizard, click Next to begin the installation. 
6) In the next window, read and accept the license terms.
7) In the next window, we can select where it should be installed. If there are no changes, click Next to continue. 
8) In the next window, it asks where the agent will connect to. In our scenario, it will connect to OMS directly. 
 
oms4
 
9) In the next window, it asks for the OMS Workspace ID and Key. They can be found in the OMS portal under Settings > Connected Sources > Windows Servers. If this server is behind a proxy server, we can also specify the proxy settings in this window. Once the relevant info is provided, click on Next to continue. 
 
oms5
 
10) In the next window, it asks how the agent should check for updates. It is recommended to use the Windows Update option. Once the selection is made, click Next.
11) On the confirmation page, click Install to begin the installation. 
12) Follow the same steps for the other domain controllers.
13) After a few minutes, we can see the newly added servers connected as data sources under Settings > Connected Sources > Windows Servers.
 
oms6

View Analyzed Data
 
1) After a few minutes, OMS will start to collect data and visualize the findings. 
2) To view the data, log in to the OMS portal and click on the relevant solution tile on the home page. 
 
oms7
 
3) Clicking on the tile brings you to a page which displays more details about its findings. 
 
oms8
 
4) As I explained before, it not only displays errors; it also gives recommendations on how to fix the existing issues. 
 
oms9
 
Collect Windows Logs for Analysis
 
Using OMS, we can also collect Windows logs and use the OMS analysis capabilities on them. When this is enabled, OMS space usage and bandwidth usage on the organization's end will be higher. In order to collect logs:
 
1) Log in to the OMS portal.
2) Go to Settings > Data > Windows Event Log.
3) In the box, you can search for the relevant log file name and add it to the list. We can also select which types of events to extract. Once the selection is made, click Save.
 
oms10
 
4) After a few minutes, you will start to see the events under the Log Search option. There, we can use queries to filter the data. We can also set up email alerts based on the events. 
 
oms11
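As an example, assuming the workspace still uses the legacy OMS search syntax, a query similar to the following should return only the errors from the log file added above:

Type=Event EventLog="Directory Service" EventLevelName=error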
 
I believe you now have a basic knowledge of how to use OMS to monitor an AD environment. There is a lot more we can do with OMS, and I will cover that in future posts. 
 
This marks the end of this blog post. If you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.  

Active Directory Lingering objects

If you are maintaining a healthy AD infrastructure, you are very unlikely to see lingering objects in AD. Let's assume a domain controller has been disconnected from the Active Directory environment and stayed offline longer than the value specified by the tombstone lifetime attribute. Then it was reconnected to the replication topology. The objects which were deleted from Active Directory while that particular domain controller stayed offline will remain on it as lingering objects. 

When an object is deleted on one domain controller, it replicates to the other domain controllers as a tombstone object. A tombstone contains a few attribute values, but it cannot be used for active operations. It remains on the domain controllers until it reaches the time specified by the tombstone lifetime value, after which it is permanently deleted from the directory. The tombstone lifetime value is a forest-wide setting and depends on the operating system; for operating systems after Windows Server 2003, the default tombstone lifetime is 180 days.  
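If you want to check the tombstone lifetime configured in your forest, the following query should return it (adjust the domain components to match your forest):

dsquery * "CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=therebeladmin,DC=com" -scope base -attr tombstoneLifetime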

The problem happens when the domain controller with lingering objects takes part in outbound replication. In such a situation, one of the following can happen: 

If the destination domain controller has strict replication consistency enabled, it halts inbound replication from that particular domain controller. 

If the destination domain controller has strict replication consistency disabled, it requests the full replica, and the deleted objects are reintroduced to the directory. 

Events 1388, 1988, and 2042 are clues to lingering objects in an Active Directory infrastructure. 

Event 1388 – Another domain controller (DC) has attempted to replicate into this DC an object which is not present in the local Active Directory Domain Services database. The object may have been deleted and already garbage collected (a tombstone lifetime or more has past since the object was deleted) on this DC. The attribute set included in the update request is not sufficient to create the object. The object will be re-requested with a full attribute set and re-created on this DC. Source DC (Transport-specific network address): xxxxxxxxxxxxxxxxx._msdcs.contoso.com Object: CN=xxxx,CN=xxx,DC=xxxx,DC=xxx Object GUID: xxxxxxxxxxxxx Directory partition: DC=xxxx,DC=xx Destination highest property USN: xxxxxx

Event 1988 – Active Directory Domain Services Replication encountered the existence of objects in the following partition that have been deleted from the local domain controllers (DCs) Active Directory Domain Services database. Not all direct or transitive replication partners replicated in the deletion before the tombstone lifetime number of days passed. Objects that have been deleted and garbage collected from an Active Directory Domain Services partition but still exist in the writable partitions of other DCs in the same domain, or read-only partitions of global catalog servers in other domains in the forest are known as "lingering objects". This event is being logged because the source DC contains a lingering object which does not exist on the local DCs Active Directory Domain Services database. This replication attempt has been blocked. The best solution to this problem is to identify and remove all lingering objects in the forest. Source DC (Transport-specific network address): xxxxxxxxxxxxxx._msdcs.contoso.com Object: CN=xxxxxx,CN=xxxxx,DC=xxxxxx,DC=xxx Object GUID: xxxxxxxxxxxx

Event 2042 – It has been too long since this machine last replicated with the named source machine. The time between replications with this source has exceeded the tombstone lifetime. Replication has been stopped with this source. The reason that replication is not allowed to continue is that the two machine's views of deleted objects may now be different. The source machine may still have copies of objects that have been deleted (and garbage collected) on this machine. If they were allowed to replicate, the source machine might return objects which have already been deleted. Time of last successful replication: <date> <time> Invocation ID of source: <Invocation ID> Name of source: <GUID>._msdcs.<domain> Tombstone lifetime (days): <TSL number in days> The replication operation has failed.

Strict replication consistency

This setting is controlled by a registry key. On operating systems after Windows Server 2003, it is enabled by default. The key can be found under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters 

lin1
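To set it from the command line on a DC, a command like the following should do; the value name is "Strict Replication Consistency", where 1 enables it and 0 disables it:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v "Strict Replication Consistency" /t REG_DWORD /d 1 /f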

Removing lingering objects

Lingering objects can be removed using:

repadmin /removelingeringobjects <faulty DC name> <reference DC GUID> <directory partition>

In the preceding command:

• faulty DC name – the DC which contains the lingering objects

• reference DC GUID – the GUID of a DC which holds an up-to-date database that can be used as a reference

• directory partition – the directory partition where the lingering objects are contained
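As an example, assuming DC02.therebeladmin.com is the DC holding the lingering objects, and using a made-up placeholder GUID for the reference DC (take the real value from the "DSA object GUID" line of repadmin /showrepl against the reference DC), the command would look similar to:

repadmin /removelingeringobjects DC02.therebeladmin.com 504d6b0a-2f8c-44e6-b2c9-5e6a5a1f0aa1 "DC=therebeladmin,DC=com"

Running it first with /advisory_mode appended is a safe approach, as that only logs what would be removed without actually deleting anything.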

This marks the end of this blog post. If you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.  


Introducing change tracking and inventory features for Azure VM

In any infrastructure, users work with different applications, services, and file shares in order to get their work done. Each of these components may change periodically. In order to maintain integrity, provide faster IT support, and identify risks, it is important to track all the changes made to these components. Maintaining an inventory (at the software level) helps engineers verify that systems are running as expected, with the relevant software and services. The change tracking and inventory process can be handled using software or workflows. 

Operations Management Suite (OMS) provides a solution called "Change Tracking" to capture changes in cloud-only, hybrid, or on-premises-only environments. It can track changes to files, registry entries, software, Windows services, and Linux daemons. 

ch1

Microsoft recently released a "Change tracking and Inventory" solution which can be implemented at the Azure VM level. However, this is not a replacement for the OMS solution: OMS can track changes in any environment, while this new solution is just for cloud-only servers. 

There are a few things I like about this solution:

1. Easy to implement – with a few clicks, this feature can be enabled at the VM level.

2. No agents – It doesn't need agents to track changes or maintain the inventory. 

3. No need to log in to the VM – No user interaction or system credentials are required in order to use these features. Engineers can view the visualized data without logging in to the VM. 

4. No scheduled scans – It tracks changes automatically. You do not need to manually create scan jobs or update schedules (in the background, however, the jobs are controlled by Azure Automation). 

Please note this is still in preview mode and therefore not recommended for use in a production environment. But it is not too early to try out its capabilities.  

Let’s go ahead and see how we can enable and use this feature. 

1. Log in to the Azure portal (https://portal.azure.com) as a Global Administrator.

2. Go to Virtual Machines and click on the VM on which you would like to enable this feature. 

3. In the VM panel, under the OPERATIONS section, click on Inventory (preview). 

ch2

4. It will load a detail window; click on the purple bar, as in the image below, to enable the feature. 

ch3

5. It will then load a new window with information such as the Log Analytics workspace ID and the Automation account ID. Click on Enable to proceed. 

ch4

6. Once the feature is enabled, we can see the following window. It will take a while to populate the data.

ch5

7. It also enables the Change tracking (preview) feature. 

ch6

8. After some time, you will be able to see data under the Inventory (preview) and Change tracking (preview) windows. Let's start with Inventory (preview). When I load the window, it first lists the software installed on the system. 

ch7

9. This includes information about Windows updates and all other third-party applications. For software, it displays the version number and publisher info. As an example, I installed Acrobat Reader, and I can see its info as follows. 

ch8

10. Under the Files tab, we can see the file details. By default, it doesn't scan for any files. A system contains thousands of files, and there is no point in adding all of them to the inventory. Instead, users can define which folders and files to monitor and add to the inventory. For a folder, it automatically lists all the files under that particular folder. In order to set this up, click on the Edit Settings option.

ch9

11. In the next window, click on the Windows Files tab. 

ch10

12. Then click on Add. 

ch11

13. In the next window, type a unique name for the file or folder under Item Name, then type the folder or file path under Enter Path. Once everything is done, click on Save.

ch12

14. Once it is added, it will show under the Windows Files tab. If you need to disable inventory and change tracking for a file or folder, all you need to do is click on it and set Enabled to False.

ch14

15. On a Linux system, file and folder paths can be added using the Linux Files tab. 

ch15

16. When we add files here, change tracking is automatically enabled for those files and folders, so you do not need to add them again under the change tracking feature. 

17. Under the Registry Files tab, we can enable registry tracking and inventory as well. It has pre-defined registry paths, but at the moment I can't see a way to add a custom path. To enable the feature, click on a registry entry, then click on True under Enabled. At the end, click on Save.

ch16

18. This also enables tracking for the Windows registry entries under the change tracking feature. 

19. The Windows services tab lists all the services on the system, along with their current status and startup status.

ch17

20. This is just for one VM; if you need to view inventory data for multiple VMs, you need to click on Manage multiple computers.

ch18

21. The new window lists the machines which have this feature enabled. 

ch19

22. If you need to add new VMs to the list, it can be done using the Add Azure VM option. It will then allow you to enable the inventory feature. 

ch20

23. There is an option to add non-Azure virtual machines too, but that will lead you to OMS.

24. All the other windows are familiar; the only change here is that you can see whether the same event is repeated on different computers. 

ch21

25. All the events in here can also be viewed using Log Analytics. In order to access that, click on the Log Analytics option in the main window. 

ch22

26. It will then load all the events, and we can find the relevant info using queries or just by browsing through the filters. 

ch23
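For example, in a workspace upgraded to the new Log Analytics query language, change tracking records land in the ConfigurationChange table, so a query along these lines should list the most recent software changes:

ConfigurationChange
| where ConfigChangeType == "Software"
| order by TimeGenerated desc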

27. Now we are done with the inventory feature; let's move on to the Change tracking (preview) feature. In order to access it, just click on the Change tracking (preview) option under OPERATIONS. 

ch24

28. As soon as it loads, it shows the changes for the last 24 hours, both as a graph and as a list. 

ch25

29. Using the Time Range dropdown, we can define the time range for the data. 

ch26

30. Using the Change Types dropdown, we can select which types of data to view. 

ch27

31. The graph itself is really useful for narrowing down a change quickly. All you need to do is move the mouse over the timeline and select the area you would like to dig into by dragging the cursor. It then simply lists the results for that particular time. 

ch28

32. Using the Manage multiple computers option, we can view changes for multiple computers in the same window. It works the same way as in the inventory feature.

ch29

33. The Edit Settings option is also the same as in the inventory feature, so I am not going to cover it here. 

34. In the main window, there is an option to manage the connection with the Azure activity log, where you can enable the integration. You can find more info about the activity log at https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-overview-activity-logs

ch30

This marks the end of this blog post. If you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.  


Update Management for Azure VM

Keeping your operating systems up-to-date is critical, as it is the first step towards protecting your systems from emerging threats. It also helps to improve efficiency and user experience. The simplest way to update your Windows operating systems is to use the "Windows Update" feature that comes with every operating system. But this is not enough for corporates, as it is important to manage Windows updates in a controlled manner. Microsoft has tools such as WSUS and SCCM to manage Windows updates in an infrastructure. When it comes to a hybrid or cloud-only environment, it is important to update your virtual machines running in the cloud as well. Microsoft Operations Management Suite (OMS)'s "Update Management" is a great way to manage updates in any environment (on-premises, cloud-only, or hybrid). It detects and reports missing updates in your environment, and it also allows you to deploy them using Azure Automation. 

update1

However, if you are running an Azure environment, Microsoft now has another solution which helps to manage updates for Azure VMs. This is NOT a replacement for OMS Update Management, even though it works similarly; it helps to manage updates at the individual VM level or as a group. This "Update Management" feature is still in preview mode, but it is not too early to try out its capabilities. 

There are a few things I like about this feature:

1. No agents or additional configuration – This feature can be enabled on a VM with a few clicks, and it doesn't require any additional configuration inside the VM. It doesn't need any agent installation or any other configuration such as firewall changes. It's simple and efficient. 

2. No need to log in to the VM – This is ideal for MSPs as well. In order to manage updates, you do not need to log in to the VM at all. No need to define passwords to install updates either. 

3. Reporting – It lists missing updates and categorizes them based on type. It lists info about failed deployments. Everything is logged and visualized in a way that is easy to understand. 

Let's see how we can get this set up.

1. In order to enable this feature, you need to log in to Azure as a global administrator. 

2. Then click on Virtual Machines to list the VMs.

update2

3. Then click on the VM of your choice. 

4. From the left-hand panel, click on Update Management (Preview).

update3

5. In the next window, click on the purple bar (as in the following image) to enable the feature. 

update4

6. It will then load the page to enable the feature. As we can see, it also creates a Log Analytics workspace as well as an Automation account. Click on Enable to proceed. 

update5

7. Once it is enabled, it will take 15-20 minutes to gather information about updates. Once it has finished, we can see new data under the Update Management (Preview) panel. 

update6

8. The Missing updates section shows the update name, classification, published date, and a link to see more details about each update. 

update7

9. If we click on one of the missing updates, it brings us to the log search window, where we can see more details about the update. 

update8
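For example, in a workspace upgraded to the new Log Analytics query language, the solution stores its results in the Update table, so a query like the following should summarize the missing updates by classification:

Update
| where UpdateState == "Needed"
| summarize count() by Classification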

10. In the Update Management (Preview) panel, let's click the Manage Multiple Computers option. 

update9

11. In that window, we can see all the computers which have this feature enabled and their compliance status. 

update10

12. By clicking on each computer in the list, we can see more detail about it in the log search window.

update11

13. We can also add an Azure VM to update management. To do that, click on the Add Azure VM option in the Manage Multiple Computers panel. 

update12

14. It will list the VMs in the account; click on the relevant VM you would like to add. Then we can enable the feature on it. 

update13

15. Now we have the list of missing updates; the next step is to schedule the updates. In order to do that, go back to the Update Management (Preview) panel and click on the Schedule update deployments option. 

update14

16. In the new window, the first thing is to define a name for the job. Under Update classification, we can select which updates to consider for the schedule. 

update15

17. If we need to exclude any updates, we can do that using the Updates to exclude option, where we define the relevant KB numbers. 

update16

18. Under the schedule settings, we can define the time to apply the updates. It can be either a one-time or a recurring job. 

update17

19. Using the Maintenance window option, we can set how long the system should stay in maintenance mode. 

update18

20. Once it's done, click on Create to create the schedule. 

update19

21. If you use the same Schedule update deployments option under the Manage Multiple Computers window, we can create a schedule for multiple computers. 

update20

22. Once the schedule is created, we can see it under the Scheduled update deployments tab. 

update21

23. This completes the configuration part; once the schedule runs, we can verify it using the Update Management (Preview) panel. 

This marks the end of the blog post, and I hope it was useful. If you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.


Project “Honolulu” for better windows server management experience

If you worked with Windows Server 2003 or earlier, I am sure you know how painful it was to install and manage roles. We had to go through "Add or Remove Windows Components" and many MMCs. Also, it was recommended to run the "Security Configuration Wizard" before installing roles, as security settings did not come by default with a role installation. To address these difficulties, Microsoft introduced "Server Manager" with Windows Server 2008. It replaced many of the wizards Server 2003 had and made role and feature management easy. It was further developed and has been available with every server operating system released after Windows Server 2008. 

Project "Honolulu" from Microsoft takes server management to the next level. It is a simple but powerful web-based interface which can be installed on Windows 10 or Windows Server 2016. It can be used to configure and troubleshoot servers locally or remotely. 

Why it is good? 

1. A simplified single web console for server management – Instead of using multiple MMCs to manage resources, Honolulu gives a simple web-based interface to do it. It also allows you to go from a simple role install to advanced troubleshooting using the same console.  

2. Better interconnection – With Honolulu we can connect Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, and Hyper-V 2016/2012 R2/2012 into one console. It also allows you to manage failover cluster and hyper-converged environments from the same console. Microsoft is also working with partners to refine the SDK and extension model.

3. No agents or additional configuration – To connect servers to the console, no agents or other additional configuration are required. The only requirement is connectivity between the gateway server and the member servers. 

4. Familiar tools packaged together – The web-based console allows you to access familiar tools from one place. For example, you can access Server Manager, the registry editor, and firewall tools via the console. Previously we had to use different methods to open those MMCs. This also makes it easy to adopt without additional training. 

5. Flexible for integration – The design itself welcomes third parties to create modules and integrate them with Honolulu, so those applications or services can also be managed via the same console. 

6. Can be used to manage resources via the internet – The Honolulu web console (web server components) can be published to remote networks, allowing engineers to manage servers without using traditional management methods such as VPN, RDP, etc. 

Will it replace other management tools? 

At the moment, System Center and Operations Management Suite (OMS) provide advanced infrastructure management capabilities. Project Honolulu will be complementary to those existing tools, but it is not meant to replace them in any way.

Is it got anything to do with Azure? 

No, it is not. It can be used with Azure VMs too, and the console doesn't even need internet access to operate. 

When it will release? 

At the moment, it is at the technical preview stage. It will be released in 2018. However, this will not prevent you from testing it and providing feedback to improve it further. 

Is it supporting all operating systems?

It can only be installed on Windows Server 2016 or Windows 10. 

• Windows Server 2016 – install Honolulu: Yes; managed node: Yes; managed HCI cluster: Future

• Windows Server version 1709 – install Honolulu: Yes; managed node: Yes; managed HCI cluster: Yes, under insider program

• Windows 10 – install Honolulu: Yes; managed node: N/A; managed HCI cluster: N/A

• Windows Server 2012 R2 – install Honolulu: No; managed node: Yes; managed HCI cluster: N/A

• Windows Server 2012 – install Honolulu: No; managed node: Yes; managed HCI cluster: N/A

However, if you need to manage Windows Server 2012 R2 and 2012, you need to install Windows Management Framework (WMF) version 5.0 or higher first, as the required PowerShell features are not available in earlier versions.  
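You can quickly confirm what is already installed on a 2012 R2/2012 member by checking the PowerShell version; WMF 5.0 reports a major version of 5:

$PSVersionTable.PSVersion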

Architecture 
 
Honolulu has two components.
 
Gateway – The gateway manages connected servers via remote PowerShell and WMI over WinRM. 
Web server – This is the UI for Honolulu, and users access it via HTTPS requests. It can also be published to remote networks to allow users to connect via a web browser. 
 
Feedback is important 
 
The project is still at an early stage, and we can all contribute to making it perfect. Feel free to submit your feedback via https://aka.ms/HonoluluFeedback 
 
Let’s take a look!
 
Now it's time to install it and play with it. In my demo environment, I have two Windows Server 2016 machines in the domain therebeladmin.com. I am going to install Honolulu on one server and get both servers connected to the console. 
 
1. Log in to the server as an administrator. 
3. Double-click the .exe to install it. In the initial window, accept the license terms and click Next to continue. 
4. In the next window, tick "Allow project "Honolulu" to modify this machine's trusted hosts settings". In the same window, you can select the "create a desktop shortcut to launch project "Honolulu"" option to create a desktop shortcut. 
 
hon1
 
5. In the next window, we can define a port for the management site. For demo purposes, we can use a self-signed certificate to allow HTTPS requests. Once the selections are made, click Install to proceed. 
 
hon2
 
6. Once the installation is completed, double-click the shortcut to launch the console.
 
Note: The console is recommended for use with the Edge or Chrome browsers. If you are using IE, it will give an error telling you to use a recommended browser. 
 
7. In the initial window, it launches a tour to explain project Honolulu. You can either follow it or skip it. 
8. The home page lists the servers added to the console. By default, it contains the server Honolulu is installed on. 
 
hon3
 
9. To add a new server to the list, click on the Add button.
 
hon4
 
10. It then shows the list of connection types available. In this demo, I am going to add a single server, so I am going to choose "Add Server Connection". 
 
hon5
 
11. In the next window, it asks for the name of the server. Provide the server name and click on Submit.
 
hon6
 
You need an administrator account to add a server to the console. In my demo, I am using a domain admin account to install and configure Honolulu. If you are in a workgroup environment, it will give you the option to define the admin account username and credentials. 
 
hon7
 
We can also import multiple servers using a .txt file. 
 
hon8
 
Once it is added, it will show up on the home page. 
 
hon9
 
12. In order to manage a server, click on the server name on the home page. It will bring up the server overview page. 
 
hon10
 
This page gives real-time information about server performance. It also provides data about server resources. Not only that, it gives options to restart or shut down the server, access settings, and edit the computer name. 
 
hon11
 
13. Using the Device tab, we can view details about the server's hardware resources. 
 
hon12
 
14. The Certificate tab allows you to view all the certificates on the server. More importantly, it shows the certificates for the local machine and the current user in the same window. With the traditional method, we would have to open these using separate MMCs. 
 
hon13
 
15. The Events tab shows all the events generated on the server. 
 
hon14
 
16. The Files tab works similarly to File Explorer. You can create folders, rename folders, and upload files to folders using it. Unfortunately, you can't change folder permissions at the moment. 
 
hon15
 
17. The Firewall tab is one of my favorites. Now it is easy to see what each rule does, and it also allows you to modify rules if needed.  
 
hon16
 
hon17
 
18. The Registry tab is also very useful. Using the same console, we can now add or modify registry entries. 
 
hon18
 
19. The Roles and Features tab allows you to install or remove roles and features.
 
hon19
 
hon20
 
20. The Services tab works similarly to the traditional Services MMC. It can be used to check the status of services, start and stop services, or change the startup mode. 
 
hon21
 
21. The Storage tab helps to manage the storage allocated to the server. 
 
hon22
 
In this blog post, I tried to go through each option, but I would like to encourage you to go and check its capabilities in detail. It is easy to implement yet powerful. This marks the end of the blog post, and I hope it was useful. If you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.

Step-by-Step guide to manage Azure Storage using Azure CLI 2.0 – Part 02

This is the last part of my blog post series covering Azure CLI 2.0 functions. If you didn't read part 01 yet, please read it before starting on this one. You can find it at http://www.rebeladmin.com/2017/10/step-step-guide-manage-azure-storage-using-azure-cli-2-0-part-01/ 

In my demo setup, I have two VMs running. One was created using Azure managed disks. In part 01, I explained how to add an additional disk; it currently has a 100 GB additional disk attached. 

Expand Disks

Let's see how we can expand disks using Azure CLI. Before doing this, make sure you log in to Azure CLI using az login.

Let's start with expanding Azure managed disks. First, we can verify the VM's storage configuration using:

az vm show --resource-group rebeladminrg01 --name REBLEVM101

clistore1

In there, I have two disks: one is the OS disk, called osdisk_6469626e28, and the other is a data disk called DataDisk01.

We can't increase a disk on a running VM, not even a data disk. So first we need to deallocate the VM. We can do that using:

az vm deallocate --resource-group rebeladminrg01 --name REBLEVM101

In the above command, --resource-group defines the resource group the VM belongs to, and --name defines the VM name. 

clistore2

Once it is completed, we can increase the disk sizes. 

I need to expand the OS disk size to 150 GB. I can do that using:

az disk update --resource-group rebeladminrg01 --name osdisk_6469626e28 --size-gb 150

clistore3

I would also like to expand the data disk to 150 GB. I can do that using:

az disk update --resource-group rebeladminrg01 --name DataDisk01 --size-gb 150

clistore4

In the above commands, --resource-group defines the resource group the disks belong to, and --name defines the disk name. 

After finishing, we can start the VM using:

az vm start --resource-group rebeladminrg01 --name REBLEVM101

Once the VM is up, we can go in and expand the disk at the OS level. 

clistore5

If you are looking to expand unmanaged disks, it can be done via the portal interface or Azure CLI 1.0. More info can be found at https://docs.microsoft.com/en-us/azure/virtual-machines/linux/expand-disks-nodejs

The document itself is for Linux VMs, but the expand part works the same way. 

Snapshots

We can also take snapshots of a disk as a quick recovery option. A snapshot is a full copy of a disk at the time it is taken. It can be kept as a backup or attached to another machine for troubleshooting. 

In my demo, I am going to take a snapshot of an Azure managed OS disk. Before doing that, I need to find the disk ID. It can be done using:

az vm show --resource-group rebeladminrg01 --name REBLEVM101

clistore6

Then we can take the snapshot using:

az snapshot create -g rebeladminrg01 --source "/subscriptions/xxxxx/resourceGroups/REBELADMINRG01/providers/Microsoft.Compute/disks/osdisk_6469626e28" --name vm101osDisk-backup

clistore7

In the above, --source defines the disk ID and --name defines the snapshot name.
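As mentioned earlier, a snapshot can be used for recovery or troubleshooting. For example, the following should create a new managed disk from the snapshot above, which can then be attached to another VM for inspection (the new disk name is just an example):

az disk create --resource-group rebeladminrg01 --name osdisk-restored --source vm101osDisk-backup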

If it is an unmanaged disk, snapshots work in a different way. You can read more about it at https://docs.microsoft.com/en-us/azure/virtual-machines/linux/incremental-snapshots 

Convert to Managed Disks

If required, we can convert a VM with unmanaged disks to managed disks. To do that, first we need to deallocate the VM:

az vm deallocate --resource-group rebeladminrg01 --name REBELVM102

Then we can start the conversion process using:

az vm convert --resource-group rebeladminrg01 --name REBELVM102

Once the process is finished, it will start the VM. 

clistore8

clistore9

Manage blobs

We can also create, manage, and delete blobs using Azure CLI. 

To create a container, we can simply use:

az storage container create --name datastorage01 --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=xxxxxxx/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

In the above, --name defines the container name, AccountName specifies the storage account name, and AccountKey specifies the auth key for the storage account. 

By default, the container data is set to private. If needed, it can be set to public read access for blobs (blob) or public read and list access to the whole container (container). This can be defined using --public-access.
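As a side note, instead of pasting the account key into the connection string by hand, the key can be retrieved with the CLI as well (the resource group name matches the earlier examples):

az storage account keys list --account-name rebelstorage01 --resource-group rebeladminrg01 --output table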

Once the container is created, we can upload a blob using:

az storage blob upload --file C:\myzip1.zip --container-name datastorage01 --name myzip1.zip --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=xxxxxx/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

In the above, --file defines the local file path, --container-name defines the container it is uploading to, and --name defines the blob name once it is uploaded. 

clistore10

To verify, we can list the files in the container using:

az storage blob list --container-name datastorage01 --output table --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=xxxxxx/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

clistore11

We can download a blob to local storage using:

az storage blob download --container-name datastorage01 --name myzip1.zip --file C:\myzip2.zip --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=xxxx/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

In the above, --file defines the path and name the blob will have when downloaded to local storage.

clistore12

We can delete a blob using a command similar to:

az storage blob delete --container-name datastorage01 --name myzip1.zip --connection-string "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=rebelstorage01;AccountKey=1WzgTd/ixi+FKRr3YUS9CgEhCciGVIyI9+6CtqjTIiPvbXkmpFDK9sINE28jdbIwLLOUZyiAtQ3Edzx2y89RPQ=="

clistore13

This marks the end of the blog post, and I hope it was useful. If you have any questions, feel free to contact me at rebeladm@live.com, and follow me on Twitter @rebeladm to get updates about new blog posts.
