Author: Adam Fowler

Group Policy Not Processing > Unhealthy DFS?

This is a scenario I ran into, so thought it was worth noting the steps.

I'd pushed out an environment variable via Group Policy Preferences, but one particular person hadn't received it. I confirmed this by running the 'set' command on their PC – the environment variable wasn't listed. (Shortcut: 'set x', where x is the first letter of the variable name, will only show variables starting with that letter. 'set u' is a quick way to see who's logged on via the 'username' variable.)
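If you prefer PowerShell for the same quick check, the Env: drive gives you the equivalent (the wildcard below mirrors the 'set u' trick):

# List environment variables starting with 'u' (same idea as 'set u' in cmd)
get-childitem env:u*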

After confirming they didn’t have this new variable, I tried to refresh Group Policy with the ‘gpupdate /force’ command. Alarm bells went off when I saw this result:

The processing of Group Policy failed. Windows attempted to read the file \\fakedomain.com\SysVol\fakedomain.com\Policies\{389D2400-A8FE-44CD-B7B7-3914920183F8}\gpt.ini from a domain controller and was not successful. Group Policy settings may not be applied until this event is resolved. This issue may be transient and could be caused by one or more of the following:
a) Name Resolution/Network Connectivity to the current domain controller.
b) File Replication Service Latency (a file created on another domain controller
has not replicated to the current domain controller).
c) The Distributed File System (DFS) client has been disabled.

The important part of that was that it was unable to read a gpt.ini file. Still in the user's context, I followed the path specified in the error – the folder under Policies didn't exist! SysVol is normally a DFS share, so I tested the same path for myself and it existed. What was different between me and them? Lots, probably, but I was at a different site connecting to a different DFS server.

Going to the properties of any folder in that DFS path, you can change the server you’re pointing to:

[Image: dfs]

This way you can toggle back and forth. From this, I confirmed that one DFS server was missing the folder in question, along with a lot of others.
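If you'd rather compare from the command line, you can also point at each domain controller's SYSVOL share directly and diff the Policies folders – a rough PowerShell sketch, with the DC names and domain as placeholders:

# Compare the Policies folders on two DCs directly, bypassing the DFS namespace
compare-object (get-childitem '\\DC01\SYSVOL\fakedomain.com\Policies').Name (get-childitem '\\DC02\SYSVOL\fakedomain.com\Policies').Name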

From there, I RDP'd to the server and had a look in Event Viewer > Applications and Services > DFS Replication to see if there were any errors or warnings. There were a few warnings around losing connectivity, so I decided to restart the DFS Replication service to see if it just needed a kick:

[Image: services]
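For what it's worth, the same check and restart can be done from PowerShell on the affected server – a rough sketch:

# Show recent warnings and errors from the DFS Replication event log
get-winevent -logname 'DFS Replication' -maxevents 50 | where { $_.LevelDisplayName -in 'Warning','Error' } | format-table TimeCreated, Id, Message -autosize

# Restart the DFS Replication service (service name is DFSR)
restart-service -name DFSR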

After restarting, it was back to Event Viewer to see if it was happy or not. It was not.

Event 2231 DFSR:

The DFS Replication service stopped replication on volume C:. This occurs when a DFSR JET database is not shut down cleanly and Auto Recovery is disabled. To resolve this issue, back up the files in the affected replicated folders, and then use the ResumeReplication WMI method to resume replication.

Additional Information:
Volume: C:
GUID: 992BDBB2-4593-11E3-93E8-806E5F6E6963

Recovery Steps
1. Back up the files in all replicated folders on the volume. Failure to do so may result in data loss due to unexpected conflict resolution during the recovery of the replicated folders.
2. To resume the replication for this volume, use the WMI method ResumeReplication of the DfsrVolumeConfig class. For example, from an elevated command prompt, type the following command:
wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig where volumeGuid="992BDBB2-4593-11E3-93E8-806E5F6E6963" call ResumeReplication

For more information, see http://support.microsoft.com/kb/2663685.
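If you prefer PowerShell over wmic, the same WMI method can be called like this (a sketch using the volume GUID from the event above):

# Find the DFSR volume config for the affected volume and resume replication
$vol = get-wmiobject -namespace 'root\microsoftdfs' -class DfsrVolumeConfig | where { $_.VolumeGuid -eq '992BDBB2-4593-11E3-93E8-806E5F6E6963' }
$vol.ResumeReplication()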

That was nice – it gave me the exact command to run to fix it, which I did. That surfaced the next problem in Event Viewer:

Event 4012 DFSR:

The DFS Replication service stopped replication on the folder with the following local path: C:\Windows\SYSVOL_DFSR\domain. This server has been disconnected from other partners for 73 days, which is longer than the time allowed by the MaxOfflineTimeInDays parameter (60). DFS Replication considers the data in this folder to be stale, and this server will not replicate the folder until this error is corrected.

To resume replication of this folder, use the DFS Management snap-in to remove this server from the replication group, and then add it back to the group. This causes the server to perform an initial synchronization task, which replaces the stale data with fresh data from other members of the replication group.

Additional Information:
Error: 9061 (The replicated folder has been offline for too long.)
Replicated Folder Name: SYSVOL Share
Replicated Folder ID: 0CD8DE8C-6293-4640-8911-67FCEBE60CD1
Replication Group Name: Domain System Volume
Replication Group ID: F84F2F63-3623-4911-B7B7-FBBD8968DBFE
Member ID: A45C340E-F890-4FD9-9FE5-9E38DB4EB590

Yikes – older than 60 days and nobody had even noticed. Removing and re-adding a SYSVOL share from the replication group can get tricky, so it's easier to raise the MaxOfflineTimeInDays value instead. I set it to 300 with this command:

wmic.exe /namespace:\\root\microsoftdfs path DfsrMachineConfig set MaxOfflineTimeInDays=300
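To confirm the value took (or to see what it was set to beforehand), the same WMI class can be queried back – a quick PowerShell sketch:

# Check the current MaxOfflineTimeInDays value
get-wmiobject -namespace 'root\microsoftdfs' -class DfsrMachineConfig | select-object MaxOfflineTimeInDays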

After that, restarting the DFS Replication service and running the previous command from Event Viewer did the trick. It started syncing up, and looking in the Policies folder I could see more folders turn up, including the one originally missing from the gpupdate error.

After waiting a few minutes for the sync to finish, I changed the MaxOfflineTimeInDays value back to the default of 60.
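For completeness, that's just the earlier command again with the default value:

wmic.exe /namespace:\\root\microsoftdfs path DfsrMachineConfig set MaxOfflineTimeInDays=60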

Going back to the original user, running 'gpupdate /force' worked without any errors, and after a reboot, the missing environment variable pushed by Group Policy Preferences had deployed.

Now on my 'things to do' list is working out DFS replication monitoring, so this sort of thing gets caught in a lot less than 60 days! :)

Null and Not Null with PowerShell

Finding out whether an object has a null (i.e. blank) value isn't a difficult task.

Consider this scenario – you've found a bunch of old disabled accounts where someone forgot to clear the 'Manager' field. A reasonable way of finding the problem accounts is to look for another field that would be populated for a current employee but blank for a departed one, and then null out the 'Manager' field on those. (Note – you could just refine your search to disabled accounts, but that's not as fun.)

To find all Active Directory users that have a blank ‘Department’ field is easily done with this command:

get-aduser -filter * -properties department | where department -eq $null

Then, showing the users that don't have a blank 'Department' field is a slight change. You can't use !$null (! meaning 'not'), but you can use -ne (not equals):

get-aduser -filter * -properties department | where department -ne $null

You can also check for users that have a manager by switching 'department' to 'manager':

get-aduser -filter * -properties manager | where manager -ne $null

Easy. Adding a second condition to the 'where' statement, so we get users that have a manager but no department, means adding a few extra characters to make PowerShell happy:

get-aduser -filter * -properties department,manager | where {($_.department -eq $null) -and ($_.manager -ne $null)}

The results can be a bit hard to read, so piping (|) to a select command will show just the name of each matching user:

get-aduser -filter * -properties department,manager | where {($_.department -eq $null) -and ($_.manager -ne $null)} | select name

Finally, to blank the ‘manager’ field, we can swap the ‘select name’ command with this:

get-aduser -filter * -properties department,manager | where {($_.department -eq $null) -and ($_.manager -ne $null)} |  set-aduser -manager $null

You can then go back to a previous command to confirm you get no results. As always, check your data first before blanking out a bunch of users' values!
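One easy way to sanity-check before committing is Set-ADUser's -WhatIf switch, which shows what would be changed without touching anything:

get-aduser -filter * -properties department,manager | where {($_.department -eq $null) -and ($_.manager -ne $null)} | set-aduser -manager $null -whatif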

Update

As @mickesunkan pointed out, the above isn't the most efficient way to do searches. I'm sure I've mentioned this before, but I'm not always going to write the cleanest, quickest way of doing something. For a one-off task this really doesn't matter. For a daily task it starts to matter – not so much by itself, but if you keep making more and more inefficient scripts, you're putting extra unnecessary load on your environment with lots of LDAP lookups.

Above, I’m just getting ALL AD users. You could use a better filter and narrow down to a certain OU. You could also put part of your ‘where’ command into the filter, such as this:

get-aduser -properties manager,department -filter {department -notlike "*"}

This doesn't work for the 'Manager' field though; you'll see this error:

get-aduser : Operator(s): The following: 'Eq', 'Ne' are the only operator(s) supported for searching on extended attribute: 'Manager'.

I couldn’t work out a way of putting the $null value as part of the filter, but if you do – please share :)
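One possible workaround (an option to test rather than something from the original post) is to drop down to an LDAP filter, where a missing attribute can be written as (!(attribute=*)):

# Users with a manager set but no department, using -LDAPFilter
get-aduser -ldapfilter "(&(manager=*)(!(department=*)))" -properties manager,department | select name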

 

@mickesunkan also wrote this GitHub code showing a few different ways to do this search, and which way is most efficient. Thanks Micke!


Group Policy Preferences – Replace Existing File

I’ve written before on how great Group Policy Preferences are, and thought I’d write a quick ‘how to’ on a likely common scenario – replacing an older file with a new one, but only if it already exists.

Pushing out a file via Group Policy Preferences is quite easy and has been around for a long time.

When creating a new file rule, you’ll see 4 options under ‘Action’ – Create, Replace, Update and Delete:

[Image: gpp2]

Create will only copy the file from the source to the destination if the file doesn't already exist at the destination.
Replace will remove the destination file (if one exists), then copy the source to the destination regardless of whether a file was there or not.
Update is the misleading one: it will modify the file attributes of the destination file to match the source – if the file contents are different, it won't copy them. If the file doesn't exist at the destination, though, it will copy it there!
Delete will delete the file(s) specified.

None of these provide a 'replace the file only if it already exists' option though. There are two obvious ways to attempt it. You can use 'Replace', but that will continually replace the file every time Group Policy runs, which in the user context is every 90 minutes. You also can't use the 'Apply once and do not reapply' option, because it will run regardless of whether the file exists – which means if the file isn't there before Group Policy runs, it may later be replaced by a software install or other mechanism, and with the order out of whack, the wrong file ends up being left there.

The next logical way to make sure the order is correct is to use Item Level Targeting. Under the ‘Common’ tab, you can tick the box for ‘Item Level Targeting’ and point to the file in question:

[Image: gpp3]

This will still only run once though, regardless of whether the Item Level Targeting condition is true or false. The targeting only controls whether the preference does what it's configured to do – from the client's point of view the policy has still 'run', it just had nothing to do.

thommck had the best answer I've found on how to get around this – use a custom WMI query. You'll need to remove the 'Apply once and do not reapply' tick, but the file itself will only be copied over when both targeting rules are true. Please read his post for all the details, but the second item needs to be a WMI query, with a string similar to this:

SELECT LastModified FROM CIM_DataFile WHERE name="C:\\windows\regedit.exe" AND LastModified < '20160701000000.000000+060'

(Update 20th September 2024)

jszabo_98 in the comments has mentioned that he had to make a few small changes to get the above WMI query example to work:
get-wmiobject -query 'SELECT LastModified FROM CIM_DataFile WHERE name="C:\\windows\\regedit.exe" AND LastModified < "20160701000000.000000+060"'

(End of update)

This is checking the date of the file, and will only be 'true' if its LastModified is earlier than that date.
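The date string is in WMI's DMTF format; if you need to build one for a different cut-off date, something like this should generate it (using an arbitrary example date – the offset at the end will match your time zone):

# Convert a normal date into the DMTF format used in the WMI query above
[System.Management.ManagementDateTimeConverter]::ToDmtfDateTime((get-date '2016-07-01'))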

Keep in mind that this is less than ideal, as WMI queries aren’t the most efficient way of processing group policy preferences, but it may be better than copying files around your network to every PC, every 90 minutes.

Azure AD Connect Health with AD DS

Azure AD Connect Health with AD DS is now in preview!

You’ll need Azure AD Premium for this, but it’s a little agent that gets installed on each of your domain controllers and provides health and alerting via Azure AD Connect Health.

The service is a light health and monitoring solution which reports back on some basics such as these:

[Image: azure health 3]

Also, it will show any replication issues and other DC-related problems for you to remediate. You can also configure email alerts, so you know when a problem is detected, rather than relying on checking the health page to notice something.

The setup of Azure AD Connect Health with AD DS is incredibly easy – download and install the agent (check you meet the prerequisites first!), use the credentials of an Azure AD global administrator (set up a service account for this), and you're done. If you install it on a server that doesn't have the required Windows Server roles, you'll get an error such as "Microsoft.Identity.Health.Common.RoleNotFoundException: No role was registered".
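A quick way to confirm a server actually holds the AD DS role before installing the agent (a simple sanity check, not the full prerequisites list):

# Confirm the AD DS role is installed on this server
get-windowsfeature -name AD-Domain-Services | select-object Name, InstallState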

The two other Health services currently available are for ADFS and Azure AD Connect, so check those out too if you haven't already.

One issue I had after installing was that I couldn't see the box for Active Directory Domain Services in the Azure portal – it was just blank:


[Image: Pasted image at 2016_07_21 12_22 PM]

After I spent a while trying to work out why, @kengoodwin pointed out that I should try resetting the view. This is done by clicking one of the 'Add tiles' options, then at the top of the screen choosing the 'Restore default' option.

Doing this resulted in my tiles showing as they should. I'd never made adjustments to my tiles, but I had previously gone into edit mode and saved without making any changes, which I believe stopped the portal from adding the new tiles once the new health service was detected. This is how it should look:

[Image: ad health 2]

Much better!

If you have Azure AD premium, then check out this free extra!

There’s Some Spam On Your Slacks

I’m a member of a few different Slack channels – they’re great for collaboration, helping others out and asking for assistance when you get stuck on something.

The biggest one is Windows Admins (winadmins.slack.com), with over 1700 members (highly recommended if you're a Windows Administrator).

An interesting event occurred today, where an account called 'jb' joined, and immediately posted this:

 

Rather spammy in itself from where I sit, and a few others piped up being unimpressed with this action. ‘jb’ apologised and removed the post.

Doing this in a sysadmin channel, however, is asking for a bit of further investigation. Putting aside the name itself (which, along with the logo, looks like it should be a product for a completely different industry), it was a bit weird that 'jb' appeared to be doing marketing but had also signed up with an email address of admin@theirdomain – not something that a marketer would usually have access to.

I've censored the image as I don't have permission to use it, and it's not about them at all – but for context, it was a black and white face shot of a young white female, with her title listed as 'designer and inventor'.

[Image: Slack for iOS Upload]

A reverse Google image search on the profile picture used revealed this:

[Image: person]

…which turns out to be photos of people at a clothing launch in Berlin, and 'jb's' photo was a cropped version of that. Now, it could be that this fashion industry person in Berlin is also the person who runs this Japanese-based tech company's PR, AND has access to the admin email account for their domain.

Asking this mystery person what was going on was just met with silence, and then the account became inactive. What happened?? We may never know.

There are a few takeaway points from all of this:

  1. Don't steal a photo from the internet to use as your marketing tool; a reverse Google image search is good enough to find even part of a photo if it's indexed.
  2. Don’t go into a sysadmin channel and spam your product; it won’t end with a positive experience from the people who generally have to stop spam.
  3. Slack communities should be treated as openly available information – if an account gets approval, it could be scraping the conversations (and using them for legitimate business purposes too).
  4. Don’t be fake when peddling your wares; people see through it.
  5. Spellcheck your automated messages; morarale isn’t a word.

Again, I don't know how much of this applies to the company in question – draw your own conclusions. Maybe it was an elaborate test to see how the mood changed in the Slack channel?