Opening QDF Files

I had the problem of trying to open some .QDF files.

A Google search revealed they were most likely Quicken files, but I had little luck going down that path.

Eventually I found out they were Reckon files. The QDF extension was a bit confusing until I read this: in Australia, Quicken was localised by another company, which has since parted ways with Intuit. The result is that Reckon is the Australian ‘version’ of Quicken.

Intuit owns Quicken, as well as QuickBooks. They have an online version too, descriptively called QuickBooks Online.

The result of all this was that I had Reckon files with a .QDF extension – but an export of the files was also provided, and those had a .RKN file extension instead.

If you want to view these files, Reckon provides 60-day trial software that seems to have no limitations, available here. Note that if one version prompts you to purchase add-ins, try a different version instead. I had better success with Reckon Accounts Home & Business 2016 for the particular files I was working on.

Also, if you get stuck, you could try some of Intuit’s free conversion utilities to convert older Quicken files to newer Quicken files, which can also be read by QuickBooks.

Credit to Reckon’s support who explained some of this to me!

I hope this helps anyone trying to open .QDF files or .RKN files, particularly if you’re from Australia!

ALPAKA 7ven Messenger Bag Review

I’ve generally been a backpack person. Decent quality backpacks have held what I needed, and it helped that my biennial trip to a Microsoft conference resulted in a new backpack each time.

Recently though, I was introduced to the ALPAKA 7ven Messenger Bag. I was given a pre-production sample to use… given my backpack history, I really wasn’t sure if I’d like this style of bag, but I was happy to at least try it.

(Image: ALPAKA 7ven Messenger Bag)

ALPAKA have just launched their Kickstarter for this bag too, which makes this review rather timely. They have a lot more photos and technical information about the bag there too – if you’re curious, have a look at their campaign.

Anyway, back to my experience. The bag turned up a month or so ago, and I’d already watched some videos on its features (again, check out the Kickstarter for those) and was getting rather interested in how it all worked.

The first thing I wanted to play with – and needed to get past anyway – was the magnetised latches. I hadn’t seen these in real life before, and wondered how they worked. Through some impressive engineering, the latches go straight in and ‘click’, but will only come out if you slide them sideways:

(Image: Fancy German-designed magnetic latches)

There are three of these clips on the front, and the middle one took me a minute to work out – not that it’s tricky, I just didn’t realise it was another of these clips! I was already impressed by the engineering on a simple clip, and continued to explore how the bag works:


(Image: Open bag)

The left and right clips had two different ‘clip slots’ they could go into, depending on how you wanted to close the bag. There are compartments all over the place, and a velcro laptop protector strip that can either be tucked inside the bag or folded onto the front. You can see above that the entire front pouch is made of the slightly fluffy material that velcro sticks to – it’s also softer than what you’d normally expect from velcro fluff.

There are leather bits such as the handle and some of the corners, and the bag itself is waterproof too. I found that even though this is a pre-production sample, the quality was very high, and after a month of constant use there’s no visible wear and tear on anything… and I’m generally throwing the bag down whenever I reach my destination.

Of the bag’s three holding styles (suitcase, over one shoulder, messenger) I found the messenger mode to be the best for me. Here’s a picture of myself awkwardly posing for a selfie, wearing the bag:

(Image: Strap padding is removable, but it has a picture of an alpaka on it.)


It was actually really comfortable in this mode, which I wasn’t expecting at all compared to a backpack. There’s an option to pull out another strap and clip it across for extra stability, but I found I didn’t need this for normal walking – I can see it being useful if you had to jog/run/bike somewhere, and it’s another magnetised clip that’s easy to attach and remove. There’s also a small zip in the front to store something for easy access – not big enough for a mobile phone, but a bus ticket or credit card will fit fine.

One of the other big impressions the bag left on me was the thoughtfulness that went into the design. Most straps are adjustable and have a strap management solution, so you can size the straps to your requirements, then slide the strap holders into place or tuck the excess away, meaning you don’t have a bunch of straps dangling off your bag.

Size-wise, when I first saw the bag I wondered how much it could hold, and whether it could take a decent-sized laptop. Here’s the bag behind a Lenovo ThinkPad P50 – a 15.6″ laptop:

(Image: Laptop out)

and here it is inside the 7ven Messenger Bag:

(Image: Laptop in)

It passed the ‘will a big laptop fit?’ test rather well.

Since mucking around with the bag, I’ve been using it every day for work. It’s easy to flick over the head and walk around with, and it’s better balanced than having a backpack over one arm – I’ve found I walk better when using it.

I also didn’t think I’d be that interested in a bag, but this bag certainly proved that wrong. I’ve shown it in person to several people who have all liked it; it looks smart, while being very practical.

I’m very happy that ALPAKA sent me this to keep – if you want your own, you’d better get onto the Kickstarter!





Windows Update Disables SSL 3

This page can’t be displayed

Turn on TLS 1.0, TLS 1.1, and TLS 1.2 in Advanced settings and try connecting to again. If this error persists, it is possible that this site uses an unsupported protocol. Please contact the site administrator.

This is the error that started turning up on a particular site in the last few days, for Internet Explorer 11.

Microsoft Edge showed a different page, making it sound like the site wasn’t resolving in DNS at all, or that I was offline:

Hmm, we can’t reach this page.

Try this

The workaround I found for the moment, in my particular scenario, was to enable the ‘Use SSL 3.0’ option under Internet Explorer’s Advanced options:


This is worth doing for testing, but don’t leave it ticked if you can avoid it – it’s not good security practice, though it may be the only way you can access a particular site.

Edge, as far as I can tell, can’t be changed to support a site like this.

I have a suspicion that a Windows Update in the last week or two has toggled this tickbox, so you may need to put it back. SSL 3.0 is an older type of encryption that the POODLE exploit was centred around, so the server hosting this certificate should be updated as soon as possible.
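If you’d rather check this from a script than click through Internet Options, the Advanced protocol tickboxes map to a single registry bitmask. This is a minimal PowerShell sketch – the value name and bit positions are the standard documented ones, but verify against your own build before relying on it:

```powershell
# Internet Options protocol tickboxes are stored as a bitmask in SecureProtocols:
# SSL 2.0 = 0x8, SSL 3.0 = 0x20, TLS 1.0 = 0x80, TLS 1.1 = 0x200, TLS 1.2 = 0x800
$key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings'
$value = (Get-ItemProperty -Path $key -Name SecureProtocols).SecureProtocols

if ($value -band 0x20) {
    "SSL 3.0 is ticked (SecureProtocols = $value)"
} else {
    "SSL 3.0 is unticked (SecureProtocols = $value)"
}
```

Comparing this value before and after a suspect update should confirm whether the patch flipped the bit.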


Update after more testing:

This is due to “Update to add new cipher suites to Internet Explorer and Microsoft Edge in Windows”

Windows 10 Anniversary Update included patches that blocked SSL 3 support by default.

Windows 7 SP1, Windows 8.1 etc were patched back in June 2016 – KB3161639

That June patch has been superseded by a July 2016 patch, KB3172614.

This is what the Advanced Internet Options in Internet Explorer look like before the update (note the Use SSL 2.0 option, and that Use SSL 3.0 is ticked):


Group Policy Not Processing > Unhealthy DFS?

This is a scenario I ran into, so thought it was worth noting the steps.

I’d pushed out an environment variable in Group Policy Preferences, but a particular person hadn’t received it. I confirmed this by running the ‘set’ command on their PC – the environment variable wasn’t listed. (A shortcut: running ‘set x’, where x is the first letter of a variable, shows only the variables starting with that letter. ‘set u’ is a quick way to see who’s logged on via the ‘USERNAME’ variable.)
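As an example of that shortcut, here’s what ‘set u’ looks like on a hypothetical machine (the domain and username here are made up):

```text
C:\> set u
USERDOMAIN=CONTOSO
USERNAME=jsmith
USERPROFILE=C:\Users\jsmith
```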

After confirming they didn’t have this new variable, I tried to refresh Group Policy with the ‘gpupdate /force’ command. Alarm bells went off when I saw this result:

The processing of Group Policy failed. Windows attempted to read the file \\\SysVol\\Policies\{389D2400-A8FE-44CD-B7B7-3914920183F8}\gpt.ini from a domain controller and was not successful. Group Policy settings may not be applied until this event is resolved. This issue may be transient and could be caused by one or more of the following:
a) Name Resolution/Network Connectivity to the current domain controller.
b) File Replication Service Latency (a file created on another domain controller
has not replicated to the current domain controller).
c) The Distributed File System (DFS) client has been disabled.

The important part of that was that it was unable to read a gpt.ini file. Still in the user’s context, I followed the path specified – the folder under Policies didn’t exist! SysVol is normally a DFS share, so I tested from my own PC and the path existed. What was different between me and them? Lots, probably – but notably, I was at a different site, connecting to a different DFS server.

Going to the properties of any folder in that DFS path, you can change the server you’re pointing to:


This way you can toggle back and forth. From this, I confirmed that one DFS server was missing the folder in question, along with a lot of others.

From that, I RDP’d to the server and had a look in Event Viewer > Applications and Services > DFS Replication to see if there were any errors or warnings. There were a few warnings around losing connectivity, so I decided to restart the DFS Replication service to see if it just needed a kick:


After restarting, it was back to Event Viewer to see if it was happy or not. It was not.

Event 2231 DFSR:

The DFS Replication service stopped replication on volume C:. This occurs when a DFSR JET database is not shut down cleanly and Auto Recovery is disabled. To resolve this issue, back up the files in the affected replicated folders, and then use the ResumeReplication WMI method to resume replication.

Additional Information:
Volume: C:
GUID: 992BDBB2-4593-11E3-93E8-806E5F6E6963

Recovery Steps
1. Back up the files in all replicated folders on the volume. Failure to do so may result in data loss due to unexpected conflict resolution during the recovery of the replicated folders.
2. To resume the replication for this volume, use the WMI method ResumeReplication of the DfsrVolumeConfig class. For example, from an elevated command prompt, type the following command:
wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig where volumeGuid="992BDBB2-4593-11E3-93E8-806E5F6E6963" call ResumeReplication

For more information, see

That was nice – it gave me the exact command to run to fix it, which I did. This revealed the next problem in Event Viewer:

Event 4012 DFSR:

The DFS Replication service stopped replication on the folder with the following local path: C:\Windows\SYSVOL_DFSR\domain. This server has been disconnected from other partners for 73 days, which is longer than the time allowed by the MaxOfflineTimeInDays parameter (60). DFS Replication considers the data in this folder to be stale, and this server will not replicate the folder until this error is corrected.

To resume replication of this folder, use the DFS Management snap-in to remove this server from the replication group, and then add it back to the group. This causes the server to perform an initial synchronization task, which replaces the stale data with fresh data from other members of the replication group.

Additional Information:
Error: 9061 (The replicated folder has been offline for too long.)
Replicated Folder Name: SYSVOL Share
Replicated Folder ID: 0CD8DE8C-6293-4640-8911-67FCEBE60CD1
Replication Group Name: Domain System Volume
Replication Group ID: F84F2F63-3623-4911-B7B7-FBBD8968DBFE
Member ID: A45C340E-F890-4FD9-9FE5-9E38DB4EB590

Yikes – older than 60 days and nobody had even noticed. Removing and re-adding a SYSVOL share can get tricky, so it’s worth changing the MaxOfflineTimeInDays value instead. I set it to 300 with this command:

wmic.exe /namespace:\\root\microsoftdfs path DfsrMachineConfig set MaxOfflineTimeInDays=300

After that, restarting the DFS Replication service and running the previous command from Event Viewer did the trick. It started syncing up and, looking in the Policies folder, I could see more folders turn up, including the original missing one from the gpupdate error.

After waiting a few minutes for this to finish, I changed the MaxOfflineTimeInDays value back to the default of 60.

Going back to the original user, running ‘gpupdate /force’ worked without any errors, and after a reboot, the missing environment variable being pushed by Group Policy Preferences had deployed.

Now on my ‘things to do’ list, is to work out DFS replication monitoring to resolve this in a lot less than 60 days! 🙂
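As a starting point for that monitoring, the same WMI namespace used above can report the state of each replicated folder. A rough sketch – in the documented DfsrReplicatedFolderInfo class, state 4 means ‘Normal’ (0 Uninitialized, 1 Initialized, 2 Initial Sync, 3 Auto Recovery, 5 In Error), so anything other than 4 is worth a look:

```powershell
# List each DFSR replicated folder and its state on this server (4 = Normal)
wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo get replicatedfoldername,state
```

Running this periodically against each DFS server (or wrapping it in a scheduled check that alerts on non-4 states) would have caught this long before the 60-day limit.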

Null and Not Null with PowerShell

Finding out if an object has a null (i.e. blank) value or not isn’t a difficult task to do.

Consider this scenario – you’ve found a bunch of old disabled accounts where someone forgot to clear the ‘Manager’ field. Finding accounts that have another field that would be populated for a current employee but blank for a departed one would be a reasonable way of finding the problem accounts; then you could null the ‘Manager’ field. (Note – you could just refine your search to disabled accounts, but that’s not as fun.)

To find all Active Directory users that have a blank ‘Department’ field is easily done with this command:

get-aduser -filter * -properties department | where department -eq $null

Then, showing the users that don’t have a blank ‘Department’ field is a slight change. You can’t use !$null (! = not), but you can use -ne (not equals):

get-aduser -filter * -properties department | where department -ne $null

You can also check for users that have a manager by switching ‘department’ to ‘manager’:

get-aduser -filter * -properties manager | where manager -ne $null

Easy. Combining the two conditions so we get users that have a manager but no department means adding a few extra characters to make PowerShell happy:

get-aduser -filter * -properties department,manager | where {($_.department -eq $null) -and ($_.manager -ne $null)}

The results can be a bit hard to read, so piping (|) to a select command will just show us the results of each user we want to see:

get-aduser -filter * -properties department,manager | where {($_.department -eq $null) -and ($_.manager -ne $null)} | select name

Finally, to blank the ‘manager’ field, we can swap the ‘select name’ command with this:

get-aduser -filter * -properties department,manager | where {($_.department -eq $null) -and ($_.manager -ne $null)} |  set-aduser -manager $null

You can then go back to a previous command to confirm you get no results. As always, check your data first before blanking out a bunch of users’ values!


As @mickesunkan pointed out, the above isn’t the most efficient way to do these searches. I’m sure I’ve mentioned this before, but I’m not always going to write the cleanest, quickest way of doing something. For a one-off task this really doesn’t matter. For a daily task it starts to matter – not really by itself, but if you keep making more and more inefficient scripts, you’re putting extra unnecessary load on your environment with lots of LDAP lookups.

Above, I’m just getting ALL AD users. You could use a better filter and narrow down to a certain OU. You could also put part of your ‘where’ command into the filter, such as this:

get-aduser -properties manager,department -filter {department -notlike "*"}

This doesn’t work for the ‘Manager’ field though, you’ll see this error:

get-aduser : Operator(s): The following: 'Eq', 'Ne' are the only operator(s) supported for searching on extended attribute: 'Manager'.

I couldn’t work out a way of putting the $null value as part of the filter, but if you do – please share 🙂
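One approach that may do the trick is the -LDAPFilter parameter instead of -Filter. In LDAP filter syntax, (!(manager=*)) matches objects where the attribute isn’t set at all, which pushes the ‘blank field’ test to the domain controller rather than filtering client-side. A sketch for the scenario above (manager set, department blank) – untested against that exact environment:

```powershell
# LDAP filter: manager attribute present AND department attribute absent,
# evaluated server-side by the domain controller
Get-ADUser -LDAPFilter "(&(manager=*)(!(department=*)))" -Properties manager |
    Select-Object name
```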


@mickesunkan also wrote this GitHub code showing a few different ways to do this search, and which is most efficient. Thanks Micke!




Group Policy Preferences – Replace Existing File

I’ve written before on how great Group Policy Preferences are, and thought I’d write a quick ‘how to’ on a likely common scenario – replacing an older file with a new one, but only if it already exists.

Pushing out a file via Group Policy Preferences is quite easy and has been around for a long time.

When creating a new file rule, you’ll see 4 options under ‘Action’ – Create, Replace, Update and Delete:


Create will only copy the file from the source to the destination if the file doesn’t exist at the destination
Replace will actually remove a file (if one exists), and copy the source to the destination regardless if a file existed or not
Update is the misleading one – it will modify the file attributes of the destination file to match the source; if the files themselves are different, it won’t copy them. If the file doesn’t exist, though, it will copy the file to the destination!
Delete will delete the file(s) specified.

None of these provide a solution to ‘replace the file only if it exists’, though. There are two obvious ways you might try. You can use ‘Replace’, but this will replace the file every time Group Policy runs, which in the user context is every 90 minutes. You also can’t use the option ‘Apply once and do not reapply’, because it will run regardless of whether the file exists – which means if the file isn’t there before Group Policy runs, it may later be created by a software install or other mechanism, and with the order out of whack, the wrong file ends up being left there.

The next logical way to make sure the order is correct is to use Item Level Targeting. Under the ‘Common’ tab, you can tick the box for ‘Item Level Targeting’ and point to the file in question:


This will only run once though, regardless of whether the ‘Item Level Targeting’ evaluates to true or false. Targeting only controls whether the policy does what it’s configured to do – from the client’s point of view, the policy still ‘ran’, it just had nothing to do.

thommck had the best answer on how to get around this that I’ve found – use a custom WMI query. You’ll need to remove the ‘Apply once and do not reapply’ tick, but the file itself will only be copied over when both targeting rules are true. Please read his post for all the details, but the second item will need to be a WMI query, and have a string similar to this:

SELECT LastModified FROM CIM_DataFile WHERE Name="c:\\windows\\regedit.exe" AND LastModified < '20160701000000.000000+060'

This is checking the date of the file, and will only be ‘true’ if it’s less than that date.
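Before putting that into the GPP targeting item, the query can be sanity-checked locally. A PowerShell sketch – note the doubled backslashes, which WQL requires in file paths:

```powershell
# Returns the file's LastModified only if it's older than 1 July 2016 –
# the same condition the Item Level Targeting WMI query evaluates
Get-WmiObject -Query ("SELECT LastModified FROM CIM_DataFile " +
    "WHERE Name='c:\\windows\\regedit.exe' " +
    "AND LastModified < '20160701000000.000000+060'")
```

If the query returns a result on a test machine, the targeting item should evaluate to true there too.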

Keep in mind that this is less than ideal, as WMI queries aren’t the most efficient way of processing group policy preferences, but it may be better than copying files around your network to every PC, every 90 minutes.

Azure AD Connect Health with AD DS

Azure AD Connect Health with AD DS is now in preview!

You’ll need Azure AD Premium for this, but it’s a little agent that gets installed on each of your domain controllers and provides health and alerting via Azure AD Connect Health.

The service is a light health and monitoring solution which reports back on some basics such as these:

(Image: monitoring basics reported by Azure AD Connect Health)

It will also show any replication issues and other DC-related problems for you to remediate. You can configure email alerts too, so you know when a problem is detected, rather than relying on checking the health page to notice something.

The setup of Azure AD Connect Health with AD DS is incredibly easy – download and install the agent (check you meet the prerequisites first!), use the credentials of an Azure AD global administrator (set up a service account for this), and you’re done. If you install it on a server that doesn’t have the required Windows Server roles, you’ll get an error such as ‘Microsoft.Identity.Health.Common.RoleNotFoundException: No role was registered.’

The two other Health services currently available are for ADFS and Azure AD Connect, so check those out too if you haven’t already.

One issue I had after installing was that I couldn’t see the box for Active Directory Domain Services in the Azure portal, it was just blank:

(Image: blank Active Directory Domain Services box in the Azure portal)

After trying to work out why for a while, @kengoodwin pointed out that I should try resetting the view. This is done by clicking one of the ‘Add tiles’ options, then at the top of the screen choosing the ‘Restore default’ option.

Doing this resulted in my tiles showing as they should. I’d never made adjustments to my tiles, but I had previously gone into edit mode and saved with zero changes, which I believe stopped the portal from adding the new tiles once the new health service was detected. This is how it should look:

(Image: Active Directory Domain Services tile displaying correctly)

Much better!

If you have Azure AD premium, then check out this free extra!