Updating Active Directory from a CSV

Scenario:

You've been asked to populate everyone's Active Directory job title. The payroll system is correct, and the payroll team can export a list of usernames and correct job titles for you. All you need to do is get that data into AD.

Solution:

You could do this manually of course, but that's no fun and a waste of time. This is one of those scenarios where you'll hopefully think 'PowerShell can do this!' and possibly wonder how. That's what I did anyway, so I set out to make it work.

Here’s a fake example of the data I was working with, in a file called fake.csv:

EMPLOYEENAME,JOBTITLE
AFOWLER,IT OPERATIONS MANAGER
RSOLE,JANITOR

Tip: If you open a csv file in Excel it is a bit easier to read.

From this data, we want to match the EMPLOYEENAME to the correct AD account, then update the Job Title field from the JOBTITLE entry of the csv file.

A script that will do this is:

Import-Module ActiveDirectory
$data = Import-Csv -Path C:\fake.csv
foreach ($user in $data) {
    Get-ADUser -Filter "SamAccountName -eq '$($user.EMPLOYEENAME)'" | Set-ADUser -Replace @{title = "$($user.JOBTITLE)"}
}

So, what's happening here? It can take a bit to get your head around, especially if you're not used to programming (like me), so I'll try to explain it:

Import-Module ActiveDirectory
This imports the ActiveDirectory module so the Get-ADUser and Set-ADUser commands work. If you can't load the module, install RSAT (Remote Server Administration Tools), which includes the AD module.

$data = Import-Csv -Path C:\fake.csv
This loads the entire contents of the fake.csv file into memory as the $data variable, with one object per row of the file.
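
If you want to see what that data looks like before doing anything with it, you can inspect a row and its columns straight from the console:

$data = Import-Csv -Path C:\fake.csv
$data[0]                 # shows the first row as an object
$data[0].EMPLOYEENAME    # returns AFOWLER
$data[0].JOBTITLE        # returns IT OPERATIONS MANAGER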

foreach ($user in $data) {
This says 'for each line of information from the $data variable (which is the csv file), map that line to $user and do the following:'

Get-ADUser -Filter "SamAccountName -eq '$($user.EMPLOYEENAME)'" | Set-ADUser -Replace @{title = "$($user.JOBTITLE)"}
This gets any AD user whose SamAccountName matches the EMPLOYEENAME column of the $user variable (which is the current line of the csv at the time of processing). Then, with the pipe |, it uses the result to set that AD user's title field (where the job title goes) to the JOBTITLE column of our $user variable. This command will run twice, because there are two lines for 'foreach' to process.

}
This closes the script block that the 'foreach' loop runs for each line of the csv.
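
If you'd like to test the script before making any changes, here's a variation on the same idea (a rough sketch using the same fake.csv columns) that adds -WhatIf so Set-ADUser only reports what it would do, and warns about any usernames it can't find in AD:

Import-Module ActiveDirectory
$data = Import-Csv -Path C:\fake.csv

foreach ($user in $data) {
    # Look the account up first so we can report usernames that don't match
    $adUser = Get-ADUser -Filter "SamAccountName -eq '$($user.EMPLOYEENAME)'"

    if ($adUser) {
        # -WhatIf shows the change without applying it; remove it to commit
        $adUser | Set-ADUser -Replace @{title = $user.JOBTITLE} -WhatIf
    }
    else {
        Write-Warning "No AD account found for $($user.EMPLOYEENAME)"
    }
}

Remove -WhatIf once you're happy with the output and run it again to apply the titles.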

 

I hope that explains it enough so you’re able to manipulate the script to your own requirements.

 

KMS and MAK Licensing

Microsoft licensing is one of the things that puts fear into most people who have dipped their toes into it. Understanding how to be compliant with Microsoft – and how to make sure the company’s money is being spent properly – can be daunting. From an admin’s side, this is often not a concern as it’s not a part of their job – but at least understanding the implications of installing 10 different SQL servers in the environment is a necessity. One of the more fundamental models Microsoft now uses with Windows Server, Windows Client (e.g. Windows 7) and Office is using a key to register the products. So, how do you do it if you’re on a Volume Licensing Agreement?

What are KMS and MAKs?

With a Volume License Agreement with Microsoft, you are normally given two types of keys: a Multiple Activation Key (MAK) and a Key Management Services (KMS) key. The MAK will normally have an activation count, while the KMS key does not. Simply put, a MAK registers directly back to Microsoft and allows a certain number of activations. KMS, on the other hand, lets all your clients use a generic key to talk to a KMS server on premises, and that centralised server talks back to Microsoft. The MAK side of things is pretty straightforward: you put the key in on each client, it phones home and then either activates or fails. This works, but isn't the way you should do things in a large environment. KMS gives you that automation.

KMS sounds great, how do I set that up?

I've written about this before in How To Enable Office 2013 KMS Host and How to add your KMS keys for Windows 8 and Server 2012, so I'll just clarify how the process works. There are two types of KMS keys: client and server. The client key is normally the default key installed when using a volume license version of software, and these keys are publicly available; here's a list of them on TechNet. The KMS server needs to be configured as per my "How to" articles. With a KMS client key registered, a client goes through a different activation process: instead of phoning home to Microsoft, it checks DNS records to find your KMS server. The KMS server on premises receives the request, approves it and activates the client. TechNet as per usual has great documentation covering all of this, available here.

I’ve set it up and it’s not working – help!

OK, don’t get too worried here. First, you need to have enough clients trying to register on your KMS server before it will activate any of them. For Windows Client operating systems, you’ll need 25. Yes, that’s a lot. For Windows Servers and Microsoft Office, you’ll only need 5.

As an example, let's say you've installed Microsoft Visio 2013 and it's not registered (if you're unsure whether it's registered, in Visio go to File > Account; the right hand side will tell you if the product is activated or not). Start by checking that you have the correct KMS key entered – you can re-enter it in the product.

You can force the activation of Office 2013 by going to the folder where it’s installed (by default it’s C:\Program Files (x86)\Microsoft Office\Office15) and running the command:

cscript OSPP.VBS /act

This will either tell you that you’ve successfully activated your product, or give an error. One of the most common errors is:

ERROR CODE: 0xC004F038
ERROR DESCRIPTION: The Software Licensing Service reported that the product could not be activated. The count reported by your Key Management Service (KMS) is insufficient. Please contact your system administrator.

Nice description. So, next up you'll need to check the count reported on the KMS server. Forgotten which server your KMS server is? You might be able to find out by using the command:

slmgr /dlv

which will give you an informative window telling you the KMS machine name from DNS. Or you can check your DNS for the entry from the DNS console under DNS Server > Forward Lookup Zones > internal domain name > _tcp > _VLMCS record.
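
If you'd rather check from a command line than the DNS console, you can query that same SRV record directly. A quick sketch (the domain name below is just a placeholder for your internal domain, and Resolve-DnsName needs Windows 8/Server 2012 or later):

# Query DNS for the KMS (_vlmcs) SRV record
Resolve-DnsName -Name "_vlmcs._tcp.contoso.internal" -Type SRV

# Or with the older nslookup tool
nslookup -type=srv _vlmcs._tcp.contoso.internal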

On your KMS server, you can display the client count so far, to see if it’s hit the magic 5 with this command:

cscript slmgr.vbs /dlv 2E28138A-847F-42BC-9752-61B03FFF33CD

The string on the end is the Office 2013 Activation ID. For Office 2010 it's "bfe7a195-4f8f-4f0b-a622-cf13c7d16864". You'll see a lot of information, but the important part is this:

Key Management Service is enabled on this machine
Current count: 5
Listening on Port: 1688
DNS publishing enabled
KMS priority: Normal

In this example, I’ve just hit the 5 so clients will now activate. If it was less than 5, I’d still be getting the previous error on the clients I had.

If you want a general view of what licenses you have on the KMS server, you can run the 'Volume Activation Management Tool', which is available as part of the Windows Assessment and Deployment Kit (ADK). Install instructions from TechNet are available here. This tool will visually show you what products you have licensed, but won't go into great detail. It's good to have just as an overview.

There are other scenarios and issues that can happen with KMS activation. On the KMS server, you can check the KMS event logs in Event Viewer under Applications and Services Logs > Key Management Service. On the client, there is a huge number of switches you can use with the slmgr.vbs script – run it without any switches to see them all.
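
You can also pull the same licensing information on a client with PowerShell, which is handy when checking several machines. A rough sketch using the SoftwareLicensingProduct and SoftwareLicensingService WMI classes (treat the property names and status codes as a starting point and double-check them on your own systems):

# List installed products that have a product key, with their license status
# LicenseStatus (commonly documented values): 0 = Unlicensed, 1 = Licensed, 2 = OOB grace, 3 = OOT grace, 5 = Notification
Get-CimInstance -ClassName SoftwareLicensingProduct |
    Where-Object { $_.PartialProductKey } |
    Select-Object Name, Description, LicenseStatus

# Show which KMS host the client has configured or discovered via DNS (if any)
Get-CimInstance -ClassName SoftwareLicensingService |
    Select-Object KeyManagementServiceMachine, DiscoveredKeyManagementServiceMachineName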

Samsung Gear 2 Neo Review

The Samsung Galaxy S5 and Samsung Gear 2 (plus the Neo and Fit versions) were released in Q2 this year, with high expectations. The Galaxy series of phones is one of the best-selling in the world, and a product update to the Gear smartwatch had many consumers eagerly awaiting the release. The Samsung Galaxy S5 is a decent update to the Galaxy line (I'll echo the phrase "evolution not revolution"), but the Gear 2 I believe still has a long way to go.

I've been playing with the combination of the Samsung Galaxy S5 mobile phone and Samsung Gear 2 Neo for the last few weeks. I really wanted to like these complementary devices; I had been waiting weeks for them to arrive. Disappointingly, I'm not convinced about the usefulness of smartwatches, and I'll explain why.

First, to clarify for those wondering: there are three versions of the Samsung Gear 2. The Neo version is similar to the vanilla Gear 2, but is missing the camera. To avoid confusion I'd suggest they call the Neo the Samsung Gear 2, and the Gear 2 the Gear 2 Cam… but I'm not in marketing, so maybe that didn't test well with focus groups. The third version is the Samsung Gear 2 Fit, which is a longer and skinnier version, missing the camera and IR sensor.

Feature      Samsung Gear 2      Samsung Gear 2 Neo   Samsung Gear 2 Fit
Camera       Yes                 No                   No
Screen       1.63-inch square    1.63-inch square     1.84-inch narrow
IR Sensor    Yes                 Yes                  No

To start with, the Neo has a strange clipping mechanism on the strap. It just pushes in, and actually works quite well, but it took me a moment to work out because it requires enough force that I was worried I was about to break something.

Once on my wrist, I found it to be very comfortable and sleek. It feels reasonably natural to wear, and the wrist strap doesn’t dig in. I was feeling good about this watch… until I turned it on.

There was a bit more of a process to get the watch up and running than I expected. On the Samsung Galaxy S5, I had to go into the Samsung Store (not the Google Play Store) to find the Gear Manager app. With all the apps Samsung pre-installs, it was a bit annoying not to find this one already there. I think this is because Samsung want to get you into their own app store to buy all the extra applications and watch faces, or I could just be a bit cynical.

Being a watch, I started by wanting to find the nicest watch face possible. The default was a rather brightly coloured face, so I changed it to a much more sensible inbuilt time/date/applications display, with a very unexciting black background:

[Photo: the Gear 2 Neo showing the plain black time/date watch face]

I thought this looked rather smart. I played around with the apps for a bit, checked the weather and came to the realisation that battery life was still a big issue, which meant this smartwatch was a backwards step in telling time.

There’s the obvious annoyance of having to charge your watch every few days, rather than changing the batteries every few years (or never if you’ve got a fancy kinetic watch). Putting that aside, the usability of a smartwatch that’s trying really hard to preserve battery life is frustratingly annoying.

The watch display on the Gear 2 Neo is off by default. Completely black. When you make a motion with your arm to look at the time, it usually detects the movement and turns on the display for you. That’s great, but it takes half a second. If you have a normal watch, you’re used to a half second glance and you’re on your way. With this watch, you’re waiting that half a second that feels a lot longer for the display to light up. Sometimes it doesn’t even work, and you’ll have to press the button below the display with your other hand. You might as well have pulled your phone out of your pocket at this stage.

Your standard watch doesn't have a pedometer. This smartwatch does, but you have to turn it on and off; it doesn't just continually keep track. On the flip side, at night it will continually buzz or beep when anything happens on your phone, such as an email or notification. This can be turned off by enabling sleep mode, but again this seemed to be a manual function. I had a brief look and couldn't find a way to automate this (such as setting the times 10pm to 6am for sleep mode), which was another frustration.

To me, this is a watch that is the start of a good idea. Battery life needs to be improved vastly, and so does the flexibility around how you choose to use the watch. I ended up concluding that this didn't do as good a job of being a watch as my analog watch, and until that's fixed, the Samsung Gear 2's smart functions are icing on a stale cake.

Originally published at WeBreakTech

Coping with Infinite Email

Automatic Deletion of Deleted Items with Retention Policies

Exchange 2010 and 2013 have an option called "Retention Policies". I'll base the below on what I see in Exchange 2010, but most if not all of it should apply to 2013 also.

Retention Policies are useful if you need to keep your users' mailboxes clean, as well as avoiding a Deleted Items folder holding every single email an employee has received in their time with the company. You can agree with the company on what can and can't be automatically deleted, and save a lot of money on space for both live data and backups.

Retention Policies are made up of "Retention Policy Tags", and these tags "control the lifespan of messages in the mailbox", to quote one of the wizards you use to configure this. The Retention Policy is then targeted at the mailboxes you want to apply these settings to.

[Image: Gandalf] Maybe not this wizard.

It’s worth noting that a mailbox can only have one Retention Policy linked to it, so you need to plan overlapping settings accordingly.

So, what can a Retention Policy Tag do? You give it a 'Tag Type', which is either a specific inbuilt folder in someone's mailbox (e.g. Deleted Items) or every other folder that isn't an inbuilt folder. From that definition of which folders the tag covers, you can either set an age limit for all items in those folders, or set the items to never age.

[Screenshot: Retention Policy Tag settings for Deleted Items]

The age limit is a number of days. This number actually means something different depending on which Tag Type was targeted. For an email in the Deleted Items folder, it's based on the date the item was deleted, which is stamped on the item at deletion time. There are some caveats around that, so refer to this chart on TechNet which lays out how the retention age is calculated.

There's also a Default Archive and Retention Policy (called the Default MRM Policy in Exchange 2013) that is applied to all mailboxes that have no other policy applied (remember, a mailbox can only have one). So if you have simple requirements, use this policy. For more complex requirements, you'll need multiple policies and either manual management of mailboxes to apply the right policy, or a script that's run at regular intervals.
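
As a rough sketch of what creating and applying a policy looks like in the Exchange Management Shell (the tag and policy names below are made up for illustration):

# Create a tag that deletes items in Deleted Items after 60 days, but allows recovery
New-RetentionPolicyTag "Deleted Items - 60 Day Delete" -Type DeletedItems -AgeLimitForRetention 60 -RetentionAction DeleteAndAllowRecovery

# Create a policy that links to the tag
New-RetentionPolicy "Standard User Policy" -RetentionPolicyTagLinks "Deleted Items - 60 Day Delete"

# Apply the policy to a mailbox
Set-Mailbox -Identity "guyinaccounts" -RetentionPolicy "Standard User Policy"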

Once you're set up, the policies are enforced by the Managed Folder Assistant. This runs on the Exchange server and is controlled by the Microsoft Exchange Mailbox Assistants service. It used to be schedule based (Exchange 2010 pre-SP1), but from SP1 onward and in Exchange 2013 it's an always-running, throttled process. It'll do the work when it's the 'right time' based on several criteria and checks. If you want to know the specifics, read this from TechNet.

To check that the policy has applied, you can go to the properties of the relevant folder in the mailbox in question (for me it's Deleted Items) and you'll see the policy listed:

[Screenshot: Deleted Items folder properties showing the applied policy]

You can also look at the individual emails to see both the retention policy applied, and when the email will expire. This is what I see from Outlook 2010:

[Screenshot: an email in Outlook 2010 showing the retention policy and expiry date]

If you want to process a particular mailbox right now because you’ve just configured something, you can use the PowerShell command:

Start-ManagedFolderAssistant -Identity "guyinaccounts"

The Event Viewer on your Exchange server should tell you how it went, but from some of the information I've read, a Retention Policy that's only just been targeted at a mailbox can take up to 48 hours to be recognised and processed. For me it took more than a few hours before I could see the policies on my emails. If you want to do more than a single mailbox, you'll need to pipe it (again, more details here on TechNet), as shown below.
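
A minimal sketch of that, which runs the assistant against every mailbox in the organisation (so be mindful of when you run it):

Get-Mailbox -ResultSize Unlimited | ForEach-Object { Start-ManagedFolderAssistant -Identity $_.Identity }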

One last point: Exchange only starts tagging emails once you first create and apply a policy. For my example, I set a 60 day Delete and Allow Recovery tag on Deleted Items. This caused all existing deleted items, going back a few years, to be marked for deletion 60 days from when I applied the policy. It won't go back and instantly delete your older items.

Originally posted at WeBreakTech

Who Will Be There For The Long Run?

You may have noticed that the theme on my blog has changed. The theme I was using was a light version of a pro product, which I didn’t buy. I was looking at changing some small settings and discovered that the creator of the theme had stopped supporting it a few months ago.

Knowing that I’d probably have issues in the future, I decided to find a different theme. It had to work with the content I already had and look pleasing enough to me. I also didn’t want a v1.0 theme, because that gives me no assurance that the creator has any interest in updating it when future WordPress versions are released.

I realised that this same methodology is how I approach most pieces of software. Ideally it needs to have been around for a little while to prove the creators can deliver and keep their product updated. It needs to have good support, either from the community or the creators. It needs to integrate well with existing systems, but also not cause you to be locked in to the product itself.

After working in I.T. for a while, I've found this is instinctively how I think. A big factor is learning from when things go wrong during implementations, upgrades or changeovers, and considering what decisions should have been made early on to prevent it.

This in itself causes issues, because how can a software solution get customers if everyone wants something that's already proven? Companies will often take risks if option B is substantially cheaper than option A, or if the vendor has proven themselves with other solutions… but generally it's safer to go with the proven solution.

Maybe this methodology is changing with the rapid release cycle we’re now seeing globally. It’ll probably cause more issues due to less testing time and more updates, which instinctively is the opposite of what we’ve all learnt to do in IT. This applies to the cloud too – you’re putting your faith in a 3rd party, but you have no visibility or control over changes. Without that visibility, how do you know everything of yours will work after the fact? Or will you be left trying to find another cloud vendor that works with your existing setup?

So yes, I have a new theme. It works, and it's free. It's newer than v1.0, so at least there's some evidence that it will be maintained, but the creators may stop that at any time. I'm not giving them any money so I can't complain, but it's still the fundamental basis of my decision process. Luckily it's quite easy to change themes because of their well designed, plug and play style. This is what I expect from any software vendor (but rarely get), and anything less increases the risk of pain – it may not be now, but chances are it will come.

ioSafe 214 NAS Review

The ioSafe 214 NAS was provided to me by ioSafe to check out. I’ve looked at a few NAS units before, but generally low end devices. This unit is far from low end, having both advanced management capabilities and superb physical protection.

[Photo: the ioSafe 214 NAS]

"Superb" is a big call, but this NAS is fireproof and waterproof. Trevor Pott and Josh Folland tested the fire side of this here (The Register); it's rated at 1550ºF for half an hour, and the water side is rated for 72 hours at a 10 foot depth. There are a bunch of videos on YouTube too if you want to check those out. I chose not to test these specifications myself, as I really liked the unit.

Full specifications are available here from ioSafe's website, but here's a quick rundown. The NAS is dual bay, and will officially take up to two 4TB SATA drives. There are three USB interfaces (a single USB 2.0 port on the front, and two USB 3.0 ports on the back), with the back also containing a single gigabit ethernet port and a power port. The only other item of interest is the copy button on the front, which I'll go into later.

The ioSafe 214 is ‘powered by Synology DSM’ which I think just means it has a Synology 214 inside it… which I was very impressed by. I’d pictured the web interface of the NAS as some unexciting poorly designed experience, but this was similar to using a desktop with shortcuts and programs.

Here’s the ‘desktop’ which you’ll see after logging onto the NAS via HTTP:

[Screenshot: the DSM 'desktop' shown after logging in]

I’m still impressed now after using this for a few weeks. The left hand side contains these highlights:

File Station – This lets you create and manage shares and the files/folders within

Control Panel – This opens the control panel as per the screenshot above. There's a huge number of options here, including setting up LDAP/Active Directory connectivity, user management, device updates, indexing the media located on the drives and so on.

Package Center – This is the Synology app store. You might think this isn't exciting, but for starters everything is free. There are tools like Antivirus and DNS Server, but also Asterisk (want to run your phone system off this?), Mail Server, MediaWiki, RADIUS Server, Tomcat, VPN Server, WordPress and so on. This turns a basic NAS into a server with a multitude of abilities.

One extra application of note is the ‘Download Station’. This will download from a bunch of different protocols: BitTorrent, FTP, HTTP, Newsgroups, eMule (is that still used?) and a few others I haven’t even heard of before. I’m sure a lot of people leave a box on just for downloads, so this would eliminate the need for that.

On the right hand side are 'Widgets' – yep, just like the ones from Windows Vista and 7 that were killed off due to vulnerabilities. That doesn't apply here, anyway: these are configurable, but I decided to show the connected users, storage use, system health and finally the resource monitor that displays CPU/RAM/LAN usage.

There are also a few other important areas a few clicks away, with the most important being 'Storage Manager':

[Screenshot: Storage Manager]

This is where you can create iSCSI LUNs and manage the physical hard drives inside the ioSafe. Creating a LUN was really easy, and they have the ability to thin provision. This means you can over-subscribe the storage – for example, you might have 2TB free like I do above, but you could create a LUN with 2TB of space, and another with 1TB. It only uses the space you actually write to, so you avoid having to guess and lock yourself in to certain LUN sizes early on. The only risk is that if you run out of disk space you'll start to get issues, and you wouldn't realise it just by looking at the LUN from a remote PC.

Personally I created a LUN that took up the whole 2TB available (1.79TB of real space) and then created another small 1GB LUN which I used as a Quorum for clustering.

Also as a quick speed test, I copied the Windows Server 2012 R2 ISO (which weighs in at 3.97GB) from a local machine to the NAS via iSCSI, and it copied over in 33 seconds, averaging 115MB/s.

Copying the file back to the local host was much slower, which would be an indication of the single local HDD spindle; it came in at 45 seconds, averaging around 80MB/s.

The final area worth mentioning is Backup & Replication:

[Screenshot: Backup & Replication]

Again, there are a lot of options here. This takes away from relying on a remote device such as a PC to do backups, allowing the NAS to look after itself. You can back up contents from one area on the NAS to another, or plug in an external disk via the USB 3.0 ports and take it away for offsite backup requirements. There's even Amazon S3 as a backup target – not something I'd use for large amounts of data, but it's a nice addition.

So what is the end result of all this? It's a NAS that is easy to set up and maintain thanks to Synology, wrapped up in great armour from ioSafe without ridiculous pricing. This unit is ideal for a home user or small business that needs 4TB or less of data highly protected – and for an extra few hundred dollars versus a non-'armoured' NAS, it's an easy decision.

Note: If you want the same features but need more drives, ioSafe also has the ioSafe 1513+, which has five HDD bays instead of two.

TechEd North America – Done for 2014

Originally posted at WeBreakTech

TechEd North America 2014 is now over. You can read about the first two days of my experience here. Unsurprisingly, the second half wasn't too different from the first, and there wasn't a huge amount of excitement in the air.

Wednesday morning started off slowly. There were a LOT of vendor parties on the Tuesday night beforehand, so maybe it was a difficult morning for many attendees. There wasn't much to do once breakfast was over: there were breakout sessions to attend (where you go into a room and listen to a presentation – one of the biggest parts of TechEd), but the Expo hall (where all the vendor booths are) didn't open until 11am.

I found it difficult to push myself to attend the breakout sessions because they were all available the next day for free via the Channel 9 service. It's a great idea from Microsoft, but many attendees I spoke to shared my lack of enthusiasm for going in person, saying they could watch the sessions online later.

There were some highlights among the sessions though. Anything with Mark Russinovich (creator of Sysinternals) was highly talked about, and I attended "Case of the Unexplained: Troubleshooting with Mark Russinovich", which was really interesting to watch.

For lunch, I caught up with Nutanix to have a look at their offering. They treated me in style, giving me a Texas-style hat and using someone else's leg power to get me there and back:

[Photo: the ride to lunch, Texas-style hat included]

I learnt that Nutanix offer a well priced, server based solution that sits halfway between a single rackmount server and a full chassis/blade setup, and also uses shared storage between the nodes (i.e. the blade servers). I'll definitely be looking into that further, both from a writing point of view and as something to investigate for my place of work.

After that, I explored the Expo again, speaking to more vendors. Yes, there were a lot of goodies given away (generally called 'loot'), but again according to other attendees, there was a lot less than in previous years. I didn't really try, yet still came back with a suitcase full of novelties, which my work colleagues will hopefully go through and find some cool bits and pieces to make up for my absence.

Wednesday night came, and night time means more parties. I went to the Petri meet and greet where as the title suggests, I met and greeted another bunch of great people. After that the jet lag had gotten the better of me, so I went back to the hotel to order room service and pass out.

Thursday saw the final of Speaker Idol. It's a competition run by Microsoft in the American Idol format (apparently?) where people give 5 minute presentations until a winner is chosen, and that winner gets to present a full breakout session at next year's TechEd. Aidan Finn ended up winning (he wrote about the experience here) and was highly deserving of the achievement, though so were the other presenters I saw.

I had dinner with the friendly eNow mob who make reporting and monitoring tools for Exchange, Lync and others, as a +1 to someone who was actually invited.

The closing event was held at the local baseball stadium, Minute Maid Park.

[Photo: Minute Maid Park]

Not having been to an American stadium before, it was more of a novelty to me than to others. Jugglers, artists, many stadium-type food stalls and a mechanical bull surrounded the outskirts while attendees took tours of the pitch itself and listened to the bands that played. Here's the full list of everything that was available. Disappointingly I wasn't feeling 100% due to a cold, otherwise I would have sampled some of the nachos covered in American liquid cheese – something rarely seen in Australia.

Overall I'm really glad I went (I may not have been as positive on the very long plane ride home), as I met a bunch of great people – particularly Kyle Murley and Phoummala Schmitt, who both looked out for me, as well as Trevor Pott, who convinced me to go in the first place. I made lots of new contacts, and had the opportunity to say hi to tech greats like Mary Jo Foley.