Samsung Gear 2 Neo Review

The Samsung Galaxy S5 and Samsung Gear 2 (plus the Neo and Fit versions) were released in Q2 this year, with high expectations. The Galaxy series of phones is one of the best selling in the world, and a product update to the Gear smartwatch had many consumers eagerly awaiting the release. The Samsung Galaxy S5 is a decent update to the Galaxy line (I’ll echo the phrase “evolution not revolution”), but I believe the Gear 2 still has a long way to go.

I’ve been playing with the combination of the Samsung Galaxy S5 mobile phone and Samsung Gear 2 Neo for the last few weeks. I really wanted to like these complementary devices; I had been waiting weeks for their arrival. Disappointingly, I’m not convinced about the usefulness of smartwatches, and I’ll explain why.

First, to clarify for those wondering – there are three versions of the Samsung Gear 2. The Neo version is similar to the vanilla Gear 2, but is missing the camera. To avoid confusion, I’d suggest they call the Neo the Samsung Gear 2, and the Gear 2 the Gear 2 Cam… but I’m not in marketing, so maybe that didn’t test well with focus groups. The third version is the Samsung Gear Fit, which is a longer and skinnier version, missing both the camera and the IR sensor.

Feature      Samsung Gear 2     Samsung Gear 2 Neo   Samsung Gear Fit
Camera       Yes                No                   No
Screen       1.63-inch square   1.63-inch square     1.84-inch narrow
IR Sensor    Yes                Yes                  No

To start with, the Neo has a strange clipping mechanism on the strap. It just pushes in, and actually works quite well, but it took me a moment to work out because it requires enough force that I was worried I was about to break something.

Once on my wrist, I found it to be very comfortable and sleek. It feels reasonably natural to wear, and the wrist strap doesn’t dig in. I was feeling good about this watch… until I turned it on.

There was a bit more of a process to get the watch up and running than I expected. On the Samsung Galaxy S5, I had to go into the Samsung Store (not the Google Play Store) to find the Gear Manager app. With all the apps Samsung pre-installs, it was a bit annoying not to find this one already there. I think this is because Samsung want to get you into their own app store to buy all the extra applications and watch faces, or I could just be a bit cynical.

Being a watch, I started by wanting to find the nicest watch face possible. The default was a rather brightly coloured face, so I changed it to a much more sensible built-in time/date/applications display, with a very unexciting black background:



I thought this looked rather smart. I played around with the apps for a bit, checked the weather and came to the realisation that battery life was still a big issue, which meant this smartwatch was a backwards step in telling time.

There’s the obvious annoyance of having to charge your watch every few days, rather than changing the batteries every few years (or never if you’ve got a fancy kinetic watch). Putting that aside, the usability of a smartwatch that’s trying really hard to preserve battery life is frustratingly annoying.

The watch display on the Gear 2 Neo is off by default. Completely black. When you make a motion with your arm to look at the time, it usually detects the movement and turns on the display for you. That’s great, but it takes half a second. If you have a normal watch, you’re used to a half second glance and you’re on your way. With this watch, you’re waiting that half a second that feels a lot longer for the display to light up. Sometimes it doesn’t even work, and you’ll have to press the button below the display with your other hand. You might as well have pulled your phone out of your pocket at this stage.

Your standard watch doesn’t have a pedometer. This smartwatch does, but you have to turn it on and off. It doesn’t just continually keep track. On the flip side, at night it will continually buzz or beep when anything happens on your phone, such as an email or notification. This can be turned off by enabling sleep mode, but again this seemed to be a manual function. I had a brief look and couldn’t find a way to automate it (such as setting sleep mode to run from 10pm to 6am), which was another frustration.

To me, this is a watch that is the start of a good idea. Battery life needs to be improved vastly, and so does the flexibility around how you choose to use the watch. I ended up concluding that it didn’t do as good a job of being a watch as my analog watch, and until that’s fixed, the Samsung Gear 2’s smart functions are icing on a stale cake.

Originally published at WeBreakTech

Coping with Infinite Email

Automatic Deletion of Deleted Items with Retention Policies

Exchange 2010 and 2013 have an option called “Retention Policies”. I’ll base the below on what I see in Exchange 2010, but most if not all should apply to 2013 also.

Retention Policies are useful if you need to keep your users’ mailboxes clean, and to avoid Deleted Items folders containing every single email an employee has received in their time with the company. You can work out what the company agrees can and can’t be auto-deleted, and save a lot of money on space for both live data and backups.

Retention Policies are made up of “Retention Policy Tags”, and these tags “control the lifespan of messages in the mailbox”, as one of the configuration wizards puts it. The Retention Policy is then targeted at the mailboxes you want to apply these settings to.

Maybe not this wizard.

It’s worth noting that a mailbox can only have one Retention Policy linked to it, so you need to plan overlapping settings accordingly.

So, what can a Retention Policy Tag do? You give it a ‘Tag Type’, which is either a specific default folder in someone’s mailbox (e.g. Deleted Items) or every other folder that isn’t a default folder. From that definition of which folders the tag covers, you can either set an age limit for all items in those folders, or set the items to never age.

deleted items

The age limit is a number of days. This number actually means something different depending on which Tag Type was targeted. For an email in the Deleted Items folder, it’s based on the date the item was deleted, which is stamped on the item at the time of deletion. There are some caveats around that, so refer to this chart on TechNet which lays out how the Retention Age is calculated.
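As a sketch of what this looks like from the Exchange Management Shell (the tag and policy names here are hypothetical, and match the 60-day Delete and Allow Recovery example I use later):

```powershell
# Create a tag that deletes (but allows recovery of) anything in
# Deleted Items older than 60 days.
New-RetentionPolicyTag -Name "Deleted Items 60 Day Delete" `
    -Type DeletedItems `
    -AgeLimitForRetention 60 `
    -RetentionAction DeleteAndAllowRecovery

# Bundle the tag into a Retention Policy, which is the object
# you actually target at mailboxes.
New-RetentionPolicy -Name "Standard User Policy" `
    -RetentionPolicyTagLinks "Deleted Items 60 Day Delete"
```

Everything above can also be done through the wizards in the Exchange Management Console if you prefer clicking.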

There’s also a Default Archive and Retention Policy (called the Default MRM Policy in Exchange 2013) that is applied to all mailboxes that have no other policy applied (remember, a mailbox can only have one). So if you have simple requirements, use this policy. For more complex requirements, you’ll need multiple policies and either manual management of mailboxes to apply the right policy, or a script that’s run at regular intervals.
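A scheduled script for that could be as simple as the following sketch (the database and policy names are hypothetical):

```powershell
# Apply a Retention Policy to every mailbox in a particular database.
# Run regularly (e.g. as a scheduled task) to catch newly created mailboxes.
Get-Mailbox -Database "MBX-DB01" -ResultSize Unlimited |
    Set-Mailbox -RetentionPolicy "Standard User Policy"
```

You could equally filter on organisational unit, custom attributes or group membership, depending on how you decide which mailboxes get which policy.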

Once you’re set up, the policies are enforced by the Managed Folder Assistant. This runs on an Exchange server and is controlled by the Microsoft Exchange Mailbox Assistants service. It used to be schedule based (Exchange 2010 pre-SP1), but from SP1 onward, and in Exchange 2013, it’s an always-running, throttled process. It’ll process mailboxes when it’s the ‘right time’ based on several criteria and checks. If you want to know the specifics, read this from TechNet.

To check that the policy has applied, you can go to the properties of the folder of the mailbox in question (for me it’s Deleted Items) and you’ll see the policy listed:

deleted items 2

You can also look at the individual emails to see both the retention policy applied, and when the email will expire. This is what I see from Outlook 2010:

deleted items 3

If you want to process a particular mailbox right now because you’ve just configured something, you can use the PowerShell command:

Start-ManagedFolderAssistant -Identity "guyinaccounts"

If you want to do more than a single mailbox, you’ll need to pipe it. Again, more details here on TechNet. The Event Viewer on your Exchange server should tell you how it went, but from some of the information I’ve read, a Retention Policy that’s only just been targeted at a mailbox can take up to 48 hours to be recognised and start processing. For me it took more than a few hours before I could see the policies on my emails.
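The piping approach looks something like this sketch (here against every mailbox, which you probably only want to do outside business hours):

```powershell
# Run the Managed Folder Assistant immediately against all mailboxes.
# Narrow the scope with -Database or a Where-Object filter if needed.
Get-Mailbox -ResultSize Unlimited |
    ForEach-Object { Start-ManagedFolderAssistant -Identity $_.Identity }
```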

One last point: Exchange only starts tagging emails once you first create and apply a policy. For my example, I set 60 days Delete and Allow Recovery on the Deleted Items folder. This caused all existing deleted items going back a few years to be marked for deletion 60 days from when I applied the policy. It won’t go back and instantly delete your older items.

Originally posted at WeBreakTech

Who Will Be There For The Long Run?

You may have noticed that the theme on my blog has changed. The theme I was using was a light version of a pro product, which I didn’t buy. I was looking at changing some small settings and discovered that the creator of the theme had stopped supporting it a few months ago.

Knowing that I’d probably have issues in the future, I decided to find a different theme. It had to work with the content I already had and look pleasing enough to me. I also didn’t want a v1.0 theme, because that gives me no assurance that the creator has any interest in updating it when future WordPress versions are released.

I realised that this same methodology is how I approach most pieces of software. Ideally it needs to have been around for a little while, to prove the creators can deliver and keep their product updated. It needs to have good support, either from the community or the creators. It needs to integrate well with existing systems, but also not cause you to be locked in to the product itself.

After working in I.T. for a while, I’ve found this is instinctively how I think. A big factor was learning from when things go wrong – implementations, upgrades or changeovers – and considering what decisions should have been made early on to prevent the problem.

This in itself causes issues, because how can a software solution get customers if everyone wants something that’s already proven? Companies will often take risks if option B is substantially cheaper than option A, or if the vendor of the software has proven themselves with other solutions… but generally it’s safer to go with the proven option.

Maybe this methodology is changing with the rapid release cycle we’re now seeing globally. It’ll probably cause more issues due to less testing time and more updates, which instinctively is the opposite of what we’ve all learnt to do in IT. This applies to the cloud too – you’re putting your faith in a third party, with no visibility or control over changes. Without that visibility, how do you know everything of yours will still work afterwards? Or will you be left trying to find another cloud vendor that works with your existing setup?

So yes, I have a new theme. It works, and it’s free. It’s newer than v1.0, so at least there’s some evidence that it will be maintained, but the creators may stop at any time. I’m not giving them any money so I can’t complain, but longevity is still the fundamental basis of my decision process. Luckily it’s quite easy to change themes, because of the well designed plug-and-play style of WordPress themes. This is what I expect from any software vendor (though it’s rarely met), and anything less increases the risk of pain – it may not be now, but chances are it will come.

ioSafe 214 NAS Review

The ioSafe 214 NAS was provided to me by ioSafe to check out. I’ve looked at a few NAS units before, but generally low end devices. This unit is far from low end, having both advanced management capabilities and superb physical protection.


“Superb” is a big call, but this NAS is fireproof and waterproof. Trevor Pott and Josh Folland tested the fire side of this here (The Register) – it’s rated at 1,550°F for half an hour, and the water side is rated at 72 hours at a 10-foot depth. There’s a bunch of videos on YouTube too if you want to check those out. I chose not to test these specifications, as I really liked the unit.

Full specifications are available here on ioSafe’s website, but here’s a quick rundown. The NAS is dual bay, and will officially take up to two 4TB SATA drives. There are three USB interfaces (a single USB 2.0 port on the front, and two USB 3.0 ports on the back), with the back also containing a single gigabit ethernet port and a power port. The only other item of interest is the copy button on the front, which I’ll go into later.

The ioSafe 214 is ‘powered by Synology DSM’, which I think just means it has a Synology 214 inside it… and I was very impressed by that. I’d pictured the web interface of the NAS as some unexciting, poorly designed experience, but it was similar to using a desktop, with shortcuts and programs.

Here’s the ‘desktop’ which you’ll see after logging onto the NAS via HTTP:



I’m still impressed now after using this for a few weeks. The left hand side contains these highlights:

File Station – This lets you create and manage shares and the files/folders within

Control Panel – This opens the control panel as per the screenshot above. There’s a huge amount of options here, including setting up LDAP/Active Directory connectivity, user management, device updates, indexing the media located on the drives and so on.

Package Center – This is the Synology App Store. You might think this isn’t exciting, but for starters everything is free. There are tools like Antivirus and DNS Server, but also Asterisk (want to run your phone system off this?), Mail Server, MediaWiki, RADIUS Server, Tomcat, VPN Server, WordPress and so on. This turns a basic NAS into a server with a multitude of abilities.

One extra application of note is the ‘Download Station’. This will download from a bunch of different protocols: BitTorrent, FTP, HTTP, Newsgroups, eMule (is that still used?) and a few others I haven’t even heard of before. I’m sure a lot of people leave a box on just for downloads, so this would eliminate the need for that.

On the right hand side are ‘Widgets’ – yep, just like the ones from Windows Vista and 7 that were killed off due to vulnerabilities. That doesn’t apply here though. They’re configurable, but I decided to show the connected users, storage use, system health and finally the resource monitor that displays usage of CPU/RAM/LAN.

There’s also a few other important areas a few clicks away, with the most important being ‘Storage Manager’:



This is where you can create iSCSI LUNs and manage the physical hard drives inside the ioSafe. Creating a LUN was really easy, and they have the ability to thin provision. This means you can over-subscribe the storage – for example, you might have 2TB free like I do above, but you could create one LUN with 2TB of space and another with 1TB. Each LUN only uses the space you actually write to, but you avoid having to guess and lock yourself into certain LUN sizes early on. The only risk is that if you run out of physical disk space you’ll start to get issues, and you wouldn’t realise it just by looking at the LUN from a remote PC.

Personally I created a LUN that took up the whole 2TB available (1.79TB of real space) and then created another small 1GB LUN which I used as a Quorum for clustering.

Also, as a quick speed test, I copied the Windows Server 2012 R2 ISO (which weighs in at 3.97GB) from a local machine to the NAS via iSCSI, and it copied over in 33 seconds. The copy averaged 115MB/s.

Copying the file back to the local host was much slower – likely an indication of the single spindle HDD in the local machine – coming in at 45 seconds for the copy and averaging around 80MB/s.

The final area worth mentioning is Backup & Replication:



Again, there are a lot of options here. This removes the reliance on a remote device such as a PC to do backups, allowing the NAS to look after itself. You can back up contents from one area on the NAS to another, or plug in an external disk via the USB 3.0 ports and take it away for offsite backup requirements. There’s even Amazon S3 as a backup target – not something I’d use for large amounts of data, but it’s a nice addition.

So what’s the end result of all this? It’s a NAS that is easy to set up and maintain thanks to Synology, wrapped up in great armour from ioSafe, without ridiculous pricing. This unit is ideal for a home user or small business that needs 4TB or less of data highly secured – and for an extra few hundred versus a non-‘armoured’ NAS, it’s an easy decision.

Note: If you want the same features but need more drives, ioSafe also have the ioSafe 1513+, which has five HDD bays instead of two.

TechEd North America – Done for 2014

Originally posted at WeBreakTech

TechEd North America 2014 is now over. You can read about the first two days of my experience here. Unsurprisingly, the second half wasn’t too different from the first, and there wasn’t a huge amount of excitement in the air.

Wednesday morning started off slowly. There were a LOT of vendor parties on the Tuesday night beforehand, so maybe it was a difficult morning for many attendees. Once breakfast was over there wasn’t much to do other than attend breakout sessions (where you go into a room and listen to a presentation – one of the biggest parts of TechEd), as the Expo hall (where all the vendor booths are) didn’t open until 11am.

I found it difficult to push myself to attend the breakout sessions because they were all available the next day for free via Microsoft’s Channel 9 service. It’s a great idea from Microsoft, but many attendees I spoke to shared my lacklustre enthusiasm for attending in person, saying they could watch the sessions online later.

There were some highlights of sessions though. Anything with Mark Russinovich (creator of SysInternals) was highly talked about, and I attended “Case of the Unexplained: Troubleshooting with Mark Russinovich”, which was really interesting to watch.

I caught up with Nutanix over lunch to have a look at their offering. They treated me in style, giving me a Texas-style hat and using someone else’s leg power to get me there and back:



I learnt that Nutanix offer a well priced server-based solution that’s halfway between a single rackmount server and a full chassis/blade setup, which also uses shared storage between the nodes (i.e. the blade servers). I’ll definitely be looking into that further, both from a writing view and to investigate for my place of work.

After that, I explored the Expo again, speaking to more vendors. Yes, there were a lot of goodies given away (generally called ‘loot’), but according to other attendees there was a lot less than in previous years. I didn’t really try, yet still came back with a suitcase full of novelties, which my work colleagues will hopefully go through to find some cool bits and pieces to make up for my absence.

Wednesday night came, and night time means more parties. I went to the Petri meet and greet where as the title suggests, I met and greeted another bunch of great people. After that the jet lag had gotten the better of me, so I went back to the hotel to order room service and pass out.

Thursday saw the final of Speaker Idol. It’s a competition run by Microsoft in the American Idol format (apparently?) where people perform 5 minute presentations until a winner is chosen, and that winner gets to present a full breakout session at next year’s TechEd. Aidan Finn ended up winning (he wrote about the experience here) and was highly deserving of the achievement, as were the other presenters I saw.

I had dinner with the friendly eNow mob who make reporting and monitoring tools for Exchange, Lync and others, as a +1 to someone who was actually invited.

The closing event was held at the local baseball stadium: The Minute Maid Park.


Not having been to an American stadium before, it was more of a novelty to me than to others. Jugglers, artists, many stadium-type food stalls and a mechanical bull surrounded the outskirts, while attendees took tours of the pitch itself and listened to the bands that played. Here’s the full list of everything that was available. Disappointingly I wasn’t feeling 100% due to a cold, otherwise I would have sampled some of the nachos covered in American liquid cheese – something rarely seen in Australia.

Overall I’m really glad I went (I may not have been as positive on the very long plane ride home), as I met a bunch of great people – particularly Kyle Murley and Phoummala Schmitt, who both looked out for me, as well as Trevor Pott, who convinced me to go in the first place. I made lots of new contacts, and had the opportunity to say hi to tech greats like Mary Jo Foley.

TechEd North America – Half Way Mark

Originally posted at WeBreakTech

It’s now Wednesday 14th May, and we’re at the halfway mark of TechEd North America 2014. This is my first TechEd outside of Australia, and it’s been an interesting experience. A lot of the following reflections come from my TechEd Australia exposure, which gave me certain expectations.

For starters, the community really is a great group. Almost everyone is very courteous and respectful, which is inviting and welcoming to someone who’s traveled here by themselves. It’s very easy to just start talking to someone, as everyone seems genuinely interested to find out more about others and have a chat. For example, as I was sitting writing this, someone mentioned that I should eat something, as I hadn’t really touched my food. We had a quick chat about jet lag, and I thanked him for his concern.

I’ve been told it’s a sold out event, with about 11,000 people in attendance, which dwarfs Australia’s 2,000-3,000 headcount. The venue itself, the Houston Convention Center, is huge, along with all the areas inside. The general dining area looks bigger than a soccer field to me.

The Expo area is about as big, and contains all the vendors giving away shirts, pens and strange plastic items while trying to convince you to learn more about their products. The staff are quite nice too, not being too pushy. There’s also a yo-yo professional, a magician and probably other novelties that I’ve not seen yet.


Many competitions are going on with the vendors too. One offered the chance to go bowling with Steve Wozniak, and I was standing next to him, which was awesome. Sadly I didn’t win the bowling part though:



There’s a motorbike to win, countless Microsoft Surfaces, headphones and other bits and pieces that vendors are using to get the attendees to come visit.

Moving on to the keynote (which I liveblogged here), the focus was Mobile First, Cloud First. There wasn’t much noise from the crowd for the whole keynote, as most were probably coming to terms with having to start worrying about Azure now. Microsoft made it very clear that Azure is THE way now, not just an option.

The announcements of the keynote were all features for Azure. Good features which others have written about in detail, but no new products or services. Not even a mention of the upcoming Surface 3 and Surface mini. No mention of Nokia either, but there was an iPad on stage to show off some Microsoft technologies. Times have changed!

There are hundreds of sessions going on every day, so we’re rather spoilt for choice. I’ve only been to a few and am expecting to focus on them more today, but they’re a big part of what makes up TechEd and so far have been very informative. The 1 hour and 15 minute format means they don’t go on for too long, but don’t feel rushed.

Microsoft also decided to make one exam free to all attendees – the 70-409 Server Virtualization with Windows Server Hyper-V and System Center. I decided to take it and passed, which was a nice bonus.

Vendor parties make up all the non-TechEd times and there’s many going on at once – again spoilt for choice. It’s another great way to meet others and find out what’s going on for other IT professionals, while sampling the local food and beverages.

There hasn’t been a huge buzz from attendees, but everyone is still happy to be here. It’s been a good two days so far, and I’m looking forward to the next two!



Microsoft TechEd AU Split Up

There’s been an interesting official announcement for Microsoft TechEd. As I’ve mentioned before, I’m off to TechEd North America next week as press, and looking forward to an amazing event. In contrast, TechEd Australia has just been announced as being broken up from a single event into multiple cities – see here.

From the page:

For 20 years, TechEd has been hosted as one event at a single location.

This year that format will be changing. Microsoft is excited to announce, that in 2014 TechEd will be held in multiple cities, in order to increase the accessibility to Australia’s premium learning conference for technical professionals. This new approach will provide the opportunity for more people to benefit from the TechEd experience throughout the year.

TechEd 2014. Coming to a town near you.
Kicking off in Melbourne before being repeated in Sydney, these events will retain the quality of the TechEd brand by focussing on two days of deep technical training, access to experts and hands-on technology.

On twitter there are already some negative comments around this:

People are pretty upset. The clash with TechEd Europe is interesting – you’d have to think some amazing speakers can’t attend both because of the conflict, but at the same time there should be enough great speakers for one of the biggest software companies in the world.

The event has also been cut down to two days – previously it was three to four days. So with less time, there must be less content?

The other reason I think people are upset is that it’s often their only big reward in the year: going off to a nice location, catching up with all your like-minded IT people and being swept up in the conference itself. That will change with the split into two events.

Brisbane/Gold Coast people may not be able to go, as they’re used to being able to travel locally. Sydney and Melbourne people will most likely go to their local conference instead, which takes them away from work for less time. Friends and contacts people have made and see each year may end up at different conferences.

There are some upsides though. Smaller companies that only have a few staff and can’t afford to send them all at the same time can now send them separately to each event, without anyone missing out. I think this is what’s meant by the accessibility reasons, with most attendees coming from Melbourne and Sydney anyway.

Tickets should be cheaper too – less time should equal less money.

The other interesting side is how sponsors will take this. Will they spend less because they have to spread their budget over two events, with fewer attendees at each?

Although there’s negativity now (which I completely understand), it will really depend on how the events go. Then it will be up to Microsoft to weigh each option, but as it stands they’re planning for more cities in 2015. Maybe it’s about getting more exposure, and getting more people onto the Microsoft Cloud?

Otherwise if you don’t like it, you could always go to TechEd New Zealand (TechEd North America is sold out!) :)