Bring Your Own Device or Bring Your Own Security Disaster? Some simple MDM policy advice

Ken Munro 29 Nov 2014


Maybe you have an explicit Bring Your Own Device (BYOD) policy, or maybe it just happened through an inability to stop senior execs connecting personal iPads to Exchange. So often personal devices end up connecting to corporate email; it just kind of ‘happened’ despite your protests about a porous security policy.

Once connected, it can be very hard to retrospectively enforce security. “What do you mean, you want us to use a six digit PIN? It’s been fine so far.” And “Why can’t my kids watch videos and play games on my iPad? It’s mine, I just get work email on it occasionally.”


Having the answers to those ‘why can’t I’ questions can be very helpful when creating and enforcing mobile device security policy. Here are a few examples that I hope will help.

Q: Why can’t I get work email on my personal phone/tablet?

A: You can, but before you do that, there are some things to consider:

Who will you let connect?

Who are you going to let retrieve email on their phones? Directors or other senior execs? Sales people, marketing, engineers, customer services, IT, or just plain every employee? Who has the most sensitive information? Who are the most vocal and empowered to demand change? Usually the senior execs and directors, so protecting them is critical.

If you only allow a small user base to connect, you could consider providing a standardised phone. Few argue about being given iPhones! However, if you allow a large population of users, you will probably allow them to ‘bring their own device’.

BYOD sounds great, doesn’t it? Reduce the cost of providing phones to staff, and they get work email anywhere. Except that I have encountered many organisations which find that the support overhead of helping users out with some bizarre phone O/S flavour often outweighs the cost saving from not buying handsets and contracts.

Will you let any old phone connect, or will you specify handsets and software versions?

An older iPhone or Android handset will likely have old software versions on it. These will be vulnerable to compromise; a stolen phone becomes a major headache. Are you going to tell the ICO or FCA that you’ve had a data breach because you lost a corporate phone containing customer data on a spreadsheet in the user’s email account?

So, maybe you accept the latest version of O/S software and possibly a version prior. You can guarantee that as soon as a new software version comes out, at least one user will be forced to update their phone over the air, no doubt in a foreign country, before they can retrieve that ‘critical email’.

How are you going to deal with that? Will you ‘flex’ the policy to allow that user to retrieve email before updating? Will you remember to re-enforce it afterwards?

Some phones have hardware security issues that you simply can’t fix. Older iPhones and numerous older Android devices have bugs that can’t be fixed with software updates.

So maybe you specify iPhone 5 and above. That was fairly easy, but what about Android? Every handset manufacturer has a slightly different approach to security. Are you going to review every possible handset for hardware security flaws? There are many different attacks, including JTAG and USB based memory scrapes.

How about specifying the five most popular handsets? But what happens when someone says ‘why can’t I connect my Samsung/HTC/Google thing that I’ve just bought and committed to a 2 year contract on’?

Exchange offers some control over device security, but not much.

For example, it is usually possible to enforce a PIN and sometimes encryption on the end device, but not always. As a result, many businesses employ Mobile Device Management (MDM) software to bring more granular control to their BYOD phones.

Many vendors offer MDM products

They fall into two broad categories: policy enforcement and containerisation.

Policy enforcement is by far the weakest approach. If a function is available on a handset, the MDM enforces it. For example, a remote wipe on an Android handset might request the USES_POLICY_WIPE_DATA admin policy, which ultimately calls the function RecoverySystem.rebootWipeUserData.

Depending on the handset and Android version, that might be the equivalent of deleting the file allocation table in DOS (so all data is essentially intact and recoverable), or it might zero the whole user data partition on the phone. The problem is, you don’t know, unless you have a detailed knowledge of the handset!

However, policy enforcement MDM products are much less intrusive on the user experience: users collect email and so on in much the same way as they are used to.

Containerisation involves creating an O/S independent encrypted storage area in which all corporate data resides. As the crypto is known and controlled by the MDM, you have confidence in its quality. The user leaves? Simply delete the container. Their personal content on the phone is unaffected.
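The containerisation idea can be sketched in a few lines. This is a toy illustration only: the stream cipher below is homemade (a real container would use a vetted cipher such as AES-GCM), and the `Container` class, PIN and wipe logic are all hypothetical, not any vendor’s implementation. The point it demonstrates is the one above: corporate data lives in an encrypted blob keyed from the user’s PIN, and ‘wiping’ means destroying the container while personal content is untouched.

```python
import hashlib
import secrets

def derive_key(pin: str, salt: bytes) -> bytes:
    # The container key is derived from the user's PIN: a stronger
    # PIN directly means a stronger key.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)

def keystream(key: bytes, length: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode. Illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

class Container:
    """Hypothetical encrypted store for corporate data, separate from personal data."""
    def __init__(self, pin: str):
        self.salt = secrets.token_bytes(16)
        self._key = derive_key(pin, self.salt)
        self.blob = b""

    def store(self, data: bytes) -> None:
        self.blob = bytes(a ^ b for a, b in zip(data, keystream(self._key, len(data))))

    def read(self, pin: str) -> bytes:
        key = derive_key(pin, self.salt)   # wrong PIN -> wrong key -> garbage out
        return bytes(a ^ b for a, b in zip(self.blob, keystream(key, len(self.blob))))

    def wipe(self) -> None:
        # 'Remote wipe' = delete the container; nothing else on the phone is touched.
        self.blob = b""
        self._key = b""

personal_photos = ["holiday.jpg"]               # lives outside the container
c = Container(pin="482913")
c.store(b"Q3 customer spreadsheet")
assert c.read("482913") == b"Q3 customer spreadsheet"
c.wipe()                                        # user leaves: container gone
assert c.blob == b"" and personal_photos        # corporate data gone, personal intact
```

Because the MDM controls the key derivation and cipher itself, you don’t have to trust whatever the handset manufacturer did (or didn’t do) with device encryption.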

Except, this approach is usually more invasive – the user often has to authenticate twice – once to the phone, once to the container. The interface is often slightly different to the native email client they are used to.

Containerisation is my preference every time – we have given numerous live demonstrations of working around a policy enforcement MDM and extracting data from a ‘stolen’ phone.

PIN length and PIN patterns

A four digit PIN is not enough. It’s too easy to shoulder surf, too easy to work out from smears on the screen. Given that the encryption key on most handsets is derived from the PIN, would you encrypt a corporate laptop with a four digit code? No, yet the phone holds similar data, given how often critical documents are shared over email.

Some handsets have vulnerabilities that allow the PIN to be cracked. Older iPhones and iPads did, as do some Android handsets, particularly if ‘rooted’ (see later). A longer PIN makes that crack exponentially harder, so takes much, much longer. It also protects encrypted data on the handset better in the event of theft.

As an example of timings, an encrypted Android partition with a four digit PIN on KitKat can be cracked in less than an hour on my laptop. Earlier versions of Android can be done in minutes.

Six digits make the cracking process significantly harder, buying enough time to deal with a reported theft, expire domain creds, attempt a remote wipe and assess what data might be cached locally on the device.
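The arithmetic behind those two extra digits is worth spelling out. A back-of-envelope sketch, taking the ‘less than an hour’ figure above as the baseline guess rate (an assumption for illustration, since real rates depend on handset and Android version):

```python
# If a 4-digit keyspace (10,000 PINs) falls in about an hour,
# how long does a 6-digit keyspace take at the same guess rate?
four_digit_space = 10 ** 4
six_digit_space = 10 ** 6

hours_for_four = 1                              # "less than an hour", rounded up
guesses_per_hour = four_digit_space / hours_for_four

hours_for_six = six_digit_space / guesses_per_hour
print(hours_for_six)                            # 100.0: two extra digits buy ~100x
```

A hundred hours is several days: not uncrackable, but usually long enough to expire credentials and attempt a wipe before the attacker gets in.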

A word of caution for Android – PIN patterns are often used. These are quite cool, as one just has to remember the pattern, not the PIN. Except…

…patterns are far easier to shoulder surf, but more importantly they reduce PIN complexity through common use: rather than jumping around the keypad with the pattern, the user is far more likely to move to the next adjacent number. A recent paper showed that a six digit pattern PIN was likely to have only around 1,500 combinations rather than the expected one million.

Don’t use patterns, do use six digits. Or, even better, use a password (if the device supports one).
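The keyspace collapse caused by ‘adjacent moves only’ is easy to demonstrate by counting. The sketch below models an Android-style 3x3 pattern grid rather than the keypad study cited above (a simplifying assumption of mine, purely for illustration), and compares all six-cell patterns against the ones a lazy user draws, where every step moves to a neighbouring cell:

```python
from itertools import product

# 3x3 grid indexed 0..8; adjacency = orthogonal or diagonal neighbours.
def neighbours(i):
    r, c = divmod(i, 3)
    result = []
    for dr, dc in product((-1, 0, 1), repeat=2):
        if (dr, dc) == (0, 0):
            continue
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            result.append(nr * 3 + nc)
    return result

def count_patterns(length, adjacent_only):
    # Count sequences of `length` distinct cells (no revisits).
    def extend(path):
        if len(path) == length:
            return 1
        candidates = neighbours(path[-1]) if adjacent_only else range(9)
        return sum(extend(path + [n]) for n in candidates if n not in path)
    return sum(extend([start]) for start in range(9))

all_patterns = count_patterns(6, adjacent_only=False)   # 9*8*7*6*5*4 = 60,480
lazy_patterns = count_patterns(6, adjacent_only=True)   # a small fraction of that
print(all_patterns, lazy_patterns)
```

Even before you restrict moves, six distinct cells on a 3x3 grid only give 60,480 patterns; restricting to adjacent moves shrinks the search space by another order of magnitude. Either way it is a far smaller haystack than a million PINs.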

Rooting and jailbreaking

‘It’s my handset, so it’s mine to do with as I please’ says the user.

Er – no. You need to be sure that there is no malware on the phone that could be used to place a back door or keylogger. Malware is much easier to install if the user has rooted or jailbroken their device, as many of the O/S security restrictions will be disabled. Users often do this in order to install cracked software rather than pay for it in the App Store or Play store.

Your MDM should be able to detect rooting/jailbreaking consistently every time. Rooted phones should not be able to sync with corporate systems.

Keep an eye on xCon, which can be used to stop MDM products from detecting a jailbroken iPhone.

Fingerprint authentication

This is a subject in its own right, but worth discussing briefly here, given the prevalence of fingerprint readers in newer phones. Who needs a PIN when you can use your finger?

As has been shown by others, fingerprints can be lifted fairly easily, often from the back of the very phone that has been locked with them. That’s one problem. Worse is the problem of revocation:

Assume a phone is stolen and compromised. How do you ensure that the fingerprint hash stored on the device hasn’t been cracked? Is the user’s fingerprint now in the public domain? How do you revoke and replace a fingerprint?

Fingerprints can be a useful additional layer of security, but should never be the sole method of authentication. PINs work for a reason…