Blog: Honeypots

Work on our honeynet continues apace

Ken Munro 30 Jul 2014

BotOrNot

A month in, our fake profiles have had significant interest on social networks. Invites have been received and accepted from direct competitors, industry bodies, industry ‘names’ and a fair few recruiters.

The more genuine connections they have, the more plausible they are as real people. Hence, the more likely they are to receive targeted malware, and the more useful the threat intel we gather.

I’ve shared their names at a couple of talks I’ve given recently, where I know that the audience is likely to be trustworthy, but I’m not going to start announcing who they are on the interwebs. That would be Honeynet suicide!

The next challenge is to ensure the profiles stay current

So far, we have been maintaining them with status updates that mostly reflect information that we’ve been sharing from the business, as you would expect.

Ideally, one would have someone creating fake Facebook and Twitter ‘lives’ for them, but that’s a lot of hard work. Imagine the complexity of maintaining several distinct social media profiles and making them seem real. I struggle with making my own life seem real!

That’s where an interesting paper that popped up recently caught my attention; there’s a high level write up here.

Twitter bots would be the ideal route to populate their Twitter profiles with content that appears fresh, but it’s always a risk: too automated and it becomes obvious that the profile is fake.
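To make that concrete, here’s a minimal sketch of the sort of low-key posting bot I have in mind, assuming the tweepy library and placeholder v1.1-style OAuth credentials (the persona handles and queue contents here are illustrative, not the real profiles). The key point is jittering the posting interval so the account doesn’t tweet like clockwork.

```python
import random
import time

import tweepy  # assumes tweepy with standard OAuth 1.0a credentials

# Placeholder credentials; the real persona accounts obviously stay secret.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

# A hand-curated queue of plausible updates rather than scraped filler.
QUEUE = [
    "Interesting read on incident response metrics this morning.",
    "Conference badge printed. See some of you next week.",
]

for status in QUEUE:
    api.update_status(status)
    # Jitter the interval between posts: perfectly regular timing is
    # exactly the sort of pattern a bot detector keys on.
    time.sleep(random.uniform(3 * 3600, 9 * 3600))
```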

The authors have released a tool that attempts to determine whether a Twitter handle is a bot or not.

[Screenshot: BotOrNot]

It’s the perfect tool for working out whether the bot-generated content we’re using to populate a profile is detectable as a bot.

So we turn the tool against its original purpose, as is so often done in security: use the detection tool to determine whether you’re vulnerable to detection.
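As a rough sketch of what that check might look like, the snippet below queries a bot-detection service for each of our personas and flags anything that scores as likely automated. The endpoint URL, response shape and handles are all placeholders, not the real BotOrNot API, so adjust to whatever the service actually exposes.

```python
import requests

# Hypothetical endpoint and response shape, used here purely for illustration.
BOT_CHECK_URL = "https://example.org/botornot/api/check"

def bot_score(handle: str) -> float:
    """Return a 0-1 'bot likelihood' score for a Twitter handle."""
    resp = requests.get(BOT_CHECK_URL, params={"screen_name": handle}, timeout=30)
    resp.raise_for_status()
    return resp.json()["score"]

# Placeholder persona handles; the real profiles stay under wraps.
for persona in ["honeypot_handle_1", "honeypot_handle_2"]:
    score = bot_score(persona)
    verdict = "looks automated" if score > 0.5 else "passes as human"
    print(f"{persona}: {score:.2f} ({verdict})")
```

If a persona starts scoring as automated, that’s the cue to dial back the bot content and mix in more hand-written updates.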