In 1995, law enforcement officials convinced a judge that Kevin Mitnick had the ability to start a nuclear war by whistling into a pay phone.
Mitnick had become a high-profile hacker on the FBI’s Most Wanted list after a two-year cat-and-mouse chase where he eluded the FBI under various false identities.
One time when he learned that they were on their way to his apartment, he kindly left them a box of donuts in the fridge before disappearing.
His crime? Mitnick had hacked into over 40 major corporations.
He had the uncanny ability to weasel his way into nearly any network’s core - from a military computer to FBI and DMV records.
He had intercepted and stolen computer passwords, altered computer networks, broken into private emails, and sweet-talked his way into privileged access to proprietary information.
And he did this mainly for the thrills - he was a hacker for the hell of it.
In Mitnick’s eyes, everything on the web is transparent.
Given enough determination, anything you write in email, every conversation you have in chat, and every link you visit can be read - unless it’s been heavily encrypted.
Mitnick had access to information that could do terrible damage - social security numbers, credit card numbers, proprietary software, email logins.
He could have coerced and blackmailed people for lots of money.
But he didn’t. His central addiction was curiosity. It was a game for him.
Today, the potential to do damage is exponentially greater in the age of big data, big tech, and information oversharing.
Central databases store what we like, where we’ve been, who our friends are, and whom we’ve slept with.
The list of known data breaches since 2010 has been staggering - it includes LinkedIn, Uber, Equifax, JPMorgan, Sony, Anthem, Citigroup, Dropbox, eBay, Evernote, and many more.
And since many people reuse passwords across similar sites, any password hack can lead to vulnerabilities in other sites.
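This credential-stuffing risk is why many services now check passwords against known breach corpora. One privacy-preserving way to do that is the k-anonymity scheme popularized by the Pwned Passwords API: only the first five hex characters of the password’s SHA-1 hash ever leave your machine, and matching happens locally. A minimal offline sketch - the `leaked_suffixes` set here stands in for the API response, which in reality would come from a network call:

```python
import hashlib

def hibp_range_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hex digest into the 5-char prefix
    (the only part sent to the server) and the suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_breached(password: str, leaked_suffixes: set[str]) -> bool:
    """Check the locally kept suffix against the set of suffixes the
    server returned for this prefix -- the server never sees the password."""
    _, suffix = hibp_range_parts(password)
    return suffix in leaked_suffixes

prefix, suffix = hibp_range_parts("password123")
print(prefix)  # the only data that would leave your machine
```

The design choice matters: because thousands of passwords share any given five-character prefix, the server learns almost nothing about which password you checked.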
Meanwhile, company employees have been spying on user activity...for work-related purposes, of course.
A whistleblower at Lyft reportedly raised concerns that several employees had been looking up user data.
Government employees do it too. The 2013 Edward Snowden files gave a glimpse of the extent of surveillance at the National Security Agency.
The NSA had wide-reaching surveillance tools for searching nearly everything a user does on the Internet.
They had legal authority to request user data from companies including tech giants like Google, Facebook, and Apple.
They were also intercepting 200 million text messages every day.
In some cases, NSA employees had been caught spying on love interests, a practice now referred to as “LOVEINT”.
And it’s not going away - on January 11, 2018, the U.S. House of Representatives voted 256-164 in favor of extending the NSA’s surveillance program for another six years with minimal changes.
Ultimately, these organizations are made up of humans, none of whom are infallible, each with their own motivations, biases, and triggers.
And they have all sorts of personal information at their fingertips - information they can use to satisfy whatever desires are pulling at them.
Now consider the implications of wider business AI adoption.
As we increasingly rely on complex algorithms to help us make high-stakes decisions, a not-so-nice version of Mitnick can engineer his way into the code base.
A large attack can disrupt the power grid, shut down hospitals, compromise a national security system, or as Elon Musk fears, start World War III.
Projects like SingularityNet aim to create a decentralized marketplace for AI to enable anyone, not just the big tech companies, to buy and sell AI at scale.
But this will open another can of worms: more participants mean more, and easier, opportunities for social engineering.
Today, Kevin Mitnick runs a consulting firm that helps corporations protect themselves against the methods he knows intimately.
Here’s what’s interesting - by almost every account, Mitnick was technically unremarkable.
He accomplished most of his conquests through superb social engineering.
Mitnick explains that social engineering is the use of manipulation, influence, and deception to get a person - such as a trusted insider within an organization - to comply with a request. That request is usually to release information or to perform some action that benefits the attacker.
Social engineers understand an important truth: When it comes to security, people are the weakest link, not the technology.
A company can spend millions of dollars on the best data security measures, but a Mitnick needs to persuade only one willing human and he’s in.
Mitnick would achieve this by imitating a lineman's jargon, impersonating a superior, conning unsuspecting employees, and exploiting his knowledge of a phone company's organizational chart.
Today, his attacks are much easier.
He can research LinkedIn for employee information and take advantage of the blurred boundaries between professional and private social networks.
To protect yourself and your company against manipulation, it helps to understand Mitnick’s Social Engineering Attack Cycle from his book, The Art of Deception: Controlling the Human Element of Security.
Step 1: Research
In the Research phase, Mitnick would gather as much information about a target as possible in order to develop a strategy for building rapport.
This information can come entirely from publicly available sources like company websites, social networking sites, personal blogs, and forums.
Guys like Mitnick love it when you overshare and ignore your privacy settings.
Step 2: Develop rapport and trust
Next, Mitnick would determine the proper pretext. A pretext is a devised scenario that explains to the target why the attacker is engaging.
A good pretext must be believable and withstand scrutiny.
You’re more likely to divulge information to an attacker if you perceive that a relationship exists. So this step is not trivial and can be time-consuming, but a good pretext makes it easier.
One helpful framework for thinking about influence is Matthew Kohut’s matrix, which considers two axes: (1) the stakes of the request and (2) the strength of the relationship.
First, how high are the stakes of doing what the attacker asks? You’re more likely to do something if it requires little effort or perceived risk on your part.
Think about how easy it is to click on a friend’s Facebook or Twitter link.
Second, how strong is the relationship? An attacker might establish an effective pretext and build instant rapport, but if he asks for a favor too soon, the relationship will feel transactional.
And rapport quickly disappears.
Social engineers sometimes have to build the relationship over multiple interactions, all of which must have a proper pretext without giving the appearance of manipulation.
If the perceived relationship is strong, then the target may even enjoy doing high stakes favors.
So the best human hackers must exercise patience when cultivating relationships.
Step 3: Exploit trust
Next, Mitnick would exploit trust in order to elicit information.
He might start by priming you - putting you in a desired emotional state, such as feeling sad or happy - that then leads you to divulge information.
For example, he might relate a sad story to prompt you to recall a sad incident of your own, so that you begin to feel sad.
And then the information elicitation can begin.
He might be looking for something as sensitive as a password, or as casual as knowing another person’s whereabouts at a particular time.
After he gets what he needs, he will guide you back to a normal emotional state so that nothing feels amiss.
He wants you feeling good, and not guilty, about what you just shared.
Step 4: Utilize information
All the steps above achieve nothing if the information is not put to use in service of the goal.
Here, Mitnick might discover that information was incorrect or insufficient to execute his goal.
Whatever the roadblock, he would find a way. He’d repeat the cycle on another target, sometimes many more, until he got what he needed.
Then, he moved onto the next attack.
Mitnick’s Social Engineering Attack Cycle is not just limited to breaching company security systems.
As long as you hold some piece of information that someone else wants, you can be targeted.
Private investigators and spies have long used the honeytrap strategy: the prototypical honeytrap is an attractive woman who baits a target into letting his guard down and divulging secrets.
Ordinary honeytraps are common too. Lonely people who feel misunderstood are particularly susceptible to attention from attractive strangers with less-than-pure motives.
Media mogul Harvey Weinstein employed former spies to use this framework on journalists and actresses.
Opportunistic employees will use it to get embarrassing information about their coworkers from the company gossip.
And you can use it to infiltrate a social tribe, or make friends with a potential employer, or gather intelligence on a competitor.
The applications are endless.
Mitnick’s framework for social engineering is just one of many models, none of which are difficult to understand.
But here’s the hard part.
When social engineering is done well in practice, you won’t even know you were a target.
It can feel like you just made a strong connection with a new friend.
Or the attack was so subtle that you won’t even remember it.
Unless you develop your social intelligence and become attuned to the nuances and subtleties of human behavior, you’ll likely be like everyone else: too stuck in your own head to realize you’re being manipulated.