
Hacked by a machine

When your computer’s been hacked, it can feel intensely personal. As your heart sinks and your blood pressure rises, you might imagine the thief sitting at a keyboard in a darkened room far away, laughing as they scoop up your most important data.

[Image: Not the adversary you are looking for. Thanks to the rise of AI, the hacker making off with your data may not even be human.]

But there may not be a person behind it at all. These days, an adversary can save a lot of time by using machine learning, says Ian Molloy, a member of the Information Security Group at IBM’s Thomas J. Watson Research Center.

“For phishing and spear-phishing, there are different ways of pulling in information intelligence about a given person,” says Molloy. “Recent works show that I can crawl Twitter and LinkedIn profiles with machine learning and then use that information to craft emails that would try to convince you to provide information that you wouldn’t necessarily give up otherwise.”

Alarming as this is, it’s important to note that hackers aren’t the only ones with access to artificial intelligence (AI) tools. Security professionals also increasingly rely on machine learning to streamline defensive efforts.

This ‘combat by AI’ was the focal point of Molloy’s speech at June’s ISC High Performance 2019 conference in Frankfurt, Germany. It’s a balance we’ll need to reckon with if we truly desire security and privacy.

Thinking like a hacker

The practice of cybersecurity revolves around thinking like a hacker to anticipate their moves. Combating AI-assisted hacking is no different.

[Image: Small change, devastating impact. Even small alterations to data can force an AI model to misclassify it; imagine the impact that could have on a self-driving car’s ability to identify pedestrians. Courtesy Statistical Visual Computing Lab/UC San Diego.]

“Adversaries choose machine learning tools for many of the same reasons we do,” says Molloy. “When we think about how attackers are going to start using machine learning, they’re going to use it to make themselves faster, more efficient, and stealthier.”

Along with using these tools to boost existing attacks, Molloy warns that a hacker might also target a machine learning system itself. One such assault is what’s known as an evasion attack: the adversary makes a change to a piece of data so small that no person would notice it, yet the tiny alteration forces an AI model to misclassify the data.

Imagine how this could impact a self-driving car’s ability to identify a pedestrian in its path.
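To make the idea concrete, here is a minimal sketch of the best-known family of evasion attacks, the fast gradient sign method (FGSM). The stand-in model and fake image below are illustrative assumptions, not anything from Molloy’s talk; the point is only how small and bounded the perturbation is.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.01):
    """Fast Gradient Sign Method: shift each input value by +/-eps
    in whichever direction most increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # step where the loss grows fastest
    return x_adv.clamp(0.0, 1.0).detach()  # keep values in the valid pixel range

# Demo on a stand-in (untrained) classifier and a fake 28x28 "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])                      # the label the attacker wants to break
x_adv = fgsm_perturb(model, x, y)
print("max per-pixel change:", (x_adv - x).abs().max().item())  # never exceeds eps
```

Against a trained classifier, a per-pixel budget of just a few percent is routinely enough to flip the predicted class while leaving the image visually unchanged.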

Another way a hacker can attack a machine learning model is with a method called poisoning. In this scenario, the adversary gains access to the data used to train an AI system and tampers with it to degrade the accuracy or behavior of the trained model, with potentially devastating results.

“To begin, they can tweak the data such that the model will always have poor performance,” says Molloy. “The second thing they can do is they actually insert a malicious Trojan or a backdoor into the model.”
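Both flavors of poisoning are alarmingly simple once an attacker can write to the training data. The sketch below uses hypothetical numpy arrays standing in for a training set; the 5% flip rate, trigger patch, and target class are made-up parameters for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((1000, 28, 28))          # hypothetical training images
labels = rng.integers(0, 10, size=1000)      # their class labels

# 1) Blunt degradation: flip the labels of a random 5% of the training set
#    so any model trained on it learns a corrupted mapping.
flip = rng.choice(len(labels), size=len(labels) // 20, replace=False)
labels[flip] = (labels[flip] + rng.integers(1, 10, size=flip.size)) % 10

# 2) Backdoor/trojan: stamp a small trigger patch onto a handful of samples
#    and relabel them to the attacker's target class. A model trained on this
#    behaves normally until an input arrives wearing the trigger.
TARGET_CLASS = 7
poisoned = rng.choice(len(labels), size=50, replace=False)
images[poisoned, :3, :3] = 1.0               # 3x3 bright patch in a corner
labels[poisoned] = TARGET_CLASS
```

The backdoor is the more insidious case: the model’s accuracy on clean test data can look perfectly normal, so the tampering is hard to spot until the trigger appears.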

Clearly, machine learning is a supremely effective tool in a cybercriminal’s arsenal. However, focusing solely on the abuse of this technology would be a disservice to the people who intend to use it for good.

Swinging the double-edged sword

Despite these relatively new and unique forms of hacking, it’s important not to despair. Security professionals are working hard to use machine learning to its full potential while also trying to understand how to combat it. A good example is IBM’s Adversarial Robustness Toolbox, an open-source software library for defending neural networks, with tools for detecting poisoning and guarding against backdoor attacks such as trojans. It also includes an interactive page designed to teach people how different attacks and defenses alter machine learning outputs.
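As a taste of how the toolbox is used, here is a hedged sketch of wrapping a PyTorch model and generating adversarial examples with it, based on ART’s 1.x API (module paths vary between versions, and the untrained model here is just a stand-in):

```python
import numpy as np
import torch.nn as nn
import torch.optim as optim
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Wrap a (stand-in, untrained) PyTorch model so ART can drive it.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=optim.Adam(model.parameters()),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Generate adversarial examples to probe the wrapped model's robustness.
attack = FastGradientMethod(estimator=classifier, eps=0.05)
x = np.random.rand(8, 1, 28, 28).astype(np.float32)
x_adv = attack.generate(x=x)
print(x_adv.shape)
```

The same wrapped classifier can then be handed to the toolbox’s defensive components, such as adversarial training, to measure how much robustness is actually gained.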

Additionally, it would be foolish to overlook the everyday value machine learning offers security professionals: raising alerts on activity that is out of the ordinary, such as a sudden flood of login attempts. As Molloy explains, these tools are built around increasing efficiency.
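A toy version of that kind of alerting fits in a few lines. The sketch below, with made-up login counts, uses scikit-learn’s IsolationForest to learn what ordinary activity looks like and flag the outliers:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.poisson(lam=3.0, size=(500, 1)).astype(float)  # typical hourly login counts
bursts = rng.poisson(lam=40.0, size=(5, 1)).astype(float)   # brute-force-like spikes

# Learn what "ordinary" looks like, then score everything seen so far.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(np.vstack([normal, bursts]))       # -1 marks an anomaly
print("alerts raised:", int((flags == -1).sum()))
```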

“Where machine learning really comes into play is when you have an alert,” says Molloy. “Normally, an analyst team would spend minutes to potentially hours investigating, looking at IP addresses and doing queries.”

Molloy continues, “[The tool] provides all the information you need to know, contextualized around that specific threat and alert. You can then give it to the analysts and they can apply their domain knowledge to explore further.”

Humans have historically relied on machines to make our physical work easier. Now, we are developing the capability to allow them to make our mental work easier too.

Technological advances like machine learning can often feel like a roller coaster. One minute you’re hearing about how AI security systems are protecting your financial information; the next, you learn about a hacker using those same tools for personal gain.

Through the ups and downs, it’s important to remember that a technology can’t be defined by how people use or misuse it. We’ll need to keep an open mind if we want to improve on the best of machine learning while protecting against the worst.

Source: sciencenode.org

