AI and cyber-security: Defenders, hackers eye new tools
There’s a reason why security experts picked 2019 as the year in which the first artificial intelligence hack takes place. Peter Bailey explains how hackers and defenders are arming themselves.
With cybercrime as much a business as any other, albeit one on the wrong side of the law, hackers are already sizing up the potential for artificial intelligence (AI) to further their goals. It’s the flip side of a coin: on the one side IT professionals are using AI to help identify and eliminate threats more effectively, and even anticipate attacks before they happen. On the other, intelligent malware offers the potential of adapting its payload and evading detection.
Already, AI is available in existing security tools. Machine learning, which is a subset of AI, is ‘trained’ with the help of a human expert to look for potential attacks or anomalies within computer systems that might signal a hack or breach. This differs from rigid rules-based technology as it allows organisations to analyse huge quantities of data with algorithms which dynamically identify unusual behaviours, pinpointing and stopping attacks before they can cause damage.
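To illustrate the idea in the simplest terms (this is a hypothetical sketch, not a description of any vendor's product), the snippet below trains an off-the-shelf unsupervised model on sessions assumed to be normal, then flags new activity that deviates from that baseline. The feature names and numbers are invented for the example; a real deployment would use far richer telemetry and human review of anything flagged.

```python
# Illustrative sketch only: flagging anomalous sessions with an unsupervised model.
# Features and values are hypothetical, chosen purely to show the shape of the approach.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [logins_per_hour, megabytes_transferred, distinct_hosts_contacted]
normal_sessions = np.array([
    [3, 120, 4],
    [2, 95, 3],
    [4, 150, 5],
    [3, 110, 4],
    [2, 100, 3],
])

# Train on behaviour assumed to be normal; 'contamination' is the expected share of anomalies.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_sessions)

# Score new activity: -1 marks sessions the model considers unusual enough to investigate.
new_sessions = np.array([
    [3, 115, 4],      # looks like routine activity
    [40, 9000, 60],   # sudden burst of logins and data movement - worth a human look
])
print(model.predict(new_sessions))  # e.g. [ 1 -1]
```

The point of the sketch is the contrast with rules-based tools: nothing here hard-codes a signature for the attack, the model simply learns what normal looks like and surfaces departures from it.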
But where defenders adopt new technology, you can be sure attackers are not far behind, fighting fire with fire. Hacking, after all, is a business, and one that needs to take advantage of cutting-edge innovation to be successful. At the same time, however, many successful attacks still exploit known and sometimes quite obvious weaknesses – Insecam, for example, is a platform that offers livestreams from thousands of unsecured security cameras around the world, including in New Zealand.
Just like any other business, success for hacking outfits is measured in profit, with organisations looking for every competitive advantage they can get. That’s why security software vendors and market watchers are anticipating a rise in ‘polymorphic’ malware, which adapts and evolves to evade detection, using a similar sort of AI to dynamically achieve its nefarious goals. This kind of malware can recognise when it is being looked for and changes its nature to stay under the radar, meaning attacks are likely to be both more successful and more personalised.
Alongside this, hackers are also embracing automation. And while there are undoubtedly employees of hacking outfits sitting at computers and phones, combining technical attacks with social engineering, their bosses have the same human resourcing issues as any other employer. Cut the wage bill and profits go up.
Automation powered by a suitably adapted and enhanced Siri- or Alexa-style voice could quite conceivably work around the clock, ringing people up, offering tech support or claiming to be someone from the IT department. As soon as a potential victim is on the line and co-operating, the job can be handed over to a skilled hacker.
So, the question is, where are we right now? There haven’t been any specific instances of intelligent malware in the wild yet, but there’s little doubt it’s just a matter of time, and it will probably happen soon. Security research firm McAfee Labs notes in a recent report that hackers are likely to turn to AI to increase their chances of success.
Among the evasion techniques expected to be enhanced by AI is an approach called ‘Process Doppelgänging’, which disguises malicious code so that it appears to be legitimate activity. The report says that, “by adding technologies such as artificial intelligence, evasion techniques will be able to further circumvent protections.”
The other question which crops up in many discussions surrounding AI and automation is, ‘do you need to be worried?’ The answer is ‘not any more than you previously were’, provided you’re taking your cyber security seriously. Security is provided based on the best technology and knowledge available on the day. It has always had an element of dynamism, as attackers modify the tools and behaviours they use as new technology becomes available. The trick is that defenders must do the same.
Cyber security’s major tool of defence is knowledge. Knowing what, how, why, where and when attackers are likely to target your organisation is as crucial today as it has always been. Be aware of AI. It can be used for or against us, so we must be one step ahead before it’s too late.
Peter Bailey (pictured) is GM of Aura Information Security.