How is AI changing cyber security?

By David Johnson | Published on: 27th May 2024

Many organisations have started the process of understanding the threats associated with AI. With over 300 million users expected to be using AI by the end of 2024, and if the evolution of the internet is anything to go by, it's clear that not all of them will be using it for good.

At Communicate, so far in 2024 (as of April) we have seen threat actors using AI to trick IT support into resetting passwords and even MFA, the same MFA that Microsoft famously claimed, just five years ago, could prevent 99% of attacks.

Now the headlines carry stories of journalists using AI to bypass voice recognition and access their own bank accounts, and even fake video calls duping workers into paying out nearly £20 million to threat actors.

More recently, a proof of concept has demonstrated the bypassing of CAPTCHA codes, with Bing Chat shown to decipher them when given a reasonable excuse or pretext. The evolution of AI, and the ease with which its own ethical restrictions can be bypassed, will bring new challenges for companies and consumers around the globe.

For threat actors, winning the technological arms race can be as strong a driver for attacks using AI tools as financial gain. According to the CrowdStrike Global Threat Report, attacks targeting cloud systems nearly doubled in 2022, and the number of hacking groups capable of launching such attacks tripled. And even if it is the innovation of the century, cyber criminals aren't focusing on artificial intelligence alone: with emerging technology such as quantum computing the next target, their goal is to widen the attack surface and extend their reach as far as possible.

But it’s not all bad news. Cyber security companies can utilise emerging technology for the benefit of their clients. The evolution of detection technology is one example where AI is changing cyber security for net good, reducing false positives significantly and identifying indicators of attack from newly emerging threats with high accuracy and in near real-time.

It’s easy to jump into projects to upscale using AI, but it’s important to consider which choices will stand the test of time, and not to get swept up in the latest shiny cyber security solution. Just one example is a major bank that invested £17.4 million in a voice recognition solution, which it is now replacing with its previous approach of security codes sent to mobile devices. This begs the question: if it isn’t broken, should you fix it?

As of today, the threat is real and companies need to think about 3 areas:

  1. How do they allow their staff to utilise AI?
  2. What threats do their organisation face from threat actors utilising AI?
  3. How can they future-proof decisions on security projects?


Don’t forget the basics:

A focus on emerging technology is essential, but it’s worth noting that ransomware is still the most common and costly threat to businesses, with ransomware-as-a-service (RaaS) helping to double the number of victims in 2023 compared with 2022, a trend that seems to be continuing in 2024. And all but one of the malware/ransomware incidents we investigated in 2023 was due to the exploitation of High or Critical vulnerabilities more than 30 days old. So whilst AI is changing cyber security, the basics remain the same.

The bricks and mortar of cyber security in the UK, Cyber Essentials, is designed to help you guard against the most common cyber threats, and requires companies to patch, update or otherwise fix all High or Critical risk vulnerabilities within 14 days.
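As a rough illustration of that 14-day rule, here is a minimal sketch, assuming a hypothetical export of scan findings from your vulnerability scanner (the field names and CVE IDs below are illustrative only, not from any real tool):

```python
from datetime import date, timedelta

# Hypothetical scan findings: severity plus the date the vulnerability was detected.
# In practice these would come from your vulnerability scanner's own export format.
findings = [
    {"id": "CVE-2024-0001", "severity": "Critical", "detected": date(2024, 4, 1)},
    {"id": "CVE-2024-0002", "severity": "High",     "detected": date(2024, 5, 20)},
    {"id": "CVE-2024-0003", "severity": "Medium",   "detected": date(2024, 3, 15)},
]

# Cyber Essentials expects High/Critical vulnerabilities to be fixed within 14 days.
REMEDIATION_WINDOW = timedelta(days=14)

def overdue(findings, today=None):
    """Return High/Critical findings that have exceeded the 14-day remediation window."""
    today = today or date.today()
    return [
        f for f in findings
        if f["severity"] in ("High", "Critical")
        and today - f["detected"] > REMEDIATION_WINDOW
    ]

for f in overdue(findings, today=date(2024, 5, 27)):
    print(f"{f['id']} ({f['severity']}) is overdue for patching")
```

In practice you would feed in your scanner’s real output and run a check like this as part of a regular patch-compliance review, rather than relying on the sample data shown here.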

Whilst AI is changing the way threat actors attack and cyber security organisations respond, it’s important not to forget the basics, and to think strategically before implementing any major changes to your infrastructure.

If you want to discuss anything from AI to Cyber Essentials further, just request a chat and we’ll put you in touch with an expert.

Speak to our engineers and experts.