I can’t believe another year has flown by, but wow, what a year it’s been! I suspect many of us saw smartphones and the cloud as the biggest transformations of our generation, but the reality is that these have been gazumped by the arrival of generative AI.
In the last six to nine months, I don’t think I’ve been at a single cybersecurity meeting where it wasn’t a key topic - be it how adversaries are already taking advantage of it, how businesses are looking to embed it into their processes, or how cybersecurity capabilities are starting to leverage it. Like everything in this space, the pace of evolution is relentless. As such, it’s no great surprise that generative AI has a strong influence on my predictions for 2024!
With the proportion of companies successfully breached getting close to saturation (at least 89% were breached in the last 24 months), adversaries must find new targets. AI chatbots such as ChatGPT now enable anyone to communicate fluently in almost any language. As such, we must expect to see attacks move into more local languages.
In our research last year, we were already seeing these attacks extend beyond the English language, and found the greatest impact occurred in non-English-speaking countries. Why? I suspect simply because those countries hadn’t yet built up experience in dealing with such attacks - and AI chatbots will only accelerate this trend.
Previously, AI chatbots enabled anyone to quickly gather and aggregate public-domain information, but this was based on legacy training data. Now, ChatGPT 4.0 can access live internet information through APIs. With it becoming easier to gather data and build a detailed profile, and with AI chatbots able to dynamically draft personalized communications, we have to be prepared for ever more personalized attacks. In particular, expect an uptick in whaling attacks, as well as attacks on the supply and communications chains around those targets.
Back in 2019, we saw a security vendor’s AI-based security engine exploited by adversaries. They learned how to game the scoring that sat behind the AI capability into classifying something known to be malicious as benign. As AI systems continue to grow in complexity, scale and use, we must expect more focus on finding exploits and vulnerabilities in what are ever more critical AI-driven business systems. One specific area I foresee gaining more scrutiny is the synthetic user: the autonomous accounts that often bridge AI to third-party applications, or AI to AI.
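As a toy illustration of how a score-based engine can be gamed (the tokens, weights and threshold below are invented for this example, not taken from any real product), consider a naive classifier that sums per-token weights: an adversary can drag a malicious sample’s score below the threshold simply by appending tokens the model has learned to associate with benign software, without changing the malware’s behaviour at all.

```python
# Toy score-based "AI" classifier: sums per-token weights and flags
# anything above a threshold as malicious. All weights are invented.
WEIGHTS = {
    "CreateRemoteThread": 5.0,   # tokens common in process injection
    "VirtualAllocEx": 4.0,
    "keylogger": 6.0,
    "DirectX": -3.0,             # tokens the model learned from benign games
    "HighScore": -2.5,
    "SaveGame": -2.5,
}
THRESHOLD = 8.0

def score(tokens):
    return sum(WEIGHTS.get(t, 0.0) for t in tokens)

def is_malicious(tokens):
    return score(tokens) > THRESHOLD

malware = ["CreateRemoteThread", "VirtualAllocEx", "keylogger"]
print(is_malicious(malware))   # True: score 15.0 exceeds the threshold

# Evasion: append benign-looking strings; behaviour is unchanged,
# but the aggregate score drops to 7.0 and the verdict flips.
evasive = malware + ["DirectX", "HighScore", "SaveGame"]
print(is_malicious(evasive))   # False
```

Real engines are far more sophisticated, but the underlying weakness is the same: if the scoring is additive and partly learned from benign software, benign-looking padding can offset malicious signals.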
With AI tools enabling the generation of content by scraping data to build out a rich profile on individuals, we should foresee challenges around how much data can be aggregated and who is responsible for it: the tool that aggregates it, or the person using the tool? In the short term, expect increased requests for the right to be forgotten. Businesses will also have to take an increased interest in employees’ publicly visible personal data, and look further at what data they hold and whether it can be accessed by such AI tools. In the longer term, we should also assume further revisions to data privacy laws.
In recent years, the explosion of SaaS tools has challenged businesses’ ability to effectively leverage single sign-on solutions. Now, with the ability to scrape public information through generative AI tools, we have two incremental issues. First, we must expect this scraped data to be married up with password brute-forcing tools - pet names, family members’ names, and all the other typical things users lean on to remember their passwords. As such, ensuring that all passwords are strong has never been more important.
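To make that risk concrete, here is a minimal sketch (the names and transformation rules are invented for illustration) of how scraped profile terms can be expanded into a candidate password list, and how a defender might use the same list to reject guessable passwords at set time:

```python
import itertools

def candidate_passwords(profile_terms, years=("2023", "2024")):
    """Expand scraped profile terms (pet names, family names, etc.)
    into the kinds of passwords users typically derive from them."""
    candidates = set()
    for term in profile_terms:
        for base in (term.lower(), term.capitalize()):
            candidates.add(base)
            for suffix in itertools.chain(years, ("!", "123")):
                candidates.add(base + suffix)
    return candidates

def is_guessable(password, profile_terms):
    """True if the password appears in the profile-derived wordlist."""
    return password in candidate_passwords(profile_terms)

scraped = ["Rex", "Amelia"]  # e.g. a pet name and a family name found online
print(is_guessable("rex2024", scraped))        # True
print(is_guessable("x9#Lq!vT4&zPw", scraped))  # False
```

Real cracking tools apply far richer mutation rules, which is exactly why screening new passwords against personal-information wordlists is a worthwhile control.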
At the same time, generative AI tools make it easier to trick employees into believing they are having a conversation with a trusted colleague or third party, because they can weave in real-world context. Because of this, we will see growth in additional methods of verifying that a person is who they say they are. Multi-factor authentication therefore needs to extend across broader aspects of the business.
Looking at the ISC2 cybersecurity workforce study, whilst the number of people working in cybersecurity grew by circa 13%, the number of unfilled vacancies grew much faster, leaving more than one in two roles unfilled. This will challenge organizations to use their staff to their best potential, which in turn means further consolidating cybersecurity capabilities - and I suspect many will challenge what they manage themselves versus what they consume as outcome-based services.
At one time, security was seen as something that had to be done internally, but email filtering broke that mindset. Today, many organizations outsource parts of their SOC and IR capabilities. Now the question becomes: how much can and should be outsourced?
At one end of the spectrum, generative AI allows people with fewer skills to do more, as it provides natural-language translation of more complex tasks. For example, we can expect it to let tier 1 SOC analysts work far more simply, as they will no longer need to translate multiple vendors’ logs themselves - the AI will convert them. At the same time, we are already seeing tools like WormGPT make malware generation simpler for adversaries, as they too can use natural language.
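The log-translation chore is easy to picture. In this sketch (both vendor formats and field names are invented), two different vendors describe the same blocked connection in incompatible ways, and a simple normalization layer - whether hand-written or AI-generated - maps both to one common event schema:

```python
import re

# Two invented vendor log formats, normalized to one schema - the kind
# of translation work a tier 1 analyst does by hand today.
VENDOR_PATTERNS = {
    "vendor_a": re.compile(r"ALERT src=(?P<src>\S+) dst=(?P<dst>\S+) act=(?P<action>\S+)"),
    "vendor_b": re.compile(r"(?P<action>\w+)\|(?P<src>[\d.]+)->(?P<dst>[\d.]+)"),
}

def normalize(vendor, line):
    """Parse a vendor-specific log line into a common event dict."""
    match = VENDOR_PATTERNS[vendor].search(line)
    if not match:
        raise ValueError(f"unrecognized {vendor} log line: {line!r}")
    return {"src": match["src"], "dst": match["dst"], "action": match["action"].lower()}

a = normalize("vendor_a", "ALERT src=10.0.0.5 dst=8.8.8.8 act=BLOCK")
b = normalize("vendor_b", "block|10.0.0.5->8.8.8.8")
print(a == b)  # True: both vendors' lines map to the same normalized event
```

Once events share a schema, correlation rules and triage queries only need to be written once, regardless of which vendor produced the log.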
On the flip side, companies that want to develop or customize their use of generative AI tools will need ML engineers, AI engineers and AI scientists, all of whom are even scarcer today than cybersecurity experts.
As generative AI becomes embedded into business applications, the use of synthetic user accounts - to test and monitor capabilities, and to dynamically query and exchange data between them - will likely grow. The challenge is that these accounts are often created to test capabilities and, once set up, run the risk of becoming forgotten accounts in business systems, typically over-permissioned from the moment of creation.
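A periodic audit catches exactly this failure mode. The sketch below (account names, roles and thresholds are all invented for illustration) flags synthetic accounts that have sat idle past a threshold or hold roles beyond the minimal set such an account should need:

```python
from datetime import date

# Invented account records: a minimal sketch of flagging synthetic /
# service accounts that look forgotten or over-permissioned.
ACCOUNTS = [
    {"name": "svc-ai-bridge", "last_used": date(2023, 2, 1), "roles": {"admin"}},
    {"name": "svc-report-bot", "last_used": date(2023, 11, 20), "roles": {"read"}},
]

def flag_risky(accounts, today, max_idle_days=90, allowed_roles=frozenset({"read"})):
    """Flag accounts idle past the threshold, or holding roles beyond
    the minimal set a synthetic account should need."""
    risky = []
    for acct in accounts:
        idle = (today - acct["last_used"]).days > max_idle_days
        over_permissioned = bool(acct["roles"] - allowed_roles)
        if idle or over_permissioned:
            risky.append(acct["name"])
    return risky

print(flag_risky(ACCOUNTS, today=date(2023, 12, 1)))  # ['svc-ai-bridge']
```

In practice the account inventory would come from the identity provider rather than a hard-coded list, but the review loop - last use, least privilege - is the same.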
In years gone by, I remember meeting organizations that had offline systems and being told they were “too sensitive to secure” - an oxymoron if ever I heard one! Likewise, I remember talking to other organizations that had installed traditional endpoint AV on their legacy OT systems and killed them: these legacy systems often have a long lifespan and simply couldn’t take the additional load.
Whilst so much technology is moving to the cloud, an ever-growing part of many businesses’ critical systems isn’t cloud-enabled. Due to their critical nature, we are seeing these systems targeted by adversaries, and all too often they are also caught up in the collateral damage of more generic attacks. The reality is that closed or offline networks are not always 100% offline. With new regulations, such as the EU’s NIS2 Directive, focusing on critical business systems and their supporting supply chains, businesses are having to review and find security solutions designed for, and fit for the purpose of, running in completely offline environments.