2024 has been a roller-coaster year for the cybersecurity industry and an eventful one for cybersecurity leaders around the world.
According to a recent UK government report, half of businesses (50%) and around a third of charities (32%) reported having experienced some form of cybersecurity breach or attack in the last 12 months.
That’s a fairly shocking figure, so what’s fuelling this rise? If we look only at the headlines for clues, the results might be misleading. The biggest cybersecurity story of the past year wasn’t actually down to a cyber attack, but rather to a faulty software update from the security company CrowdStrike, which caused a global outage across a huge number of customers. More on the cybersecurity implications of that later. This headline-grabbing incident followed on the heels of a high-profile ransomware attack on NHS England, in which a Russian cyber-criminal group shared almost 400GB of private information on the dark web.
Many of the other big cybersecurity stories this year have, unsurprisingly, centred on AI. Earlier this year, during Euro 2024 fever, many of us laughed along at the AI-generated fake interviews with England manager Gareth Southgate, purporting to show him making crude remarks about his players. On a more serious note, with estimates that around 49% of the world’s population will take part in elections this year, there have been serious concerns about the risk of AI supercharging electoral disinformation. By way of example, in August Donald Trump posted AI-generated images on social media of the pop star Taylor Swift, dressed as Uncle Sam, with the caption ‘Taylor Swift wants you to vote for Trump’; Swift later cited the fabricated images when endorsing Vice President Kamala Harris.
AI is undoubtedly one of the greatest trends reshaping the cybersecurity landscape, as are the proliferation of IoT and hybrid working. But it is the sheer size, complexity and interconnectivity of IT systems and platforms, often spanning multiple continents, that have driven the huge reach and dramatic impact of recent cyber incidents.
(Incidentally, if you haven’t yet had a chance to watch our recent Chief Disruptor Forum, ‘Evaluating the AI cybersecurity landscape’, it is a great way to get up to speed with some of the latest trends and developments shaping this rapidly evolving space.)
So, back to this month’s ‘Insights With Impact’. As ever, we aim to uncover the reality beyond the headlines, so in this edition we’ll explore the extent to which these developments are changing the nature of the cybersecurity threat on the ground. We asked Chief Disruptor members, “Which of the following cybersecurity threats pose the greatest threat to your organisation?” The four options in our poll were: ‘Internal data breach’, ‘State-sponsored’, ‘Opportunistic’ and ‘AI deep fake’. Of course, there is considerable overlap between these four categories, and they shouldn’t be viewed in isolation. For example, former US Army soldier Chelsea (then Bradley) Manning, who leaked more than half a million classified US government documents, was convicted of both espionage and data breach-related offences.
The most popular response in our poll was perhaps the most prosaic of the four options: ‘internal data breach’. The internal data breach is an age-old cybersecurity challenge, and despite a year of monumental change and development, it remains the greatest concern for Chief Disruptor members. An internal data breach can be caused by a number of factors, but perhaps the most common and preventable is human error; something as simple as accidentally sending sensitive documents to the wrong person. Stolen or compromised credentials are another common attack vector, as are malware and ransomware.
We spoke with Andrew Podd, the former Chief Risk Officer at RSA, who selected this option in our poll. He told us,
“A lot of risk and cyber incidents happen by dint of human beings in their actions. My experience in banking and insurance is that something in the region of 75% to 80% of actual risk incidents or operational losses that occur are often a result of humans inadvertently sharing one client’s data with somebody else, maybe cutting and pasting an email and forgetting to change some of the required information or inadvertently clicking on a bogus phishing scam type email that may or not have been generated artificially.”
Keiron Holyome, Vice President, UK, Ireland & Middle East at BlackBerry, agreed with Andrew’s analysis. He told us,
“Though technology is advancing and making cyber threats more robust, the fundamentals still apply. It’s critical to teach those in your organisations to be vigilant and give them tools to help them know what to look out for, because they could be your best line of defence, but also your greatest weakness if not prepared.”
The good news is that organisations can take steps to better control these risks. And education, as ever, is key.
We spoke with Soumyadeep (Sam) Roy Chowdhury, Senior Industry Analyst, Growth Opportunity Analytics at Frost & Sullivan, who told us,
"Fundamental cyber hygiene practices, such as strong and regularly updated passwords, timely software updates, multi-factor authentication (MFA), and managed device access, can form the foundation of defence against both traditional and AI-enhanced cyber risks. For AI-specific vulnerabilities, robust model governance and data integrity, bolstered by regular audits and comprehensive risk analyses, are crucial."
The ‘state-sponsored attack’ was the second most common response in our poll. State-sponsored attacks are notoriously difficult to attribute, partly because states tend to deny involvement, and partly because criminal groups often do a government’s dirty work for it.
State-sponsored attacks pursue a range of goals, from espionage to disruption to political messaging. Earlier this month, Germany's domestic intelligence agency, the Bundesamt für Verfassungsschutz (BfV), warned that Russian military intelligence has been behind a series of cyber attacks on NATO and EU countries. The BfV said the attacks were carried out by Russian military intelligence's Unit 29155, which has been linked to the poisoning of a former Russian double agent and his daughter in Salisbury in 2018. The warning comes amid heightened fears in Europe of Russian hackers and spies since Russia's full-scale invasion of Ukraine began two years ago. Keiron Holyome of BlackBerry told us,
“Our intelligence team has long tracked the work of Russian-centric threat actors who are known to be either affiliated with the state or having pro-Russia sentiments. Russian cyber interference is a leading threat quarter over quarter, and organisations should take it seriously.”
It’s unsurprising that the ‘Opportunistic’ option also scored fairly highly in the poll, particularly in the wake of the CrowdStrike outage, whose fallout was unprecedented. According to Microsoft, it affected 8.5 million Windows devices, leading some commentators to call it the largest IT outage in history. Significantly, the outage prompted warnings from cybersecurity experts and agencies around the world about a wave of opportunistic hacking attempts linked to the incident. Keiron Holyome of BlackBerry told us,
“The CrowdStrike outage served as a stark reminder that the best defence is a good offence. Understanding your vulnerabilities and risks through regular testing is paramount, not only when deploying new software but consistently over time. To protect against potential threat actors who seek to take advantage of IT outages, a combination of AI-enabled internal and external penetration testing assessments remains vital.”
‘AI deep fake’ was the least-selected response in our poll. Earlier in this article we touched on the potential for AI deep fakes to supercharge electoral disinformation. In the run-up to the UK General Election, accounts were created to share deep-fake images smearing UK politicians, including Labour's Wes Streeting. While some of the fake clips and comments shared by this group of accounts on X were clearly satirical and designed for amusement, others contained more politically damaging content. One post included a doctored video of Wes Streeting, now Health Secretary, on the BBC's Politics Live show: as the presenter discusses politician Diane Abbott, the footage is altered to sound as though Mr Streeting is saying "silly woman" under his breath. Keiron Holyome of BlackBerry added his thoughts on the danger of misinformation.
“Social media is a powerful tool during election time. The huge reach of social media platforms makes them fertile ground for misinformation campaigns from political actors. On social media, deep fakes can be easily weaponised, and it is highly probable that we will continue to see deep fakes used as a tool of disinformation as many high-profile elections occur throughout 2024.”
Heena Juneja, Industry Principal, Growth Opportunity Analytics at Frost & Sullivan, also shared her concerns about the dangers of AI deep fakes enabling hackers to automate disinformation attacks and commit financial fraud. She warned,
“It can lead to identity and intellectual property theft, and the manipulation of images to sabotage the reputation of individuals and businesses. To maintain cyber hygiene, researchers and organisations are working to build software and filtering programmes. Governments and organisations are also setting regulations, policies, and best practices to control the uploading of illegitimate visual content. This circles back to the need for cybersecurity awareness as a prominent concern across all geographies.”
The issue of AI ethics is of huge concern to individuals and organisations keen to ensure that they don’t compromise societal values such as equality and fairness, or privacy and security. We’ll be exploring these important issues further at our next Chief Disruptor Forum on Enabling Ethical AI on 23 October.
Let’s double back to the question of whether there is a disconnect between the poll responses and media reports. The results of our poll suggest that the headlines don’t entirely reflect leaders’ cybersecurity concerns: the age-old challenge of the ‘internal data breach’ remains their greatest worry. But the influence of new threats such as AI is also clear, reflecting the fact that existing threats such as phishing and misinformation are amplified, or made more exploitable, by AI.
There’s so much more to discuss on this rapidly evolving issue, so I hope you’ll be able to join us at a number of very relevant Chief Disruptor activities taking place in the next couple of months, including our next Chief Disruptor Forum on ‘Delivering Tangible Results from Enterprise-wide Generative AI’ on Friday 15 November and our Chief Disruptor Breakfast Club on ‘Harnessing the power of AI’ on Friday 22 November.
Until then, let’s finish with a final thought from a Chief Disruptor member. Many would argue that the greatest threat to organisations is actually complacency, and with that in mind, Soumyadeep (Sam) Roy Chowdhury of Frost & Sullivan reminds us below of the importance of robust planning and backup, and of getting the fundamentals right.
"Given the rapid nature of adversarial AI attacks, a continuously evolving and strong incident response strategy, along with a robust backup and recovery plan are essential for effective defence. It is not just organisations but individuals who have to adopt these cybersecurity best practices, in all seriousness. The cost of loopholes in the aforementioned cyber hygiene are much higher in the age of exploitative AI models."