Artificial intelligence (AI) being used for malicious intent has surfaced as a significant concern within the digital space. Cyber criminals are using Large Language Models (LLMs), like ChatGPT, and deepfake technology to launch cyber-attacks and scams. In this blog, we focus on the darker facets of AI, shedding light on the exploitation of AI systems, its impact on the threat landscape, and what organisations can do now to better protect themselves and their most sensitive assets against this new wave of threats.
Malicious ChatGPT Prompts for Sale on the Dark Web Marketplace
- Recent reports reveal a disturbing trend where thousands of malicious prompts designed to jailbreak and exploit AI are up for sale on the dark web. These prompts deceive AI models, enabling threat actors to steal data, orchestrate sophisticated scams and other illegal activities with alarming efficiency.
- According to recent research carried out by Kaspersky, thousands of these nefarious prompts and compromised premium ChatGPT accounts are now available for purchase, posing a significant threat to ChatGPT, its users and their data. (source: The Register)
Dark Web Risk
Using AI To Boost Your Cyber Security
With the growing influence of Artificial Intelligence (AI) on various facets of our lives, its profound impact on cyber security is undeniable.
This eBook explores how AI is revolutionising traditional cyber security measures and shaping the landscape of cyber security strategies and practices.
Utilising the expertise of a distinguished panel featured at the Future Insight Technology (FIT) 2023 Conference, the four prominent speakers each bring experience and insight to the discussion.
Deepfake and AI: Partners in Crime
AI and deepfake technologies are becoming more readily available. OpenAI, for example, recently announced their new generative AI, Sora, that can create video from text. And, although this advancement in technology and its availability is exciting, it is also inevitable that there will be cyber criminals looking to use it maliciously.
Around the globe we are already seeing examples of these technologies being exploited by advanced threat actors, including cyber criminals, nation states or nation sponsored hacker groups.
$25 million theft executed through a sophisticated deepfake scam
A recent article by Ars Technica has shed light on a ground-breaking cyber crime incident considered to be the first successful heist of its kind: a $25 million theft executed through a sophisticated deepfake scam. The scam involved the creation of highly convincing AI generated deepfake videos, which were used to impersonate key individuals within a financial institution.
By leveraging these deepfake videos, the scammer manipulated employees into authorising fraudulent transactions, resulting in the substantial loss. This unprecedented heist marks a significant escalation in the sophistication of cyber criminal tactics, underscoring the evolving threat landscape faced by organisations worldwide. As the prevalence of AI-driven scams will inevitably continue to rise, it becomes increasingly crucial for businesses to bolster their cyber security posture and remain vigilant against such deceptive schemes.
Deepfake news segments
Iran-backed hackers had recently disrupted TV streaming services in the United Arab Emirates (UAE) by injecting deepfake news segments into the broadcasts according to The Guardian. These deceptive deepfake videos, generated using AI technology, were designed to resemble legitimate news reports, spread misinformation, and sow discord among viewers. This incident underscores the growing threat posed by state-sponsored threat actors and the increasing weaponisation of deepfake technology for political purposes.
As nations continue to grapple with the challenges of cyber warfare and disinformation campaigns, it becomes imperative for governments to collaborate and implement international legislation that both prohibits and protects against the use of such attack methods, as well as educate and inform organisations across all industries about AI threats and how best to protect themselves and their assets. Additionally, organisations need to enhance and adapt their cyber security capabilities to be able to identify and defend against orchestrated AI driven attacks, which is backed up by a recent assessment conducted by the NCSC. The assessment focuses on how AI will impact the efficacy of cyber operations and the implications for the cyber threat over the next two years. (source: NCSC)
Global Cyber Threats Expected to Rise With AI, NCSC Warns
According to the above-mentioned assessment by the NCSC, AI is poised to significantly impact the cyber threat landscape in the near future. The report suggests that AI will almost certainly be utilised by cyber adversaries to enhance their capabilities, including the development of more advanced attack techniques and procedures (TTPs).
As AI technologies evolve, cyber criminals are increasingly going to automate tasks, evade detection, and execute targeted attacks with greater precision. This assessment underscores the urgent need for organisations to adapt their cyber security strategies to effectively mitigate the evolving threats posed by AI-driven cyber-attacks. This includes enhancing detection and response capabilities, investing in AI-powered security solutions, enforcing zero trust policies, implementing a culture of sufficient cyber awareness and vigilance amongst staff, and staying informed about emerging AI-driven threat vectors.
While ChatGPT and other LLMs may not yet be capable of being used to write sophisticated malware to be sold at scale on the dark web or be in possession of nefarious nation states, we may not be far away from AI being used to orchestrate attack chains or write malware that can evade detection. A separate recent report from the National Cyber Security Centre (NCSC) sheds light on how AI driven ransomware attacks could become a reality by 2025. (source: NCSC)
What Can Organisations do to Protect Themselves Against AI Threats?
As AI technologies are rapidly evolving, the application of its use for both good and bad is evolving with it, leading to a rapid shift in the threat landscape. It is imperative for organisations to not just understand how to defend against AI driven threats, but to learn how to use AI technologies securely and in a manner that best protects their assets and does not expose them to new vulnerabilities or risk.
Already we are seeing collaboration amongst the international community to tackle this very issue. A recent publication on how to engage with Artificial Intelligence has been developed by the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) in collaboration with the NCSC, United States (US) Cyber security and Infrastructure Security Agency (CISA), Canadian Centre for Cyber Security (CCCS) and several other cyber security/government agencies from international partners. The publication highlights some key threats related to AI systems and summarises steps organisations should take when engaging with AI technologies (both in-house and 3rd parties) to mitigate risk. (source: ASD’s ACSC)
While this new wave of advanced threats seems daunting and paints a bleak future for stakeholders responsible for managing risk, there are several steps organisations can do to protect against these threats. Many of these types of attacks still rely on the presence of human error and social engineering. Regularly training your people and creating a positive cyber awareness culture are key to reducing this type of threat.
Further to this, unsecured vulnerabilities are a common route of entry for cyber criminals and can be identified with regular vulnerability scanning and penetration testing to identify your security weak spots.
Organisations across all sectors, of all sizes should not neglect the fundamental steps that make up the foundations of any cyber security strategy. Few organisations have the right tools, people, and processes in-house to manage their security program around-the-clock while proactively defending against new and emerging threats. Adopting security defences like Sophos MDR can provide an elite team of threat hunters and response experts to take targeted actions on your behalf to neutralise even the most sophisticated threats.
In Conclusion
For better or worse, AI is going to change how we live our lives greatly, and while its application for solving huge problems on a global scale is something to be embraced, we should also be aware of its capacity to cause great harm. Organisations need to adapt to the new world of AI driven technologies and attacks, whilst continuing to invest in the foundations of their cyber security posture.
Take the first step to improving your cyber security posture, looking at ten key areas you and your organisation should focus on, backed by NCSC guidance. Book your free 30-minute guided posture assessment with a CyberLab expert.