“Generative AI tools can be used to automate social engineering attacks, especially more sophisticated or complicated phishing efforts like quishing.”
Please tell us about your role at Darktrace and your journey in the cyber security space.
As the VP of Strategic Cyber AI for Darktrace, I offer executive leadership for the application of artificial intelligence to cyber security. That includes using my AI expertise to advise and work with customers as well as supporting thought leadership initiatives, product strategy, research, and communications. When I started at NASA 25 years ago, I was getting my degree in Computer Science focusing on AI. Then, I joined the intelligence community as a developer but quickly pivoted to cyber security operations.
After a decade, I took all that I had learned about cyber-attacks and went back to data science and machine learning. We started at the network level since that is where it is hardest for adversaries to hide. We took big network data sets with hundreds of threat intelligence data feeds and started mining, parsing, fusing, processing, and automating the threat hunter analysis process. Then I started helping others with building out their threat hunting data science efforts. We utilized various supervised machine learning techniques to provide context to communications, reducing the laborious threat analysis effort.
After serving as the network product security architect, I detoured into deception to gain current insight into the cyber threat landscape as well as how to best use syslog telemetry in AI efforts. And that is when I landed at Darktrace, a leader in cybersecurity AI that is on a mission to free the world from cyber disruption by using unique AI techniques to autonomously perform some of the most laborious and difficult efforts of security defense at machine speed.
In your recent research data, you have mentioned that multi-stage payload attacks are targeting customers.
Could you please shed some light on what these attacks look like? How can CISOs identify these attacks?
Our recent research data shows a significant uptick in the volume of QR code phishing attacks (a multi-stage payload attack), otherwise known as “Quishing”.
The increased use of QR codes in phishing attacks demonstrates that attackers are pivoting their attack tactics, embracing new, more automated techniques that can thwart traditional defenses with greater agility and efficiency.
Traditional solutions scan for malicious links in easy-to-find places. In contrast, finding QR codes within emails and determining their appropriate destination requires rigorous image recognition techniques to mitigate risks.
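To make the detection problem concrete, here is a minimal, hypothetical sketch (not Darktrace's implementation) of the first step such a scanner must perform: walking an email's MIME tree to collect embedded images that are candidates for QR analysis. The sender address, boundary, and filenames are invented for illustration; actual QR decoding would require an image-recognition library (such as OpenCV), which is deliberately omitted here.

```python
from email import message_from_string
from email.message import Message

def find_image_parts(raw_email: str) -> list[tuple[str, bytes]]:
    """Walk an email's MIME tree and collect image attachments.

    Each image is a candidate for QR-code analysis; a real scanner
    would hand the bytes to an image-recognition / QR-decoding step
    (e.g. a library such as OpenCV), which is omitted in this sketch.
    """
    msg: Message = message_from_string(raw_email)
    images = []
    for part in msg.walk():
        if part.get_content_maintype() == "image":
            filename = part.get_filename() or "inline-image"
            payload = part.get_payload(decode=True) or b""
            images.append((filename, payload))
    return images

# Hypothetical quishing email with an attached "QR code" image.
raw = (
    "From: it-support@example-newdomain.test\n"
    "Subject: Update your two-factor authentication\n"
    "MIME-Version: 1.0\n"
    'Content-Type: multipart/mixed; boundary="b1"\n'
    "\n"
    "--b1\n"
    "Content-Type: text/plain\n"
    "\n"
    "Scan the attached QR code to keep your 2FA active.\n"
    "--b1\n"
    "Content-Type: image/png\n"
    'Content-Disposition: attachment; filename="qr.png"\n'
    "Content-Transfer-Encoding: base64\n"
    "\n"
    "aGVsbG8=\n"
    "--b1--\n"
)

candidates = find_image_parts(raw)
```

The point of the sketch is that QR payloads hide inside image parts rather than in scannable link text, which is why link-scanning defenses miss them.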
We also found that quishing attacks are often accompanied by high levels of targeting and newly created sender domains, further decreasing the likelihood of such emails being detected by traditional email security solutions. These traditional solutions rely on signatures and known-bad lists to detect bad actors and malicious activity.
The most common social engineering technique that accompanies malicious QR codes is the impersonation of internal IT teams, specifically emails claiming users need to update two-factor authentication configurations. When setting up two-factor authentication, most instructions require users to scan a QR code. Thus, attackers are now mimicking this process to expedite attacks.
For example, in June 2023, Darktrace protected a tech company against a quishing attack where five of its senior employees were sent malicious emails impersonating the company’s IT department. The emails contained a QR code that led to a login page designed to harvest the credentials of these senior staff members.
These attacks are difficult for CISOs to identify.
To protect against such attacks, CISOs should consider investing in AI technology that can learn their organization's typical communication patterns and understand what constitutes normal (and abnormal) behavior within email environments, instead of relying on historical attack data and patterns.
Impersonation of IT teams is one of the newer cyber security threats we are hearing about in 2023. How do attackers actually impersonate an IT team without detection?
Attackers frequently pivot and adjust their techniques as efficacy declines.
Between May and July this year, our research team observed that VIP impersonation (phishing emails that mimic senior executives) decreased 11%, while impersonation of the internal IT team increased by 19%. The shift suggests that employees have become more aware of impersonation of senior executives so attackers are increasingly pivoting to internal IT impersonation as a social engineering tactic.
It's important to remember that an attacker will rarely send a single bare link in an otherwise empty email and hope a user clicks on it. Instead, they wrap it in a convincing, deceptive pretext to achieve their objective.
For example, an attacker might send an email impersonating an internal IT notification system to trick the recipient into sharing their passwords and credentials or downloading malware onto their device. This could be an email that arrives in an inbox claiming to be a notification from the company mail server: your inbox has reached maximum capacity and all emails will be deleted in the next hour unless you take steps to solve the problem. You follow the link which says 'click to expand your storage' and see the company login page requesting your email address and password. Of course, the page is not authentic, and when you complete it, your credentials are simply harvested by the attacker.
What do you mean by the phrase “Right AI for the Right Job”?
Whether it be a public chatbot that can create a new story or an enterprise AI system that helps make business critical decisions, each AI technology is unique and formed using a few essential components. That includes compute resources to build and run these systems as well as AI algorithms.
However, it’s the data, and how that data interacts with those algorithms that is critical. Each AI technique has its strengths and limitations.
To help ensure the application of AI achieves the desired outcome, organizations must use the right AI, trained on the right data, and apply it to the right task. It doesn't make sense for a public chatbot to be used to interpret private medical images, or to use an AI system trained on medical images for creative content generation. It will not be effective or accurate. It is imperative that organizations begin applying the right AI for the right job.
Also, most AI today is trained periodically in offline training labs, where the AI data pipeline uses huge amounts of combined historic training data to create static AI model outputs. But learning from data in real time offers huge efficiency gains.
At Darktrace, training on each customer’s data in real time lies at the heart of everything we do. Instead of taking their data to the AI, we take our AI to their data so that it can learn in real time from everything that happens in their digital world – whether that’s email, cloud systems, applications, OT systems, network and beyond. This deep understanding of each business and its unique risks and behaviors, enables organizations to enhance their protection from new or novel threats.
What are the challenges in cybersecurity that go beyond the scope of AI and automation?
While the adversarial use of AI and automation will exacerbate the cybersecurity challenges that businesses face, there are still a number of other challenges that businesses are dealing with. One of the most prominent is human error.
According to the World Economic Forum, 95% of cybersecurity incidents occur due to human error. This can include everything from software misconfigurations to weak passwords or credentials and unpermitted access to systems. Not having sufficient processes, procedures and education campaigns in place to limit these exposures can be detrimental to an organization. Humans continue to remain the last line of defense in most organizations, which is why championing and embedding security into all aspects of the business remains so important.
Another ongoing challenge is the cyber skills shortage.
The cyber industry is facing unparalleled demand for skilled security professionals. According to the (ISC)² 2022 workforce study, there is currently a shortfall of more than 3.4 million security professionals. AI can help augment security teams, uplifting personnel from constant firefighting and equipping them to proactively track the company's security posture and step in to mitigate and remediate risks when suspicious activity is detected.
Lastly, supply chain risks are another challenge that businesses should be aware of.
Businesses are increasingly dependent on global systems and third-party vendors, so underestimating the risks that can come from your suppliers can be a huge issue. Knowing how your suppliers work, the defenses they have in place, and what happens if they are compromised is important; so is a cybersecurity posture that can detect anomalies like third-party breaches or tonal shifts in the language of a supplier's emails.
Your report mentions “quishing” – how are attackers improvising on automation tools to launch these attacks?
Generative AI tools can be used to automate social engineering attacks, especially more sophisticated or complicated phishing efforts like quishing. For example, tools like AutoGPT could be used to craft linguistically sophisticated phishing emails, to identify domains that can be purchased for infrastructure, to generate a QR code for the link, and to generate content for the embedded link (like credential harvesting login page or content to induce an action of the user or for hosting malware). Generative AI has upskilled more novice threat actors and armed them with tools to facilitate automation.
We’re seeing early signs of attackers using AI and automation to their advantage, and we expect that the speed of these types of attacks will rise as automation and AI are increasingly adopted and applied over time.
Social engineering is a massive challenge for CIOs and CISOs.
We have analyzed the leading cyber security training and learning programs that specifically train employees and customers about social engineering attacks.
Could you tell us what steps the Cyber AI Research Centre is taking in this area?
Early on, our team realized that humans and our psychology are at the heart of the security challenge, particularly in relation to email security. Our research found that the top three characteristics that make employees think an email is risky are: being invited to click a link/open an attachment, unknown sender/unexpected content, and poor spelling/grammar. However, today, generative AI is creating a world where ‘bad’ emails may not possess these more obvious characteristics, and malicious clues can often be indistinguishable to the human eye.
Many organizations turn to security awareness and training to help mitigate this issue. However, the age-old ‘generate a non-personalized fake Netflix password reset’ email sent to each and every employee has diminishing returns. We take a different approach when using AI.
Our teams in the Cyber AI Research Centre have been researching and developing new AI techniques to address this challenge.
Darktrace PREVENT uses AI for breach and attack emulation, using critical attack paths that it has identified to create AI-generated social engineering attack simulations, which closely align with real-world attack scenarios. The technology can also identify users who are potentially vulnerable to phishing, allowing IT teams to tailor training based on real-world data. We also recently introduced new features in Darktrace/Email that use AI to identify unusual/risky components of an email and take precise action based on the nature of the email to neutralize them, such as rewriting links and flattening attachments.
AI can go one step further and explain in non-technical language why it has taken a specific action, which educates users. AI can also share its analysis in context and in real time when a user is questioning an email, creating a continuous employee-AI feedback loop.
AI is also used to classify emails into complex categories, such as whether an email is attempting to induce the user to take an action.
AI applies rigorous image recognition techniques to identify QR codes within emails and determine their destinations. AI is also used to understand the user and the organization at both the micro and macro level. Identifying anomalous communications, or email components with high rarity scores, enables a machine-speed response to defend against attacks.
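To illustrate the idea of a "rarity score" mentioned above, here is a toy, hypothetical sketch: a feature (such as a sender domain) is scored by how rarely it has appeared in an organization's own history. This is an assumption-laden stand-in for the concept, not the product's actual model, which would learn many features jointly and in real time; the domain names and counts are invented.

```python
from collections import Counter

def rarity_score(feature: str, history: Counter, total: int) -> float:
    """Score how rare a feature (e.g. a sender domain) is for this
    organization: 1.0 means never seen before; values near 0.0 mean
    the feature is common. A toy illustration of the 'rarity' idea."""
    if total == 0:
        return 1.0  # no history at all: everything is maximally novel
    seen = history.get(feature, 0)
    return 1.0 - (seen / total)

# Hypothetical history of sender domains observed for one organization.
history = Counter({
    "partner.example": 950,
    "vendor.example": 49,
    "newly-registered.example": 1,
})
total = sum(history.values())  # 1000 observed emails

common = rarity_score("partner.example", history, total)    # low rarity
novel = rarity_score("never-seen.example", history, total)  # maximal rarity
```

A never-before-seen sender domain scores 1.0, which is exactly the signal that makes newly created phishing domains stand out against an organization's learned baseline.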
Your take on the future of data recovery and ransomware response: How can CISOs and Infosec teams leverage AI and automated capabilities to respond to attacks and heal their system?
For most organizations, incident response remains a tedious, manual and time-consuming task. Managing any type of cyber-attack presents an enormous challenge for security teams, who are required to make decisions quickly in the midst of the attack based on hundreds of changing and variable data points and factors. A recent ransomware incident, observed by our team, would have required 60 hours of investigative work to build a complete understanding of its full scope and varied details, despite the malicious activity unfolding across just 10 hours. The pressure and complexity facing these teams is only poised to grow.
AI can be used to augment security teams and better equip them to build cyber resilience so they can more confidently and quickly address live incidents if and when they occur. For example, we recently launched Darktrace HEAL with AI technology that can enable organizations to simulate real-world cyber incidents, allowing teams to prepare for and safely practice their response to complex attacks on their own environments.
AI can also be used to help organizations better understand their attack paths and how an attack has unfolded, reducing information overload, prioritizing actions, and enabling faster decision-making at critical moments.
AI enables autonomous, machine speed, surgical response to contain an on-going incident, giving security teams time to remediate. It can also help teams prioritize effectively during an incident by collecting all of the information on an ongoing incident, performing analysis and presenting priorities.
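As a minimal illustration of the prioritization idea, the sketch below ranks alerts by combining severity with rarity. The alert names, fields, and weighting are all invented assumptions for the example; they are not a real product formula.

```python
def prioritize(alerts):
    """Rank alerts so the highest-risk items surface first.

    Each alert is a dict with 'severity' and 'rarity' in [0, 1].
    The 0.6/0.4 weighting is an illustrative assumption only.
    """
    return sorted(
        alerts,
        key=lambda a: 0.6 * a["severity"] + 0.4 * a["rarity"],
        reverse=True,
    )

# Hypothetical alerts from an ongoing incident.
alerts = [
    {"id": "lateral-movement", "severity": 0.9, "rarity": 0.8},
    {"id": "failed-login", "severity": 0.3, "rarity": 0.2},
    {"id": "new-admin-tool", "severity": 0.6, "rarity": 0.95},
]
ranked = prioritize(alerts)
```

Even this toy ordering shows the point: a team drowning in alerts sees the riskiest activity first instead of triaging chronologically.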
Burn the midnight candle or soak in the sun?
Both – The OCD side of me pushes to finish, at all costs, whatever projects I am obsessing about at the moment. But burnout in this industry is real and frequent, so it is important to find balance, take breaks, and recharge. Thankfully, my spouse is an incredible travel planner and always makes me take breaks.
Coffee, or Tea?
Both – But, I depend on coffee.
Your favorite Darktrace product marketing initiative that you want everyone to know about?
Darktrace began ten years ago as an AI Research Centre and has always been focused on sharing our unique AI approach and techniques with our customers and the broader industry. But the widespread availability of generative AI has increased the appetite for technical, in-depth discussions of AI in an incredible way. We are excited to continue to break open the black box of AI and participate in discussions about how AI can be effectively applied to cyber security and used to uplift and augment human security teams.
First memorable experience in your career as a cyber security professional
I have too many to count, and many I can't even talk about from the Intelligence Community. But many of my best days came from uncovering incredibly stealthy adversaries and dissecting their innovative tactics. In some of these cases, I felt like a detective mapping out all their infrastructure and victims. It is thrilling to unveil some of the worthiest of adversaries.
One thing you remember about your employee(s):
I have had the pleasure of working with some of the smartest, most hard-working individuals throughout my career. The best teams that I have worked with are full of diversity of thought and passion for the project, as well as being great problem solvers.
Most useful app that you currently use:
It is too hard to pick just one. All aspects of my life are chaotic, so any app that brings coordination, organization, communication, and/or productivity optimization is critical. Plus, I don't need to list out an entry vector for my personal devices here ☺.
Thank you, Nicole! That was fun and we hope to see you back on CIO Influence again.
Nicole Carignan has 25 years of experience in cybersecurity, networks, computer science and IT supporting numerous private and public institutions in cybersecurity solutions and analysis. She has 20 years of federal USG experience. Her expertise in Cyber Threat Intelligence, Data Science, Machine Learning, Artificial Intelligence, Defense in Depth Solutions, Operations Engineering, and Network Security products gives her insight into technology, tools, and techniques for unique solutions.
Darktrace (DARK.L), a global leader in cyber security artificial intelligence, is on a mission to free the world of cyber disruption.
Breakthrough innovations in our Cyber AI Research Centre in Cambridge, UK have resulted in over 145 patents filed and research published to contribute to the cyber security community. Rather than study attacks, Darktrace's technology continuously learns and updates its knowledge of 'you' and applies that understanding to achieve an optimal state of cyber security.
Darktrace is delivering the first ever Cyber AI Loop, fuelling a continuous end-to-end security capability that can autonomously spot and respond to novel in-progress threats within seconds. Darktrace employs over 2,200 people around the world and protects approximately 8,800 customers globally from advanced cyber threats. Darktrace was named one of TIME magazine’s ‘Most Influential Companies’ in 2021.