
Do stop believing: Deepfakes’ journey to be the new cybersecurity threat

Businesses today are confronted with unprecedented geopolitical and technological risks, making agility more crucial than ever. In a world where anything can change in an instant, organisations must adapt quickly to stay ahead and remain resilient in the face of constant disruption. One of the most pressing threats is cybersecurity, in a landscape now heavily shaped by AI. The evolution of attacks since the advent of AI has been startling: AI has lowered the barrier to entry to cybercrime. No longer does one need to be a skilled hacker to infiltrate a company’s data – now, a simple prompt is enough for AI to generate a sophisticated attack strategy.

AI-generated deepfakes are a pressing threat. The ‘innovation’ of cyberthreats keeps evolving, and the growth of deepfakes has been exponential: deepfake content on social media alone grew 550% between 2019 and 2023, and an estimated 8 million deepfakes will circulate in the UK in 2025. With the World Economic Forum naming deepfakes a key global risk, this is something CIOs and businesses can’t ignore.

Deepfakes are AI-generated depictions of real-life people, and as AI has improved, so has its capability to produce ever more believable ‘individuals’. At a time when there is more content available online than ever before, AI has a richer selection of assets from which to recreate people’s voices and likenesses. These fake identities can then be introduced into virtual meetings, phone calls, or even training videos.

Deepfakes exploit human trust

What makes deepfakes particularly dangerous is their ability to bypass traditional security defences. They’re designed to exploit human trust – our natural tendency to believe what we see and hear. It is often said that ‘seeing is believing’, but deepfakes put that adage at risk.

Deepfakes have already caused havoc in the political space. For example, a fabricated audio clip in which London Mayor Sadiq Khan appeared to make inflammatory remarks ahead of Armistice Day almost led to serious public unrest.

Similarly, $25 million was stolen from an engineering company after hackers used “a digitally cloned version of a senior manager to order financial transfers during a video conference.”

Seeing is not believing

The majority of employees recognise that an odd-looking email from the CEO asking for vouchers is likely a scam. In a busy moment it might still fool someone, but many can spot a poorly crafted phishing attempt. Deepfakes, however, demand an entirely different level of scepticism: you can no longer assume anything is real just because it looks or sounds convincing.

Policies and response plans need to be updated to reflect the emergence of deepfakes, incorporating steps for verifying video and audio content:

  • Establish clear policies: implement policies for the verification, detection, and escalation of deepfake threats.
  • Verify sensitive requests: any request involving money, credentials, or confidential data should always be subject to extra verification (e.g., via a call-back or secondary approval); a minimal sketch of such a gate follows this list.
  • Adapt risk models: update risk models to consider how deepfakes could target critical business functions, such as executive communications, financial approvals, or customer interactions.
  • Incorporate deepfake awareness: include deepfake recognition in regular cybersecurity training to help employees identify red flags and understand the scope of the threat.
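
To illustrate the call-back control above, here is a minimal Python sketch of a secondary-approval gate for sensitive requests. It assumes a simple policy of two approvals arriving on channels independent of the one the request came in on; the class, field, and channel names are illustrative, not any specific product’s API.

    # Minimal sketch of the "verify sensitive requests" control above: a request
    # raised on one channel (say, a video call) only executes after independent
    # approvals arrive on other channels. Names are illustrative, not a product API.

    from dataclasses import dataclass, field

    REQUIRED_APPROVALS = 2  # assumed policy: e.g. a call-back plus a second sign-off

    @dataclass
    class SensitiveRequest:
        requester: str            # identity claimed on the originating channel
        action: str               # e.g. "wire_transfer"
        origin_channel: str       # channel the request arrived on
        approvals: dict = field(default_factory=dict)  # approver -> channel used

        def approve(self, approver: str, channel: str) -> None:
            # An approval only counts if it arrives on a different channel than
            # the original request and comes from someone other than the requester.
            if channel != self.origin_channel and approver != self.requester:
                self.approvals[approver] = channel

        def may_execute(self) -> bool:
            return len(self.approvals) >= REQUIRED_APPROVALS

    req = SensitiveRequest("cfo@example.com", "wire_transfer", "video_call")
    req.approve("cfo@example.com", "video_call")        # ignored: same person, same channel
    req.approve("controller@example.com", "phone_callback")
    req.approve("ciso@example.com", "ticketing_system")
    print(req.may_execute())  # True only once two out-of-band approvals exist

The key design choice is that approvals on the originating channel count for nothing: if the video call itself is a deepfake, only out-of-band confirmation breaks the attack.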

However, any organisation that leaves any part of its cybersecurity posture to human judgment alone will eventually suffer a breach.

AI is key to an all-encompassing cybersecurity posture  

AI is both the problem and the cure. As deepfakes become more believable and lifelike, AI on the ‘other side’ is improving at spotting what is real and what isn’t. Innovative ML models, multi-modal AI especially, are becoming highly effective at spotting the telltale signs of a deepfake – unnatural blinking, facial inconsistencies, or mismatched audio-visual elements – cues that easily slip past the human eye.
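
As a concrete illustration of one such telltale sign, the sketch below flags clips whose blink rate falls outside a plausible human range – a known weakness of early deepfake generators. It assumes a per-frame eye-aspect-ratio (EAR) series has already been extracted with a facial-landmark model (tools such as dlib or MediaPipe can produce one); the thresholds are assumptions chosen for illustration, not tuned values.

    # Illustrative sketch, not a production detector: early deepfakes often
    # blinked unnaturally rarely, so an implausible blink rate is one red flag.
    # Assumes a per-frame eye-aspect-ratio (EAR) series extracted upstream.

    import numpy as np

    EAR_BLINK_THRESHOLD = 0.21       # eye treated as closed below this ratio (assumed)
    HUMAN_BLINKS_PER_MIN = (8, 30)   # rough plausible human range (assumed)

    def count_blinks(ear_series: np.ndarray) -> int:
        """Count completed blinks, i.e. closed-to-open transitions."""
        closed = ear_series < EAR_BLINK_THRESHOLD
        return int(np.sum(closed[:-1] & ~closed[1:]))

    def looks_synthetic(ear_series: np.ndarray, fps: float) -> bool:
        minutes = len(ear_series) / (fps * 60)
        rate = count_blinks(ear_series) / minutes
        low, high = HUMAN_BLINKS_PER_MIN
        return not (low <= rate <= high)

    # Placeholder input: 60 seconds at 30 fps containing a single five-frame blink.
    ear = np.full(1800, 0.3)
    ear[900:905] = 0.1
    print(looks_synthetic(ear, fps=30.0))  # True: one blink per minute is suspicious

A production detector would combine many such signals – blink dynamics, facial consistency, audio-visual synchronisation – in a learned multi-modal model rather than relying on a single hand-tuned rule.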

Yet not all deepfake detection solutions are created equal. Adopt one that is zero trust, application-agnostic, and able to detect deepfakes in real time, especially on leading platforms such as Teams, Zoom, Webex, Chrome, YouTube, and Meta. Also consider ease of adoption, prioritising seamless deployment with flexible options that suit varying enterprise needs. Every endpoint needs to be protected, and solutions that can be installed as a lightweight software agent on personal computers and laptops, or packaged with secure SSDs, create a unified defence layer that spans data protection, ransomware prevention, and deepfake detection.

Ultimately, for businesses to be truly secure from deepfakes, they need to start by defending the hardware and then expand to establish a multilevel posture that monitors, flags, and secures at every level. Adopting a secure-by-design approach is crucial. By deploying solutions that embed AI-driven security features into hardware and endpoints, businesses can ensure systems are operating and defending round the clock, even without the broader protection of a corporate network.
