When AI Isn’t AI... and the CEO Isn’t the CEO

Chatbots are humans, deepfakes Zoom in, and exploits outrun your patch cycle.

In partnership with Superhuman AI

💡 Fun Cyber Quote: 
“Amateurs hack systems, professionals hack people.” 
— Bruce Schneier

📬 This Week’s Clickables

  • 📌 Big News – Flash-exploited CVEs & deepfake Zooms

  • 🚨 Can’t Miss – Poisoned repos & botnet scans

  • 🤖 AI in Cyber – Jailbreaking LLMs & poisoned cloud security

  • 🧪 Strange Cyber Story – The “AI” startup powered by 700 hidden engineers

🚨 Big Stories This Week

🔦 Flash Exploits: 45 CVEs Attacked Within 24 Hours of Disclosure

Intro:
The patching window is officially broken. A new report finds that nearly a third of the vulnerabilities exploited in the wild in Q1 2025 were hit within a single day of disclosure.

What Happened:
According to VulnCheck, of the 159 CVEs exploited in the wild during Q1, 28.3% were attacked within 24 hours of being publicly posted. This means cybercriminals are increasingly turning vulnerability feeds and advisory disclosures into same-day weaponization pipelines.

Why It’s Important:
This speed leaves no room for manual processes—security teams relying on traditional patch cycles are now operating on outdated timelines. Flash exploitation is no longer a theoretical edge case; it’s routine.

The Other Side:
Some organizations still rely on third-party vendors or operate within industrial and OT environments that can't apply patches immediately. This creates persistent risk windows even when vulnerabilities are known.

The Takeaway:
Automated patch triage, vulnerability exploit simulations, and zero-trust segmentation are now foundational—not optional. If your team is still patching weekly, you're already late.
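Where to start? One low-effort step is watching the same feeds attackers do. Here's a minimal Python sketch that polls CISA's Known Exploited Vulnerabilities (KEV) catalog and flags entries matching products you run; the inventory set and print-based alerting are hypothetical placeholders, while the URL and field names follow CISA's published KEV JSON schema.

```python
# Minimal KEV triage sketch: poll CISA's Known Exploited Vulnerabilities
# catalog and flag entries that match products we actually run.
import requests

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

# Hypothetical inventory: lowercase vendor/product substrings in our estate.
INVENTORY = {"fortinet", "ivanti", "vmware", "confluence"}

def fetch_kev() -> list[dict]:
    resp = requests.get(KEV_URL, timeout=30)
    resp.raise_for_status()
    return resp.json()["vulnerabilities"]

def triage(vulns: list[dict]) -> list[dict]:
    hits = []
    for v in vulns:
        haystack = f"{v['vendorProject']} {v['product']}".lower()
        if any(item in haystack for item in INVENTORY):
            hits.append(v)
    return hits

if __name__ == "__main__":
    for v in triage(fetch_kev()):
        # In production this would page on-call or open a ticket, not print.
        print(f"{v['dateAdded']}  {v['cveID']}: "
              f"{v['vendorProject']} {v['product']}")
```

Run on a short cron interval and wired into paging, a check like this shrinks the gap between public disclosure and internal awareness.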

TL;DR:
28% of exploited CVEs in Q1 were attacked within 24 hours. Fast patching isn’t good enough—your defenses need to move at attacker speed.

🕵️‍♂️ BlueNoroff Deepfake Zoom Scam Installs macOS Backdoor

Intro:
The line between phishing and face-to-face manipulation just disappeared. North Korean APT group BlueNoroff has weaponized real-time deepfake video to target high-value individuals.

What Happened:
In this highly targeted campaign, BlueNoroff used deepfake Zoom calls to impersonate trusted individuals—likely colleagues or contacts—convincing victims to install a macOS backdoor disguised as a legitimate file. The attack successfully infiltrated a crypto foundation employee's system, using real-time AI-generated avatars to earn trust before delivering malware.

Why It’s Important:
This marks one of the first known cases where live deepfake video was used to deploy malware during a virtual meeting. It’s a chilling evolution in social engineering: even seeing someone’s “face” on video is no longer reliable verification.

The Other Side:
While macOS has traditionally been considered more secure than Windows, this attack shows threat actors are investing serious R&D in cross-platform payloads. Apple users can no longer assume they're safe by default.

The Takeaway:
Implement secondary verification procedures for virtual meetings, especially before executing any software or clicking links. Face isn’t identity anymore—voice, timing, and context matter just as much.
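Process controls can be backed by technical ones. As a rough macOS-only sketch (the wrapper itself and its pass/fail wording are illustrative; spctl and codesign are Apple's own command-line tools), here's a pre-flight check to run before opening anything sent during a call:

```python
# Sketch: refuse to open a file from a meeting unless it passes both
# macOS Gatekeeper assessment and code-signature verification.
import subprocess
import sys

def gatekeeper_ok(path: str) -> bool:
    # `spctl --assess` exits 0 only if Gatekeeper would allow execution.
    result = subprocess.run(
        ["spctl", "--assess", "--type", "execute", path],
        capture_output=True,
    )
    return result.returncode == 0

def signature_ok(path: str) -> bool:
    # `codesign --verify` exits 0 for an intact, validly signed binary.
    result = subprocess.run(
        ["codesign", "--verify", "--deep", path],
        capture_output=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    target = sys.argv[1]
    if gatekeeper_ok(target) and signature_ok(target):
        print(f"{target}: signed and Gatekeeper-approved")
    else:
        print(f"{target}: failed verification; do not open")
```

A passing check isn't proof of safety (signed malware exists), but a failing one is a hard stop.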

TL;DR:
A North Korean APT used live deepfake Zoom calls to trick a crypto employee into installing malware. Virtual meetings just became a whole new attack surface.

🔥 Can’t Miss This Week

🤖 AI in Cyber

Find out why 1M+ professionals read Superhuman AI daily.

In 2 years you will be working for AI

Or an AI will be working for you

Here's how you can future-proof yourself:

  1. Join the Superhuman AI newsletter – read by 1M+ people at top companies

  2. Master AI tools, tutorials, and news in just 3 minutes a day

  3. Become 10X more productive using AI

Join 1,000,000+ pros at companies like Google, Meta, and Amazon who are using AI to get ahead.

🧟‍♂️ Strange Cyber Story of the Week

🧪 The “AI” Chatbot That Was Actually 700 People

Intro:
Builder.ai pitched itself as the future of app development—an AI that builds software for you, no code needed. But behind the scenes? A team of 700 engineers in India manually fulfilling requests while customers chatted with a fake “assistant” named Natasha.

What Happened:
Claiming a revolutionary AI product, the startup raised over $250M and won backing from Microsoft. Instead, it operated more like a high-tech call center wrapped in buzzwords and UI polish. After whistleblowers revealed the deception, the company filed for bankruptcy and is now under investigation.

Why It’s Important:
This wasn’t just “fake it till you make it”—this was AI-washing at scale, duping investors and customers alike. It shows how easy it is for startups to hide human labor behind a chatbot icon in the age of generative AI hype.

The Other Side:
Some insiders called it a classic “Wizard of Oz” approach—manual execution while building toward automation. But when you’re valued at $1.5B and still pretending your chatbot is AI, the smoke and mirrors don’t look clever—they look fraudulent.

The Takeaway:
Trust, but verify. If your “AI” tool seems too good—or too human—to be true, ask what’s really running under the hood.
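There's no definitive test, but response timing is one crude signal you can check yourself. The sketch below is purely hypothetical and not drawn from the Builder.ai reporting: it flags chat sessions whose reply latencies look human (slow, highly variable) rather than machine-fast, using invented thresholds.

```python
# Purely illustrative heuristic: flag "AI" chat sessions whose reply
# latencies look human (slow, highly variable) rather than machine-fast.
# Thresholds are invented for illustration, not derived from real data.
import statistics

def looks_human(latencies_s: list[float]) -> bool:
    """latencies_s: seconds between each user message and the bot's reply."""
    mean = statistics.mean(latencies_s)
    spread = statistics.stdev(latencies_s)
    return mean > 30 or spread > 20

# Replies arriving minutes apart at erratic intervals: suspicious.
print(looks_human([95.0, 240.5, 41.2, 600.0]))  # True  -> probably people
print(looks_human([2.1, 3.4, 2.8, 3.0]))        # False -> plausibly a model
```

One signal among many; timing alone proves nothing, but it's a cheap first question to ask.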

TL;DR:
Builder.ai fooled the market with a fake AI assistant powered by real people. Natasha wasn’t synthetic—she was salaried.

Thanks for reading this week’s edition. Like what you see? Forward it!

Hate everything you see or have other feedback? Reply to this email!