In partnership with

⏱️ ≈ 8-minute read

Editor’s Note: Industrial-scale scam factories, Zendesk impersonation, and an AI-powered Santa all walk into your threat model. This week we’re zooming out on the “scam state” economy, zooming in on help-desk phishing aimed at Zendesk, and tracking how AI is quietly supercharging both fraud and holiday hoaxes. Grab coffee, audit your vendors, and maybe don’t trust every Santa in your inbox.

📜 Table of Contents

📌 Big News — Scam State goes industrial • Zendesk impersonation campaign
🚨 Can’t Miss — Albiriox Android malware • OAST exploit platform • DPRK npm poisoning • CodeRED outage
🤖 AI in Cyber — FBI ATO fraud surge • Shadow identity risks • OpenAI vendor breach • 620% AI-phishing spike
🧪 Strange Cyber Story — Scammer Claus and his AI-powered holiday hoaxes

🚨 Big Stories

🐷 Age of the “Scam State”: Pig-Butchering Goes Industrial

Intro

Pig-butchering scams have evolved from low-end romance cons into a multibillion-dollar industry powered by AI, deepfakes, and forced labor. In parts of Southeast Asia, “scam compounds” now function like industrial parks—except the product is global fraud.

What Happened

Reporting from the region describes large compounds in Cambodia, Laos, and Myanmar where trafficked workers are forced to run romance, crypto, and investment scams around the clock. Scripts, personas, and even fake trading dashboards are generated or optimized by AI, allowing each “agent” to manage many victims in parallel while adapting to their behavior.

Why It’s Important

This isn’t a lone scammer DM’ing your users—it’s a structured business with KPIs, middle management, and its own R&D pipeline. The same AI-driven social-engineering playbooks are already bleeding into business email compromise, vendor fraud, and account-takeover targeting executives and finance teams.

The Other Side

Governments and international bodies are starting to respond: UNODC now calls these operations a human-trafficking and economic-crime crisis, and law-enforcement coalitions like the “Scam Center Strike Force” are trying to disrupt the worst offenders. But corruption, safe havens, and the sheer profitability of the model mean enforcement is still playing catch-up.

The Takeaway

👉 Fraud has become an export industry, and AI is its favorite productivity tool. Treat pig-butchering and high-touch social engineering as a strategic risk to your customers, brand, and payment flows—not just a “consumer awareness” slide.

TL;DR

Scam “mega-factories” in Southeast Asia are industrializing pig-butchering with AI tooling and forced labor, and the resulting fraud is hitting victims, banks, and enterprises worldwide.

Further Reading

Global pig-butchering fraud losses are estimated to have exceeded $75 billion since 2021—supercharged by AI-generated personas and scripts.

🧩 Zendesk in the Crosshairs: Typosquatted Help Desks and RAT-Enabled Phishing

Intro

Attackers finally realized the help desk is the skeleton key of the enterprise. If you can convincingly impersonate support, you don’t need a 0-day—you just need one overworked agent.

What Happened

Researchers are tracking a campaign abusing lookalike Zendesk domains and phishing pages that are nearly indistinguishable from the real console. At the same time, a Scattered Lapsus$-style crew is registering dozens of fake Zendesk domains and submitting malware-laced “tickets” designed to compromise agents’ endpoints and harvest their credentials.
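Lookalike-domain campaigns like this are why many teams pre-generate typosquat candidates for their own brands and feed them into registration monitoring. Here’s a minimal illustrative sketch of that idea—purpose-built tools like dnstwist cover far more permutation classes, and the keyword list here is just an assumption about common help-desk phishing patterns:

```python
# Sketch: generate lookalike-domain candidates for a brand name so they
# can be fed into domain-registration monitoring. Illustrative only —
# tools like dnstwist implement many more permutation classes.

def typosquat_candidates(brand: str, tlds=(".com", ".net", ".help", ".support")):
    names = set()
    # Character omission: "zendesk" -> "zendsk", "zndesk", ...
    for i in range(len(brand)):
        names.add(brand[:i] + brand[i + 1:])
    # Adjacent-character swap: "zendesk" -> "ezndesk", "znedesk", ...
    for i in range(len(brand) - 1):
        chars = list(brand)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
        names.add("".join(chars))
    # Character doubling: "zendesk" -> "zzendesk", "zeendesk", ...
    for i in range(len(brand)):
        names.add(brand[:i + 1] + brand[i] + brand[i + 1:])
    # Hyphenated keyword variants often seen in help-desk phishing
    for kw in ("support", "help", "login"):
        names.add(f"{brand}-{kw}")
        names.add(f"{kw}-{brand}")
    names.discard(brand)  # never flag the legitimate name itself
    return sorted(d + tld for d in names for tld in tlds)

if __name__ == "__main__":
    for domain in typosquat_candidates("zendesk")[:10]:
        print(domain)
```

Diffing that list against new registrations (via certificate-transparency feeds or your registrar) gives early warning before the phishing pages go live.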

Why It’s Important

Support tools sit at the intersection of identity and incident response. A single compromised help-desk account can reset MFA, reroute mailboxes, reopen closed incidents, and impersonate internal users across multiple business units. In other words: compromise support, and you get a fast lane around a lot of your Zero Trust diagram.

The Other Side

Zendesk and security vendors are advising customers to enforce SSO, use phishing-resistant MFA, and monitor for suspicious ticket behavior and login patterns. But many organizations still classify support platforms as “medium-tier SaaS” rather than identity-adjacent systems, so they don’t get the same policy rigor or telemetry.

The Takeaway

👉 If a platform can change who someone is in your environment—via password resets, MFA changes, or profile edits—it is part of your identity layer. Harden Zendesk and similar tools like you would your IdP.

TL;DR

Typosquatted Zendesk domains and malicious ticket campaigns are targeting support teams to steal credentials and plant RATs, turning help desks into high-value beachheads.

Make Newsletter Magic in Just Minutes

Your readers want great content. You want growth and revenue. beehiiv gives you both. With stunning posts, a website that actually converts, and every monetization tool already baked in, beehiiv is the all-in-one platform for builders. Get started for free, no credit card required.

🔥 Can’t Miss

  • 📱 New “Albiriox” Android Malware Lets Criminals Drain Bank Accounts
    Albiriox is a fast-growing Android trojan spreading through fake courier-tracking apps—perfectly timed for peak holiday delivery season. It abuses Accessibility Services to hijack devices, intercept MFA codes, and perform unauthorized transactions across 400+ financial apps.
Key takeaway: Holiday shipping makes users impulsive. Now’s the time to remind them not to install “package tracker” apps from links in texts or emails—official app stores only.

  • ☁️ Mystery OAST Platform Exploits 200+ CVEs via Google Cloud
    Researchers uncovered a private exploit platform running on Google Cloud infrastructure that fires off automated probes for more than 200 vulnerabilities. The actor blends public Nuclei templates with custom payloads, enabling rapid iteration and broad targeting.
    Key takeaway: Exploitation is becoming cloud-native. Attack traffic may be coming from IP ranges your filters already trust.

  • 🧪 North Korean Campaign Seeds 197 Malicious npm Packages
    A DPRK-linked group pushed nearly 200 malicious npm packages disguised as interview challenges and development utilities. The packages deploy updated OtterCookie malware capable of credential theft and persistent access.
    Key takeaway: Your supply chain now begins at the developer’s terminal. Treat dependency installs like production-grade risk events.

  • 📢 CodeRED Emergency Alerts Disrupted Across U.S. After Ransomware Breach
    A ransomware attack on Crisis24 temporarily knocked out emergency alerting services for multiple U.S. communities, forcing some to rely on manual notifications and radio channels. The outage exposed how dependent municipalities have become on single-vendor alerting platforms.
    Key takeaway: Critical communications need redundancy—not just a contract.
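On the npm item above: malicious packages typically do their damage via install-time lifecycle hooks that run the moment someone types `npm install`. A hedged sketch of a pre-install audit—the package name is made up, and real pipelines would pull each dependency’s manifest from the registry rather than an inline string:

```python
# Sketch: flag npm packages that declare install-time lifecycle hooks,
# which malicious packages commonly abuse to execute code on install.
# In practice you'd run this against every dependency's package.json.

import json

RISKY_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def risky_install_hooks(manifest: dict) -> dict:
    """Return any lifecycle scripts that execute at install time."""
    scripts = manifest.get("scripts", {})
    return {k: v for k, v in scripts.items() if k in RISKY_HOOKS}

if __name__ == "__main__":
    # Inline example standing in for a downloaded package.json
    pkg = json.loads("""
    {
      "name": "totally-legit-interview-challenge",
      "version": "1.0.0",
      "scripts": {
        "postinstall": "node ./setup.js",
        "test": "jest"
      }
    }
    """)
    hooks = risky_install_hooks(pkg)
    if hooks:
        print(f"{pkg['name']} runs code at install time: {hooks}")
```

Pairing a check like this with `npm install --ignore-scripts` by default turns “install a dependency” from an implicit code-execution event into a reviewable one.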

Simplify Training with AI-Generated Video Guides

Are you tired of repeating the same instructions to your team? Guidde revolutionizes how you document and share processes with AI-powered how-to videos.

Here’s how:

1️⃣ Instant Creation: Turn complex tasks into stunning step-by-step video guides in seconds.
2️⃣ Fully Automated: Capture workflows with a browser extension that generates visuals, voiceovers, and call-to-actions.
3️⃣ Seamless Sharing: Share or embed guides anywhere effortlessly.

The best part? The browser extension is 100% free.

🤖 AI in Cyber

  • 🏦 FBI: $262M Lost to Account-Takeover Fraud as AI-Enhanced Phishing Grows
    The FBI reports a surge in account-takeover attacks fueled by AI-generated phishing emails, spoofed customer-service chats, and realistic voice impersonation. Criminals are mimicking banks and retailers with uncanny accuracy, making legacy “does this look suspicious?” awareness training nearly obsolete.
    Key takeaway: Defense must shift to phishing-resistant authentication and transaction-level anomaly detection.

  • 👤 AI Adoption Surges While Governance Lags — Shadow Identity Risk
    A new industry report shows AI usage exploding while governance trails far behind. Organizations are spawning untracked model endpoints, API keys, and service identities—creating a new class of “shadow identities” invisible to traditional IAM.
    Key takeaway: Treat AI endpoints as privileged identities, because that’s exactly what they are.

  • 🧾 OpenAI Vendor Breach: Mixpanel Incident Exposes Limited API User Data
    A Mixpanel compromise exposed metadata on OpenAI API users—email addresses and usage patterns rather than chat content. Even so, the dataset gives attackers a curated list of high-value technical targets for phishing.
    Key takeaway: Your AI telemetry vendors are now part of your core security supply chain.

  • 🛍️ Phishing Attacks Surge 620% Before Black Friday
    Darktrace observed a massive spike in phishing tied to Black Friday, much of it written or polished by generative AI. Attackers are blending email, SMS, and voice to push users into “fix your order” or “update your payment” traps.
    Key takeaway: AI-boosted phishing is multi-channel, rapid, and highly personalized—your defenses must match that reality.
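The “transaction-level anomaly detection” the FBI item points toward can start as simply as baselining each account against its own history. A toy sketch, assuming you have a list of recent transaction amounts per account—production systems layer on device, geo, velocity, and merchant features:

```python
# Toy sketch of per-account transaction anomaly scoring: flag amounts
# far outside the account's own historical baseline (simple z-score).
# Real fraud engines use many more features; this shows the core idea.

from statistics import mean, stdev

def is_anomalous(history: list[float], amount: float, threshold: float = 3.0) -> bool:
    """True if `amount` is more than `threshold` standard deviations
    above the account's historical mean."""
    if len(history) < 5:   # not enough history to baseline
        return True        # fail closed: route to manual review
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return (amount - mu) / sigma > threshold

history = [42.10, 18.75, 55.00, 23.40, 61.20, 37.85]
print(is_anomalous(history, 48.00))     # typical spend -> False
print(is_anomalous(history, 4_800.0))   # sudden large transfer -> True
```

The point isn’t the statistics—it’s that account-takeover fraud looks normal at login time and abnormal at transaction time, so that’s where the detection has to live.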

🧟‍♂️ Strange Cyber

🎅 “Scammer Claus” Returns — Now With Deepfakes

Intro

It’s beginning to look a lot like phishing. Chapman University’s CISO is warning that “Scammer Claus” is back for 2025—with deepfake Santa videos, AI-generated charity appeals, and holiday-branded lures designed to melt user skepticism faster than snow on a load balancer.

What Happened

Scammers are sending out Santa-themed deepfake videos, fake donation drives, and “exclusive holiday sale” campaigns aimed at students, staff, and families. Many messages link to lookalike payment portals or credential harvesters; some impersonate university departments offering emergency travel funds or surprise tuition credits. It’s all cheer on the surface, credential theft underneath.

Why It’s Important

Holiday scams have always leaned on emotion and urgency, but generative AI gives adversaries personalization and production value at scale. A convincing Santa voiceover and on-brand holiday template can make even security-savvy users drop their guard—especially when everyone’s tired and rushing to wrap up the year.

The Other Side

The FTC and FBI are pushing updated consumer guidance on holiday scams—from checking URLs and avoiding gift-card payments to verifying charities before donating. But most people still don’t expect a Pixar-quality Santa to be a threat actor. The gap between what AI can fake and what users expect is the attacker’s playground.

The Takeaway

👉 If a message combines holidays, urgency, and payment or login requests, treat it as hostile until proven otherwise. Scammer Claus doesn’t care if you’ve been naughty or nice—only if you click.

TL;DR

“Scammer Claus” is back, using AI deepfakes and polished holiday lures to drive credential theft and fake charity donations. Santa’s never looked so legit—or so malicious.

Enjoying Exzec Cyber? Forward this to one person who cares about staying ahead of attacks

Hate everything you see or have other feedback? Reply back to this email!
