⏱️ ≈ 7-minute read
Editor’s Note: If 2026 has taught us anything so far, it’s that AI plus bad policy decisions is a cocktail best served with incident response on standby. This week: government leadership drama, malware middlemen, and hackers turning AI endpoints into the cybercrime equivalent of a flea market.

📜 Table of Contents
📌 Big News
CISA leadership chaos meets ChatGPT misuse; Malware brokers keep getting smarter
🚨 Can’t Miss
Nike breach claims, energy grid gaps, automation RCEs, WhatsApp locks things down, and a warning ignored for 13 years finally bites
🤖 AI in Cyber
Hijacked AI infrastructure, AI-assisted malware, academic threat modeling, policy fallout, and messaging security
🧪 Strange Cyber Story
Hackers turn exposed AI endpoints into a full-blown crime marketplace
🚨 Big Stories
🕵️ CISA Leadership Turmoil Collides With Unsafe AI Use
Intro: The U.S. government’s top cyber agency is having a moment, and not the good kind.
What Happened: The acting director of CISA became the focus of two overlapping controversies tied to trust and access. According to reporting, he failed a polygraph examination that was required to gain access to highly sensitive cyber intelligence shared by other spy agencies, raising concerns about whether he should be cleared to handle the government’s most guarded threat information. Separately, internal alarms were triggered after sensitive documents were uploaded into ChatGPT, compounding worries about judgment and operational discipline.
Why It’s Important: CISA sits at the center of federal cyber intelligence sharing, and access matters. A failed polygraph tied to eligibility for highly sensitive interagency cyber intel is not just awkward, it can limit trust from partners who rely on strict clearance standards. When combined with unsafe AI usage, the episode raises serious questions about whether leadership is modeling the controls it expects others to follow.
The Other Side: Some officials argue polygraphs are imperfect tools and that the AI incident reflects immature policy frameworks rather than malicious intent. Still, perception matters, especially when the agency preaches zero trust.
👉 Takeaway: If the cyber agency in charge of guidance can’t get AI usage right internally, everyone else should double-check their own policies. Now.
TL;DR: CISA leadership + ChatGPT misuse + internal investigations = a warning sign for government AI governance.
Further Reading: The Cyber Express | Politico
The first documented computer virus (Creeper, 1971) didn’t steal data; it just displayed a message saying “I’m the creeper, catch me if you can.” (BBN / ARPANET history)

🧬 TA584 Evolves Its Malware Broker Playbook With ClickFix and Tsundere Bot
Intro: Initial access brokers aren’t slowing down, they’re upgrading.
What Happened: Threat actor TA584 was observed weaponizing a social engineering technique known as ClickFix alongside a new malware strain dubbed Tsundere Bot. The campaign focuses on gaining early access and selling footholds to downstream ransomware and extortion groups.
Why It’s Important: This highlights the continued professionalization of the cybercrime supply chain. Access brokers are refining tooling to stay valuable in an increasingly competitive ransomware ecosystem.
The Other Side: Defenders note that better detection of initial access activity can still disrupt these chains before ransomware deployment.
👉 Takeaway: Malware brokers are acting more like SaaS vendors, and that should worry defenders.
TL;DR: TA584 is making it easier, faster, and cleaner to sell network access to criminals.
Further Reading: CyberPress
Free, private email that puts your privacy first
Proton Mail’s free plan keeps your inbox private and secure—no ads, no data mining. Built by privacy experts, it gives you real protection with no strings attached.
🔥 Can’t Miss
👟 Nike Investigates Alleged 1.4 TB Data Theft
Hackers claim they stole 1.4 TB of Nike’s internal data, including proprietary design and manufacturing information. While customer data exposure has not been confirmed, the incident underscores how attractive intellectual property has become to extortion groups that do not bother encrypting systems.
👉 Key takeaway: IP theft is becoming just as valuable as ransomware payouts.
⚡ Global Energy Systems Found Riddled With OT Security Gaps
A survey of more than 100 energy systems uncovered recurring cybersecurity weaknesses across operational technology environments worldwide. Researchers found outdated systems, poor segmentation, and limited visibility that would make lateral movement trivial for a determined attacker.
👉 Key takeaway: Critical infrastructure is still playing catch-up on basic cyber hygiene.
🧩 n8n Automation Platform Hit With High-Severity RCE Flaws
Vulnerabilities in the popular workflow automation tool allow authenticated users to execute arbitrary code, putting enterprise pipelines at risk. Because these platforms often connect sensitive systems, exploitation could quickly cascade across environments.
👉 Key takeaway: Automation tools are quietly becoming high-value attack surfaces.
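Why does “authenticated users can run arbitrary code” keep turning up in workflow tools? Because most of them evaluate user-supplied expressions at runtime, and a naive evaluator hands the expression author the entire interpreter. The sketch below is illustrative only, not n8n’s actual flaw (n8n is a Node.js app); it shows the generic pattern in Python, with the dangerous shortcut next to a constrained alternative.

```python
# Illustrative only: the generic "expression evaluation becomes RCE" pattern
# seen in workflow/automation tools. This is NOT n8n's code or its actual
# vulnerability; all names here are hypothetical.

import ast

def run_expression_unsafely(expr: str, fields: dict):
    # Anti-pattern: eval() gives the expression author the full interpreter,
    # so any authenticated user who can edit a workflow can run arbitrary code.
    return eval(expr, {}, dict(fields))

def run_expression_safely(expr: str, fields: dict):
    # Safer sketch: parse the expression and reject anything that is not a
    # simple arithmetic or comparison tree before evaluating it.
    tree = ast.parse(expr, mode="eval")
    allowed = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Compare,
               ast.Constant, ast.Name, ast.Load,
               ast.operator, ast.unaryop, ast.cmpop)
    for node in ast.walk(tree):
        if not isinstance(node, allowed):
            raise ValueError(f"disallowed syntax: {type(node).__name__}")
    return eval(compile(tree, "<expr>", "eval"), {"__builtins__": {}}, dict(fields))

if __name__ == "__main__":
    print(run_expression_safely("amount * 2", {"amount": 21}))   # -> 42
    try:
        run_expression_safely("__import__('os').system('id')", {})
    except ValueError as err:
        print("rejected:", err)                                  # blocked
```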
🏛️ Federal Government Ignored a Cyber Warning for 13 Years — Until Hackers Used It
A long-standing vulnerability advisory went unaddressed for over a decade and is now being actively exploited in the wild. The story highlights how institutional inertia can quietly turn warnings into real-world incidents.
👉 Key takeaway: Legacy neglect eventually becomes an incident.
Close more deals, fast.
When your deal pipeline actually works, nothing slips through the cracks. HubSpot Smart CRM uses AI to track every stage automatically, so you can focus on what matters. Start free today.
🤖 AI in Cyber
🧠 Hackers Hijack and Resell AI Infrastructure
Threat actors are scanning for exposed AI systems and monetizing access through resale and abuse. Researchers say compromised AI infrastructure is quickly becoming just another tradable asset in cybercrime markets.
👉 Key takeaway: AI attack surfaces are being industrialized.
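If you operate inference infrastructure, the cheapest defense is to run the same probe the attackers do: hit your own endpoint with no credentials and see what comes back. A minimal sketch, assuming a hypothetical host and the /v1/models path convention used by many OpenAI-compatible servers:

```python
# Self-audit sketch: does your own AI endpoint answer without credentials?
# The host below is a hypothetical placeholder; /v1/models is just a common
# convention on OpenAI-compatible servers and may differ on yours.

import urllib.error
import urllib.request

def check_unauthenticated(base_url: str, path: str = "/v1/models") -> None:
    # Deliberately send no Authorization header: a locked-down endpoint
    # should answer this probe with 401/403, never with real data.
    req = urllib.request.Request(base_url.rstrip("/") + path)
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            snippet = resp.read(300).decode("utf-8", errors="replace")
            print(f"EXPOSED: HTTP {resp.status} with no credentials")
            print(snippet)
    except urllib.error.HTTPError as err:
        verdict = "OK: auth required" if err.code in (401, 403) else "Check manually"
        print(f"{verdict} (HTTP {err.code})")
    except (urllib.error.URLError, TimeoutError) as err:
        print(f"Unreachable: {err}")

if __name__ == "__main__":
    check_unauthenticated("http://your-inference-host:8000")  # hypothetical host
```

Anything other than a 401 or 403 on that unauthenticated request deserves a closer look.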
🧬 TA584’s Malware Shows AI-Assisted Tradecraft
Signs of automation and AI assistance are increasingly embedded in modern malware delivery chains. This lets attackers scale campaigns faster while lowering the skill required to launch them.
👉 Key takeaway: AI isn’t replacing attackers, it’s making them faster.
📚 Academic Study Maps AI-Driven Cyber Threats
A new survey outlines risks ranging from deepfakes and adversarial attacks to automated malware generation. The research shows how defenders are being forced to account for threats that evolve faster than traditional controls.
👉 Key takeaway: The threat model is expanding faster than defenses.
🔐 WhatsApp Signals AI-Era Messaging Defense
WhatsApp’s latest platform hardening reflects how AI is reshaping abuse prevention and content moderation. Messaging platforms are increasingly leaning on automated controls to stay ahead of large-scale abuse.
👉 Key takeaway: Messaging apps are the new perimeter.
🧟‍♂️ Strange Cyber
🛒 Hackers Turn Exposed AI Endpoints Into a Crime Marketplace
Intro: What happens when AI services are left wide open on the internet? Apparently, they get flipped for profit.
What Happened: Researchers uncovered a campaign nicknamed “Bizarre Bazaar,” where attackers scan for exposed or poorly authenticated AI endpoints and hijack them. Access is then sold or reused for fraud, automation abuse, and downstream attacks, effectively turning AI systems into black market rentals.
Why It’s Important: This is not just misconfiguration, it is a new cybercrime business model. As organizations rush AI deployments, attackers are capitalizing on speed over security.
The Other Side: Many of these compromises could be prevented with basic authentication, rate limiting, and monitoring; see the sketch after this story.
👉 Takeaway: If your AI endpoint is public, attackers will treat it like inventory.
TL;DR: Exposed AI systems are being hijacked and sold like stolen credentials.
Further Reading: BleepingComputer
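For readers who want to act on the “Other Side” note above, here is roughly what that floor looks like in front of a model-serving route. A minimal sketch assuming FastAPI (a common choice for serving models); the key store, limits, and route are hypothetical placeholders, not anyone’s production config:

```python
# Minimal sketch of the baseline controls mentioned above: an API-key check
# plus a crude per-key rate limit in front of an AI endpoint. Assumes FastAPI
# is installed; keys, limits, and the route are hypothetical placeholders.

import time
from collections import defaultdict, deque

from fastapi import FastAPI, Header, HTTPException

app = FastAPI()

VALID_KEYS = {"replace-me-with-a-real-secret"}   # hypothetical key store
WINDOW_SECONDS, MAX_REQUESTS = 60, 30            # hypothetical limits
_history: dict[str, deque] = defaultdict(deque)  # per-key request timestamps

def authorize(key: str) -> None:
    if key not in VALID_KEYS:
        raise HTTPException(status_code=401, detail="missing or invalid API key")
    now = time.monotonic()
    hits = _history[key]
    while hits and now - hits[0] > WINDOW_SECONDS:  # drop stale timestamps
        hits.popleft()
    if len(hits) >= MAX_REQUESTS:
        raise HTTPException(status_code=429, detail="rate limit exceeded")
    hits.append(now)

@app.post("/v1/completions")                     # hypothetical route
async def completions(payload: dict, x_api_key: str = Header(default="")):
    authorize(x_api_key)
    # ... hand off to the model here; log key usage and payload size so the
    # "monitoring" part of the fix actually exists.
    return {"ok": True, "prompt_chars": len(str(payload.get("prompt", "")))}
```

The in-memory deque is fine for a single process; behind multiple workers you would move the counters to shared storage such as Redis, and feed the 401/429 counts into whatever you already use for monitoring.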
Enjoying Exzec Cyber? Forward this to one person who cares about staying ahead of attacks
Hate everything you see, or have other feedback? Just reply to this email!

