Daily Vibe Casting
Episode #356: 31 March 2026

Reusable rockets, agentic coding risks, and a software supply-chain scare collide with energy geopolitics

Overview

Today’s feed had two speeds: hard-nosed engineering reality (dependency attacks, security guardrails, leaked source code) and moonshot confidence (reusable rockets, space data centres, omni-modal models, and plenty of talk about agents writing code while humans sleep). Underneath it all was a familiar tension: progress is fast, but trust, safety, and basic transparency still decide what actually ships.


The big picture

AI tools are moving from “helpful assistant” to “active colleague”, and that brings new habits (overnight refactors), new risks (supply chain compromises), and new cultural norms (public threads calling out incentives and hype). At the same time, space keeps inching from spectacle towards infrastructure, whether that’s boosters hitting flight 34 or startups pitching compute in orbit.

Falcon 9 reusability keeps racking up numbers

SpaceX’s fleet-leader booster clocking a 34th launch and landing is the sort of milestone that would have sounded absurd a decade ago. It is not just a record; it’s a reminder that the “reuse is normal” era is here, and cadence is now a feature, not a one-off achievement.

Emergency brakes for JavaScript: axios supply chain attack

Wes Bos’s warning landed with force: stop installing, stop deploying, and assume you might be affected even if you never typed “axios” yourself. It’s the boring truth of modern development: your risk surface is your dependency graph, including the bits you did not know you had.

If you are building today, this is the moment to pin versions, audit lockfiles, and treat “quick updates” with suspicion until the dust settles.
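As a minimal sketch of what “pin versions” can look like in practice: npm’s `overrides` field in package.json forces every copy of a package, including transitive ones pulled in by your dependencies, to a single version while you audit. The version number below is a placeholder, not the known-good release for this incident; check the advisory before pinning. `npm ls axios` will show where the package enters your tree.

```json
{
  "overrides": {
    "axios": "1.6.8"
  }
}
```

Yarn users get the same effect from the `resolutions` field. Either way, commit the updated lockfile so the pin is enforced for everyone on the team, not just your machine.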

Claude Code everywhere, including places it should not have been

On one side, Alvaro Cintas is pointing to a sprawling, open-source “agent kit” culture around Claude Code: dozens of agents, piles of commands, and security tooling as a first-class feature. On the other, ℏεsam claims Anthropic accidentally leaked Claude Code source, and the internet did what it always does, mirrored it, starred it, patched it, and argued about it.

The through-line is simple: “agents” are becoming an engineering substrate, and that makes source control, release hygiene, and threat modelling feel less optional by the day.

Qwen’s omni-modal push, with “Audio-Visual Vibe Coding” in the mix

Qwen is pitching Qwen3.5-Omni as natively multi-input, covering text, image, audio, and video, and doing it in real time. The headline feature is “Audio-Visual Vibe Coding”, a sign that model makers think the next battleground is not just quality but interaction: how quickly you can go from describing something to seeing it built.

It also nudges the question that keeps coming back: if the weights are closed, how much can the wider community trust the demos, the benchmarks, and the constraints?

Codex as the night shift: developers hand off long jobs before bed

OpenAI Developers shared usage data suggesting a new routine: kick off the gnarly work (refactors, planning, long runs) late in the day, then come back to results in the morning. It is a neat productivity story, but it also changes the shape of review, ownership, and incident response.

If your codebase can move significantly while nobody is watching, your tests, guardrails, and rollback paths stop being “nice to have”.

“Agents do most of our coding”, and the warning label comes with it

Guillermo Rauch summed up what plenty of teams are feeling: the agent era is real, and it is not going away. But he also drew a hard line between playful iteration and production systems, calling out LLM over-confidence and the need for humans to stay on the hook.

It reads like a culture memo as much as a technical one: move fast, but keep someone accountable when the pager goes off at 3am.

Space-based compute hits unicorn pace

Y Combinator celebrated Starcloud’s $170M Series A and $1.1B valuation, built around a bold claim: put GPUs in orbit and you get solar power and cooling perks that Earth-based data centres fight for. It is ambitious hardware, ambitious finance, and a bet that launch costs and reliability can be tamed enough to make “compute in space” more than a headline.

Expect the debate to centre on unit economics and radiation resilience, not the pitch deck.

Deepfake calls meet low-tech tests: “show me three fingers”

Trung Phan shared a clip where scam baiter Jim Browning confronts a suspected deepfake on a video call and asks for a simple gesture that many real-time fakes still struggle with. It is funny, but it is also useful: a reminder that verification does not always need a product; sometimes it just needs a prompt that breaks the illusion.

As impersonation gets cheaper, these little social protocols may become standard in workplaces and families.

Corporate transparency and the AI hype backlash, in miniature

ThePrimeagen’s jab about incentives and benhylak’s satire about “a billion lines of code per day” both point at the same fatigue. People are not just sceptical of agent claims, they are sceptical of the motives behind them, and they are quicker than ever to call it out in public.

As AI tooling becomes a business moat, the community’s tolerance for vague metrics and undisclosed stakes looks like it is thinning.

AI mood swings inside the labs: “psychosis” or “months to liftoff”

Austen Allred put words to an emotional undercurrent that keeps surfacing: people close to frontier models are either burned out by the pace or convinced the big breakthrough is imminent. The argument is not just about capability; it’s about definitions, timelines, and what counts as “there” in the first place.

Whatever you believe about AGI, the human factor is becoming part of the story: pressure, expectations, and the odd unreality of watching tools improve weekly.
