Overview
Today’s feed had two clear threads: the scramble for compute, and the scramble to build faster. Between fresh data centre projects, model rumours, and tools that turn agents into co-workers, the mood was optimistic but slightly frantic. On the human side, a rebound in recruiter roles hints that hiring might be thawing again, even as everyone jokes about rate limits and the new bottlenecks they create.
The big picture
AI is starting to look less like a single product category and more like an industrial stack: capital, power, data centres, inference speed, developer tooling, and then the apps people actually touch. You can see it in the way posts jump from steel beams in Michigan to finance deals in Texas, then straight into CLIs, UI plugins, and “built it in 24 minutes” demos. The undercurrent is that demand is real, but the constraints are too, whether that’s energy, capacity, or simple usage caps.
Recruiter roles bounce back, a quiet hint hiring is warming up
@lennysan points out that recruiter openings have surged back close to 2022 levels. Recruiters tend to move early in the cycle, so this reads like companies are preparing to hire again, not just talking about it.
If you’ve been watching tech roles stall while AI jobs sprint ahead, this is a useful counterpoint: headcount planning might be getting looser, even if it doesn’t feel that way on the ground yet.
Artemis II is days away, and NASA is doing the hype the old-fashioned way
NASA is pushing its “Moonbound” documentary ahead of Artemis II, the first crewed trip around the Moon since the Apollo era. It’s a reminder that big engineering programmes still rely on public attention, not just technical progress.
Also, the replies show the internet being the internet: wonder, cynicism, and “why aren’t they landing?” all in the same scroll.
Big Tech bankrolls the compute race: Google and Anthropic
The Financial Times reports Google is nearing a deal to help finance a multibillion-dollar data centre in Texas leased to Anthropic. This isn’t just “more servers”: it’s the sort of long-term infrastructure move that locks in advantage when everyone is short on capacity.
When the competition is measured in training runs and latency, access to power and buildings starts to matter as much as model cleverness.
Stargate’s steel goes up, and the replies drag the conversation elsewhere
@sama shared a construction milestone at OpenAI’s Michigan Stargate site with Oracle and Related Digital. The clip is classic “infrastructure is happening”, but the comment section tells a different story.
Most of the energy is still tied up in product trust and continuity, with people using the moment to relitigate old model retirements and ask for more stability.
A “Claude Mythos” leak fuels the usual AGI chatter
@RoundtableSpace claims someone saved a now-removed Anthropic post about “Claude Mythos”, with bold claims around coding, reasoning, and cyber benchmarks. Whether the details are complete or not, the pattern is familiar: a partial leak turns into sweeping conclusions in minutes.
The more interesting angle is what it implies about priorities, especially the emphasis on security capabilities and controlled access.
Speed wins hearts: DHH tests Kimi K2.5 Turbo on Fireworks
@dhh posted a quick demo of Moonshot AI’s Kimi K2.5 Turbo running on Fireworks AI, and the main takeaway is simple: it’s fast. Not “the smartest”, just rapid, which is often what you want when prototyping.
As models get good enough, latency starts to feel like the feature people notice first.
Agents doing the admin: Midday ships a CLI built for automation
@pontusab introduced the Midday CLI, pitched as a backbone for agents to run finance tasks like invoicing, reconciliation, exports, and reporting. The framing matters: it’s not another dashboard, it’s an interface designed for machines and scripts to drive.
This is where “AI at work” gets concrete: not a chat window, but tools that can slot into actual workflows and leave an audit trail.
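As a rough illustration of the pattern (the command name and flags below are hypothetical, not Midday’s actual CLI), an agent driving a machine-first CLI typically shells out and parses structured output rather than scraping human-readable text:

```python
import json
import subprocess

def run_cli(args: list[str]) -> dict:
    """Run a (hypothetical) finance CLI and parse its JSON output.

    Machine-first CLIs usually emit JSON on stdout so an agent can
    act on the result programmatically, and every invocation leaves
    a reproducible, auditable command line behind it.
    """
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

# Hypothetical invocation: list unpaid invoices as JSON.
# invoices = run_cli(["finance-cli", "invoices", "list", "--status=unpaid", "--json"])
```

The design choice worth noticing is the contract: stable flags plus JSON output make the tool scriptable by agents, which a dashboard never is.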
Codex gets shadcn/ui, and the boring bits of UI might get even faster
@shadcn says shadcn/ui now ships in the official Codex plugin for building web apps. For anyone who has watched AI codegen produce endless messy layout code, this sort of curated component path is appealing.
It nudges builders towards composing known parts, not inventing new “div soup” every time.
“Pixel-perfect reverse-engineering” shows how close cloning is getting
@tom_doerr shared a repo that uses Claude Code and Chrome MCP to recreate websites as pixel-perfect clones. Useful for learning, prototyping, and maybe preservation, but it also raises awkward questions about how design IP is treated when copying becomes a half-hour task.
Even if you never touch it, it’s a snapshot of where dev tooling is heading: browsers as data sources, and code as a generated artefact.
Rate limits are the new “waiting for the build”, and everyone hates it
@NoahKingJr nailed the current mood with a joke about hitting Claude’s usage limit and spending two hours pacing. It’s funny because it’s true, and it underlines a practical reality: the bottleneck is no longer knowing how to do the work, it’s getting time on the machine.
For all the talk of productivity, usage caps and capacity constraints are becoming a daily part of the job.
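A minimal sketch of the usual coping pattern, assuming a generic client that raises some rate-limit error (nothing here is a real provider’s API): retry with exponential backoff and jitter instead of pacing the room.

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for whatever a real client raises on a usage cap."""

def with_backoff(call, max_retries: int = 5, base_delay: float = 1.0):
    """Retry `call` with exponential backoff plus jitter on rate limits."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            # Wait 1s, 2s, 4s, ... plus jitter to avoid thundering herds.
            time.sleep(base_delay * 2 ** attempt + random.random())
```

Backoff smooths over transient caps; it doesn’t create capacity, which is rather the point of the joke.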