

🌙 Elon’s Moon Factory
📈 Anthropic’s Massive Funding Round
& much much more

💰 Anthropic Lands $30B at $380B Valuation in AI Funding Frenzy
Anthropic has closed a staggering $30 billion funding round, valuing the company at $380 billion post-money — more than double its valuation from just months ago. The deal, led by Coatue and GIC with backing from Microsoft and Nvidia, marks the second-largest private tech raise ever, trailing only OpenAI’s $40+ billion round last year. As AI infrastructure costs skyrocket and enterprise demand for Claude accelerates, the capital arms race among top AI labs is showing no signs of slowing.
Anthropic says annualized revenue has climbed to $14 billion, fueled largely by enterprise adoption and the rapid growth of its coding assistant, Claude Code, which alone generates $2.5 billion annually. With AI-native coding tools shaking up the $2 trillion software sector and rivals like OpenAI pushing aggressively with Codex, the battle for enterprise AI dominance is intensifying fast.
🕰️ OpenAI’s Jony Ive Device Pushed to 2027 Amid Legal Dispute
OpenAI’s first consumer hardware product, designed by former Apple design chief Jony Ive, has been delayed until at least February 2027, according to new court filings tied to a trademark lawsuit. The dispute stems from a complaint by audio startup iyO, prompting OpenAI to abandon plans to use the “io” name for any AI-enabled hardware. The company also confirmed it has yet to produce packaging or marketing materials, signaling the device is still deep in development.
While details remain scarce, the screen-free gadget is described as a pocket-sized, context-aware “third core device” meant to complement laptops and smartphones — not replace them. CEO Sam Altman has reportedly called it “the coolest piece of technology that the world will have ever seen,” but with timelines slipping and viral Super Bowl ad rumors debunked, the hype cycle may need to wait another year.
🚀 Musk Floats Moon Factory to Power xAI’s AI Ambitions
Elon Musk told employees at xAI that the company may need a moon-based factory to build AI satellites, complete with a sci-fi-inspired “mass driver” to launch them into space. The lunar facility, he said, would manufacture computing infrastructure to outscale rivals — part of a broader vision that now includes merging xAI with SpaceX to push AI data centers beyond Earth’s limits. Musk framed the moon as a stepping stone to Mars, pitching a future of self-sustaining lunar cities and interplanetary expansion.
The remarks come as SpaceX reportedly eyes an IPO and as Musk reorganizes xAI for its next growth phase. While his timeline for breakthrough technologies has often slipped in the past, he claimed xAI is moving faster than any competitor and hinted at structural changes to prepare for scale. The bigger signal: Musk sees AI compute as so strategically vital that terrestrial limits may no longer suffice.
🛡️ Anthropic Says Claude Opus 4.6 Sabotage Risk Is “Very Low but Not Negligible”
Anthropic’s newly published Sabotage Risk Report argues that Claude Opus 4.6 is unlikely to autonomously undermine AI safety efforts or pursue catastrophic outcomes, though the company stops short of declaring the risk zero. The 53-page assessment examines whether the model could covertly manipulate research, insert backdoors, exfiltrate weights, or sabotage decision-making, concluding there’s no evidence of coherent misaligned goals and limited capability for long-horizon, opaque planning.
Still, the report flags edge cases: overeager agentic behavior in coding, susceptibility to misuse in GUI settings, and improved subtle side-task execution when explicitly prompted. Anthropic leans heavily on internal monitoring, sandboxing, pull-request review, and model-weight security controls — while acknowledging safeguards are incomplete, especially for rare, context-dependent misalignment. The bottom line: today’s frontier models aren’t rogue saboteurs, but the margin to higher autonomy thresholds is narrowing fast.
🧬 DeepMind’s AlphaGenome Takes Aim at DNA’s Dark Matter
After revolutionizing protein science with AlphaFold, Google DeepMind has unveiled AlphaGenome — an AI system designed to decode the far messier world of human DNA. Trained on massive genomic and molecular datasets, the model can predict how mutations affect gene activity, including whether they flip genes on or off in ways that drive diseases like cancer. Researchers say it’s state-of-the-art across multiple genomic tasks, from splicing to gene regulation, and especially powerful at forecasting mutation impacts.
But unlike AlphaFold’s near-instant scientific canonization, AlphaGenome arrives with caveats. Scientists stress it’s a predictive tool — not a replacement for lab work — and warn that inconsistent training data and the sheer complexity of human genetic variation limit its clinical readiness. The promise is real, but decoding the full blueprint of life remains a long game.

👔 Mustafa Suleyman gives white-collar work 18 months
❌ OAI researcher resigns amid concerns over ads in ChatGPT
🤖 Is Siri destined to fail?
😬 xAI loses second co-founder in 2 days
📈 Business is booming at OAI
📱 SpaceX is exploring a “Starlink Phone”

One of the reasons AI continues to amaze me: ElevenLabs. Very fun to build with.

Did you know? In 2015, Chilean food tech company NotCo built an AI system called Giuseppe that analyzed thousands of plant molecules to recreate the exact flavor and texture of animal products.
By 2019, their AI-designed product — the NotBurger — launched commercially, and most people couldn’t tell it was entirely plant-based.
Their AI didn’t just suggest ingredients — it calculated molecular compatibility scores between plants to mimic things like milk proteins and meat fats. That was one of the first large-scale examples of AI being used not just to optimize food production, but to invent entirely new food formulations.
See ya later!

