

👊 Altman X Amodei Beef
🦞 OpenClaw acqui-hire
💻 The end of SaaS?
& much much more

🦞 OpenClaw, OpenAI and the future
Peter Steinberger announces he is joining OpenAI to scale his autonomous AI agent project OpenClaw to a broader audience and accelerate development by tapping into the latest research and infrastructure. He explains that the OpenClaw project — originally a fun playground experiment — will be housed in an independent open-source foundation supported by OpenAI, ensuring it stays open and community-driven. Steinberger says his goal isn’t to build a company but to make agents that “actually do things” accessible to everyone, and that OpenAI’s shared vision makes it the right partner to achieve that mission.
🤖 Anthropic Launches Claude Sonnet 4.6
Anthropic has released Claude Sonnet 4.6, the latest upgrade to its mid-tier AI model. It significantly boosts coding skills, computer use, long-context reasoning, agent planning, and general knowledge work while keeping the same pricing as the prior Sonnet 4.5. The model now includes a beta 1 million-token context window, letting it process entire codebases, long documents, and complex tasks in one go, and early users often prefer it over its predecessor, and sometimes even over premium variants. It's being rolled out as the default in Claude, Claude Cowork, and tools like GitHub Copilot, giving developers, enterprises, and productivity workflows faster, cheaper, and more capable AI.
💻 The Software Industrial Revolution
2025 marked a true inflection point in how software is created thanks to mature AI coding agents that can reliably produce working software, ushering in what the author calls the Software Industrial Revolution. By analogy to the 18th-century Industrial Revolution — which slashed production costs and drove massive societal change — software production is now poised for dramatic cost reduction, abundance, and scale, reshaping every industry that relies on digital tools. The piece suggests this shift will democratize software creation beyond elite engineers, change how software is funded and built, and unleash a surge of bespoke digital tools across healthcare, science, business, and beyond.
🧠 AI’s not gonna kill SaaS, but it will force some changes
The piece argues that while fears of AI completely replacing traditional SaaS are driving stock sell-offs, they're overblown, much like early resistance to the cloud didn't stop that shift. Enterprise software continues to grow, and major SaaS players (e.g., Salesforce, ServiceNow, Adobe, Workday, SAP) are integrating AI into their offerings while maintaining revenue growth, suggesting adaptation rather than extinction. The core challenge for companies building their own tools isn't AI per se, but handling complex enterprise needs like security, data, integration, and operational workflows, where incumbents still hold an advantage.
🎵 Google brings AI-generated music to Gemini with Lyria 3
Google’s official blog announces that the Gemini app now includes Lyria 3, the latest music-generation model from Google DeepMind, enabling users to create 30-second custom music tracks from text descriptions or uploaded images and videos. The feature generates original instrumentals and lyrics, and pairs each track with custom cover art, with all outputs watermarked using SynthID to identify AI-made content. Lyria 3 is rolling out in beta within the Gemini app for users aged 18+ in multiple languages, and is designed for creative expression rather than professional music production.
🧠 Pentagon threatens to cut off Anthropic over AI safeguards dispute
The U.S. Department of Defense is weighing ending its partnership with AI company Anthropic—developer of the Claude AI model—due to a dispute over how the military can use its technology. Pentagon officials want all leading AI labs to allow their tools to be used for “all lawful purposes,” including weapons development, intelligence and battlefield operations, but Anthropic has resisted removing limits on things like fully autonomous weapons and mass domestic surveillance, according to Axios.
The tension reportedly escalated after The Wall Street Journal said Claude was used in a U.S. military operation in Venezuela to capture Nicolás Maduro; Anthropic raised questions about that usage, frustrating military officials.
The Pentagon could scale back or terminate Anthropic's contract—valued at up to $200 million and including Claude's deployment on classified networks—unless the company agrees to broader usage terms. Officials say replacing Claude won't be simple, since rival models lag in classified applications.

✏ Figma might have just killed pencil.dev
🤝 People are freaking out over Amodei and Altman's handshake refusal
🤖 Accenture ties AI usage to promotions
🥸 ElevenLabs now insures AI actions

I’ve been using ChatGPT since November 2022. I switched to Claude (experimentally) this week. Absolute game changer. From my perspective, there’s not a single use case where it doesn’t outperform ChatGPT.

🇪🇺 Did you know? The European Union's AI Act — the world's first comprehensive legal framework for artificial intelligence — classifies AI systems by risk level, and bans certain uses of AI outright, like real-time facial recognition in public spaces. One of its most striking provisions: AI systems that could manipulate people using subliminal techniques are completely prohibited, regardless of who's using them. It came into force in August 2024, with rules phasing in over the next few years.
Later folks 🫡

