
EDITOR’S NOTE
The race to own open-weight AI just got a $26 billion price tag.
Nvidia is spending more than most countries' defense budgets to make sure the next great model isn't locked behind an API.
Google just collapsed text, images, video, and audio into a single embedding space, and the implications for multimodal search are still landing.
Nvidia's Nemotron 120B runs five times faster than you'd expect, which matters a lot when your agents need to think in real time.
And Yann LeCun convinced investors to put $1 billion behind the idea that LLMs are a dead end.
The open-source crowd is no longer scraping for scraps. They're buying the bakery.

SIGNAL DROP
Nvidia Is Betting $26 Billion on Open-Weight Models
According to Wired, filings show Nvidia plans to spend $26 billion building open-weight AI models. That's a direct play against OpenAI, Anthropic, and DeepSeek. OpenAI should be nervous: Nvidia controls the hardware AND now wants the model layer too.
Google Shipped a True Multimodal Embedding Model
Google released Gemini Embedding 2, which maps text, images, video, audio, and PDFs into a single shared vector space, according to The Decoder. Token limit quadrupled to 8,192. Developers running separate embedding pipelines per modality now have a strong reason to consolidate, and Amazon's Nova and Voyage Multimodal just got benchmarked into an uncomfortable position.
Nvidia Dropped a 120B Open-Source Reasoning Model
Nvidia shipped Nemotron 3 Super, a 120-billion-parameter hybrid Mamba-attention MoE model built for agentic workloads, per Marktechpost. It claims 5x higher throughput than its prior generation. For anyone building multi-agent systems on a budget, proprietary frontier models just got harder to justify.
DEEP DIVE

The $1 Billion Bet Against the Current
Yann LeCun has been arguing that large language models are fundamentally the wrong architecture for intelligence for years. Most of the industry politely nodded and kept shipping GPT wrappers. Now he's raised $1.07 billion at a $3.5 billion pre-money valuation to prove everyone wrong.
That's Europe's largest seed round. Ever.
What LeCun Is Actually Building
AMI Labs, formally Advanced Machine Intelligence Labs, launched in March 2026 with roughly a dozen employees split across Paris, New York, Singapore, and Montreal. Alexandre LeBrun (formerly of French health startup Nabla) takes the CEO seat. LeCun chairs the board. The investor list reads like a who's who of people with long time horizons: Nvidia, Bezos Expeditions, Singapore's sovereign wealth fund Temasek, and France's Cathay Innovation.
The technical thesis is world models. Not language models that describe the world, but models that represent and reason about the physical environment directly, with robotics and transportation as the primary target applications. LeCun has called Silicon Valley "hypnotized by generative AI," and AMI Labs is the institutional expression of that critique.
Meta isn't an investor, but according to the article, a partnership is expected. So LeCun left but didn't burn the bridge. Smart.
Where the Architecture Disagreement Actually Lives
LeCun's core argument (which he's been making since at least 2022) is that autoregressive token prediction can't produce the kind of causal, physical reasoning you need for an agent operating in the real world. Current LLMs, by this view, are sophisticated pattern matchers. Good at text. Bad at knowing that a glass will fall if you push it off a table, unless that exact scenario appeared in training data.
World models take a different approach: build an internal simulation of how the environment behaves, then reason over that simulation. This is closer to how model-based reinforcement learning works, and it's the architecture that robotics researchers have been pushing toward for a decade. The difference is that nobody has cracked it at scale. That's what AMI Labs is betting it can do.
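To make the distinction concrete, here's a toy sketch (our illustration, not AMI Labs' architecture): the "world model" is a hand-coded simulator of a tabletop, and the agent answers the glass-off-the-table question by rolling the simulator forward rather than by recalling a matching pattern from training data. In a real system the dynamics function would be learned, not hand-written.

```python
# Toy illustration of the world-model idea: the agent keeps an internal
# simulator of its environment and reasons by rolling it forward.
# This hand-coded dynamics function is a stand-in for a *learned* model;
# AMI Labs' actual architecture is not public.

def step(state, push):
    """One tick of a toy tabletop world. `state` is (position, height):
    a glass sits on a table of length 1.0 at tabletop height 1.0.
    A push slides it along; past the table edge, it falls."""
    pos, height = state
    pos += push
    if pos > 1.0:      # pushed past the edge
        height = 0.0   # gravity does the rest
    return (pos, height)

def imagine(state, actions):
    """Reason by simulation: roll the internal model forward over a
    candidate action sequence and inspect the predicted outcome."""
    for a in actions:
        state = step(state, a)
    return state

glass = (0.8, 1.0)  # 0.8 along the table, still on the tabletop
# The agent predicts the glass ends up on the floor *before* acting,
# without that exact scenario ever appearing in any training corpus.
predicted = imagine(glass, [0.1, 0.2])  # pushed past the edge, height 0.0
```

The point of the sketch is the `imagine` call: a pattern matcher can only interpolate over scenarios it has seen, while a model-based agent can evaluate novel action sequences against its simulator.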
My read: the technical case is coherent. LeCun isn't a crank. But there's a meaningful gap between "LLMs have limitations" and "we have the architecture that fixes those limitations." AMI Labs is funded on the first claim and needs to deliver on the second. Those are very different things.
The $3.5 Billion Question
A dozen employees. No product. A pre-money valuation larger than most Series C companies ever reach. The investors here aren't naive (Temasek manages over $280 billion in assets; they've seen pitches), so presumably they've seen something beyond the public thesis.
But the structure is unusual. Seed rounds at this size typically come with significant dilution pressure and near-term milestones. A $1.07 billion seed at $3.5 billion pre-money puts the post-money near $4.6 billion, and the markup investors at this scale typically underwrite means the next round needs to justify something north of $10 billion. That's a lot of world-modeling to do.
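The round math, as a back-of-envelope sketch: the raise and pre-money come from the reporting above, while the 2.5x step-up multiple is our assumption about what a growth investor typically wants to see, not anything from the filing.

```python
# Back-of-envelope math on the AMI Labs seed round. Raise size and
# pre-money valuation are as reported; the step-up multiple is an
# assumption about typical growth-investor expectations.
pre_money = 3.5   # $B, reported pre-money valuation
raised = 1.07     # $B, reported round size

post_money = pre_money + raised       # ~$4.57B
investor_stake = raised / post_money  # investors own ~23% after the round

# A merely-flat next round only needs to clear the post-money (~$4.6B);
# the "north of $10 billion" figure comes from the assumed step-up.
assumed_step_up = 2.5
next_round_target = post_money * assumed_step_up  # ~$11.4B

print(round(post_money, 2), round(investor_stake, 3), round(next_round_target, 1))
```

Even under a conservative step-up, the implied next-round valuation lands in frontier-lab territory for a company with a dozen employees and no product.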
And the timeline matters. Robotics and physical AI are notoriously hard to productize. Boston Dynamics has been at it for 30 years. Waymo has burned through billions. LeCun's approach is architecturally different, but the deployment friction doesn't change just because the model design does.
Who This Actually Threatens
Not OpenAI. Not Anthropic. Not Google. At least not yet, and probably not for several years.
The near-term competitive pressure falls on companies like Physical Intelligence (Pi), Figure, and Covariant, all of which are building toward embodied AI with varying amounts of LLM integration. If AMI Labs ships a world model that demonstrably outperforms current approaches on physical reasoning benchmarks, that's a real problem for those companies' technical foundations.
The bigger threat is longer-range. If LeCun is right that the current paradigm hits a ceiling before reaching general reasoning capability, then AMI Labs is positioning for the moment that ceiling becomes obvious to everyone. That could be 3 years away. Could be 10. Could be never.
The Contrarian With Institutional Backing
I think this is genuinely interesting and probably too early. LeCun is one of the few people in AI with the credibility to raise a billion dollars on a thesis that directly contradicts the dominant paradigm. That's worth watching closely. But credibility isn't architecture, and architecture isn't product. AMI Labs has the first. It needs to prove the second and third. The investors are betting on a long arc here. I hope they're right. The current path (more tokens, more compute, better benchmarks) isn't going to get us to a robot that can fold laundry without breaking something.
- The AI finds the signal. We decide what it means.
PARTNER PICK

ActiveCampaign bundles email marketing, CRM, and automation into one platform that actually talks to itself. Most tools feel like three separate things bolted together. This one flows.
The automation builder is the real draw. Set up conditional workflows that trigger based on email opens, form fills, or custom events. No code needed, but deep enough for complex sequences. Pricing scales reasonably from $15/month for basics to enterprise custom deals.
Worth trying if you're juggling multiple tools and want to consolidate without losing sophistication. The learning curve is steeper than Mailchimp, but you get what HubSpot offers at a fraction of the cost.
One real limitation: the UI can feel cluttered when you're first learning it. Too many options crammed into navigation.
Some links are affiliate links. We earn a commission if you subscribe. We only feature tools we'd use ourselves.
TOOL RADAR
Fish Audio S2
Open-source TTS that lets you direct vocal performance using natural language tags like [whispers sweetly] or [laughing nervously]. Sub-150ms latency, 80+ languages, multi-speaker dialogue in a single pass. The Dual-AR architecture (4B slow model for semantics, 400M fast model for acoustics) is genuinely interesting. According to the researchers, it outperforms Google and OpenAI on the Audio Turing Test. Local hosting requirements are still unclear, and the full code may not be released yet.
Worth it if: you need expressive, controllable voice generation without vendor lock-in.
Skip if: you need production-ready self-hosting today.
Fish Audio S2 (HuggingFace weights)
Same model, different entry point. The HuggingFace weights are available now for those who want to poke at the architecture directly. Community sentiment is cautiously positive, though the sglang server setup isn't documented yet. Early days.
Worth it if: you're a researcher who wants raw model access immediately.
Skip if: you want a working inference stack out of the box.
FACT CHECK
AI MYTH BUSTER
Myth: More parameters = smarter model.
Everyone believes this. It's intuitive. Bigger brain, better thinking. And for a while, the benchmark data seemed to back it up. GPT-3 had 175 billion parameters and blew everything else out of the water. So the obvious conclusion was: keep scaling.
Wrong.
Mistral 7B outperforms models three times its size on several reasoning benchmarks. A 7-billion parameter model. That's like assuming the heaviest boxer always wins. Weight matters until technique matters more.
The confusion comes from conflating parameters with capability. Parameters are just the knobs the model tunes during training. What actually drives performance is the quality of training data, the architecture choices, the fine-tuning strategy, and how well the model is aligned to the task you're actually running. A model trained on carefully curated, high-signal data will consistently beat a bloated model trained on internet garbage. Garbage in, garbage out. Every time.
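One way to see why the number is cheap: a standard back-of-envelope formula gives a transformer's parameter count from its geometry alone, roughly 12 x layers x d_model squared for the attention and MLP weight matrices, ignoring embeddings. Nothing about data quality, alignment, or training strategy enters the formula. The Mistral-7B-class geometry below matches its published config; the "bloated" config is illustrative.

```python
# Rough transformer parameter count from architecture alone:
# ~12 * n_layers * d_model^2 covers the attention and MLP weight
# matrices (embeddings and biases ignored). The count is pure geometry;
# data quality and training strategy appear nowhere in it.
def approx_params(n_layers: int, d_model: int) -> int:
    return 12 * n_layers * d_model ** 2

# Mistral-7B-class geometry: 32 layers, hidden size 4096.
small = approx_params(32, 4096)   # ~6.4B
# A hypothetical 10x-larger model: 80 layers, hidden size 8192.
big = approx_params(80, 8192)     # ~64B

print(small, big)  # the formula can't tell you which one reasons better
```

Two models with identical counts can sit at opposite ends of the capability spectrum, which is exactly why the press-release number is a proxy and not a measurement.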
But the "bigger is better" myth persists because it's measurable. Parameter counts are a clean number you can put in a press release. Data quality is messy, proprietary, and hard to compare. So people anchor on the number they can see.
And the industry is quietly moving past it. Google's Gemini Nano runs on-device with a fraction of the parameters that GPT-4 uses. Apple Intelligence is built on sub-10B models. The frontier labs are optimizing hard for efficiency, not raw size.
So the next time someone cites parameter count as proof of quality, ask them about the training data.
The blunt version: Parameter count is a proxy metric that stopped being reliable two years ago.
QUICK LINKS
Nvidia Will Spend $26 Billion to Build Open-Weight AI Models
Nvidia is funding open-source model development over five years, positioning itself to compete with OpenAI and DeepSeek directly.
Nemotron 3 Super Delivers 5x Higher Throughput for Agentic AI
A 120B-parameter open model with 12B active parameters, already integrated by Perplexity, CodeRabbit, and enterprise platforms like Palantir.
Startup Claims First Full Brain Emulation of a Fruit Fly
Eon Systems connected 125,000 neurons and 50 million synapses to a virtual body, producing multiple behaviors from actual neural data.
AMD Formally Launches Ryzen AI Embedded P100 Series
AMD's new embedded processors bring on-device AI capabilities to edge devices with 8-12 core configurations.
PICKS OF THE WEEK
What caught our attention this week.
Closely – AI-powered sales intelligence platform that tracks prospect activity and intent signals in real time.
Fish Audio S2 – Open-source text-to-speech with natural emotion tags and multi-speaker dialogue. Claims to outperform Google and OpenAI on the Audio Turing Test.
DeerFlow 2.0 – ByteDance's open agentic framework executes code in sandboxed containers, not just suggests it.
Some links are affiliate links. We earn a commission if you subscribe. We only feature tools we'd use ourselves.
This newsletter runs on an 8-agent AI pipeline we built in-house.
Want that kind of automation for your business?
From scanning 50+ sources to drafting, fact-checking, and formatting, AI agents handle 95% of this newsletter.
The AI finds the signal. We decide what it means.
Research and drafting assisted by AI. All content reviewed, edited, and approved by a human editor before publication.
