
TL;DR
Samsung's betting $73B on AI chips while Microsoft ships MAI-Image-2 and Cursor launches a cheaper code model to challenge OpenAI. Meanwhile, generative AI is cracking wireless vision through walls, a technique that could reshape sensing and robotics.
EDITOR’S NOTE
The cheapest model in the room keeps winning. That pattern is starting to look less like a coincidence.
Samsung is betting $73 billion that silicon sovereignty is the next arms race.
Microsoft's superintelligence team quietly shipped a text-to-image model, and nobody's sure what "superintelligence team" means anymore.
Cursor built a code-only model to match OpenAI and Anthropic at a fraction of the cost, and it might be the most honest product bet of the year.
Researchers taught AI to see through walls using wireless signals, and the implications go well past convenience.
Specialization is beating scale. The generalists had their moment.
SIGNAL DROP

Samsung Doubles Down on AI Chips With $73B Bet
Samsung pledged $73 billion in annual capital spending, its largest single-year investment on record, to claw back ground in AI hardware, according to AI Business. The company has been losing HBM contracts to SK Hynix while TSMC pulls ahead on advanced process nodes. That's a lot of money to still be playing catch-up. Nvidia's supply chain partners should read this carefully: Samsung is clearly not accepting second place quietly.

Microsoft's Superintelligence Team Ships Its First Product
Mustafa Suleyman's superintelligence team shipped MAI-Image-2, a text-to-image model now rolling out across Copilot and Bing Image Creator, with API access coming via Microsoft Foundry, according to The Decoder. The model currently sits third on the Arena.ai leaderboard, behind OpenAI and Google. Third place. For a team with "superintelligence" in its name, that's a rough debut. Adobe's Firefly team now has a well-resourced new competitor eating directly into its enterprise territory.

Cursor Shipped Its Own Model to Escape Anthropic and OpenAI
Cursor released Composer 2, a code-only model priced at $0.50 per million input tokens, compared to Claude Opus 4.6 at $5.00, according to The Decoder. It scores 61.3 on Cursor's internal benchmark, edging past Claude Opus 4.6's 58.2. Smart move. Any developer tool that depends on its direct competitors for inference is one pricing change away from margin collapse.
So What? Capital, independence, and compute are now the only moats worth building.
DEEP DIVE
The Wall Is No Longer a Wall
Sonar has worked this way for decades: send a signal out, measure what bounces back, reconstruct the shape of whatever you can't see. It's crude but effective, like reading a room by clapping your hands. MIT researchers have been running a more sophisticated version of this idea for over ten years, using surface-penetrating wireless signals to let robots locate and manipulate objects hidden behind obstacles. The core physics isn't new. What just changed is what happens after the signal returns.
The bottleneck was always reconstruction precision. The raw signal data is messy. Wireless reflections off a concealed object give you a blurry, ambiguous point cloud, something like trying to identify a face from its shadow. Good enough to say "object is here." Not good enough to say "object is this shape, grab it this way."
That's where generative AI comes in.
What the Model Actually Does
The MIT team is using generative AI models to take that imprecise reflection data and reconstruct object shapes with meaningfully higher accuracy. The article summary is thin on architecture specifics (my read: this is likely a diffusion-based or similar generative approach that learns a prior over plausible 3D shapes and uses the signal data as a conditioning input, but I'm extrapolating from the broader literature here). The key insight is that generative models are good at exactly this problem: filling in ambiguous, incomplete information by drawing on learned priors about what shapes tend to look like.
Think of it like autocomplete for geometry. The wireless signal gives you a smeared, noisy sketch. The generative model says "given everything I know about object shapes, the most probable thing that produces this sketch is this." The result is a sharper reconstruction than the signal data alone could ever produce.
Not magic. Learned inference.
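The article doesn't specify the architecture, so here's a deliberately tiny Python sketch of the underlying idea only: a "learned prior" over a handful of candidate shapes, conditioned on a noisy observation to pick the most probable one. The shapes, prior weights, and Gaussian noise model are all invented for illustration; a real system would use a learned generative model over 3D geometry, not a lookup table.

```python
import math

# Toy "learned prior": a few canonical 1-D silhouettes, each with a prior
# probability reflecting how common that shape is. All values are made up.
SHAPE_PRIOR = {
    "box":      ([1.0, 1.0, 1.0, 1.0], 0.5),
    "cylinder": ([0.5, 1.0, 1.0, 0.5], 0.3),
    "cone":     ([0.2, 0.5, 0.8, 1.0], 0.2),
}

def log_likelihood(observation, template, noise_sigma=0.3):
    """Gaussian likelihood of the blurry wireless 'sketch' given a shape."""
    sq_err = sum((o - t) ** 2 for o, t in zip(observation, template))
    return -sq_err / (2 * noise_sigma ** 2)

def reconstruct(observation):
    """MAP estimate: the most probable shape given the noisy signal."""
    def log_posterior(name):
        template, prior = SHAPE_PRIOR[name]
        return log_likelihood(observation, template) + math.log(prior)
    return max(SHAPE_PRIOR, key=log_posterior)

# A smeared return that most resembles the cylinder silhouette.
print(reconstruct([0.45, 0.9, 1.1, 0.6]))  # → cylinder
```

The "autocomplete for geometry" framing maps directly onto the two terms in the posterior: the likelihood says "this sketch could have come from that shape," and the prior says "that shape is plausible in the first place."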
Ten Years of Foundation, One Bottleneck Cleared
The decade-long research history here is worth sitting with. This wasn't a team that bolted a trendy AI model onto an existing demo for a press release. They built the underlying sensing methodology over years, hit a precision ceiling the physics couldn't clear, and then found that generative AI could clear it for them.
That's a different story than most "AI improves X" announcements. Most of those are applications looking for a problem. This is a genuine technical bottleneck that waited for the right tool.
And the tool arrived about two years too late to be fashionable, which is probably why the work is more credible.
Who Actually Wins When Robots See Through Walls
The obvious applications are warehouse robotics and search-and-rescue. A robot that can locate and manipulate hidden objects without line-of-sight is genuinely useful in environments where boxes are stacked, debris is present, or visibility is zero. But the more interesting implication (to me, anyway) is what this does to sensor fusion architectures.
Right now, robots in complex environments rely heavily on cameras and lidar, both of which fail hard when the target is occluded. Adding a reliable through-obstacle sensing modality that produces accurate 3D shape reconstructions changes the sensor stack. Not as a replacement. As a fallback that's actually trustworthy enough to act on.
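To make the "trustworthy fallback" idea concrete, here's a hypothetical sketch (not any shipping robotics stack) of how a sensor arbiter might demote an occluded lidar reading and promote a through-obstacle reconstruction, acting only when a modality clears a confidence bar:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    shape: Optional[str]   # reconstructed shape label, or None if unavailable
    confidence: float      # 0..1, the modality's self-reported confidence

def fuse(lidar: Reading, wireless: Reading, threshold: float = 0.7) -> str:
    """Fallback-style fusion: trust lidar on the normal line-of-sight path,
    fall back to the wireless reconstruction only if it clears the bar."""
    if lidar.shape is not None and lidar.confidence >= threshold:
        return f"grasp:{lidar.shape}"      # primary modality wins
    if wireless.shape is not None and wireless.confidence >= threshold:
        return f"grasp:{wireless.shape}"   # through-obstacle fallback
    return "detect-only"                   # know something is there; don't act

# Occluded target: lidar returns nothing, wireless reconstruction is confident.
print(fuse(Reading(None, 0.0), Reading("box", 0.85)))  # → grasp:box
```

The interesting design question is the threshold: a fallback you can act on is only useful if its confidence estimates are honest, which is exactly the failure-mode data the article doesn't provide.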
The gap between "we can detect something is there" and "we can reconstruct what it is precisely enough to grip it" is enormous in robotics terms. If the MIT results hold up at scale, that gap just got a lot smaller.
Where I'd Push Back
My honest skepticism: generative models that fill in missing data are doing inference, not measurement. They're producing the most probable shape, which is not always the correct shape. In a warehouse picking context, "usually right" might be fine. In search-and-rescue, where the hidden object might be a person in an unusual position, "usually right" could matter a lot.
The precision gains are real. But I'd want to see hard numbers on failure modes before trusting this for high-stakes manipulation tasks. The article doesn't provide them, and that absence is doing some work.
Still: a decade of sensing research plus a genuinely well-suited AI technique is a better foundation than 90% of what gets called "AI-powered" anything. This is one to watch.
So What? If you build robotics pipelines, start budgeting for through-obstacle sensing as a real sensor input, not a research curiosity.
PARTNER PICK


Pabbly Connect automates workflows between apps without coding. It's cheaper than Zapier for high-volume users, starting at $19/month for 500 tasks.
Worth trying if you're drowning in manual data entry between tools. Connect your CRM to email, Slack to spreadsheets, whatever. The interface is clunky compared to Make, but the pricing scales better if you're running dozens of automations.
Real limitation: support is slow. You'll spend time troubleshooting alone.
The honest take: Pabbly isn't prettier or smarter than competitors. It's just cheaper at scale, and sometimes that's the only reason you need.
Some links are affiliate links. We earn a commission if you subscribe. We only feature tools we'd use ourselves.
TOOL RADAR

Google's Antigravity agent turns plain-language prompts into full-stack apps, handling Firebase databases, Google authentication, payments, and third-party API connections automatically. It supports React, Angular, and Next.js, and can build real-time multiplayer experiences in the browser. Powered by Gemini 3.1 Pro, it's aimed at developers who want to skip boilerplate and non-programmers who want to ship something real. No pricing mentioned in the source.
Worth it if: you want a working app, not just a prototype.
Skip if: you need fine-grained control over your stack.

Antigravity, the agent behind the AI Studio experience, proactively detects what your app needs and provisions it without being asked. Database? It sets up Firestore. Auth? Firebase Authentication. Missing UI library? It installs Framer Motion on its own. Practically speaking, it's scaffolding that thinks ahead. Useful for solo builders who don't want to context-switch every five minutes.
Worth it if: you're prototyping fast and hate configuration.
Skip if: your team already has a preferred backend setup.
ACTIONABLE
AUTOMATION PLAYBOOK

If you're building real-time multiplayer features and dreading the networking boilerplate, try Google AI Studio's new vibe coding mode.
Paste your game loop into the prompt, ask it to "add WebSocket synchronization for player positions," and iterate on the generated code in real time.
Specific example: feed it a canvas-based 2D game, request "lag compensation for 100ms latency," and test the output instantly without context switching.
The AI handles the tedious sync logic while you focus on game feel. Time saved: 3-4 hours of manual networking implementation per feature.
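For context on what "lag compensation" actually asks the code to do, here's a minimal server-side sketch in Python with no real networking: keep a timestamped position history per player and rewind it by the shooter's latency before judging a hit. The timestamps, sample rate, and 1-D positions are made up for illustration.

```python
from bisect import bisect_left

def rewind(history, t):
    """Linearly interpolate a player's position at time t from a list of
    (timestamp, x) samples sorted by timestamp - the core of lag compensation."""
    times = [ts for ts, _ in history]
    i = bisect_left(times, t)
    if i == 0:
        return history[0][1]          # before first sample: clamp
    if i == len(history):
        return history[-1][1]         # after last sample: clamp
    (t0, x0), (t1, x1) = history[i - 1], history[i]
    frac = (t - t0) / (t1 - t0)
    return x0 + frac * (x1 - x0)

# Server-side history of a target's x position, sampled every 50 ms.
history = [(0.00, 0.0), (0.05, 5.0), (0.10, 10.0), (0.15, 15.0)]

# A shot arrives at server time 0.15 from a client with 100 ms latency:
# judge the hit against where the target was 100 ms ago, not where it is now.
shot_time, latency = 0.15, 0.10
print(round(rewind(history, shot_time - latency), 6))  # → 5.0
```

That rewind-then-check step is the part the generated WebSocket code has to get right; the rest is plumbing.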
COMPETITION
PROMPT CORNER
Cursor vs. GitHub Copilot
You're setting up a new project and deciding which AI coding assistant gets billed to the company card. Both cost around $10-19/month. Both autocomplete your code. The choice feels arbitrary. It isn't.
Where they actually differ:
Context window: Cursor ingests your entire codebase and reasons across files. Copilot works best at the function level. Ask Cursor "why is this auth bug happening?" and it traces the call stack across six files. Copilot gives you a good next line.
Chat vs. autocomplete: Copilot is fundamentally an autocomplete tool with chat bolted on. Cursor is a chat-first editor where autocomplete is the bonus. That inversion matters more than it sounds.
IDE lock-in: Copilot lives inside VS Code (and JetBrains, reluctantly). Cursor IS the editor, built on VS Code's foundation. Switching costs are real in both directions.
The underdog win: Copilot's inline suggestions are still faster and less intrusive than Cursor's. For experienced developers who want a quiet assistant, not a co-pilot that talks back, Copilot's autocomplete flow is genuinely smoother.
Verdict:
If you're building anything with more than three files that talk to each other, use Cursor. The codebase-aware reasoning isn't a gimmick. It's the difference between an assistant that knows your project and one that knows Python.
If you're a senior dev who finds AI chat annoying and just wants fast, accurate completions while staying in your existing VS Code setup, Copilot is the right call. Don't fix what isn't broken.
QUICK LINKS
Kitten TTS: Three new models under 25MB – Open-source text-to-speech models (15M to 80M parameters) run on CPU without GPU, enabling on-device voice synthesis.
Canary (YC W26): AI QA that reads your code – Generates and executes end-to-end tests for pull requests by understanding your codebase and user workflows.
Meta deploys AI for content enforcement, cuts third-party vendors – Early tests detect twice as much violating content with 60% fewer errors than human review teams.
DoorDash pays couriers to film tasks for AI training – New Tasks app compensates delivery workers for video submissions to improve AI and robotic systems.
OpenAI acquires Astral to strengthen Codex coding assistant – Developer tools startup joins OpenAI's coding team as Codex hits 2M weekly active users with threefold growth.
TRENDING TOOLS
What caught our attention this week.
WATI – WhatsApp automation platform for customer messaging, commerce, and support at scale.
Google AI Studio (Antigravity) – Build full-stack apps with voice commands: databases, payments, multiplayer games. Firebase integration included.
PearlOS – Self-evolving local OS with swarm intelligence. Creates apps and UI on demand. Early access live on GitHub.
This newsletter runs on an 8-agent AI pipeline we built in-house.
Want that kind of automation for your business?
From scanning 50+ sources to drafting, fact-checking, and formatting, AI agents handle 95% of this newsletter.
The AI finds the signal. We decide what it means.
Research and drafting assisted by AI. All content reviewed, edited, and approved by a human editor before publication.
