
TL;DR
Ex-Anthropic researchers launched Mirendil to apply AI to biology and materials science, Eli Lilly switched on the first pharma-owned NVIDIA SuperPOD, Nvidia signaled a CPU pivot ahead of GTC, and Anthropic's Claude now generates visual outputs like charts and diagrams. The shift shows AI moving from raw compute power toward specialized applications and richer output formats.
EDITOR’S NOTE
The most interesting AI moves this week weren't about bigger models. They were about what AI does with the ones we already have.
Eli Lilly just turned a data center into a drug discovery engine, and pharma will never run a trial the same way again.
Ex-Anthropic researchers walked out and immediately started building the next generation of scientific AI.
Nvidia is quietly repositioning the CPU as the star of the AI stack, which tells you something about where inference costs are actually going.
And Claude learned to stop explaining charts and just show you one instead.
The pattern: the frontier labs are still racing, but the real action is in what gets built on top.
SIGNAL DROP

Eli Lilly Turned On a Supercomputer
Lilly shipped LillyPod this week, the first NVIDIA DGX SuperPOD with DGX B300 systems owned and operated by a pharma company, according to the NVIDIA blog. Drug discovery just got a dedicated compute layer. Every competitor still renting cloud time should be uncomfortable.

Ex-Anthropic Researchers Launched Mirendil
Behnam Neyshabur and Harsh Mehta left Anthropic in December and have already lined up a reported $175 million round at a $1 billion valuation, with Andreessen Horowitz and Kleiner Perkins co-leading, according to The Decoder. The focus: biology and materials science. Neo-labs keep coming. Anthropic's talent retention problem is now public record.

Nvidia's CPU Is No Longer an Afterthought
CPUs are "becoming the bottleneck" for agentic AI workflows, per Nvidia's own infrastructure lead, and the company is set to unveil new details at GTC this week, according to CNBC. Vera CPUs are already deployed in Meta data centers. Intel and AMD have been warned.
So What? Compute, talent, and chips are all consolidating around AI's next phase.
DEEP DIVE
When the Chatbot Stops Talking and Starts Drawing
Claude has always been the "thoughtful writer" of the major AI assistants. Good prose, careful reasoning, long-form everything. But text has a ceiling, and Anthropic just decided to push past it.
As of March 12th, 2026, Claude can now generate charts, diagrams, and other visualizations directly inside your conversation. According to The Verge's reporting, these aren't static images dropped into a side panel. They appear inline, and they're interactive. Anthropic's own example: ask Claude about the periodic table, and it generates a clickable version where you can drill into individual elements for more information.
That's a small detail worth pausing on. Interactive. Not a screenshot. Not a PNG.
What Claude Is Actually Doing Here
The feature works two ways, according to the article. Claude can proactively decide a visual would help (based on conversational context) and generate one without being asked. Or you can ask directly. Both paths produce inline output.
The building weight distribution example Anthropic uses is telling. That's not a "pretty chart" use case. That's structural reasoning made visible, the kind of thing that previously required a separate tool, a hand-drawn sketch, or a lot of trust that the reader could hold a spatial model in their head. Claude is now collapsing that gap between "explaining something" and "showing something."
And this runs on whatever model Claude is using today, with no apparent new model release attached to it. That's my read: Anthropic treated this as a capability layer, not a model upgrade. The rendering logic and the decision about WHEN to render are the interesting engineering problems here, not a new architecture.
The Interaction Design Problem Everyone Is Ignoring
Most AI assistants that generate visuals do it badly. They either dump charts you didn't ask for (annoying) or require you to explicitly prompt with "now make me a graph" after every analysis (tedious). The inline, context-aware approach Anthropic is taking is harder to get right than it sounds.
Getting the trigger condition right (when does a visual actually help versus clutter the response?) is a judgment call baked into the model's behavior. Do it too aggressively and you get a chatbot that generates a pie chart every time you mention a percentage. Do it too conservatively and the feature fades into obscurity because users forget it exists.
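Anthropic hasn't published how that trigger works. As a thought experiment only (this is a hypothetical heuristic, not Anthropic's actual logic), here's how crude a rules-based version of the judgment would be, and why the failure modes above are so easy to hit:

```python
# Hypothetical sketch of a "would a chart help here?" trigger.
# NOT Anthropic's implementation -- just an illustration of why
# this judgment resists simple rules.

DATA_HINTS = ("compare", "trend", "distribution", "over time", "breakdown")

def should_render_chart(question: str, numbers_in_answer: int) -> bool:
    """Naive trigger: fire only when the question sounds quantitative
    AND the draft answer actually contains several data points."""
    q = question.lower()
    sounds_quantitative = any(hint in q for hint in DATA_HINTS)
    has_enough_data = numbers_in_answer >= 3
    return sounds_quantitative and has_enough_data

# The failure modes from the text, in miniature:
# the word "comparison" alone should not be enough...
print(should_render_chart("Write a comparison of two novels", 0))        # False
# ...but a quantitative question with real numbers behind it should fire.
print(should_render_chart("Show the trend in my mortgage balance", 12))  # True
```

Even this toy version needs two signals (intent plus data density) to avoid the pie-chart-per-percentage failure, and it would still miss plenty of real cases. That's the judgment problem in two functions' worth of code.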
So the real test isn't the rendering quality. It's the judgment quality. And that's not something you can evaluate from a press release.
Who This Actually Changes Things For
Developers and data-literate users have always been able to get Claude to write code that generates visualizations, then run it themselves. That workflow worked. Clunky, but functional. This update matters most for the people who couldn't or wouldn't do that: non-technical professionals, students, anyone using Claude.ai directly without a coding environment attached.
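For context, the old workaround looked something like this: paste your data into Claude, get back plotting code, copy it out, and run it yourself. A minimal stand-in for the kind of code that round trip produced (plain-text bars here so the sketch runs without plotting libraries; in practice you'd usually get matplotlib):

```python
# The kind of throwaway visualization code the old workflow produced:
# Claude writes it, you copy it into your own environment and run it.
# Example funnel numbers are illustrative only.

funnel = {"Visited": 1200, "Signed up": 480, "Activated": 180, "Paid": 45}

def bar_chart(data: dict[str, int], width: int = 40) -> str:
    """Render a labeled horizontal bar chart as plain text."""
    peak = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(value / peak * width)
        lines.append(f"{label:>10} | {bar} {value}")
    return "\n".join(lines)

print(bar_chart(funnel))
```

Every step of that loop (copy, paste, run, squint) is what the inline feature deletes for people who never had a Python environment open in the first place.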
A consultant explaining cash flow to a client. A teacher building a lesson around molecular structure. A product manager trying to communicate a funnel breakdown without opening Figma. These are the actual beneficiaries. Not developers. Developers already had workarounds.
(There's also an enterprise angle here that Anthropic is clearly eyeing. If Claude can sit in a workflow and produce presentation-ready visuals on demand, the "just use Claude instead of three other tools" pitch gets a lot stronger.)
The Feature That's Late and Still Welcome
ChatGPT has had some form of chart generation for a while. Gemini too. So Anthropic isn't first. But being late and being wrong are different things, and Claude's reputation for precision in technical explanations means the bar for "is this visual actually correct" is higher for Anthropic than for anyone else.
My take: this is a good update, and I'm genuinely curious whether the contextual trigger works as well in practice as it does in demo GIFs. The periodic table example is clean. Real conversations are messier. If Claude can correctly decide that a question about mortgage amortization warrants a chart but a question about why a coworker is annoying doesn't, that's impressive behavior. If it hallucinates a bar graph when you mention the word "comparison," that's a problem.
The feature is real. The demo looks good. But the interesting story is six weeks from now, when people have actually used it on their actual messy questions.
So What? Next time you need to explain something spatial or data-heavy, ask Claude directly instead of reaching for a separate tool.
PARTNER PICK


Apify is a web scraping platform that actually respects your time. You get pre-built scrapers for common sites, a visual workflow builder, or raw code control. The free tier lets you test real projects. Worth trying if you're tired of maintaining brittle scraping scripts or need to monitor competitor pricing without touching the API. The limitation: you're paying per compute unit once you scale, and it adds up faster than you'd expect for high-volume jobs. Versus Phantombuster, Apify gives you more technical depth but less hand-holding. Click if you need scraping that scales without becoming a second job.
Some links are affiliate links. We earn a commission if you subscribe. We only feature tools we'd use ourselves.
TOOL RADAR
Ink lets AI agents deploy full-stack apps without human intervention. The agent calls "deploy," Ink auto-detects the framework, builds it, and returns a live URL. Hosting, databases, DNS, secrets: all handled. Works with Claude Code, Cursor, Codex, and most major agentic tools. If you're running autonomous coding pipelines and still manually wiring up deployments, that's the bottleneck Ink targets. Free to start.
Worth it if: your AI agents write code but need a human to ship it.
Skip if: you're not running agentic coding workflows yet.
Lemonade is a local LLM runtime that now runs on AMD NPUs under Linux. Version 10 adds multimodal capabilities and expands what was originally a Windows-only, LLM-focused tool into something more cross-platform. The C++ implementation (introduced in v9) keeps it lean. Still a small team, and model conversion support is reportedly thin. But NPU inference on Linux is genuinely useful territory.
Worth it if: you have an AMD NPU and want local inference on Linux.
Skip if: you need broad model support out of the box.
ACTIONABLE
AUTOMATION PLAYBOOK

If your agents write code but you still deploy by hand, wire Ink into the loop so shipping happens without you.
Instead of writing separate prompts for infrastructure decisions, let the agent call Ink's deploy step directly: Ink auto-detects the framework, builds the project, and returns a live URL, with hosting, databases, DNS, and secrets handled.
Example: prompt Claude Code to "build a dashboard from this dataset," then have the agent hand the finished project to Ink instead of pausing for a human to provision infrastructure.
And since Claude can now return visuals inline, ask it for the architecture as a diagram and skip the transcription step.
Result: cuts your agent-to-production cycle from hours to minutes.
Time saved: roughly 3-4 hours per deployment iteration.
TECHNIQUES
PROMPT BUSTER
The Persona Stack
Most people give an AI one role. Give it two. Layering a domain expert with a communication style forces the model to hold both constraints simultaneously, and the output quality jumps noticeably.
The structure: "You are [expert] who explains things like [communication archetype]."
You are a senior distributed systems engineer who explains
concepts the way Richard Feynman taught physics: start with
the simplest possible mental model, then add complexity only
when the simple version breaks. Explain why Kafka uses
sequential disk writes instead of random access.
The first persona sets the knowledge depth. The second sets the reasoning style. Without the second constraint, you get accurate but often lazy explanations. With it, the model has to work toward a specific intellectual standard.
Use this when you need technical accuracy AND genuine clarity. Documentation reviews, architecture explainers, onboarding materials. Anywhere that "correct but impenetrable" is a real failure mode.
Swap the communication archetype freely. "Explains like a senior PM in a board meeting" hits very differently than Feynman. Same facts, completely different framing.
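If you're generating these prompts programmatically (for evals, or for batch documentation runs), the persona stack reduces to a two-slot template. A minimal sketch, with the Feynman example from above as the default fill:

```python
# Two-slot persona stack template: the expert slot sets knowledge depth,
# the archetype slot sets reasoning style. Slot values are examples only.

def persona_stack(expert: str, archetype: str, task: str) -> str:
    """Compose a persona-stacked prompt: expert + communication archetype + task."""
    return f"You are {expert} who explains things the way {archetype}. {task}"

prompt = persona_stack(
    expert="a senior distributed systems engineer",
    archetype=(
        "Richard Feynman taught physics: start with the simplest possible "
        "mental model, then add complexity only when the simple version breaks"
    ),
    task="Explain why Kafka uses sequential disk writes instead of random access.",
)
print(prompt)
```

Swapping only the archetype string ("a senior PM presenting to the board") changes the framing while leaving the knowledge slot, and the task, untouched, which is exactly what makes the pattern easy to A/B test.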
Try it right now. Pick something you've been struggling to explain to your team.
QUICK LINKS
gstack: Claude Code with workflow structure - Open-source toolkit separating planning, review, and shipping into distinct Claude operating modes.
Dynin-Omni: Masked diffusion for all modalities - Single architecture unifying text, image, video, and speech understanding and generation.
Gemini 3.1 Flash-Lite: $0.25 per million tokens - Google's fastest, cheapest Gemini model targets high-volume workloads like translation and content moderation.
Nano Banana 2: Pro quality at Flash speed - Image generation combining advanced world knowledge with fast iteration across Gemini and Search.
STARTER PACK
What caught our attention this week.
ActiveCampaign — Email marketing, CRM, and workflow automation in one platform without the complexity.
Claude — Best reasoning model for actual work. Beats ChatGPT on code and analysis.
Cursor — IDE built for AI pair programming. Write code 3x faster with AI autocomplete that actually understands context.
Some links are affiliate links. We earn a commission if you subscribe. We only feature tools we'd use ourselves.
This newsletter runs on an 8-agent AI pipeline we built in-house.
Want that kind of automation for your business?
From scanning 50+ sources to drafting, fact-checking, and formatting, AI agents handle 95% of this newsletter.
The AI finds the signal. We decide what it means.
Research and drafting assisted by AI. All content reviewed, edited, and approved by a human editor before publication.
