In partnership with

7wData Ins7ghts

Your weekly signal boost from 190,000+ articles, served with a DJ's ear for what actually matters.

So, What Actually Happened?

We scanned 190,000 articles this week so you don't have to read another “top 10 AI predictions” listicle. The pattern that jumped out of the data? The AI industry just discovered that building models is the easy part. Together AI is chasing a $7.5 billion valuation by betting entirely on inference, not training. Wall Street is punishing software stocks as investors realize AI promises are running ahead of AI profits. Three separate AI acquisitions landed in a single week, with Zendesk snapping up Forethought and two more companies changing hands. And somewhere in Australia, a techie used AI to design a cancer vaccine for his dying dog. The vaccine worked. The hype, meanwhile, is getting a reality check.

The Bottom Line: The AI industry is splitting into two camps: companies that run AI in production and companies that talk about running AI in production. Wall Street just started noticing the difference, and the stocks are moving accordingly.

Your question, my mix.

Today's set covered the chip wars. But after I finished, I asked a question that didn't make the cut:

"Which companies are quietly gaining influence in AI governance faster than they're gaining attention?"

90 seconds later: 23 sources, 4 companies the Gartner crowd hasn't named yet, and a connection between compliance infrastructure and procurement that nobody in the press is making.

That's one question. I have 189,993 articles I didn't use today.

What are you trying to get ahead of right now?
Hit reply. I'll mix your question the same way and send your personal answer back within 24 hours.

Yves

The $4 Billion Problem Hiding in Nearly Every Fast-Food Location

You show up to your favorite fast-food restaurant for a quick meal. But the line is too long, and you’re starving. So you bail.

You’re not alone. 93% of monthly fast-food visitors in America say their top frustration is long lines. And while you miss out on chicken nuggets and fries, restaurant owners lose significant revenue they can’t afford to miss.

So brands like White Castle use Miso Robotics’ AI-powered kitchen robots to run their fry stations, keeping kitchen operations smooth, workers safe, and customers happy.

Aided by a collaboration with NVIDIA, Miso’s AI-powered Flippy Fry Station robot works 2X faster than average fry cooks. That means operators serve more customers and unlock up to 3X more profits per location. And that means much shorter lines for you.

This is a paid advertisement for Miso Robotics’ Regulation A offering. Please read the offering circular at invest.misorobotics.com.


The Tracks That Matter

1. Together AI Wants $7.5 Billion. For a Company Most People Haven't Heard Of.

Together AI is targeting a $7.5 billion valuation in its latest funding round, and the number tells you everything about where AI value is migrating. This is not a model company. This is not a consumer chatbot. This is an inference infrastructure platform, the plumbing that lets other companies run AI models at production speed without building their own data centers.

The valuation makes sense when you understand the economics. Training a frontier model costs hundreds of millions of dollars once. Running that model for millions of users costs money every single second. The inference market, by some estimates, will be 10x larger than the training market within three years. Together AI is positioning itself as the toll road between model builders and model users, and toll roads have historically been very profitable businesses.

What makes this interesting beyond the funding number is the timing. This round comes while software stocks are getting hammered and while SoftBank's shares are losing steam on investor concerns about its massive AI bet. The market is simultaneously punishing AI hype and rewarding AI infrastructure. That divergence tells you where the smart money thinks the real value sits.

Here's what works: If you are evaluating AI infrastructure vendors, pay attention to the inference layer. The companies building inference infrastructure today are the ones that will control the cost and speed of AI deployment tomorrow. Ask your vendors: how much does it cost to run a single inference call at scale? If they cannot answer that question with specific numbers, you are talking to a training company, not a production company.

2. Wall Street Just Discovered That AI Doesn't Print Money. Software Stocks Are Paying the Price.

Something shifted in investor sentiment this week, and it was not subtle. Software stocks are getting crushed as AI casts what analysts are calling a “shadow of doubt” over traditional software business models. The fear is straightforward: if AI can automate what software companies sell, then software margins are at risk. And when margins are at risk, multiples compress.

Simultaneously, SoftBank's shares are losing momentum as investors question the returns on its massive AI bets. This is the same SoftBank that poured billions into its AI strategy, and the market is now asking a question that should have been asked two years ago: when do these investments produce revenue that justifies the capital deployed?

The pattern is not unique to SoftBank. Across the market, investors are distinguishing between companies that have AI revenue and companies that have AI narratives. The “AI-powered” label on a product page is no longer enough to justify a premium valuation. Investors want to see customer adoption, retention, and, most importantly, revenue growth that comes from AI features, not just AI announcements.

This matters for every enterprise data leader because it changes the conversation in the boardroom. Your CEO is reading these headlines. Your CFO is watching these stock movements. The days when “we are investing in AI” was sufficient justification for budget requests are ending. The new question is: “what specific revenue or cost reduction has our AI investment delivered this quarter?”

Here's what works: Prepare a concrete AI ROI narrative before your next budget cycle. Identify three AI initiatives that have measurable business impact (revenue generated, costs saved, time reduced) and present them with actual numbers. If you cannot point to measurable outcomes after 12 months of AI investment, you have a problem that will become visible when your company reports earnings.

Here's how I use Attio to run my day.

Attio's AI handles my morning prep — surfacing insights from calls, updating records without manual entry, and answering pipeline questions in seconds. No searching, no switching tabs, no manual updates.

3. Three AI Acquisitions in One Week. The Consolidation Wave Is Here.

The AI acquisition pace just accelerated to a sprint. In a single week, Zendesk announced it will acquire Forethought, an AI-powered customer service startup, while two more deals landed in rapid succession: Xionex acquired AI model development specialist BRFrame, and a major social platform acquired AI agent network Maltbook. Three deals, three different segments, one message: standalone AI startups are becoming features inside larger platforms.

This is the pattern you see in every technology cycle. The innovation happens in startups. The distribution happens through acquisition. Customer service AI, coding assistants, and agent platforms are all reaching the stage where building from scratch is slower than buying. The coding AI market in particular has gone to all-out battle, with inference-focused chip design accelerating alongside the tools that run on those chips.

For enterprise buyers, this consolidation changes the vendor landscape. The AI startup you evaluated six months ago might be a feature inside a platform you already pay for. That is not necessarily bad. Integration with existing tools means faster deployment and lower switching costs. But it also means the startup's roadmap now serves the acquirer's strategy, not yours.

Here's what works: Audit your AI vendor relationships for acquisition risk. If you are building critical workflows on top of a standalone AI startup, check whether that startup is likely to be acquired (look for: venture-backed, under $200M revenue, solving a problem that a major platform also wants to solve). If acquisition is likely, ensure your architecture can swap providers without rebuilding. The companies that plan for vendor consolidation now will not be scrambling when their favorite startup becomes someone else's feature.

4. CrowdStrike and Perplexity Just Defined a New Security Category: AI-Native Browsing Protection.

When the world's largest endpoint security company partners with an AI-native search engine to protect enterprise users, it tells you something about where the threat surface is moving. CrowdStrike and Perplexity's partnership to secure Comet AI is not just another vendor collaboration. It is the first serious acknowledgment that AI-powered browsing creates security risks that traditional browser security was never designed to handle.

The core problem: when employees use AI-native tools to browse, research, and interact with external data, the attack surface expands in ways that firewall rules and URL blocklists cannot address. AI browsing tools pull content from multiple sources, synthesize it, and present it as a unified response. If any of those sources is compromised, the AI tool becomes the delivery mechanism for the threat. Traditional security sees the AI tool. It does not see what the AI tool is looking at.

This partnership matters because it signals that AI security is not a future problem. It is a present one. Your security team is probably still thinking about AI risk in terms of model poisoning and data exfiltration. The browsing-level threat, where AI tools become unwitting intermediaries for malicious content, is the gap that nobody has been discussing.

Here's what works: Ask your CISO one question this week: “How do we monitor what our AI tools are accessing on behalf of our employees?” If the answer involves traditional URL filtering and network monitoring, your security posture has a gap the size of a language model's context window. The CrowdStrike-Perplexity approach suggests the solution requires AI-aware security layers, not just traditional network controls applied to AI traffic.

5. One Man Used AI to Design a Cancer Vaccine for His Dying Dog. It Worked.

This is the story that makes you stop scrolling. An Australian software developer, facing his dog's terminal cancer diagnosis, used ChatGPT and AlphaFold to design a custom mRNA cancer vaccine. Not a treatment plan. Not a medication suggestion. An actual vaccine, designed by a programmer with no biology degree, manufactured through a veterinary lab, and administered to his dying pet. The dog survived.

Strip away the heartwarming narrative and you are left with something that should make every pharmaceutical executive and healthcare investor deeply uncomfortable. The tools for designing biological interventions are now accessible to anyone with a laptop and a problem to solve. AlphaFold predicts protein structures. Large language models translate between scientific literature and practical protocols. The barrier to entry for biotechnology just collapsed from ”PhD and a lab” to ”curiosity and an API key.”

This is simultaneously thrilling and terrifying. Thrilling because it demonstrates AI's ability to make expertise accessible, to compress decades of specialist knowledge into actionable guidance for anyone willing to learn. Terrifying because the same tools that designed a vaccine for a beloved pet could theoretically be used to design something far less benign. The regulatory and ethical frameworks for this scenario do not exist yet.

Here's what works: If you work in healthcare, pharma, or biotech, this story is your strategic planning scenario. The question is not whether non-specialists will use AI to do biological design work. They already are. The question is whether your organization wants to lead the regulatory conversation about guardrails, or wait until a less heartwarming incident forces the discussion. Start that conversation now, while the example everyone remembers involves a dog owner who loved his pet too much to give up.

6. This Company Ran 12 Months Without Traditional Data Quality Rules. Here Is What Happened.

In a week dominated by billion-dollar valuations and acquisition wars, the most provocative story might be the quietest. digna, an enterprise data quality platform, reported a 12-month deployment where the customer ran without traditional data quality rules. No manual rule writing. No regex patterns. No threshold-based alerts. Instead, the system used AI to learn what “normal” data looks like and flagged anomalies autonomously.

This should make every data governance professional simultaneously excited and anxious. The entire discipline of data quality management has been built on the premise that humans write rules and machines enforce them. That premise assumes humans can anticipate every way data can go wrong. Anyone who has maintained a production data pipeline knows that assumption is heroically optimistic. The most damaging data quality issues are the ones nobody thought to write a rule for.

What digna's deployment suggests is that the rule-writing paradigm itself might be the bottleneck. If AI can learn data patterns and detect anomalies without predefined rules, then the data quality team's role shifts from rule author to anomaly investigator. That is a fundamentally different job, and arguably a more valuable one.

Here's what works: Before you dismiss this as vendor marketing, run an experiment. Take your most rule-heavy data pipeline and count how many rules were triggered in the last 90 days. If more than half of your rules never fire, they are dead weight. If your most critical data quality issues were caught by humans noticing something “felt wrong” rather than by rules, your rule-based approach is already failing in the ways that matter most. The future of data quality might not be better rules. It might be no rules at all.
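digna has not published its detection internals, so as a purely illustrative sketch of what rule-free data quality can look like, here is a minimal, hypothetical anomaly detector that learns a “normal” band from historical batch metrics instead of relying on hand-written thresholds:

```python
from statistics import mean, stdev

def learn_profile(history):
    """Learn what "normal" looks like from past batch metrics (no hand-written rules)."""
    return {"mean": mean(history), "stdev": stdev(history)}

def flag_anomaly(profile, value, sigmas=3.0):
    """Flag a new batch metric that drifts beyond the learned band."""
    band = sigmas * profile["stdev"]
    return abs(value - profile["mean"]) > band

# Daily row counts for a pipeline over the last two weeks (invented numbers)
history = [10_120, 9_980, 10_340, 10_050, 9_870, 10_210, 10_160,
           10_020, 10_290, 9_940, 10_110, 10_230, 9_990, 10_080]
profile = learn_profile(history)

print(flag_anomaly(profile, 10_150))  # a typical day: no flag
print(flag_anomaly(profile, 4_200))   # half the rows vanished: flagged
```

In production you would profile many metrics per table (row counts, null rates, distinct counts, freshness) and use a more robust model than a z-score, but the shift is the same: humans investigate flags instead of authoring rules.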

Turn AI Into Extra Income

You don’t need to be a coder to make AI work for you. Subscribe to Mindstream and get 200+ proven ideas showing how real people are using ChatGPT, Midjourney, and other tools to earn on the side.

From small wins to full-on ventures, this guide helps you turn AI skills into real results, without the overwhelm.

Signal vs. Noise

🟢 Signal: Blockchain surged 1,696% in real influence across 44 articles. Before you roll your eyes, this is not 2021 crypto hype. Blockchain's structural importance in enterprise contexts grew faster than any other concept this period. The driver is infrastructure: cross-border e-commerce settlement, federated learning protocols, and digital asset compliance frameworks. When blockchain grows in influence without growing in hype, it means engineers are building on it quietly. That is the strongest signal there is.

🟢 Signal: Data Security grew 51% in real influence across 39 articles while mentions actually declined. Fewer people talking about it, more structural weight in the ecosystem. This is the inverse of hype: practitioners are implementing security, not just discussing it. The CrowdStrike-Perplexity partnership and the AI data center energy research confirm that security infrastructure is being built, not debated.

🔴 Noise: Data Governance appeared in 42 articles but real influence collapsed 42.4%. The governance conversation is getting louder and less productive at the same time. More conferences, more frameworks, more working groups, less actual implementation. When a discipline's mention count stays high but its structural impact drops, it means the people building things have moved on while the people talking about things have not.

🔴 Noise: Machine Learning dropped 38.4% in real influence despite 37 articles. The umbrella term is fading into background noise. The signal has moved to specific applications: inference optimization, model routing, agentic systems. When someone says “we use machine learning,” ask which kind. The vague answer is the noise.

From the 190K

The Quiet Blockchain Comeback Nobody Is Talking About

We scanned 190,000 articles this week. Here is what no one is connecting:

Blockchain's structural importance surged 1,696% in a single period. Federated learning algorithms for cross-border e-commerce are using blockchain for data provenance. Digital asset compliance is getting infrastructure funding. And enterprise settlement systems are adopting distributed ledger technology without calling it “blockchain” in the press release.

This is how real technology adoption works. The hype cycle peaks, crashes, and everyone declares the technology dead. Then, three years later, engineers quietly build the boring but essential infrastructure on top of it. Blockchain in 2026 looks nothing like the crypto speculation of 2021. It looks like the plumbing: provenance tracking, cross-border settlement, and federated data verification. Nobody writes headlines about plumbing. But try running a building without it.

The convergence with AI is the part nobody is discussing. When AI systems need to prove the provenance of their training data, when federated learning needs a trust layer between participants, when cross-border data sharing needs verifiable consent, blockchain provides the audit trail that AI alone cannot. The two technologies are becoming complementary infrastructure, not competing headlines.

Below the surface: Data Pipelines appeared in 47 articles this week with the highest foundational importance of any entity. Zero headlines. Here is how you spot real infrastructure: when something shows up everywhere but headlines nowhere, it means engineers are building on it and marketing has not caught up. Data Pipelines are the plumbing of every AI system, and the same rule applies: no headlines, total dependency.

By The Numbers

  • $7.5B valuation target — Together AI's fundraising ambition. An inference infrastructure company valued higher than most public SaaS companies. The market has spoken about where it thinks AI value lives.
  • 3 AI acquisitions in one week — Zendesk/Forethought, Xionex/BRFrame, and a social platform acquiring Maltbook. Standalone AI startups are becoming features inside platforms.
  • 1,696% blockchain influence surge — The biggest structural jump in our data this period. Not crypto speculation. Enterprise infrastructure adoption, quietly and at scale.
  • 27 GDPR mentions — GDPR led all compliance frameworks this period, followed by HIPAA (21) and CCPA (10). The regulatory surface area continues widening across jurisdictions.
  • 47 Data Pipelines articles — The most foundationally important concept in our data. Zero headlines. Maximum infrastructure dependency. The invisible backbone of every AI system.
  • 12 months without data quality rules — digna's enterprise deployment ran a full year on AI-driven anomaly detection instead of traditional rule-based data quality. If it works at scale, the entire data quality discipline changes.
  • $3.5M for Saudi AI — Infobrim secured angel funding in Saudi Arabia. AI startup formation is accelerating outside Silicon Valley, and the capital is following.

Deep Dive: The Inference Economy (Why Running AI Just Became More Valuable Than Building It)

You know that moment when a DJ finishes setting up the sound system and realizes the real work is not the equipment? It is reading the crowd all night, adjusting the mix in real-time, keeping 5,000 people moving for six hours straight. The setup was expensive. The performance is where the value lives. That is exactly what is happening in AI right now. The industry spent three years and hundreds of billions building models. Now it is discovering that running those models, at scale, at speed, at reasonable cost, is the actual business.

The Valuation Signal

Together AI's $7.5 billion valuation is not about what the company has built. It is about what the company enables: inference at production scale. In the old AI economy, value accrued to model builders. In the inference economy, value accrues to the companies that run models efficiently. This is not a subtle shift. It is the difference between building a power plant and running the electrical grid. The grid operator touches every customer. The power plant is one supplier among many.

The coding AI battle confirms this from a different angle. Inference-focused chip design is accelerating because the bottleneck has moved from “can we train a model?” to “can we serve a model to millions of users at a cost that makes business sense?” When chip designers pivot to inference optimization, they are following the economics, not the hype.

The Model Routing Revolution

The emergence of AI model routing as a discipline tells the same story. Instead of deploying one massive model for every task, organizations are learning to route different queries to different models based on complexity, cost, and latency requirements. A simple customer query does not need a frontier model. A complex research synthesis does. Routing between them is where the cost savings live, and cost savings at inference scale compound into massive competitive advantages.

This is the equivalent of a DJ knowing which tracks require the big speakers and which ones sound better on the monitors. You do not blast the subwoofers for a quiet intro. You match the output to the moment. AI model routing does the same thing, and it is becoming the discipline that separates organizations that run AI profitably from those that burn cash on every API call.
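As a concrete illustration of the routing idea, here is a hedged Python sketch. The model names, per-token prices, and the complexity heuristic are all hypothetical stand-ins: a real router would use a trained classifier and your vendor's actual price list.

```python
# Hypothetical model catalog: names and per-1K-token prices are illustrative only.
MODELS = {
    "small":    {"name": "small-fast-model", "usd_per_1k_tokens": 0.0002},
    "frontier": {"name": "frontier-model",   "usd_per_1k_tokens": 0.0150},
}

def estimate_complexity(query: str) -> float:
    """Crude stand-in for a real complexity classifier:
    longer, multi-part analytical questions score higher."""
    score = min(len(query) / 500, 1.0)
    if any(kw in query.lower() for kw in ("analyze", "compare", "synthesize", "why")):
        score += 0.5
    return min(score, 1.0)

def route(query: str, threshold: float = 0.5) -> str:
    """Send simple queries to the cheap model, complex ones to the frontier model."""
    tier = "frontier" if estimate_complexity(query) >= threshold else "small"
    return MODELS[tier]["name"]

print(route("What are your store hours?"))
# -> small-fast-model
print(route("Compare these three vendor contracts and synthesize the risks."))
# -> frontier-model
```

The design choice that matters is the threshold: set it from measured quality, not intuition, by replaying a sample of real queries through both tiers and finding the point where the small model's answers stop being acceptable.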

Why This Matters More Than the Next Model Release

The market is already pricing this shift. Software stocks are getting crushed because investors see AI threatening software margins. But the companies that will thrive are not the ones with the biggest models. They are the ones that run AI most efficiently. The inference economy rewards operational excellence, not research breakthroughs.

What Actually Works

  1. Benchmark your inference costs per query type — If you do not know what each AI inference call costs your organization, you are flying blind. Start measuring cost per query by complexity tier.
  2. Implement model routing — Do not use your most expensive model for every task. Route simple queries to smaller models. The cost difference can be 10x or more.
  3. Evaluate inference-layer vendors separately from model vendors — Your model provider and your inference provider do not need to be the same company. Together AI, AWS, and others are creating competitive options at the infrastructure layer.
  4. Plan for inference costs in your AI budget — Training is a one-time cost. Inference is ongoing. If your AI budget is mostly training, you are budgeting for yesterday's problem.
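To make the first point concrete, here is a back-of-the-envelope cost model in Python. The per-token prices and traffic mix are invented for illustration, but the structure shows why a small share of frontier-model calls can dominate the bill:

```python
# Illustrative prices; substitute your vendor's actual per-token rates.
PRICE_PER_1K = {"small": 0.0002, "frontier": 0.0150}  # USD per 1K tokens

def cost_per_call(tier, prompt_tokens, completion_tokens):
    """Cost of one inference call, assuming a flat per-token price for the tier."""
    total_tokens = prompt_tokens + completion_tokens
    return PRICE_PER_1K[tier] * total_tokens / 1000

# A hypothetical month of traffic, bucketed by complexity tier:
# (tier, calls per month, avg prompt tokens, avg completion tokens)
traffic = [
    ("small",    900_000, 300, 150),
    ("frontier", 100_000, 1_200, 800),
]
for tier, calls, p, c in traffic:
    monthly = calls * cost_per_call(tier, p, c)
    print(f"{tier:8s} ${monthly:,.2f}/month")
```

In this invented mix, the frontier tier handles 10% of the calls but generates the overwhelming majority of the spend, which is exactly why measuring cost per query type, not total AI spend, is the number that changes decisions.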

The DJ who spends all night tuning the equipment and no time reading the crowd will play a technically perfect set to an empty dancefloor. The inference economy is AI's dancefloor. The models are ready. The question is whether you can run them in a way that keeps the crowd moving all night.

What's Coming

The AI M&A Wave Will Accelerate Through Q2

Three acquisitions in one week is a signal, not an anomaly. Zendesk's move on Forethought follows a pattern: established platforms buying AI-native startups to avoid being disrupted by them. Expect customer service, sales intelligence, and developer tools to see the highest M&A activity. If you are building on a standalone AI vendor, check whether they are likely to be acquired before your contract renewal.

AI Security Will Become a Standalone Budget Line

The CrowdStrike-Perplexity partnership is the first major move in what will become a new enterprise security category: AI-native threat protection. By Q3, expect at least three more major security vendors to announce AI-specific products. Your security budget should have a separate line item for AI security by year-end. If it does not, you are defending yesterday's attack surface.

Regulatory Divergence Will Force Multi-Stack AI Architectures

China is tightening AI regulation while Europe is streamlining its AI Act rules. The regulatory gap between jurisdictions is widening, not narrowing. Multinational organizations will increasingly need separate AI stacks for different markets, similar to how data residency requirements forced separate cloud deployments. The companies that build multi-jurisdiction AI governance now will have a structural advantage over those that retrofit compliance later.

For Your Team

Wednesday's meeting prompt: “Together AI just raised at $7.5 billion by betting that running AI models is more valuable than building them. If we looked at our own AI spending, how much are we investing in running and optimizing AI versus just buying and deploying it? And do we even know what each AI inference call costs us?”

The Inference Economics Audit:

  1. Measure cost per inference — Calculate what each AI query costs your organization, broken down by model and task type. Most teams cannot answer this question.
  2. Identify routing opportunities — Map which AI tasks need frontier models and which can run on smaller, faster, cheaper alternatives. The gap is usually larger than expected.
  3. Separate training from inference budgets — If your AI budget lumps everything together, you are hiding the operational costs that will compound over time.
  4. Benchmark against inference-layer vendors — Compare your current costs against Together AI, Cerebras, and other inference-focused providers. The market is competitive enough that you should have options.

Share-worthy stat: Blockchain's structural importance surged 1,696% this period, the biggest jump in our data, driven entirely by enterprise infrastructure adoption (cross-border settlement, federated learning, data provenance), not by crypto speculation. The technology everyone declared dead is quietly becoming the trust layer underneath AI.

Go deeper: Track AI infrastructure and inference economics in real-time

The Track of the Day

“The AI industry spent three years and hundreds of billions building the sound system. Together AI just raised $7.5 billion by promising to be the one who actually runs the show. Wall Street punished software stocks for not delivering AI revenue. Three acquisitions landed in one week. And a programmer in Australia used freely available AI tools to design a cancer vaccine for his dying dog. The biggest signal in 190,000 articles this week? Blockchain surged 1,696% in structural importance, not on crypto hype, but on enterprise plumbing. The technology everyone wrote off is becoming the trust layer underneath everything else.”
— Ins7ghts Knowledge Graph Analysis, March 2026

Today's set: “Running Up That Hill” by Kate Bush. Kate Bush wrote this in 1985 about wanting to swap places with someone to understand their perspective. The AI industry is going through its own swap: the value is moving from the model builders to the model runners, from the equipment manufacturers to the operators, from the hype merchants to the plumbers. Running up that hill used to be about building bigger models. Now it is about running them efficiently at the bottom of the hill, where the customers actually are. The inference economy rewards endurance, not sprints. Make sure your infrastructure can handle the long run.

Your DJ signing off. Know your inference costs, audit your vendor acquisition risk, and stop writing data quality rules that never fire. The sound system is built. The real question is whether you can keep the dancefloor moving all night.

Yves Mulkers, your data DJ, mixing 190,000 articles into the tracks that actually matter.

We scanned 190,000 articles this week so you don't have to. Data Pains → Business Gains.

Published: March 17, 2026 | Curated by Yves Mulkers @ Ins7ghts

1,300+ articles scanned. 7 stories selected. Our AI distills the noise into signal—in seconds. Get early access →

Know someone who'd find this useful? Share your unique referral link →

Want Your Own AI Intelligence Briefing?

Our platform analyzes 1,000+ sources daily and delivers personalized insights in seconds.

Join the Waitlist →

Founding members: Lifetime discount • Priority access • Shape the product
