Thu, Mar 19, 2026

‘Uncanny Valley’: Nvidia’s ‘Super Bowl of AI,’ Tesla Disappoints, and Meta’s VR Metaverse ‘Shutdown’


By Brian Barrett and Zoë Schiffer

Articles mentioned in this episode:
Nvidia Is Planning to Launch an Open-Source AI Agent Platform
The Tesla Influencers Leaving the ‘Cult’
Meta Is Shutting Down Horizon Worlds on Meta Quest

You can follow Brian Barrett on Bluesky at @brbarrett and Zoë Schiffer on Bluesky at @zoeschiffer. Write to us at uncannyvalley@wired.com.

How to Listen
You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link.


You can also download an app like Overcast or Pocket Casts and search for “uncanny valley.” We’re on Spotify too. Note: This is an automated transcript, which may contain errors.

Zoë Schiffer: Brian, hello. Very exciting to have another way to talk to you when I'm not pinging you on Slack every five seconds. Brian Barrett: It's great, because Slack doesn't have the voice part.

Zoë Schiffer: It doesn't. Brian Barrett: I will say: very sad that Leah won't be a part of that journey today. Zoë Schiffer: I know.

It is really sad, but when Leah's away, the mice will play, and we will be talking about topics that Leah hates, so just wait. Brian Barrett: And to be clear, she'll be back next week. She's just sick.

Zoë Schiffer: Yeah. Brian Barrett: It's allergy season. Zoë Schiffer: Welcome to WIRED's Uncanny Valley.

I'm Zoë Schiffer, WIRED's director of business and industry. Brian Barrett: I'm Brian Barrett, executive editor. Zoë Schiffer: This week on the show, we're diving into Nvidia's annual developer conference, why some Tesla influencers are fleeing the brand, and why Meta has finally shut down Horizon Worlds on Meta Quest.

So to start us off, this week, Nvidia had its annual developer conference in San Jose. This is the big event in the AI industry. Some people even call it the Super Bowl of AI.

Developers go, CEOs, researchers, WIRED reporters—and we're all waiting to hear what CEO Jensen Huang is going to tell us about the future of the company. Brian Barrett: One thing that's interesting about the Nvidia conference too, is I feel like so much of it is business facing. It's not a lot of stuff that you, as an AI consumer or someone who plays around with Claude, would necessarily connect with.

One thing, take it with a grain of salt because this is someone who stands to make that money, but Jensen did say the revenue opportunity for artificial intelligence chips just at Nvidia might reach at least a trillion dollars through 2027. Zoë Schiffer: Pocket change. Brian Barrett: Pocket change, I mean, really, for Nvidia at this point.

One thing that was really interesting: He introduced a new product. I always like when there's an actual product tied to this rather than the promise of a product. A while ago, Nvidia struck a licensing deal with a company called Groq, not to be confused with the occasionally— Zoë Schiffer: It's Groq with a “q.”

Brian Barrett: —Groq with a “q,” not Grok with a nonconsensual undressing problem. So they're going to pair Nvidia's chips, which are good at processing AI, with Groq's chips, which have components that can put a charge into how Nvidia's chips operate. So basically that $20 billion licensing agreement is bearing fruit.

It's going to make inference quicker, less expensive. It's going to make things more efficient basically for Nvidia customers. Zoë Schiffer: Right.

Yeah. I was talking to a bunch of people in the industry about this, this week, and one thing they pointed out, which might be totally obvious to AI researchers but was not at all obvious to me, is that we actually haven't had specific chips for AI yet. They've been using general Nvidia chips for training and inference this entire time.

And this is basically the first year where we are going to see specialized chips for artificial intelligence. Brian Barrett: Well, and I'm old enough to remember—I know we joke about my age, but it's not that long ago—Nvidia got to where it is because it made GPUs for gaming PCs. The GPUs happened to be good at the things AI needed.

So they kind of backed into this. So yeah, it is a big moment. But Zoë, we've said inference a couple of times so far.

Go ahead and define it for folks so that we know everyone's on the same page about what we are talking about. Zoë Schiffer: Yeah. OK.

So if you're an AI researcher, you're like, yawn, this is boring. But it's not obvious to people outside the industry, so it's worth saying really clearly. There is the pre-training process where you let a model loose on the corpus of the internet, and it gobbles up all the data, and it learns from it.

And then there's a process of you, as an AI consumer, asking a question to ChatGPT or Claude. The process of you pinging that question and getting an answer in return is what we think of when we talk about inference. And actually now, most of the investments that AI companies and big tech companies are making are being spent on inference, not pre-training.

Brian Barrett: Because they've already eaten up the entire internet. Zoë Schiffer: And inference is just really expensive. Serving all of those customers in real time is a really expensive process.

Brian Barrett: Just for an example of how Jensen Huang talked about inference, there was a slightly bizarre AI animated music video that was displayed at the end of his speech. Let's listen to that. Archival audio: Once upon an AI time / training was paradigm / short talking models how / but inference runs the whole world now / Vera showed us who's the boss / at 35 times less the cost / Blackwell makes the token sing / Nvidia, the inference king.

Zoë Schiffer: I sincerely hope that they used AI to make that and did not pay a marketing firm many millions of dollars. Brian Barrett: Yeah. The quality is about what I would expect from AI.

And just for folks, the references to Blackwell and Vera are references to various Nvidia products. Zoë Schiffer: We also should say that Jensen announced NemoClaw, which was this enterprise platform for AI agents, basically like a secure enterprise version of OpenClaw. Brian Barrett: It's fun to watch companies scramble.

So you've got NemoClaw now from Nvidia. You've got the creator of OpenClaw, which then—what is the latest name for it? Zoë Schiffer: Yeah, because it was Clawdbot, Moltbot, OpenClaw.

Brian Barrett: OpenClaw, great. So he's now working at OpenAI, and Meta has acquired Moltbook, right? Zoë Schiffer: Right.

Brian Barrett: —the social network for AI agents. So everyone's scrambling to cover this and be on top of this, but it feels almost like, so that they can say that they are. Zoë Schiffer: Yeah.
