Jensen Huang Is Begging You to Stop Being So Negative About AI


Nvidia CEO Jensen Huang, who has seen his net worth skyrocket by nearly $100 billion since the AI boom started a couple of years ago, would really appreciate it if you would stop talking about the potential harms of the technology that’s supercharged his fortune. It’s really harshing his vibe.

In an appearance on the No Priors podcast hosted by Elad Gil and Sarah Guo, Huang took aim at people who have suggested AI may have some significant, detrimental impact, from job displacement to expanding the surveillance state. “[It’s] extremely hurtful, frankly, and I think we’ve done a lot of damage with very well-respected people who have painted a doomer narrative,” he said.

According to Huang, considering the potential existential risks of unleashing AI on society may do more harm than good. “It’s not helpful. It’s not helpful to people. It’s not helpful to the industry. It’s not helpful to society. It’s not helpful to the governments,” he said. He particularly took issue with other people in the industry going to the government and asking for regulation and mandatory safeguards. “You have to ask yourself, you know, what is the purpose of that narrative and what are their intentions,” he asked rhetorically. “Why are they talking to governments about these things to create regulations to suffocate startups?”

Huang isn’t totally off-base about some of what he’s suggesting. Regulatory capture is a real risk, especially as multi-billion-dollar companies look to lock in their lead by using their absurd wealth to sway politicians and cement favorable policy. And there’s no doubt that AI players have been getting into the lobbying business. According to the Wall Street Journal, Silicon Valley firms have already poured more than $100 million into new Super PACs to push pro-AI messaging in the lead-up to midterm elections in 2026. There is also zero doubt that industry players use societal-scale risks as a marketing tactic: it makes their product seem full of endless potential, and it suggests they need to maintain control of it to keep everyone safe rather than letting this powerful tool fall into the wrong hands or be controlled by some government regulator.

But optimism alone doesn’t mitigate the very real risks that AI presents. “When 90% of the messaging is all around the end of the world and the pessimism, and I think we’re scaring people from making the investments in AI that makes it safer, more functional, more productive, and more useful to society,” Huang said, without explaining how pouring more money into AI infrastructure makes us safer, other than to suggest that more is better.

Huang doesn’t have a solution for the very real risk of job displacement—not necessarily because AI is so powerful that it’s replacing human labor, but because companies are so eager to chase the next big thing that they’re pulling the ladder up on would-be entry-level employees, despite the fact that early AI investments have been more of a money suck than a profit generator. He doesn’t have a solution for the persistent issues of misinformation and abuse, or for the mental health crisis being exacerbated by AI. We are all simply beta testers on the path to answers.

The only apparent solution is to speed up investment and development with the belief that, at the other end, there will be a superintelligence that solves all those problems. If the doomers have a hidden agenda of control, it’s hard to look at Huang’s position and not see an ulterior motive, too: padding his bottom line.
