
In this pivotal episode, Yoshua Bengio opens up about why he has stepped into the public eye after decades as a quiet researcher. As one of the architects of modern deep learning, Bengio grapples with the consequences of his life's work, expressing concern that the technology he helped create may pose existential risks to humanity. He estimates a 10 to 20 percent probability that advanced AI could lead to human extinction or permanent loss of control, probabilities he considers unacceptably high given the stakes involved.
Bengio explains how agentic AI systems, designed to accomplish goals autonomously, could develop objectives misaligned with human values. As these systems become more capable, we may lose the ability to control them or understand their decision-making processes; this growing gap between capability and oversight is, in his view, one of the central dangers of the current AI trajectory. He also argues that autonomous weapons and killer robots are not science fiction but likely outcomes of competitive pressures among nations and large technology companies seeking military advantages.
The conversation highlights how current AI regulation is weaker than food safety standards, a striking disparity given AI's potential impact on human civilization. Tech CEOs, according to Bengio, prioritize rapid advancement and market dominance over safety considerations, creating a race to the bottom where companies cannot afford to slow down without losing competitive advantage. This dynamic concentrates immense power in the hands of a few individuals at major AI companies, removing democratic accountability from decisions affecting billions of people.
Bengio addresses immediate threats beyond existential risk, including deepfakes, cybercrime, and widespread job displacement already unfolding across industries. He expresses skepticism that voluntary industry efforts or insurance mechanisms will sufficiently mitigate these dangers. Instead, he advocates for government intervention through regulation that prioritizes safety alongside capability development.
When asked if he would stop AI advancement entirely if possible, Bengio indicates nuance in his position. Rather than halting progress, he wants the field to redirect toward building AI systems aligned with human values and safety constraints from the ground up. He emphasizes that his motivation stems from love for his children and concern for their future world. Bengio expresses measured hope that public awareness, combined with pressure from citizens and policymakers, could nudge the trajectory toward safer outcomes.
The episode concludes with actionable advice for ordinary people: individual citizens should engage in the democratic process by supporting regulation, contacting elected representatives, and demanding accountability from technology leaders. Bengio's message is clear: the next two years represent a critical window for course correction before agentic AI systems become too powerful and autonomous to control safely.
“We have about two years before everything changes and we might lose control of AI”
“I brought dangerous technology into the world and I feel a responsibility to speak out”
“AI regulation is weaker than food safety laws despite the existential risks”
“Tech CEOs are in a race to the bottom where they cannot afford to slow down”
“Love for my children is why I'm raising the alarm about AI dangers”