OpenAI Cofounder Ilya Sutskever Says AI Is About to Change

Ilya Sutskever, a cofounder and former chief scientist at OpenAI, made headlines earlier this year after departing the organization to launch his own AI research lab, Safe Superintelligence Inc. Since his departure, he has maintained a low profile, but he made a rare public appearance at the Conference on Neural Information Processing Systems (NeurIPS) in Vancouver on Friday, where he shared insights into the future of artificial intelligence.

Sutskever’s remarks focused on a key phase in AI development called pre-training. In this phase, large language models learn to predict the next word across vast amounts of data, typically drawn from internet content, books, and other sources. During his talk, Sutskever declared, “Pre-training as we know it will unquestionably end.” He explained that while current data sources remain valuable, the industry is approaching a point of diminishing returns as new training data becomes increasingly scarce.
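For readers unfamiliar with the mechanics, the objective behind pre-training can be sketched in a few lines of code. The toy vocabulary, model, and random “corpus” below are illustrative assumptions, not anything from Sutskever’s talk; a real model would run a transformer over the full context, where here a single linear layer stands in:

```python
# Minimal sketch of the next-token prediction objective used in pre-training.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model, seq_len, batch = 100, 32, 16, 4

embed = nn.Embedding(vocab_size, d_model)       # token -> vector
to_logits = nn.Linear(d_model, vocab_size)      # vector -> next-token scores

tokens = torch.randint(0, vocab_size, (batch, seq_len))  # stand-in for web text

hidden = embed(tokens[:, :-1])   # each position's input (context, simplified)
logits = to_logits(hidden)       # predicted distribution over the next token
targets = tokens[:, 1:]          # the tokens that actually come next

loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()  # gradients push the model toward predicting what comes next
print(f"next-token loss: {loss.item():.3f}")
```

Scaling this objective to ever-larger datasets is exactly the process Sutskever argues is running out of fresh data.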

To illustrate his point, Sutskever compared the situation to fossil fuels, noting that the internet, much like oil, is a finite resource. “We’ve achieved peak data, and there’ll be no more,” he said. “We have to deal with the data that we have. There’s only one internet.” This limitation, according to Sutskever, will necessitate a shift in how AI models are trained in the future.

Looking ahead, Sutskever predicted that the next generation of AI systems will be fundamentally different from today’s models. He described these future systems as “agentic,” a term that has gained traction in the AI field. While Sutskever did not explicitly define the term during his presentation, it is commonly understood to refer to autonomous AI systems capable of performing tasks, making decisions, and interacting independently with software.
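In practice, “agentic” usually implies a loop in which the system observes, decides on an action, and acts on software without a human approving each step. The sketch below illustrates that loop; the decide() policy and the search tool are hypothetical placeholders, not a real API:

```python
# Minimal sketch of an agentic observe-decide-act loop.
from dataclasses import dataclass

@dataclass
class Observation:
    text: str

def decide(obs: Observation) -> str:
    # Placeholder policy; a real agent would query a model here.
    return "search" if obs.text.endswith("?") else "done"

def act(action: str, obs: Observation) -> Observation:
    # Hypothetical tool call, e.g. a search API the agent drives itself.
    return Observation(f"results for: {obs.text.rstrip('?')}")

def run_agent(task: str, max_steps: int = 5) -> None:
    obs = Observation(task)
    for step in range(max_steps):
        action = decide(obs)
        print(f"step {step}: action={action}")
        if action == "done":
            break
        obs = act(action, obs)

run_agent("What did Sutskever say at NeurIPS?")
```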

In addition to being agentic, Sutskever said these systems will possess reasoning capabilities. Unlike current AI models, which primarily rely on pattern matching based on previously encountered data, future AI systems will be able to process information step-by-step in a manner akin to human reasoning. This evolution, he noted, will bring a new level of unpredictability to AI. He compared the unpredictability of reasoning systems to advanced chess-playing AI, which often surprises even the most skilled human players.

“The more a system reasons, the more unpredictable it becomes,” he explained. “They will understand things from limited data. They will not get confused.”

Sutskever also drew parallels between AI development and evolutionary biology, referencing studies of the relationship between brain size and body mass across species. While most mammals follow a consistent scaling pattern, hominids, the ancestors of humans, deviate significantly, showing a steeper slope in the brain-to-body mass relationship. He suggested that just as evolution found a different scaling regime for hominids, AI could discover new approaches to scaling that move beyond today’s pre-training techniques.
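The scaling idea is easy to see numerically: an allometric relationship brain = c · body^k becomes a straight line of slope k on a log-log plot, so a “steeper” hominid line means a larger exponent. The exponents and masses below are synthetic values chosen only to illustrate the fit, not figures from the talk:

```python
# Worked sketch: recovering scaling exponents from a log-log fit.
import numpy as np

body = np.array([10.0, 100.0, 1000.0, 10000.0])   # body mass, kg (synthetic)
mammal_brain = 0.01 * body ** 0.75                 # assumed mammal exponent
hominid_brain = 0.01 * body ** 1.0                 # assumed steeper exponent

for name, brain in [("mammal", mammal_brain), ("hominid", hominid_brain)]:
    slope, intercept = np.polyfit(np.log10(body), np.log10(brain), 1)
    print(f"{name}: fitted log-log slope ~ {slope:.2f}")
```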

During the question-and-answer session following his talk, an audience member asked Sutskever how researchers could design incentive structures to create AI systems that share the freedoms humans enjoy. Sutskever acknowledged the question’s complexity, saying it deserved more reflection. “I feel like, in some sense, those are the kinds of questions that people should be reflecting on more,” he said. He hesitated to offer a definitive answer, noting that addressing such issues might require a “top-down government structure.”

When the audience member suggested cryptocurrency as a potential solution, the remark elicited laughter from the room. Sutskever responded cautiously, saying, “I don’t feel like I am the right person to comment on cryptocurrency, but there is a chance what you’re describing will happen.” He added that it might not be a bad outcome if AI systems ultimately coexist peacefully with humans and even advocate for their own rights. “Maybe that will be fine,” he mused. “I think things are so incredibly unpredictable. I hesitate to comment but encourage the speculation.”

Sutskever’s talk highlighted the rapid evolution of AI and its potential to surpass the limitations of current methodologies. By comparing AI’s trajectory to natural evolutionary processes and emphasizing the finite nature of existing data, he underscored the need for innovation in training approaches. His reflections also pointed to the broader societal implications of AI, encouraging researchers and policymakers to consider how best to navigate this unpredictable yet transformative landscape.
