It’s Time for Silicon Valley to Move Past Its Superhuman AI Obsession
A machine that surpasses human intelligence: the idea has captivated people for centuries, evoking both awe and dread, from the agents of “The Matrix” to the operating system in “Her.” For many in Silicon Valley, this fictional premise is about to become reality. The pursuit of artificial general intelligence, or A.G.I., and perhaps superintelligence beyond it, has become the central objective of America’s leading technology companies, which are pouring tens of billions of dollars into the race. Some experts warn that the arrival of A.G.I. could be disastrous; others argue that the breakthrough, possibly only years away, will set off an explosion in productivity. The nation and the company that get there first, the thinking goes, stand to gain enormous advantages.
The frenzy should give us pause. No one can say when, or whether, A.G.I. will arrive. We worry that Silicon Valley has become so captivated by the goal that it risks alienating the public and, more important, missing crucial opportunities to put the technology it already has to work. By fixating on that single finish line, the United States may also be ceding ground to China, which seems less preoccupied with building A.I. that outstrips human capabilities and more intent on making full use of the technology at hand.

Silicon Valley’s fascination with artificial general intelligence goes back decades. In 1950, the computing pioneer Alan Turing proposed the imitation game, a test of a machine’s intelligence based on whether it could fool human interrogators into believing it was human. The concept has evolved over the years, but the objective has stayed the same: to replicate the capabilities of the human brain. A.G.I. is simply the latest incarnation of that ambition.
In 1965, I.J. Good, a colleague of Mr. Turing’s, captured the allure of a machine that could rival the sophistication of the human brain. Intelligent machines, he observed, could improve themselves faster than humans could keep up: “The first ultraintelligent machine is the last invention that man need ever make.” The invention to end all inventions. In other words, achieving A.G.I. would be the greatest commercial opportunity the world has ever seen. It is hardly surprising that the world’s best minds are devoting themselves to the pursuit.
The prevailing approach is to build at all costs. Every tech giant is racing to reach A.G.I. before its rivals, constructing data centers that cost more than $100 billion and, in the case of companies such as Meta, luring A.I. researchers with signing bonuses that can exceed $100 million. The cost of training foundation models, the general-purpose systems that serve as a base for a wide range of tasks, keeps climbing. Elon Musk’s start-up xAI is reportedly spending $1 billion a month. Dario Amodei, the chief executive of Anthropic, has projected that training a leading model could cost $10 billion to $100 billion within the next two years.
It is true that A.I. now outperforms the average human at a range of cognitive tasks, from solving some of the world’s hardest math problems to writing code on par with a junior developer’s. Proponents point to this progress as evidence that A.G.I. is imminent. Yet for all the remarkable gains since the launch of ChatGPT in 2022, scientists have not identified a definitive route to intelligence that exceeds human levels. In a recent survey by the Association for the Advancement of Artificial Intelligence, a respected academic society that includes many of the field’s leading researchers, more than three-quarters of the 475 respondents said current approaches are unlikely to produce such a breakthrough. As models grow larger and ingest ever more data, there are signs that the curve of exponential improvement may be bending. Many experts argue that reaching the goal will require new computing architectures beyond today’s large language models. The trouble with our fixation on A.G.I. goes beyond the technology itself to the murky, often contradictory narratives that surround it. Predictions range from the dire to the rosy.
This year, the nonprofit AI Futures Project released a report, “A.I. 2027,” that forecasts superintelligent A.I. seizing control or even exterminating humanity by 2030. Around the same time, computer scientists at Princeton published a paper, “A.I. as Normal Technology,” arguing that A.I. will remain manageable for the foreseeable future, much like nuclear power. Curiously, Silicon Valley’s leading firms keep announcing shorter and shorter timelines for A.G.I., even as most people outside the Bay Area have never heard the term. A divide is widening between technologists, for whom A.G.I. has become a rallying cry of imminent breakthrough, and a general public that is skeptical of the hype and often experiences A.I. as a nuisance in daily life. With experts issuing dire warnings about the technology, it is little wonder that public enthusiasm has cooled.
Now consider China. Its scientists and policymakers appear far less transfixed by A.G.I. than their American counterparts. At the recent World Artificial Intelligence Conference in Shanghai, Premier Li Qiang of China emphasized “the deep integration of A.I. with the real economy” by expanding its application scenarios. While some Silicon Valley technologists sound alarms about A.I.’s dangers, Chinese companies are busy embedding the technology everywhere: in the superapp WeChat, in hospitals, in electric vehicles, even in household appliances. In rural villages, Chinese farmers hold competitions to use A.I. tools to improve their harvests. Alibaba’s Quark app recently became China’s most downloaded A.I. assistant, thanks in part to its medical diagnostic features. Last year, China launched its A.I.+ initiative, a plan to weave A.I. into sector after sector to raise productivity.
It is no surprise, then, that the Chinese public is optimistic about A.I. At the World A.I. Conference, families, grandparents and young children included, wandered the exhibits, marveling at demonstrations of A.I. applications and crowding around humanoid robots. In an Ipsos survey, more than 75 percent of Chinese adults said A.I. had significantly changed their daily lives over the past three to five years, the highest share of any country and twice the share of Americans who said the same. A recent poll found that just 32 percent of Americans say they trust A.I., compared with 72 percent of people in China.
Many of the promised benefits of A.G.I., in science, education, health care and beyond, are already within reach through the careful refinement and application of today’s most advanced models. Why, for instance, is there still no product that delivers essential, up-to-date knowledge to everyone, in their native language, through personalized and gamified instruction? Why are there no competitions among American farmers to use A.I. tools to improve their harvests? Where is the Cambrian explosion of imaginative, unexpected uses of A.I. to improve lives in the West?
The idea of an A.G.I. or superintelligence tipping point also runs counter to the history of technology, in which progress and diffusion have generally been gradual. Technologies often take decades to achieve widespread adoption. The modern internet, invented in 1983, did not begin to reshape business models until the early 2000s. For all of ChatGPT’s extraordinary user growth, a recent working paper from the National Bureau of Economic Research found that most people in the United States still use generative A.I. only infrequently. It is when a technology finally goes mainstream that it becomes truly game changing. Smartphones transformed global connectivity not because of the most advanced or stylish models but because affordable, good-enough devices spread widely, reaching villagers and street vendors alike. More people beyond Silicon Valley need to feel A.I.’s benefits in their daily lives. A.G.I. is not a finish line; it is a process, the humble, gradual and uneven diffusion of generations of less powerful A.I. through society.
Rather than constantly asking, “Are we there yet?” we should recognize that A.I. is already a powerful force for change. Applying and adapting the machine intelligence we have now can start a flywheel of growing public enthusiasm for A.I., and as the frontier advances, our uses of the technology should advance with it. While America’s leading tech firms race toward artificial general intelligence, China’s leadership is prioritizing the deployment of today’s technology across traditional and emerging industries alike, from manufacturing and agriculture to robotics and drones. An all-consuming focus on artificial general intelligence risks distracting us from A.I.’s everyday effects. We must pursue both.