An academic advised U.S. lawmakers to “keep calm and avoid overhyping China’s AI capabilities,” telling a Senate panel that Beijing’s authoritarian regime struggles to drive widespread adoption of technological innovations.
Analysts assessing technological leadership tend to be overly preoccupied with innovation capacity, or which nation will be the first to generate new-to-the-world breakthroughs, said George Washington University Assistant Professor Jeffrey Ding.
Equally important is diffusion capacity, or the state’s ability to spread and adopt innovations across the economy after that initial breakthrough, Ding said. And while China spends heavily on research and development and receives a lot of patents and publications, the nation’s adoption rate of information and communications technologies like cloud computing or industrial software lags far behind the United States, he told the Senate Select Committee on Intelligence Wednesday.
That puts America at a structural advantage it should capitalize on by embracing an “open system of innovation” when it comes to AI, he said – even if that results in some intellectual property slipping through to China. Adopting a “Fortress America” approach would stifle American progress for little commensurate gain, he argued (see: Experts Probe AI Risks Around Malicious Use, China Influence).
“China faces a diffusion deficit,” Ding said. “Its ability to diffuse innovations like AI across the entire economy lags far behind its ability to pioneer initial innovations or make fundamental breakthroughs.”
Ding testified alongside Meta Platforms VP and Chief AI Scientist Yann LeCun and Marine Corps University School of Advanced Warfighting Professor Benjamin Jensen at a hearing on the national security implications of artificial intelligence.
Ding said China’s ability to produce AI engineers also trails America’s, since China has just 29 universities that meet a baseline quality metric for AI engineering while the U.S. has 159. And while China has built its own counterparts to large language models like OpenAI’s ChatGPT, as well as to its text-to-image model DALL-E, Ding said the Chinese models don’t perform nearly as well as their American equivalents.
In fact, Ding said ChatGPT performs better than its Chinese counterparts on benchmarks and leaderboards even when given prompts in Chinese.
“Some of China’s bottlenecks relate to a reliance on Western companies to open up new paradigms, China’s censorship regime, and computing power bottlenecks,” Ding said. “I submitted three specific policy recommendations to the committee, but I want to emphasize one, which is, ‘Keep calm and avoid overhyping China’s AI capabilities.'”
Policymakers also erroneously assume that anything benefiting China’s AI sector must hurt the U.S., even though giants in China’s AI industry like ByteDance, Alibaba and Baidu generate profits that flow back into the U.S. economy and can be reinvested in American productivity, according to Ding.
“It’s a more difficult question than just, ‘Any investment in China’s AI sector means it’s harmful to U.S. national security,'” Ding said. “Continuing to maintain the openness of these global innovation networks is always going to favor the U.S. in the long-run in terms of our ability to run faster.”
Testing ‘the Very Foundation of Our Republic’
Despite all this, LeCun said Meta opted not to operate in China since the regime’s regulations around user privacy are incompatible with the social media giant’s privacy principles. In addition, LeCun said the Chinese government wants to control what information circulates. And from a privacy standpoint, LeCun said large language models are trained on publicly available data rather than private user data.
“Privacy and security and safety are on top of our list in terms of priorities,” LeCun said. “They’re very good principles to follow. We try to follow them as much as we can.”
Jensen took a much more pessimistic view of artificial intelligence when it comes to using it for combat or intelligence purposes. Analysts haven’t been trained in basic data science or statistics, he said, meaning they can’t balance causal inference against decades of thinking about human judgment and intuition.
An analyst would therefore be unable to explain why they disagree with a generative AI model that claims a new strain of adversary malware is targeting U.S. critical infrastructure, according to Jensen. Analysts and military planners alike struggle to understand prediction, inference and judgment for algorithms, leading to crisis decisions based on emotion, flawed analogies and bias rather than rational interests.
Hackers will for the foreseeable future be able to launch a constant barrage of cyber operations and misinformation campaigns at machine speed, putting both America’s military and civilian population at risk, Jensen said. Members of the military and elected officials alike need to do more tabletop exercises and experiments around using AI in wartime scenarios to be adequately prepared, he advised.