China and U.S. can cooperate to minimize AI dangers

Source: Xinhuanet

Editor: huaxia

2024-07-18 09:22:29

AI poses growing threats, but China and U.S. can work together, says Harvard professor

By Tom Pauken II

BEIJING, July 18 (Xinhuanet) -- As artificial intelligence (AI) continues to develop, new technologies will emerge that benefit society, but AI could also pose grave threats to our world if it is exploited by criminal-minded people. In the eyes of the scientific community, AI can become either a saviour or a villain.

It is essential that scientists address such concerns so that AI development can be embraced through innovative technologies and systems that deliver win-win results for all. There is also a diplomatic solution at hand.

"China and the U.S. can cooperate together on a wide range of fields in AI," said Leslie G.Valiant, Professor of Computer Science and Applied Mathematics, School of Engineering and Applied Sciences, Harvard University, during an exclusive interview with Xinhuanet, after a press conference at the International Congress of Basic Sciences in Beijing on Sunday, July 14.

To tackle the threats of AI, Valiant suggests that the public should not fall prey to hysteria. "AI has been over-hyped and given a mystical quality," he said, partly because "AI was badly introduced to the public" in its early stages.

Professor Valiant expressed concern that the public perceives AI as turning into a "mythical monster." He believes such worries are unfounded. Nevertheless, he does recognize that AI could cause harm to society.

"Most of the dangers are things, which we are familiar already, because all that AI is doing is what humans have been doing in the past," said Professor Valiant. But, "if you are worried about fraud, we have to worry about new kinds of fraud" with AI development.

 "Because of AI and privacy issues you are dealing with information on a bigger scale," he added. Yet these types of crimes have already existed before AI had come into existence.

He noted that people with criminal mindsets could exploit AI for selfish motives, but governments, scientists and society should come together to seek solutions.

Although the problems could become harder to deal with, this might also create opportunities for collaboration between Chinese and American scientists.

The public should not be alarmed by hysterical claims. Nonetheless, "more education is needed to prevent AI disasters from occurring," Valiant said.

Machine learning has some weak links. Bad actors are generating fake photos and fake voices to produce fake videos, and spreading them on social media. Such fake videos could cause public panic and have a deep impact on society.

But by supporting more "social awareness" education programs, people could learn how to distinguish real videos from fake ones. There is a scientific approach to responding to the challenges posed by AI.

Scientists can also build more "trustworthy" machine learning tools, so that AI integration can move forward within a more carefully limited framework.

Much of today's AI relies on large language models trained through machine learning, and protocols and restrictions have already been established to prevent "AI brains" from turning malevolent.

The public can feel more confident about AI development; it is not as dangerous as many fear. Nonetheless, it is understandable that many people are afraid of AI.

Science-fiction novels have become widely popular, and such books often depict fantasy tales about AI, including AI robots cast as villains. It is human nature for the public to be swept up by emotion.

Professor Valiant is therefore right to suggest that the public learn more about AI, and to conclude that AI's "mystical quality" is just science fiction. A better-educated public can discover AI's wide-ranging benefits.

Meanwhile, China and the U.S. can work together to make AI development less dangerous and more constructive for the world at large.