SACRAMENTO, the United States, Oct. 7 (Xinhua) -- More industry experts and insiders have weighed in on the regulation of artificial intelligence (AI) safety as a recently vetoed bill in the U.S. state of California sparked a nationwide debate over how to effectively govern the rapidly evolving technology.
California Senate Bill 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was vetoed by Governor Gavin Newsom last week over concerns that it would curtail innovation and jeopardize the state's leadership in AI development.
While AI safety advocates have expressed strong disappointment at the veto, the tech industry has largely welcomed the governor's decision, calling it "pro-innovation."
In the wake of this controversial move, two veterans of the AI industry shared their views with Xinhua, arguing that the technology is still relatively new and requires a nuanced approach to mitigate potential risks.
"The California law is a well-intentioned effort to control the dangers of AI, but regulating a technology that hasn't matured is a very difficult guessing game. It would certainly not be pro-business in that sense," said Kai-Fu Lee, chairman and CEO of Sinovation Ventures.
Lee said that the most effective approach to ensuring AI safety might come through technology itself, as evidenced by the history of technological advancement.
"When electricity was brought to the home, electrocution was a huge problem, and then circuit breakers were invented to prevent that danger," said Lee. "When PCs (personal computer) were connected to the internet, viruses began spreading everywhere. We didn't regulate the PC connecting to the internet, but rather technologies called antivirus software ended up solving the problem."
"So I think for AI, those guardrails will mostly be technological, not regulatory," he added.
However, he did not dismiss the importance of regulation entirely. Instead, he proposed that regulation should extend existing laws, adapting them to cover offenses committed by AI or by people using AI.
"If people feel deepfakes are a big problem, then the punishment for deepfakes should be similar to the punishment for someone impersonating another person to cheat others," Lee explained. "If people feel fraud is a big issue, then GenAI-related fraud should be punished as severely as, or more severely than, non-GenAI fraud."
One of the controversies surrounding the bill is that it would have placed liability on developers for severe harms caused by their models. Designed to prevent "catastrophic" harms by AI, it would have applied to all large-scale models costing at least 100 million U.S. dollars to train, regardless of the risk they actually posed.
The legislation would also have required AI developers, before training began, to publicly disclose their methods for testing the likelihood of critical harm and the conditions under which they would shut down their models.
Yangqing Jia, founder and CEO of Palo Alto-based Lepton AI, suggested that the bill, if signed into law, could have harmed the open-source community.
"My experience in AI has taught me that it's always good to have a wider audience having access to the technology that's upcoming. Before we know what it can do and what it cannot do, we want to have more people looking into it," said Jia, a former AI expert and cloud computing head at Alibaba.
He credited the open-source community with raising societal awareness of AI capabilities, citing open-source frameworks such as PyTorch and TensorFlow and open-source models such as Llama.
Calling himself "a big proponent of open source," Jia said that fostering awareness and adoption of AI is currently more important than implementing regulations based on fear of the unknown.
"So I'm glad that Governor Gavin Newsom vetoed the bill, giving us a more free and more open arena to discuss the many aspects of AI," he said. ■