URUMQI, Oct. 15 (Xinhua) -- As artificial intelligence (AI) permeates all aspects of daily life, the world is experiencing a crisis of truth as it is increasingly difficult for humans to distinguish between real and fabricated media content.
While AI-generated content expands creative boundaries and enhances communication efficiency, it also accelerates the spread of misinformation, increases the risk of intellectual property infringement, and poses new challenges to the international communication ecosystem.
The application of generative AI and large-model technologies has opened a Pandora's box of AI-driven deception. Under the onslaught of multimodal AI "deepfakes" -- spanning audio, video and images -- people can easily lose their way in a fog of misinformation.
"It's important for us to understand how artificial intelligence is being applied, particularly as these new technologies often contribute to this misinformation and hate speech that impact the lives of millions of people around the planet," said Pierre Krahenbuhl, director-general of the International Committee of the Red Cross (ICRC).
The misuse of AI tools has significantly increased the difficulty of managing misinformation, eroding trust in media worldwide. According to the Digital News Report 2023 published by the Reuters Institute, only 40 percent of respondents expressed trust in media reports influenced by technologies such as deepfakes.
"Journalists and media organizations have the power and the responsibility to fight disinformation to uncover the truth and restore the public's trust," said Najum Iqbal, head of communications with the regional delegation for East Asia of the ICRC.
Faced with the large-scale, diverse and viral nature of misinformation production and dissemination in the digital age, many global media outlets and international organizations have begun exploring ways to jointly build a sound public discourse environment and uphold the fundamental principle of truthfulness.
The think tank affiliated with Xinhua News Agency released the "Responsibility and Mission of News Media in AI Era" report on Monday during the sixth World Media Summit in Urumqi, the capital of northwest China's Xinjiang Uygur Autonomous Region.
According to the report, 85.6 percent of respondents favor strengthening regulation and governance to address the potential adverse effects of generative AI in the media sector, with a strong preference for "industry self-discipline," "enactment of national laws" and "internal regulation within media organizations."
"As we gather at this summit today, we have the opportunity to strengthen the credibility of information as we confront disinformation, misinformation, and hate speech," said Siddharth Chatterjee, UN development system resident coordinator in China.
He mentioned that the UN issued the Global Principles for Information Integrity this year, urging governments, tech companies, advertisers, PR firms and media to collaborate in building a more ethical information ecosystem.
"Unity and resolve are more critical than ever in advancing toward our common goals," said Chatterjee. ■