JERUSALEM, April 14 (Xinhua) -- Israeli researchers have found that modern artificial intelligence (AI) systems can judge people and form a type of "trust," though in ways that differ from humans, the Hebrew University of Jerusalem said in a statement on Tuesday.
In a new study published in Proceedings of the Royal Society A, the team examined how AI evaluates individuals in contexts such as lending, hiring, and making recommendations.
The researchers analyzed over 43,000 simulated decisions and compared them with responses from about 1,000 human participants.
They found that both humans and AI tend to favor people who appear competent, honest, and well-intentioned, suggesting AI captures some basic elements of "human trust."
However, key differences emerged. Humans usually form overall impressions from a mix of traits, while AI systems break judgments into separate factors, such as competence or integrity, and score each factor more rigidly, the researchers said.
The study also found that AI can show consistent bias based on factors such as age, gender, or religion, sometimes stronger than human bias, even when all other details are identical.
The researchers said that the findings highlight the need to better understand how AI makes decisions, as these systems increasingly influence hiring, finance, and healthcare. ■