Cultural biases in large language models (LLMs) surface easily in everyday use, according to the Singapore AI Safety Red Teaming Challenge held in late 2024: 86.1% of incidents arose from a single prompt. Gender bias was the most prevalent, followed by race/religion/ethnicity, geographical/national identity, and socio-economic biases, and regional languages showed higher rates of bias manifestation than English. The research highlighted the need for improved AI safety measures, particularly in non-English contexts, and underscored the importance of human oversight in AI-assisted creative processes for marketers and advertisers targeting diverse Asian markets.