Cultural biases in large language models (LLMs) have been found to surface easily in everyday use, with 86.1% of incidents arising from a single prompt, according to the Singapore AI Safety Red Teaming Challenge conducted in late 2024. Gender bias was the most prevalent, followed by biases around race/religion/ethnicity, geographical/national identity, and socio-economic status, and regional languages showed higher rates of bias manifestation than English. The research highlighted the need for improved AI safety measures, particularly in non-English contexts, and underscored the importance of human oversight in AI-assisted creative processes for marketers and advertisers targeting diverse Asian markets.