ABC News reports that researchers have found OpenAI's Whisper, used by Nabla for medical transcription, sometimes fabricates text, hallucinating entire sentences during silences in recordings. Researchers from Cornell University and other institutions found that these hallucinations can include violent sentiments, nonsensical phrases, and invented medical conditions. OpenAI acknowledges the issue and says it is working to reduce hallucinations; its usage policies already prohibit use of the model in high-stakes decision-making contexts.
TikTok is making it easier to control what is (and isn't) in your 'For You' feed
TikTok is enhancing user control over its "For You" algorithm by introducing a "manage topics" setting, which lets users use sliders to adjust how often topics such as nature and fashion are recommended. These settings won't eliminate any content category entirely, but they can shift recommendation frequency as a user's interests change. TikTok is also expanding its keyword filtering with AI-enhanced features that let users block specific keywords along with related terms, ultimately allowing for...