ABC News reports that OpenAI's Whisper, used by Nabla for medical transcription, sometimes fabricates content in transcriptions, hallucinating entire sentences during silences in recordings. Researchers from Cornell University and other institutions found that these hallucinations can include violent sentiments, nonsensical phrases, and invented medical conditions. OpenAI has acknowledged the issue and says it is working to reduce hallucinations; its usage policies already prohibit the model's use in high-stakes decision-making contexts.
Full Article
Tesla is trying to stop certain self-driving crash data from becoming public
Tesla has asked a judge to block the National Highway Traffic Safety Administration (NHTSA) from disclosing confidential crash data related to its self-driving features, arguing that public release could allow competitors to analyze the effectiveness of its technology. The Washington Post, which has sued the NHTSA for more detailed crash information, contends that while some data is published, critical details remain withheld. Tesla asserts that certain crash-related information, including driver behavior and road conditions, should...
Read more