Anthropic, a San Francisco-based start-up, has developed “constitutional classifiers” to protect against harmful content generated by AI models like its Claude chatbot, as tech giants such as Microsoft and Meta work to address the risks posed by AI technology. The new system acts as a safeguard layer on top of language models, monitoring inputs and outputs for dangerous information. Anthropic offered bug bounties to testers who attempted to bypass the safeguards; with the classifiers in place, the system rejected more than 95% of those attempts.
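To make the "safeguard layer" idea concrete, here is a minimal, illustrative sketch of the general pattern of screening both a prompt and a model's response with separate classifiers. It is not Anthropic's actual implementation; the function names, placeholder scorers, and the threshold `BLOCK_THRESHOLD` are all assumptions for illustration.

```python
# Illustrative sketch of an input/output classifier wrapper around a language model.
# Not Anthropic's implementation: call_model, input_classifier, output_classifier,
# and BLOCK_THRESHOLD are hypothetical placeholders.

BLOCK_THRESHOLD = 0.5  # assumed cutoff for flagging content as harmful


def input_classifier(prompt: str) -> float:
    """Hypothetical scorer: probability that the prompt seeks harmful content."""
    return 0.0  # placeholder


def output_classifier(response: str) -> float:
    """Hypothetical scorer: probability that the response contains harmful content."""
    return 0.0  # placeholder


def call_model(prompt: str) -> str:
    """Placeholder for the underlying language-model call."""
    return "model response"


def guarded_generate(prompt: str) -> str:
    # Screen the incoming prompt before it reaches the model.
    if input_classifier(prompt) > BLOCK_THRESHOLD:
        return "Request declined by input safeguard."
    response = call_model(prompt)
    # Screen the model's output before it is returned to the user.
    if output_classifier(response) > BLOCK_THRESHOLD:
        return "Response withheld by output safeguard."
    return response
```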
10 Reasons to Buy and Hold This Artificial Intelligence (AI) Stock Forever
AI is currently a hot topic among investors, with companies like Nvidia and Palantir Technologies posting remarkable gains of 816% and 1,600%, respectively, over the past three years. Although Amazon (NASDAQ: AMZN) has only doubled in that time, it remains a strong investment opportunity due to its diverse growth potential beyond AI, with ten compelling reasons to buy and hold its stock long-term.