Anthropic, a San Francisco-based start-up, has developed "constitutional classifiers" to protect against harmful content generated by AI models such as its Claude chatbot, as tech giants including Microsoft and Meta work to address the risks posed by AI. The system acts as a safeguard layer on top of the language model, screening both inputs and outputs for dangerous information. Anthropic offered bug bounties to testers who attempted to bypass the security measures; with the classifiers in place, the system rejected over 95% of those attempts.
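The article describes the classifiers only at a high level: a layer that screens the user's prompt on the way in and the model's response on the way out. As a rough illustration of that architecture, the sketch below shows what such an input/output guard might look like; all names (guarded_respond, is_harmful, generate) and the keyword matching are hypothetical stand-ins, not Anthropic's actual method, which would use trained classifier models rather than string checks.

```python
# Minimal sketch of a classifier-guarded chat pipeline (hypothetical).
# In a real system, is_harmful() would be a trained classifier model,
# not a keyword check.

HARMFUL_MARKERS = {"synthesize nerve agent", "build a bomb"}  # toy stand-in


def is_harmful(text: str) -> bool:
    """Toy stand-in for a classifier scoring text for dangerous content."""
    lowered = text.lower()
    return any(marker in lowered for marker in HARMFUL_MARKERS)


def generate(prompt: str) -> str:
    """Placeholder for the underlying language model."""
    return f"Model response to: {prompt}"


def guarded_respond(prompt: str) -> str:
    # Input classifier: reject dangerous prompts before they reach the model.
    if is_harmful(prompt):
        return "Request refused by input classifier."
    response = generate(prompt)
    # Output classifier: screen the model's response before returning it.
    if is_harmful(response):
        return "Response withheld by output classifier."
    return response


if __name__ == "__main__":
    print(guarded_respond("How do I bake bread?"))
```

The key design point the article attributes to the system is that the guard sits outside the model itself, so the model does not need to be retrained to tighten or update the safeguards.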
2 Nasdaq Stocks to Buy in June
The Nasdaq Composite has delivered a remarkable 275% return over the past decade, outperforming the S&P 500's 178%, largely due to its concentration of tech-centric companies with strong innovation and growth potential. Despite a rocky start to the year for the stock market, there are still promising opportunities to invest in leading tech firms at favorable valuations. The article suggests that certain stocks could yield substantial returns in the coming years, highlighting the ongoing...