It seems that instead of updating Grok to prevent it from outputting sexualized images of minors, X is planning to purge users ...
LAION, the German research org that created the data used to train Stable Diffusion, among other generative AI models, has released a new dataset that it claims has been “thoroughly cleaned of known ...
For years, hashing technology has made it possible for platforms to automatically detect known child sexual abuse material (CSAM) to stop kids from being retraumatized online. However, rapidly ...
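As a rough illustration of how this kind of hash matching works, here is a minimal Python sketch. The empty KNOWN_HASHES list and the use of SHA-256 are assumptions made purely for illustration; real platforms rely on hash lists supplied by clearinghouses and on perceptual hashes such as PhotoDNA or PDQ so that re-encoded or lightly edited copies still match.

```python
import hashlib
from pathlib import Path

# Hypothetical hash list for illustration only. In deployed systems the list
# comes from a hash-sharing program, and perceptual hashes (PhotoDNA, PDQ)
# are used instead of exact cryptographic hashes.
KNOWN_HASHES: set[str] = set()


def file_hash(path: Path) -> str:
    """Return a SHA-256 digest of the file's bytes (a stand-in for a perceptual hash)."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_match(path: Path) -> bool:
    """Flag an uploaded file if its hash appears in the known-hash list."""
    return file_hash(path) in KNOWN_HASHES
```

The snippet only shows the lookup pattern: an uploaded file is hashed and compared against a pre-existing list, which is why this approach can catch only already-known material rather than newly generated images.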
Two major developments reignited regulatory and technological discourse around child sexual abuse material (CSAM) this year. The first: Visa and MasterCard cracking down on adult sites that contained ...
Researchers have found child sexual abuse material in LAION-5B, an open-source artificial intelligence training dataset used to build image generation models. The discovery was made by the Stanford ...
If the question is how to control harmful content, Davey Winder considers whether machine learning is the answer. The proposed introduction of new safety features from Apple to address the problem of ...
Thousands of CSAM (child sexual abuse material) victims are now taking the fight to Apple after the company ultimately decided against adding tools that would help detect such material on their ...
A user on Reddit says they have discovered a version of Apple's NeuralHash algorithm, used for CSAM detection, in iOS 14.3; Apple says that the extracted version is not current and won't be ...
Top AI companies, including Meta, Microsoft, Amazon, OpenAI, and others, have officially signed a pledge committing to child safety principles, as announced by the nonprofit organization Thorn. The new ...