Google develops AI prototype to spot misinformation and abusive content online
By siliconindia | Friday, 27 October 2023, 09:51 IST
Google said it has developed a prototype that leverages recent advances in Large Language Models, or LLMs, to help identify abusive content at scale. LLMs are a type of artificial intelligence that can generate and understand human language. "Using LLMs, our aim is to be able to rapidly build and train a model in a matter of days instead of weeks or months to find specific kinds of abuse on our products," said Amanda Storey, senior director of trust and safety.
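Google has not published how its prototype works or which model it uses, but the general technique the article describes, prompting an LLM to label content against a policy, can be sketched roughly as below. Everything here is hypothetical: the prompt wording, the `call_llm` function, and its keyword-based stub (used only so the sketch runs without a real model API) are illustrative assumptions, not Google's implementation.

```python
# Hypothetical sketch of prompt-based abuse classification with an LLM.
# The prompt, function names, and labels are illustrative assumptions.

ABUSE_POLICY_PROMPT = """\
You are a content-policy classifier. Label the text below as one of:
ABUSIVE, MISINFORMATION, or OK. Reply with the label only.

Text: {text}
Label:"""

def call_llm(prompt: str) -> str:
    # Stub standing in for a real hosted-model API call. A trivial
    # keyword heuristic fakes the model's reply so the sketch is
    # runnable end to end; a real system would send `prompt` to an LLM.
    text = prompt.rsplit("Text:", 1)[-1].lower()
    if "scam" in text or "hate" in text:
        return "ABUSIVE"
    return "OK"

def classify(text: str) -> str:
    # Fill the policy prompt with the candidate text and return the label.
    return call_llm(ABUSE_POLICY_PROMPT.format(text=text))

print(classify("Click here for a free crypto scam"))  # ABUSIVE
print(classify("The weather is nice today"))          # OK
```

The appeal of this pattern, as Storey's quote suggests, is that targeting a new kind of abuse mostly means rewriting the prompt rather than collecting labeled data and retraining a bespoke model.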
Google is still testing these new techniques, but the prototypes have demonstrated impressive results so far. "It shows promise for a major advance in our effort to proactively protect our users, especially from new and emerging risks," the company said. It did not, however, specify which of its many LLMs it is using to track misinformation.
"We're constantly evolving the tools, policies, and techniques we're using to find content abuse. AI is showing tremendous promise for scaling abuse detection across our platforms," said Google. The company said it is taking several steps to reduce the threat of misinformation and to promote trustworthy information in generative AI products.
The company has also categorically told developers that all apps, including AI content generators, must comply with its existing developer policies, which prohibit the generation of restricted content like child sexual abuse material (CSAM) and content that enables "deceptive behavior". To help users find high-quality information about what they see online, Google has also rolled out the "About this image" fact-check tool to English language users globally in Search.