Google develops AI prototype to spot misinformation, abusive content online

Updated On – 10:10 PM, Thu – 26 October 23

New Delhi: Google on Thursday said it has developed a prototype that leverages recent advances in Large Language Models, or LLMs, to assist in identifying abusive content at scale.

LLMs are a type of artificial intelligence that can generate and understand human language.

“Using LLMs, our aim is to be able to rapidly build and train a model in a matter of days, instead of weeks or months, to find specific kinds of abuse on our products,” said Amanda Storey, senior director, trust and safety.
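Neither Google nor Storey describes how the prototype works internally, but the general pattern the quote gestures at (prompting an LLM to label a piece of content against a policy, then flagging anything that trips a label) can be sketched roughly as follows. This is a minimal, hypothetical Python illustration, not Google's system: the POLICY_LABELS list, the classify_content helper and the stubbed call_llm function are all assumptions made for the example.

# Hypothetical sketch of LLM-assisted abuse detection; not Google's actual system.
# The LLM call is stubbed so the example runs offline.

from dataclasses import dataclass

POLICY_LABELS = ["ok", "misinformation", "harassment"]

PROMPT_TEMPLATE = (
    "You are a content-policy classifier. Answer with exactly one label from: "
    "{labels}.\n\nText:\n{text}\n\nLabel:"
)

@dataclass
class Verdict:
    label: str
    flagged: bool

def call_llm(prompt: str) -> str:
    # Stand-in for a real model endpoint; a trivial keyword heuristic keeps the
    # sketch self-contained and runnable.
    if "miracle cure" in prompt.lower():
        return "misinformation"
    return "ok"

def classify_content(text: str) -> Verdict:
    # Adjusting the policy here means editing the prompt and labels rather than
    # retraining a bespoke classifier, which is roughly the days-not-months
    # speed-up the quote describes.
    prompt = PROMPT_TEMPLATE.format(labels=", ".join(POLICY_LABELS), text=text)
    label = call_llm(prompt).strip().lower()
    if label not in POLICY_LABELS:
        label = "ok"  # treat unparseable model output as not flagged
    return Verdict(label=label, flagged=label != "ok")

if __name__ == "__main__":
    posts = [
        "This miracle cure reverses ageing overnight!",
        "Lovely sunset at the beach today.",
    ]
    for post in posts:
        print(classify_content(post), "<-", post)

In a real pipeline the stub would be replaced by a call to an actual model, and the resulting labels would feed whatever downstream review or enforcement process applies.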

Google is still testing these new techniques, but the prototypes have demonstrated impressive results so far.

“It shows promise for a major advance in our effort to proactively protect our users, especially from new and emerging risks,” Storey added.

The company, however, did not specify which of its many LLMs it is using to track misinformation.

“We’re constantly evolving the tools, policies and techniques we’re using to find content abuse. AI is showing tremendous promise for scaling abuse detection across our platforms,” said Google.

Google said it is taking several steps to reduce the threat of misinformation and to promote trustworthy information in generative AI products. The company has also categorically told developers that all apps, including AI content generators, must comply with its existing developer policies, which prohibit the generation of restricted content like child sexual abuse material (CSAM) and content that enables “deceptive behaviour”.

To help users find high-quality information about what they see online, Google has also rolled out the “About this image” fact-check tool to English language users globally in Search.
