Google is ramping up its efforts to combat the spread of explicit deepfake content on its search engine. Recent updates to the platform’s algorithms have significantly reduced the visibility of these harmful and non-consensual images. Although Google has long provided tools for individuals to request the removal of explicit content, critics have argued that more proactive measures are needed.

Emma Higham, a Google product manager, stated that the new adjustments to the company’s search rankings, rolled out this year, have already reduced exposure to fake explicit images by over 70 percent in searches for specific individuals. “With these changes, people can read about the impact deepfakes are having on society, rather than see pages with actual non-consensual fake images,” Higham wrote in a company blog post on Wednesday.

In response to the issue, Google has implemented several new strategies. These include more aggressively removing duplicate deepfakes, filtering explicit images out of related search queries, and demoting websites that receive a high volume of takedown requests. The goal is to keep these harmful images out of search results, even when users are not actively seeking them.

However, challenges remain. While Google has made progress, critics argue that the company could do more to protect users from this pervasive issue. The effectiveness of the new measures has yet to be fully assessed, and questions about Google’s commitment to addressing the problem persist.

Despite these limitations, Google’s latest efforts represent a positive step forward. As AI technology advances, it is crucial for platforms like Google to take proactive measures to protect individuals from the damaging effects of deepfakes.
