AI news roundup: Facebook AI image detection, Google joins C2PA, Biden deepfake & the FCC, ChatGPT memory upgrade, and AI-assisted cyberattacks.

Meta announced that they are working on identifying and labeling AI-generated images that appear on Facebook, Instagram, and Threads. For images created with Meta AI, they are baking in invisible watermarks, as well as visible markers and metadata. For AI-generated images that come from outside of Meta, they say they are following best practices and standards from organizations like the Partnership on AI (PAI) and the Coalition for Content Provenance and Authenticity (C2PA), which is related to the Adobe-led Content Authenticity Initiative (CAI), of which we (the Human Creator Alliance) are a member.
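For readers curious what those embedded content credentials actually look like on disk: C2PA manifests are packaged as JUMBF boxes, which in JPEG files ride inside APP11 marker segments. The sketch below is a minimal heuristic detector under those assumptions; the has_c2pa_manifest helper is our own illustrative name, not part of any official SDK, and a simple substring check is no substitute for the real parsing and signature validation that C2PA tooling performs.

```python
# Heuristic check for an embedded C2PA manifest in a JPEG file.
# Illustrative sketch only, not a full JUMBF parser: C2PA manifests
# are carried in APP11 (0xFFEB) segments, so we walk the JPEG marker
# segments and look for the "c2pa" label inside APP11 payloads.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":          # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:              # lost segment sync; give up
            break
        marker = data[i + 1]
        if marker == 0x01 or 0xD0 <= marker <= 0xD7:
            i += 2                       # standalone markers carry no length
            continue
        if marker == 0xDA:               # Start of Scan: no more metadata
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        payload = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 + C2PA label
            return True
        i += 2 + length                  # jump to the next marker segment
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```

The takeaway is that a content credential is ordinary, inspectable metadata attached to the file; production workflows should use the C2PA's open-source tooling, which also verifies the cryptographic signatures behind the manifest.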

We think this is a very encouraging move from Meta. As we work to develop a common standard for easily identifying AI-generated content, it is great to see social media giants participate in this initiative.

Speaking of the C2PA, in related news, Google has become the newest member of the coalition. We believe this is hugely positive news for transparency in media, fighting misinformation, restoring trust online, and the continued development and adoption of content credentials.

In other news, the FCC has outlawed the use of AI-faked voices in robocalls. The ruling came in the wake of recent reports that some New Hampshire residents had received fake phone calls from an AI-generated voice of President Joe Biden, advising them to skip the state’s January 23 primary. Under the ruling, consumers who receive more than one illegal robocall can sue to recover up to $1,500 per incident. This is an important victory in the fight against deepfakes, which have been dominating headlines lately. Look no further than the widely reported debacle on platform X (formerly Twitter) surrounding deepfake images of Taylor Swift.

These two examples highlight the need for both regulatory action and the continued adoption and development of industry standards for AI content detection, authentication, and transparency. We would also add that trusted-source verification is another important tool against deepfakes and misinformation, and the Human Creator Alliance has developed a solution to accomplish just that.

Both ChatGPT and Google's Gemini (the rebrand of Google Bard) are adding long-term memory upgrades and will now remember details from conversations across multiple chats. While the Gemini version of this upgrade is merely implied in the terms of service, OpenAI says they are actively testing memory features in an effort to make conversations more helpful. In our view, this is simultaneously exciting and terrifying, and it will clearly make LLM chatbots far more powerful in the long term.

Finally, nation-state-backed hackers are using LLMs to assist with cyberattacks, including researching targets, writing scripts, and engineering phishing emails. According to OpenAI and Microsoft: “Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent.” We present this story without comment, as the implications speak for themselves.

If you are as concerned as we are with trust and transparency in the age of generative AI, please consider joining the Alliance.
