Fake images and misinformation are growing in the age of AI. Even in 2019, a Pew Research Center study found that 61% of Americans said it is too much to ask of the average American to be able to recognize altered videos and images, yet 53% of the adults polled also said they could recognize them.
In August 2023, Adobe shared that the number of AI-generated images created with Adobe Firefly had reached one billion, only three months after the tool launched in March 2023.
In response to the increasing use of AI images, Google DeepMind announced a beta version of SynthID. The tool watermarks and identifies AI-generated images by embedding a digital watermark directly into an image’s pixels, imperceptible to the human eye but detectable by software for identification.
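SynthID’s actual embedding method has not been published, but the general idea of a mark hidden in pixel values can be sketched with the classic least-significant-bit (LSB) technique. This toy example (an assumption for illustration, not Google’s approach) shows how a bit pattern can ride along in pixels without a visible change:

```python
# Toy illustration of a pixel-embedded watermark using the classic
# least-significant-bit (LSB) technique. SynthID's real method is not
# public and is far more robust; this only shows the general idea that
# a mark can live in pixel values without visibly changing the image.

def embed(pixels, mark_bits):
    """Hide one bit of the mark in the lowest bit of each pixel value."""
    out = list(pixels)
    for i, bit in enumerate(mark_bits):
        out[i] = (out[i] & ~1) | bit   # overwrite the least significant bit
    return out

def extract(pixels, n_bits):
    """Read the hidden bits back out of the pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

pixels = [200, 201, 199, 198, 202, 200, 197, 203]  # grayscale intensities
mark = [1, 0, 1, 1]
marked = embed(pixels, mark)

# Each pixel changes by at most one intensity level -- imperceptible...
assert all(abs(a - b) <= 1 for a, b in zip(pixels, marked))
# ...yet a detector that knows where to look recovers the mark exactly.
assert extract(marked, 4) == mark
```

A naive LSB mark like this is fragile (re-saving or resizing the image destroys it), which is exactly why the robustness of schemes like SynthID matters.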
Kris Bondi, CEO and founder of Mimoto, a proactive detection and response cybersecurity company, said that while Google’s SynthID is a starting place, the problem of deep fakes will not be fixed by a single solution.
“People forget that bad actors are also in business. Their tactics and technologies continuously evolve, become available to more bad actors, and the cost of their techniques, such as deep fakes, comes down,” said Bondi.
“The cybersecurity ecosystem needs multiple approaches to address deep fakes, with collaboration to develop flexibly architected approaches that will evolve to meet and surpass the bad actors’ technology,” adds Bondi.
Ulrik Stig Hansen, co-founder of Encord, a London-based computer vision training data platform, said that there is little doubt synthetic data detection will be one of the significant challenges ahead.
“We’ve seen it over and over with new technologies, and it’s no different with Generative AI — just as it’s being used in overwhelmingly positive ways (e.g., cheaper diagnostics in healthcare, faster disaster recovery), there’ll be vulnerabilities for those looking to exploit,” adds Hansen.
“It’ll be more a matter of how quickly the preventative applications can progress compared to the bad guys and how regulation will shape around the space,” said Hansen. “We’ve seen some indications of what this might look like in the EU, but the key will be to enable the progress of positive applications while building solid guardrails to limit misuse.”
The term “digital watermarking” was coined by Andrew Tirkel and Charles Osborne in 1992. Watermarking is a way to identify the origin and authenticity of images. Images can also be identified through their metadata; metadata, however, can be removed or modified, which diminishes trust in an image’s authenticity.
Dattaraj Rao, Chief Data Scientist at Persistent Systems, who holds 11 computer vision patents, says watermarking has traditionally been used to protect image copyrights, but it can damage or modify the content.
“Using this method for AI-generated images, which have been in use for several years, is a great improvement,” said Rao in an email interview. “Although the major challenge will be for all enterprises and users to adopt a single standard for this – we still have not agreed upon a single format for storing image data; hence, we have GIF, JPEG, PNG, etc.”
Rao says that because AI technology is evolving rapidly, someone will find a way to break the watermark and override it.
“That’s what happened with visible watermarks. Today, multiple algorithms can detect and fill the watermarked pixels of the image with best guess colors based on surroundings,” said Rao.
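The “best guess colors based on surroundings” idea Rao describes can be sketched in a few lines. This is a deliberately minimal, hypothetical example on a single row of pixels; real inpainting algorithms work in two dimensions with diffusion- or patch-based methods:

```python
# Minimal sketch of filling watermarked pixels with "best guess" colors:
# replace each flagged pixel with the average of its nearest clean
# neighbors. Real watermark-removal inpainting is 2-D and far more
# sophisticated; this only illustrates the principle Rao describes.

def inpaint_row(row, mask):
    """row: pixel intensities; mask[i] is True where the watermark sits."""
    out = list(row)
    for i, flagged in enumerate(mask):
        if not flagged:
            continue
        # nearest unflagged neighbor on each side, if any
        left = next((row[j] for j in range(i - 1, -1, -1) if not mask[j]), None)
        right = next((row[j] for j in range(i + 1, len(row)) if not mask[j]), None)
        neighbors = [v for v in (left, right) if v is not None]
        out[i] = sum(neighbors) // len(neighbors)
    return out

row = [100, 102, 255, 255, 104, 106]     # the 255s are a visible watermark
mask = [False, False, True, True, False, False]
print(inpaint_row(row, mask))            # → [100, 102, 103, 103, 104, 106]
```

The watermark pixels blend smoothly into their surroundings, which is why visible marks alone are no longer a reliable defense.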
“The computer vision engineer inside me feels that using imaging techniques is not a long-term solution here – at the end of the day, an image is an array of pixel color intensities, which can easily be manipulated,” said Rao. “This problem will need a generic solution for protecting digital content using techniques like cryptography.”
Rao says that today, we know that some websites are safe based on public key encryption provided by TLS certificates, which are issued by certain approved agencies.
“Similarly, we will probably need a way to verify any digital content,” said Rao. “Technologies like blockchains and digital ledgers can help create a decentralized, immutable register for digital content so you know the complete lineage for any image or Word document on the internet, but this, of course, is difficult to enforce.”
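The immutable register Rao describes can be illustrated with a toy hash chain. This sketch (an assumption for illustration; real digital ledgers are distributed systems with consensus, not a Python list) shows why chaining hashes makes an image’s recorded lineage tamper-evident:

```python
import hashlib

# Toy hash-chain "ledger" sketching the lineage idea: each entry stores
# content plus a hash computed over that content and the previous entry's
# hash, so altering any step breaks every link after it. Real ledgers are
# distributed and far more involved; this shows only the tamper-evidence.

def entry_hash(prev_hash, content):
    return hashlib.sha256(prev_hash.encode() + content).hexdigest()

def append(ledger, content):
    prev = ledger[-1][1] if ledger else "genesis"
    ledger.append((content, entry_hash(prev, content)))

def verify(ledger):
    prev = "genesis"
    for content, h in ledger:
        if entry_hash(prev, content) != h:
            return False
        prev = h
    return True

ledger = []
append(ledger, b"original photo bytes")
append(ledger, b"cropped for publication")   # each edit is a recorded step

assert verify(ledger)                        # the recorded lineage checks out
ledger[0] = (b"swapped-in fake", ledger[0][1])
assert not verify(ledger)                    # tampering breaks the chain
```

As Rao notes, the hard part is not the mechanism but enforcement: every tool that touches an image would have to participate in such a register.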
Rao adds that whichever method succeeds, the challenge will be in developing the standard and getting it endorsed by multiple organizations and countries globally.
In July 2023, the White House hosted a meeting with seven leading AI companies, including Google and OpenAI. Each company pledged to create tools to watermark and detect AI-generated text, videos and images.
Neil Sahota, a futurist, lead Artificial Intelligence Advisor to the United Nations and author of Own the AI Revolution (McGraw Hill), says we can and should equip more people to verify the authenticity of images to ensure accuracy.
In fact, the Pew Research Center study also showed that 77% of US adults said steps should be taken to restrict altered videos and images intended to mislead, while only 22% said they preferred protecting the freedom to publish and access them.
“This includes having companies step up to the digital plate. The watermarking idea has been out there for a while,” said Sahota. “It will help to some degree, but the biggest problem is that the watermarks can be spoofed.”
“One of the advantages physical watermarks have is that they can use things like ultraviolet ink, so that part of it is invisible, and we haven’t figured out how to do that with an e-watermark,” said Sahota.
“If Google’s solution has that ability (which would make it much harder to spoof), then this would be a tremendous leap forward,” adds Sahota.