Imagine waking up to a barrage of messages and missed calls from worried friends and family who saw a video of you doing something completely out of character. But you’re sure you’ve never done anything like that! It turns out the video is a deep fake, an AI-generated video created by someone you had a minor disagreement with. Once a video is on the internet, it’s almost impossible to remove, and even though the video is fake, it can still ruin your reputation. Unfortunately, this is the future that many people may face, as AI deep fakes and generative technology become increasingly realistic and indistinguishable from real images and videos. This trend will lead to a surge of misinformation in the form of fake images, videos, and other fabricated content.
My Personal View
As an artist, I find AI generative technology fascinating. While many people in my industry worry that AI will end their careers or make it impossible to find work, I have embraced this technology. Understanding new formats and technologies is essential for keeping society well-informed. While AI will displace many jobs in the short term, it will create new opportunities and jobs in the long term. It is crucial for people to learn about this technology and its capabilities, especially when it comes to dealing with generated images, videos, and information that lack truth.
However, while AI generative technology has great potential for creativity and innovation, its misuse is becoming increasingly prevalent. In the wrong hands, this technology can be used to create and distribute false information, leading to serious consequences for individuals and society as a whole. The need for regulations and laws to limit the use of AI and protect people’s privacy and reputations is more crucial than ever.
Media misleading the public throughout the evolution of technology
Misleading the public through media technology is nothing new. One example is the 1938 radio broadcast of H.G. Wells’s War of the Worlds, which reportedly caused panic among listeners who believed it was a real news broadcast.
The Cambridge Analytica scandal revealed how personal data was used to illegally micro-target political ads with misinformation during the 2016 US presidential election.
Another example is the viral photo of the Pope wearing a white puffy coat reminiscent of a fashion icon’s wardrobe, which turned out to be an AI-generated image.
Recently, AI-generated assets were used in a political video to paint a grim picture of a candidate winning office, a “what if” scenario, if you will. These are all examples of technology misleading the public, both accidentally and intentionally.
So what is a deep fake?
A deep fake is a type of AI-generated media that depicts individuals saying or doing things they never actually said or did, in realistic videos or images. The term “deep fake” combines “deep learning” and “fake”.
Basically, an AI model is trained on a large dataset of images and videos of an individual’s face and voice. Once the model has learned the person’s facial features and expressions, it can generate new videos or images of that individual appearing to say or do things they never actually did.
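To make the train-then-generate idea above concrete, here is a deliberately toy sketch: a tiny linear autoencoder that compresses “face” vectors into a small latent code and then decodes a latent code it never saw during training to synthesize a new frame. Real deepfake systems use far larger neural networks and real video data; the random 8x8 “faces”, the dimensions, and the training settings here are all illustrative assumptions, not any actual deepfake pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.normal(size=(200, 64))  # 200 fake "face" images, 8x8 pixels each (random stand-in data)

latent_dim = 8                                        # compact "identity" representation
W_enc = rng.normal(scale=0.1, size=(64, latent_dim))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(latent_dim, 64))  # decoder weights

lr = 0.01
losses = []
for step in range(500):
    z = faces @ W_enc            # encode: compress each face into a latent code
    recon = z @ W_dec            # decode: reconstruct the face from the code
    err = recon - faces
    losses.append(float((err ** 2).mean()))
    # Gradient descent on the reconstruction error ("learning" the faces)
    grad_dec = z.T @ err / len(faces)
    grad_enc = faces.T @ (err @ W_dec.T) / len(faces)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# "Generate": decode a latent code the model never saw during training
new_z = rng.normal(size=(1, latent_dim))
fake_frame = new_z @ W_dec
print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}, generated frame shape: {fake_frame.shape}")
```

The same two-phase pattern, learn a compressed representation from many examples of one person, then decode new codes into unseen frames, is what makes deepfakes possible at scale.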
While deep fakes can be used for harmless purposes like entertainment or artistic expression, they can also be used to spread disinformation or manipulate public opinion. Because they can be incredibly realistic, it can be challenging for viewers to discern whether a video or image is real or fake. This can lead to public confusion and mistrust, possibly causing harm to individuals or organizations.
That’s why I believe deep fakes should be regulated to protect people and society from their potential harm.
The Need for AI Regulation and Laws
Over the past couple of years, people have been superimposing celebrities’ and influencers’ faces onto adult content without their consent. The level of detail in these deep fakes is incredibly lifelike, and people who don’t know any better may assume it’s real. We are looking at a new era of misinformation on a scale that we have never seen before. With AI, it is becoming increasingly difficult to tell what is real and what is fake. It’s crucial to have regulations and laws in place to limit the use of AI and ensure that people’s privacy and reputations are protected.
Here are the common-sense regulatory measures we need to implement as soon as possible:
- AI-generated content, including deep fakes in any form (audio, image, or video), should not be allowed in political ads or campaigns, as such content can be misleading. This restriction would help prevent the spread of misinformation.
- “Adult content” should not include the likeness of an individual without their express written permission given to the content’s creator. Posting such content without consent should carry the same legal repercussions as those established under the Violence Against Women Act Reauthorization Act of 2022 and the Communications Decency Act of 1996.
- AI-generated content released for public consumption that could plausibly be mistaken for real must include a disclaimer disclosing the use of AI in its creation.
AI deep fakes and generative technology pose significant challenges to society. It’s essential to learn about this technology and its capabilities and to establish regulations and laws to limit its use. We must work together to ensure that AI is used ethically and responsibly to avoid the creation of a world where we can’t trust anything we see online.
If you find this newsletter useful, share or tag a friend.
Got questions? You can DM me directly on Twitter