Written by John Nicholas

Deepfakes, or deep fakes, are manipulated videos, images or audio produced by sophisticated artificial intelligence. The falsified images and sounds can look and sound completely real. But don’t be fooled: deepfakes are the work of AI.

The term deepfake combines “deep learning” and “fake.” Deep learning is a set of algorithms that can learn and make intelligent decisions on their own. Deep learning programs produce very real-looking counterfeits by studying photographs and videos of a target person from multiple angles, then mimicking that person’s behavior and speech patterns so the finished video looks as if it has not been altered. It is a dangerous new twist in the world of “fake news” and misinformation.
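For readers who want a peek under the hood, the classic face-swap technique trains a single encoder shared by two decoders, one per person. The sketch below is a minimal illustration in PyTorch with made-up image sizes and no real training loop; it only shows the structure, and production deepfake tools are far more elaborate.

```python
# Minimal sketch of the shared-encoder / two-decoder idea behind classic
# face-swap deepfakes. Image size, layer widths and the lack of a training
# loop are all simplifications for illustration.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        # Flattened 64x64 RGB face crop -> compact latent code
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        # Latent code -> reconstructed 64x64 RGB face crop
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

# One shared encoder learns faces in general; each decoder learns one person.
encoder = Encoder()
decoder_a = Decoder()   # would be trained to reconstruct person A
decoder_b = Decoder()   # would be trained to reconstruct person B

# After training, running a frame of person A through person B's decoder
# yields person B's face wearing person A's pose and expression -- the swap.
frame_of_a = torch.rand(1, 3, 64, 64)   # stand-in for a real video frame
swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```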


Deepfakes are used by known foreign influence campaigns on social media to polarize our dialogue and to defame people. According to The Guardian, about 26% of these campaigns targeted the U.S. and 74% obscured readily verifiable facts.

But deepfakes go beyond hitting public figures. They can be — and often are — pornographic, taking revenge porn to a new level. Sensity AI estimates that 96% of all deepfakes in 2019 were pornographic. Almost all of those mapped the faces of female celebrities onto porn stars.

Several tech companies, such as Facebook and Microsoft, have launched initiatives to detect and remove deepfake videos. But we must all remain vigilant about spotting them ourselves. There are clues in the videos that can tip us off. And as always, if you are viewing a video on social media or a website that is pushing a political agenda, be suspicious. If you are not seeing the video on media that is widely accepted as credible, it may well be a deepfake.

How to identify deepfakes:

Look closely at the video or photo. Look for any slight visual aspects that are off — anything from the ears or eyes not matching to fuzzy borders of the face to too-smooth skin to odd lighting and shadows. Watch the lips and listen to the words. If the audio is slightly out of sync or the mouth does not quite fit the words, the video has likely been altered.
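One of those visual “tells,” the fuzzy border around a blended face, can even be roughed out in code. The Python sketch below uses OpenCV to compare the sharpness of a detected face’s interior with the band along its edge; the band width and the file name "frame.jpg" are my own placeholders, and a low ratio is only a reason to look closer, never proof of manipulation.

```python
# Heuristic sketch: blended deepfake faces often have a softer, blurrier band
# around the face edge than the face interior. Band width and the input file
# name are illustrative assumptions.
import cv2
import numpy as np

def border_softness_ratio(image_path, band=12):
    img = cv2.imread(image_path)
    if img is None:
        return None
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Standard Haar-cascade face detector that ships with opencv-python
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None

    x, y, w, h = faces[0]
    band = min(band, w // 4, h // 4)

    # Variance of the Laplacian is a common sharpness measure:
    # smooth, blurry regions score low.
    lap = cv2.Laplacian(gray, cv2.CV_64F)
    inner = lap[y + band:y + h - band, x + band:x + w - band]
    ring = np.concatenate([
        lap[y:y + band, x:x + w].ravel(),                        # top strip
        lap[y + h - band:y + h, x:x + w].ravel(),                # bottom strip
        lap[y + band:y + h - band, x:x + band].ravel(),          # left strip
        lap[y + band:y + h - band, x + w - band:x + w].ravel(),  # right strip
    ])
    return ring.var() / (inner.var() + 1e-6)

# A ratio well below 1.0 means the edge of the face is much softer than its
# interior -- one of the signs described above, not a verdict on its own.
print(border_softness_ratio("frame.jpg"))
```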

Don’t blink. People blink in a fairly consistent pattern. Watch the blinking in the video, then compare it with other videos of that individual that you know have not been altered, or watch the individual on live television to see if the patterns match.
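Researchers have put a number on blinking with the “eye aspect ratio,” computed from a few landmark points around each eye. The Python sketch below assumes you already have six (x, y) points per eye for every frame, for example from a facial-landmark library, and simply counts blinks; the 0.2 threshold is a widely quoted rule of thumb, not a calibrated value.

```python
# Sketch of the eye aspect ratio (EAR) blink measure. The landmark input and
# the 0.2 threshold are assumptions for illustration.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) points ordered around the eye contour."""
    eye = np.asarray(eye, dtype=float)
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    # Open eyes give a larger ratio; it collapses toward zero during a blink.
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def count_blinks(per_frame_eyes, threshold=0.2):
    """per_frame_eyes: list of (left_eye, right_eye) landmark sets, one per frame."""
    blinks, closed = 0, False
    for left, right in per_frame_eyes:
        ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
        if ear < threshold and not closed:
            blinks += 1          # eyes just closed: start of a blink
            closed = True
        elif ear >= threshold:
            closed = False       # eyes reopened
    return blinks

# Comparing blinks per minute against genuine footage of the same person is
# exactly the comparison suggested above.
```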

Use your natural human intuition. We all have built-in nonsense detectors of which we are not always conscious. It’s the feeling you get when you think someone is lying or you get “creeped out” by someone. If something doesn’t feel quite right about the video, your intuition may be cluing you in.

Trust but verify. Determine whether other outlets are reporting the same news. If the video appears only on news outlets that have a specific political bias but cannot be found in reputable sources, then it is likely a deepfake. Verify what you see in a source like the Associated Press or Reuters.

The “tells” of deepfakes are getting harder and harder to detect. As deepfake technology improves, these videos look more and more realistic, but detection can still be done. A little research and some common sense will serve you well. When in doubt, do not believe the video or picture in question.

Dr. John B. Nicholas is a Professor of Computer Information Systems and Co-Founder of the Cybersecurity Degree Track at The University of Akron. Dr. Nicholas has more than 30 years’ experience in the technology field in both the private sector and higher education. Reach him at jbnicholasphd@gmail.com.
