The End of "Seeing Is Believing"

Deepfake Videos Are More Realistic Than Ever. Here’s How to Spot if a Video Is Real or AI

Remember when “fake” on the internet meant a badly Photoshopped picture? Ah, simpler times. Now, we’re all swimming in a sea of AI-generated videos and deepfakes, from bogus celebrity videos to false disaster broadcasts, and it’s getting almost impossible to know what’s real.

Not too long ago, whenever you saw a photo or video, it was safe to assume it was real. Sure, you probably knew it had been touched up a bit or cropped for emphasis, but overall, you could believe what you were seeing. Then you started scrutinizing photos more carefully when composites became common, and even more so when Photoshop made it possible to remove or insert objects and people who weren't in the original image. The only comfort was that these skills were limited to a select few, and even then it was usually easy to spot the imperfections that gave away a fake.

Then came videos. When watching a movie, we assumed most of it was enhanced or outright faked, and we called it movie magic. We knew the context and situation we were putting ourselves into, so we expected it and allowed ourselves to be fooled for enjoyment and entertainment.

Somewhere down the road, AI-generated videos started appearing. The most popular one was probably the Will Smith eating spaghetti video from 2023. It was clearly fake and easy to spot because of its low quality and uncanny valley appearance. Other videos were easy to identify because AI struggled with rendering hands. They always looked weird, sometimes with six fingers, which was a dead giveaway.

Since then, AI has gotten significantly better at creating images and videos. You can still spot them, but it’s much harder now.

The CNET article offers some suggestions for spotting fake AI-created images and videos. Basically, trust your gut: if something looks too perfect or oddly off, it's probably fake. Another suggestion is to run the file through a metadata checker, since responsible companies will tag AI-generated content in its metadata. However, the article acknowledges that most people won't take that extra step. Looking for watermarks from tools like Sora or Google's generators can also help identify AI content. The problem is that you're depending on creators to leave those watermarks in, and the creators who do usually aren't the ones you need to worry about, since their videos are mainly for entertainment.
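For the metadata-checking step, here is a minimal sketch of what an automated check might look like. It assumes the file still carries standard provenance markers: a C2PA manifest (stored in a JUMBF box) or the IPTC digital source type "trainedAlgorithmicMedia" that generative tools are supposed to embed. Real verifiers do full cryptographic validation of the C2PA manifest; this is only a quick byte-level heuristic, and the marker list is illustrative.

```python
# Quick heuristic scan of a media file's raw bytes for AI-provenance markers.
# This is NOT a substitute for a real verifier: a missing marker proves
# nothing, since metadata is easily stripped on re-upload or re-encode.

AI_MARKERS = [
    b"c2pa",                     # C2PA manifest label
    b"jumb",                     # JUMBF box that typically carries C2PA data
    b"trainedalgorithmicmedia",  # IPTC digital source type for generative AI
]

def looks_ai_tagged(data: bytes) -> bool:
    """Return True if any known AI-provenance marker appears in the bytes."""
    haystack = data.lower()
    return any(marker in haystack for marker in AI_MARKERS)

def scan_file(path: str) -> bool:
    """Read a file and scan it for AI-provenance markers."""
    with open(path, "rb") as f:
        return looks_ai_tagged(f.read())
```

Note the asymmetry: a positive hit is meaningful (the creator or their tool labeled the content), but a negative result tells you nothing, which is exactly why the article's other advice still applies.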

The real danger comes from content designed to convince you that something fake is real. That's the content you need to scrutinize carefully, making your own judgment about whether what you're seeing is true. Check your sources and verify through other outlets that what's being portrayed actually happened. Don't take things at face value, and please educate others. I'm thinking especially of older generations who may not be up to speed on these capabilities and could be easily fooled. I'm not saying they're less intelligent than we are; any of us can be fooled by this at any time. Let's continue to look out for one another.