2025-07-25

Artificial Integrity

Picture: AI-generated (Copilot)

High time for a summery blog, although the inspiration doesn’t come from the current weather. Fortunately, a colleague gave me a great tip.

He showed me two short videos. The first one shows him and his girlfriend sitting next to each other. They turn toward each other and kiss. In the second video, he’s alone on a rock by the sea, and four blonde, long-haired, and rather scantily clad women slide into view and, well, caress him. He lifts his head in delight.

Why does he share that footage? We don’t have a team culture where we brag about such conquests. No, he showed me this because it’s not real. Oh, it starts with a real photo, just a nice vacation snapshot. Then the AI app PixVerse turns it into a video. You can choose from a whole range of templates—far more than the two examples mentioned: you can have someone board a private jet, cuddle with a polar bear or tiger, turn into Batman, have your hair grow explosively, get slapped in the face, and so on. With many of these videos, viewers will immediately realize they’re fake. But with my colleague’s videos, it’s not so obvious.

That’s exactly why the European AI Act requires that content created by artificial intelligence be labeled as such. Imagine if his girlfriend saw the second video without any explanation. Depending on temperament and mutual trust, that could easily lead to a dramatic scene. PixVerse is mainly meant for fun, but you can imagine how such tools could be used for very different purposes.

Take blackmail, for instance. You generate a video of someone in a compromising situation, threaten to release it, and hold out your hand. And like any good criminal, the blackmailer won’t necessarily follow the law and label it as fake. Now, PixVerse’s quality isn’t immediately threatening: if you look closely, you can tell. Fingers remain problematic for AI, and eyes too. But still, if you’re not expecting to be fooled, you won’t notice anything; the telltale flaws only stand out once you start looking for them. I see a criminal business model here.

It seems PixVerse mainly targets young people, judging by the free templates available. My colleague’s videos were also made by a child. On the other hand, you can subscribe to various plans, ranging from €64.99 to €649.99 per year. That’s well above pocket money level for most. If you do get a subscription, the watermark disappears from your videos—in other words, no more hint that AI was involved.

One of the pillars of information security is integrity: the accuracy and completeness of data. This was originally conceived with databases and other computer files in mind: it would be a problem if a house number or an amount were wrong, or if data were missing. But you can easily apply this principle to images and audio, too. If you can no longer trust them, integrity is no longer guaranteed. Not to mention the (personal) integrity of those who abuse the technology.
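For files, by the way, integrity can be checked mechanically. Here is a minimal Python sketch (the file name is just an illustration): record a cryptographic hash while the data is known to be good, and any later modification, however small, will change the hash.

import hashlib

def sha256_of(path):
    """Return the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the digest while the photo is known to be genuine...
known_good = sha256_of("vacation_photo.jpg")  # hypothetical file name

# ...and compare later: any manipulation changes the digest.
if sha256_of("vacation_photo.jpg") != known_good:
    print("Integrity violated: the file has been modified.")

Of course, this only helps for material you controlled from the start; it says nothing about a video someone else fabricated from scratch, which is precisely why labeling requirements and watermarks matter.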

After this blog, my vacation begins, and I used AI to help plan it. For example, to find nice overnight stops on the way to our final destination. But you have to stay alert: ChatGPT claimed the distance between two stops was over a hundred kilometers less than what Google Maps calculated. When confronted, ChatGPT admitted it had measured as the crow flies. I’d call that artificially dumb rather than intelligent.
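For the curious: the crow-flies figure is the great-circle distance between two coordinates, and it is always at most the road distance, never more. A quick Python sketch using the haversine formula (the cities are my own illustration, not our actual stops):

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle ('as the crow flies') distance in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius: ~6371 km

# Amsterdam to Paris: roughly 430 km in a straight line,
# while the road route is closer to 500 km.
print(round(haversine_km(52.37, 4.90, 48.86, 2.35)))

So a chatbot that quietly reports the straight-line number can easily be a hundred kilometers or more off the actual driving distance.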

I hope you encounter something during your own vacation that makes you think: he should write a blog about that. Write it down or take a photo and send it to me! As long as it’s real…

The Security (b)log will return after the summer holidays.


