2025-07-25

Artificial Integrity

AI-generated image (Copilot)

High time for a summery blog, although the inspiration doesn’t come from the current weather. Fortunately, a colleague gave me a great tip.

He showed me two short videos. The first one shows him and his girlfriend sitting next to each other. They turn toward each other and kiss. In the second video, he’s alone on a rock by the sea, and four blonde, long-haired, and rather scantily clad women slide into view and, well, caress him. He lifts his head in delight.

Why does he share that footage? We don’t have a team culture where we brag about such conquests. No, he showed me this because it’s not real. Oh, it starts with a real photo, just a nice vacation snapshot. Then the AI app PixVerse turns it into a video. You can choose from a whole range of templates—far more than the two examples mentioned: you can have someone board a private jet, cuddle with a polar bear or tiger, turn into Batman, have your hair grow explosively, get slapped in the face, and so on. With many of these videos, viewers will immediately realize they’re fake. But with my colleague’s videos, it’s not so obvious.

That’s exactly why the European AI Act requires that content created by artificial intelligence be labeled as such. Imagine if his girlfriend saw the second video without any explanation. Depending on temperament and mutual trust, that could easily lead to a dramatic scene. PixVerse is mainly meant for fun, but you can imagine how such tools could be used for very different purposes.

Take blackmail, for instance. You generate a video of someone in a compromising situation, threaten to release it, and demand money. And like any self-respecting criminal, they won’t be following the law and labeling it as fake. Now, PixVerse’s quality isn’t immediately threatening: if you look closely, you can tell. Fingers remain problematic for AI, and so do eyes. Still, if you’re not expecting to be fooled, you won’t notice; you only see it once you know to look for it. I see a criminal business model here.

It seems PixVerse mainly targets young people, judging by the free templates available. My colleague’s videos were also made by a child. On the other hand, you can subscribe to various plans, ranging from €64.99 to €649.99 per year. That’s well above pocket money level for most. If you do get a subscription, the watermark disappears from your videos—in other words, no more hint that AI was involved.

One of the pillars of information security is integrity: the accuracy and completeness of data. The concept was originally conceived with databases and other computer files in mind: it would be wrong if a house number or an amount were incorrect, or if data were missing. But you can easily apply the principle to images and audio, too. If you can no longer trust them, integrity is no longer guaranteed. Not to mention the (personal) integrity of those who abuse such tools.
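For computer files, that kind of integrity is commonly verified with a cryptographic hash: change a single bit and the digest no longer matches. A minimal sketch in Python (the filename is only an illustration):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the digest while the file is known to be good...
baseline = sha256_of("vacation_photo.jpg")

# ...and compare later: any modification, however small, changes the digest.
if sha256_of("vacation_photo.jpg") != baseline:
    print("Integrity violated: the file has been altered.")
```

A hash only tells you that something changed, not what or by whom; and for images and audio circulating on social media, even that baseline is usually missing, which is exactly the problem.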

After this blog, my vacation begins, and I used AI to help plan it. For example, to find nice overnight stops on the way to our final destination. But you have to stay alert: ChatGPT claimed the distance between two stops was over a hundred kilometers less than what Google Maps calculated. When confronted, ChatGPT admitted it had measured as the crow flies. I’d call that artificially dumb rather than intelligent.
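The gap is plausible once you know what was measured. Straight-line distance follows from the haversine formula on a sphere, while roads meander. A quick sketch (the city pair is an arbitrary illustration, not my actual route):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle ('as the crow flies') distance between two points, in km."""
    r = 6371.0  # mean Earth radius in km
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Example: Amsterdam to Lyon is about 735 km as the crow flies,
# while the road route is well over 100 km longer.
print(f"{haversine_km(52.37, 4.90, 45.76, 4.84):.0f} km")
```

ChatGPT’s number wasn’t wrong as a crow-flies figure; it simply answered a different question than the one a road tripper asks.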

I hope you encounter something during your own vacation that makes you think: he should write a blog about that. Write it down or take a photo and send it to me! As long as it’s real…

The Security (b)log will return after the summer holidays.


And in the big bad world ...

 

2025-07-18

The reliable criminal

 

Image from Pixabay


Have you ever experienced being unable to work at home or in the office because your computer wouldn’t respond? Or that your children’s school or university had to close for the same reason, or that a store couldn’t sell anything? Welcome to the world of ransomware.

As we often see with technological developments, this phenomenon also started surprisingly long ago — in 1989, with the AIDS Trojan. This malware was distributed via floppy disks to participants of an AIDS conference. Victims had to send $189 by mail to Panama — but received nothing in return. In the early 2000s, there were some amateurish attempts to hide files, but the real game began in 2013 with CryptoLocker. It spread via email attachments, used strong encryption, and demanded payment in bitcoin. That became the market standard.
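What made CryptoLocker’s approach so effective is basic cryptography: with a strong cipher, the ciphertext is worthless without the key. A minimal, benign illustration using the Python cryptography library’s Fernet (symmetric encryption):

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In a ransomware scenario, only the attacker holds this key.
key = Fernet.generate_key()

ciphertext = Fernet(key).encrypt(b"contents of an important document")

# Without the key, recovery is computationally infeasible;
# with it, decryption is trivial:
print(Fernet(key).decrypt(ciphertext))
```

That asymmetry of effort is the whole business model: encrypting costs the attacker next to nothing, while the victim’s only realistic ways back are the key or a good backup.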

In the early days, you could never be sure whether, after scraping together your savings, you would actually receive the key to decrypt your files. Law enforcement agencies around the world advised against paying ransom. This affected the criminals’ income. Thus, the “reliable criminal” emerged: increasingly, you could count on being “helped” after payment. According to an estimate by Copilot, the chance of this in 2015 was about 80% (now only 60%).

Again, law enforcement urged people not to pay. Not only was there still no guarantee of receiving the decryption key, but paying also helped sustain the criminal business model — while the goal was to make this trade less profitable.

Criminals responded with double extortion: not only were your files encrypted, but they also made a copy for themselves. If you didn’t pay, your information would be published. And since everyone has something to hide, this was a successful extra incentive to pay. Around that time, there was also a shift from individuals to businesses and governments as targets, because larger sums could be demanded. Publishing customer data or trade secrets could have serious consequences.

Beyond law enforcement’s calls not to pay, there’s also a moral question: is it ethically justifiable to pay? I instinctively lean toward “no”, but I want to explore the nuances — because not paying can have serious consequences beyond the affected organization. Consider the 2021 attack on JBS Foods, the world’s largest meat processing company. The attack led to temporary closures of factories in the U.S., Canada, and Australia and disrupted the food supply. Partly for that reason, the company decided to pay no less than $11 million.

Two years earlier, Jackson County, Georgia, was a victim. Police and other government services were completely paralyzed. The county paid $400,000, but never officially confirmed whether it got what it paid for. That same year, around Christmas, Maastricht University in the Netherlands was hit. The €200,000 it paid turned out to be a good investment: part of the ransom was later recovered and, thanks to the rise in bitcoin’s value, was worth €500,000 by then.

Food is a basic necessity, but if you can temporarily eat something other than meat, getting that meat processor back online may not be so urgent. If the local police are digitally blind for a while, perhaps another police force can help. And a paralyzed university — we survived that in 1969 too, when the administration building of the University of Amsterdam (the ‘Maagdenhuis’) was occupied (though that wasn’t about ransom). In short: seek alternatives rather than paying ransom.

There is a collective interest in eradicating ransomware, but everyone must participate. Some countries are working on banning ransom payments or at least requiring mandatory reporting. A ban on insurance coverage can also help discourage payment. But these measures don’t help the affected companies directly. What does help are initiatives like No More Ransom, where police and the private sector collaborate to recover decryption keys and make them freely available. We also regularly see the successes of international police cooperation. And of course, organizations must increase their own resilience by investing in awareness (especially around phishing), good detection tools, and a solid backup strategy. With all these measures, this criminal business should eventually become unprofitable. And then maybe those people can do reliable and honest work instead.

And in the big bad world…

 

2025-07-11

No clock ticks like the one at home

Image from Pixabay

My grandparents had a Frisian tail clock hanging on the wall. In the same village, my in-laws had exactly the same clock. Recently, my wife revealed something interesting about theirs.

According to that clock, time passed more slowly than in reality. They had already taken it to a clockmaker in Belgium; a cleaning didn’t help. Then someone revealed a special trick: the clock probably wanted to hang slightly askew. That turned out to be true, but getting the tilt just right was a matter of millimeters. It took weeks to find the right position, with other clocks in the house serving as reference points.

I have a modern desk lamp with a built-in digital clock. My physics teacher once explained that electric clocks always show the correct time because they – if I remember correctly – tick along with the frequency of the alternating current (50 hertz in Europe). Not so with my desk lamp clock: I have to reset it every few weeks. I usually do that when it’s two minutes fast, because by then it’s become too annoying. I’m always surprised that in 2025 there are still clocks that can’t keep accurate time.
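Out of curiosity, that error is easy to quantify. A back-of-the-envelope sketch (the two minutes comes from above; the interval of three weeks is my assumption):

```python
# Rough drift estimate for the desk lamp clock.
gain_seconds = 2 * 60                 # runs two minutes fast (from the text)
interval_seconds = 3 * 7 * 24 * 3600  # over roughly three weeks (assumed)

drift_ppm = gain_seconds / interval_seconds * 1e6
print(f"Drift: about {drift_ppm:.0f} ppm")  # ~66 parts per million

# A mains-synchronized clock instead counts the 50 Hz cycles of the grid,
# and grid operators correct accumulated frequency error over time,
# which is why such clocks stay accurate in the long run.
```

That is roughly 66 parts per million – the mark of a cheap quartz oscillator rather than a broken one, but enough to be annoying.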

All these clocks that don’t perform their task well need to be interpreted. “Oh right, it’s that clock, so it’s probably a bit earlier/later.” With clocks in someone else’s house, you often don’t have that knowledge. You might think you’ve already missed the train home, when in fact you could still have caught it.

We also interpret security policy. As a security officer, I often get questions like: someone did this or that, is that actually allowed? The answer is rarely stated literally in a policy document. You have to tilt the document a bit, so to speak, to extract the right information. We always find one or more rules that apply to the situation. Sometimes you also have to want to see it. That’s where professional judgment comes in: you’re a security officer for a reason, and if you say something is or isn’t allowed, then that’s how it is – your judgment is based on your professionalism.

Over the years, I’ve seen a parade of colleagues flagged by some security system. Those notifications lead to an assessment. Is it worth taking action? Is the incident serious enough? Or is it immediately clear that it was an accident and the user had no malicious intent? I find the latter especially interesting: if it’s a report about something that could potentially have malicious intent, then you have my attention and can expect a meeting with your supervisor. They know you better than I do and may have other puzzle pieces that together paint the picture of a generally exemplary employee – or not.

In all that time, no one has ever dared to ask: where does it say that this isn’t allowed? No, they feel caught, say sorry, and promise never to do something so stupid again. Fortunately, I’ve rarely encountered anyone with bad intentions. Most of these incidents are the result of well-meaning actions that unfortunately conflict with policy. The law says that everyone is supposed to know the law, but in practice it’s a bit different. We’re happy to help people stay within the lines.

My grandmother had a special time policy. She set the clock ten minutes ahead. That way, if she had to go somewhere, there was always the reassurance that she should have already left, but luckily still had some extra time. I always found that just as strange as clocks that decide to show a time other than the correct one.


And in the big bad world ...

2025-07-04

Your inner self

Image by Copilot

“The best inspiration comes from within.” That’s not a quote from Sun Tzu, the Chinese general from the sixth century BC, whose work The Art of War is quoted at every opportunity. No, we attribute this quote to one Patrick Borsoi from the twentieth century AD. Not Chinese, not a general, but – in all modesty – occasionally clever.

Readers sometimes ask me how I find inspiration for a blog every week. I usually answer that I observe my surroundings and often see something mundane that I can link to information security. Sometimes colleagues give me a tip, whether or not from their own daily lives. Now I’ve discovered something new: listening to myself. Literally.

I was a guest on the podcast of the KNVI, the Royal Dutch Association of Information Professionals. I was there to talk about the Security (b)log and more technical topics like phishing, AI, and quantum computing. The podcast went online on July 1, and of course, I was one of the first to listen to it. Hearing a recording of yourself is quite strange, by the way, but everyone says that. The point is that I heard myself say something I had never said before and didn’t even remember saying (the recording was made a month and a half earlier).

Marijn Plomp is the regular host of this podcast, and Sandra de Waart was his sidekick that day. Since my blog has security awareness as its overarching theme, Sandra asked me: “How do you actually make people aware?” Because, as she rightly pointed out, simply saying “be aware!” doesn’t help. I compared it to a traffic sign that gives a general warning of danger (a triangle with a red border and an exclamation mark in the middle). If you only see that sign, you still don’t know anything. Only with an extra sign underneath, explaining what the danger is, do you know what to do or avoid. And here it comes. I said: “I try to be that extra sign.” By explaining why something is a risk, by clarifying it, you can make people aware. They need to understand it and even feel it.

Later in the podcast, I made a statement I’ve made more often: “I get paid to think in doom scenarios.” Just as there are people who get paid to play with Lego all day, I get to indulge in the question: what could possibly go wrong? While others revel in what a system, device, or method can do, I get to look at the dark side. That’s not always easy, as it can sometimes dampen others’ enthusiasm. Usually, though, that perspective on the failure path is appreciated, because the final product improves by also considering aspects we’d rather ignore. The quote about doom thinking is, of course, tongue-in-cheek, but it shows clearly and concisely that risk analyses are important – even if they’re done on the back of an envelope.

At the end of the podcast, I hear myself say that I need people as the last line of defense. Because if technology fails to avert disaster, if, for example, that one phishing email still manages to get through all the checks, then the employee whose inbox it lands in can make the difference between a healthy and a crippled organization. And with that last line of defense, we circle back a bit to Sun Tzu, who undoubtedly wrote something about that too.

Listen to the KNVI podcast. [DUTCH]


And in the big bad world...

· airlines have recently attracted a lot of attention from cybercriminals.

· even criminal organizations sometimes shut down.

· Germany wants to ban DeepSeek.

· physical and digital crime sometimes converge.

· the Dutch Ministry of Defence is also investing in AI and cloud services. [DUTCH]

· the police will now also respond to digital crime reports. [DUTCH]

· a civil servant was punished for emailing confidential data to his private address. [DUTCH]

   
