2025-07-25

Artificial Integrity

Picture AI-generated (Copilot)

High time for a summery blog, although the inspiration doesn’t come from the current weather. Fortunately, a colleague gave me a great tip.

He showed me two short videos. The first one shows him and his girlfriend sitting next to each other. They turn toward each other and kiss. In the second video, he’s alone on a rock by the sea, and four blonde, long-haired, and rather scantily clad women slide into view and, well, caress him. He lifts his head in delight.

Why does he share that footage? We don’t have a team culture where we brag about such conquests. No, he showed me this because it’s not real. Oh, it starts with a real photo, just a nice vacation snapshot. Then the AI app PixVerse turns it into a video. You can choose from a whole range of templates—far more than the two examples mentioned: you can have someone board a private jet, cuddle with a polar bear or tiger, turn into Batman, have your hair grow explosively, get slapped in the face, and so on. With many of these videos, viewers will immediately realize they’re fake. But with my colleague’s videos, it’s not so obvious.

That’s exactly why the European AI Act requires that content created by artificial intelligence be labeled as such. Imagine if his girlfriend saw the second video without any explanation. Depending on temperament and mutual trust, that could easily lead to a dramatic scene. PixVerse is mainly aimed at having fun, but you can imagine how such tools could be used for very different purposes.

Take blackmail, for instance. You generate a video of someone in a compromising situation, threaten to release it, and hold out your hand. And like any good criminal, they won’t necessarily follow the law and label it as fake. Now, PixVerse’s quality isn’t immediately threatening: if you look closely, you can tell. Fingers remain problematic for AI, and eyes too. But still, if you’re not expecting to be fooled, you won’t notice—and you only see it once you’re looking for it. I see a criminal business model here.

It seems PixVerse mainly targets young people, judging by the free templates available. My colleague’s videos were also made by a child. On the other hand, you can subscribe to various plans, ranging from €64.99 to €649.99 per year. That’s well above pocket money level for most. If you do get a subscription, the watermark disappears from your videos—in other words, no more hint that AI was involved.

One of the pillars of information security is integrity: the accuracy and completeness of data. This was originally conceived with databases and other computer files in mind. It would be wrong if a house number or an amount were incorrect, or if data were missing. But you can easily apply this principle to images and audio, too. If you can no longer trust them, integrity is no longer guaranteed. Not to mention the (personal) integrity of those who abuse such tools.
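That database notion of integrity can be made concrete in a few lines. A minimal sketch in Python, using the standard library's hmac module: store a keyed checksum alongside a record, and any later change to the record is detectable. The key and the record here are, of course, made up for illustration.

```python
import hashlib
import hmac
import secrets

# A secret key known only to the party that wants to verify integrity.
key = secrets.token_bytes(32)

record = b"house_number=42;amount=100.00"

# Compute an HMAC tag over the record and store it alongside the data.
tag = hmac.new(key, record, hashlib.sha256).digest()

# Later: verify. An unchanged record checks out...
print(hmac.compare_digest(tag, hmac.new(key, record, hashlib.sha256).digest()))

# ...but a tampered amount does not.
tampered = b"house_number=42;amount=900.00"
print(hmac.compare_digest(tag, hmac.new(key, tampered, hashlib.sha256).digest()))
```

The same idea, scaled up, is what digital signatures and content-provenance watermarks try to do for images and audio.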

After this blog, my vacation begins, and I used AI to help plan it. For example, to find nice overnight stops on the way to our final destination. But you have to stay alert: ChatGPT claimed the distance between two stops was over a hundred kilometers less than what Google Maps calculated. When confronted, ChatGPT admitted it had measured as the crow flies. I’d call that artificially dumb rather than intelligent.
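For the curious: the "as the crow flies" figure ChatGPT gave me is the great-circle distance, which anyone can compute with the haversine formula. A small sketch, with approximate coordinates for Amsterdam and Paris chosen purely as an example; the road distance is always at least this, and usually quite a bit more.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle ('as the crow flies') distance in kilometers."""
    R = 6371.0  # mean Earth radius in km
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Amsterdam (approx. 52.37 N, 4.90 E) to Paris (approx. 48.86 N, 2.35 E)
d = haversine_km(52.37, 4.90, 48.86, 2.35)
print(f"{d:.0f} km as the crow flies")  # roughly 430 km; by road it is closer to 500 km
```

A hundred-kilometer gap between the two measures is entirely plausible, which is exactly why you should ask an AI assistant which distance it means.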

I hope you encounter something during your own vacation that makes you think: he should write a blog about that. Write it down or take a photo and send it to me! As long as it’s real…

The Security (b)log will return after the summer holidays.


And in the big bad world ...

 

2025-07-18

The reliable criminal

 

Image from Pixabay


Have you ever experienced being unable to work at home or in the office because your computer wouldn’t respond? Or that your children’s school or university had to close for the same reason, or that a store couldn’t sell anything? Welcome to the world of ransomware.

As we often see with technological developments, this phenomenon also started surprisingly long ago — in 1989, with the AIDS Trojan. This malware was distributed via floppy disks to participants of an AIDS conference. Victims had to send $189 by mail to Panama — but received nothing in return. In the early 2000s, there were some amateurish attempts to hide files, but the real game began in 2013 with CryptoLocker. It spread via email attachments, used strong encryption, and demanded payment in bitcoin. That became the market standard.

In the early days, you could never be sure whether, after scraping together your savings, you would actually receive the key to decrypt your files. Law enforcement agencies around the world advised against paying ransom. This affected the criminals’ income. Thus, the “reliable criminal” emerged: increasingly, you could count on being “helped” after payment. According to an estimate by Copilot, the chance of this in 2015 was about 80% (now only 60%).

Again, law enforcement urged people not to pay. Not only was there still no guarantee of receiving the decryption key, but paying also helped sustain the criminal business model — while the goal was to make this trade less profitable.

Criminals responded with double extortion: not only were your files encrypted, but they also made a copy for themselves. If you didn’t pay, your information would be published. And since everyone has something to hide, this was a successful extra incentive to pay. Around that time, there was also a shift from individuals to businesses and governments as targets, because larger sums could be demanded. Publishing customer data or trade secrets could have serious consequences.

Beyond law enforcement’s calls not to pay, there’s also a moral question: is it ethically justifiable to pay? I instinctively lean toward “no”, but I want to explore the nuances — because not paying can have serious consequences beyond the affected organization. Consider the 2021 attack on JBS Foods, the world’s largest meat processing company. The attack led to temporary closures of factories in the U.S., Canada, and Australia and disrupted the food supply. Partly for that reason, the company decided to pay no less than $11 million.

Two years earlier, Jackson County, Georgia was a victim. Police and other government services were completely paralyzed. They paid $400,000, but never officially confirmed whether they got what they paid for. That same year, around Christmas, Maastricht University in the Netherlands was hit. The €200,000 they paid turned out to be a good investment: part of it was recovered and, thanks to the rise in bitcoin’s value, had grown to €500,000 by the time it came back.

Food is a basic necessity, but if you can temporarily eat something other than meat, getting that meat processor back online may not be so urgent. If the local police are digitally blind for a while, perhaps another police force can help. And a paralyzed university — we survived that in 1969 too, when the administration building of the University of Amsterdam (the ‘Maagdenhuis’) was occupied (though that wasn’t about ransom). In short: seek alternatives rather than paying ransom.

There is a collective interest in eradicating ransomware, but everyone must participate. Some countries are working on banning ransom payments or at least requiring mandatory reporting. A ban on insurance coverage can also help discourage payment. But these measures don’t help the affected companies directly. What does help are initiatives like No More Ransom, where police and the private sector collaborate to recover decryption keys and make them freely available. We also regularly see the successes of international police cooperation. And of course, organizations must increase their own resilience by investing in awareness (especially around phishing), good detection tools, and a solid backup strategy. With all these measures, this criminal business should eventually become unprofitable. And then maybe those people can do reliable and honest work instead.

And in the big bad world…

 

2025-07-11

No clock ticks like the one at home

Image from Pixabay

My grandparents had a Frisian tail clock hanging on the wall. In the same village, my in-laws had exactly the same clock. Recently, my wife made an interesting revelation about their version.

According to that clock, time passed more slowly than in reality. They had already taken it to a clockmaker in Belgium. A cleaning didn’t help. Then someone revealed a special trick. He said the clock probably wanted to hang slightly askew. That turned out to be true. But getting it off-level was a matter of millimeters. It took weeks to find the right position. Other clocks in the house served as reference points.

I have a modern desk lamp with a built-in digital clock. My physics teacher once explained that electric clocks always show the correct time because they – if I remember correctly – tick along with the frequency of the alternating current (50 Hertz in Europe). Not so with my desk lamp clock. I have to reset it every few weeks. I usually do that when it’s two minutes fast, because then it gets too annoying. I’m always surprised that in 2025 there are still clocks that don’t keep accurate time.
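Out of curiosity, that drift is easy to quantify. Assuming the clock gains two minutes over roughly three weeks (the "three weeks" is my own estimate), a quick back-of-the-envelope calculation gives the relative error of its internal oscillator:

```python
# How inaccurate is a clock that gains two minutes in about three weeks?
gain_seconds = 2 * 60                    # two minutes fast
interval_seconds = 3 * 7 * 24 * 3600     # three weeks, in seconds

error_ppm = gain_seconds / interval_seconds * 1e6
seconds_per_day = gain_seconds / (3 * 7)

print(f"{error_ppm:.0f} ppm, about {seconds_per_day:.1f} seconds per day")
```

That works out to roughly 66 parts per million, or almost six seconds a day. A clock disciplined by the 50 Hz mains frequency would not drift like that, because grid operators correct the long-term average.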

All these clocks that don’t perform their task well need to be interpreted. “Oh right, it’s that clock, so it’s probably a bit earlier/later.” With clocks in someone else’s house, you often don’t know that. You might think you’re already too late for the train home, while you could have still caught it.

We also interpret security policy. As a security officer, I often get questions like: someone did this or that, is that actually allowed? The answer is rarely stated literally in a policy document. You have to tilt the document a bit, so to speak, to extract the right information. We always find one or more rules that apply to the situation. Sometimes you also have to want to see it. That’s where professional judgment comes in: you’re a security officer for a reason, and if you say something is or isn’t allowed, then that’s how it is – your judgment is based on your professionalism.

Over the years, I’ve seen a parade of colleagues flagged by some security system. Those notifications lead to an assessment. Is it worth taking action? Is the incident serious enough? Or is it immediately clear that it was an accident and the user had no malicious intent? I find the latter especially interesting: if it’s a report about something that could potentially have malicious intent, then you have my attention and can expect a meeting with your supervisor. They know you better than I do and may have other puzzle pieces that together paint the picture of a generally exemplary employee – or not.

In all that time, no one has ever dared to ask: where does it say that this isn’t allowed? No, they feel caught, say sorry, and promise never to do something so stupid again. Fortunately, I’ve rarely encountered anyone with bad intentions. Most of these incidents are the result of well-meaning actions that unfortunately conflict with policy. Everyone is supposed to know the law, the law says, but in practice it’s a bit different. We’re happy to help them stay within the lines.

My grandmother had a special time policy. She set the clock ten minutes ahead. That way, if she had to go somewhere, there was always the reassurance that she should have already left, but luckily still had some extra time. I always found that just as strange as clocks that decide to show a time other than the correct one.


And in the big bad world ...

2025-07-04

Your inner self

Image by Copilot

“The best inspiration comes from within.” That’s not a quote from Sun Tzu, the Chinese general from the sixth century BC, whose work The Art of War is quoted at every opportunity. No, we attribute this quote to one Patrick Borsoi from the twentieth century AD. Not Chinese, not a general, but – in all modesty – occasionally clever.

Readers sometimes ask me how I find inspiration for a blog every week. I usually answer that I observe my surroundings and often see something mundane that I can link to information security. Sometimes colleagues give me a tip, whether or not from their own daily lives. Now I’ve discovered something new: listening to myself. Literally.

I was a guest on the podcast of the KNVI, the Royal Dutch Association of Information Professionals. I was there to talk about the Security (b)log and more technical topics like phishing, AI, and quantum computing. The podcast went online on July 1, and of course, I was one of the first to listen to it. That’s quite strange, by the way, but everyone says that when they hear a recording of themselves. The point is that I heard myself say something I had never said before and didn’t even remember saying (the recording was made a month and a half earlier).

Marijn Plomp is the regular host of this podcast, and Sandra de Waart was his sidekick that day. Since my blog has security awareness as its overarching theme, Sandra asked me: “How do you actually make people aware?” Because, as she rightly pointed out, simply saying “be aware!” doesn’t help. I compared it to a traffic sign that gives a general warning of danger (a triangle with a red border and an exclamation mark in the middle). If you only see that sign, you still don’t know anything. Only if there’s an extra sign underneath explaining what the danger is will you know what to do or avoid. And here it comes. I said: “I try to be that extra sign.” By explaining why something is a risk, by clarifying it, you can make people aware. They need to understand it and even feel it.

Later in the podcast, I made a statement I’ve made more often: “I get paid to think in doom scenarios.” Just as there are people who get paid to play with Lego all day, I get to indulge in the question: what could possibly go wrong? While others revel in what a system, device, or method can do, I get to look at the dark side. That’s not always easy, as it can sometimes dampen others’ enthusiasm. Usually, that perspective on the error path is appreciated after all, because the final product improves by also considering aspects we’d rather ignore. That quote about doom thinking is, of course, a big wink, but it clearly and concisely shows that risk analyses are important – even if it’s just on the back of an envelope.

At the end of the podcast, I hear myself say that I need people as the last line of defense. Because if technology fails to avert disaster, if, for example, that one phishing email still manages to get through all the checks, then the employee whose inbox it lands in can make the difference between a healthy and a crippled organization. And with that last line of defense, we circle back a bit to Sun Tzu, who undoubtedly wrote something about that too.

Listen to the KNVI podcast. [DUTCH]


And in the big bad world...

- airlines have recently attracted a lot of attention from cybercriminals.

- even criminal organizations sometimes shut down.

- Germany wants to ban DeepSeek.

- physical and digital crime sometimes converge.

- the Dutch Ministry of Defence is also investing in AI and cloud services. [DUTCH]

- the police will now also respond to digital crime reports. [DUTCH]

- a civil servant was punished for emailing confidential data to his private address. [DUTCH]

   

2025-06-27

Russian roulette

Image from Pixabay

Sometimes you catch a news item on the radio that makes you think, “Huh? I must have misheard that.” Like the report that Pavel Durov is leaving his fortune to all of his one hundred children.

The man turns out to have two 'real' children; for the remaining 104, he was only involved as a sperm donor. Fortunately, those children need not fear missing out, even with that many half-siblings. Each of them can expect over 160 million dollars, based on their father/donor’s current bank balance. They’ll probably have to wait a while, though, as Durov is only forty and very much alive. His name appears on impressive lists: the 120th richest person in the world, the richest expat in the United Arab Emirates, the most powerful entrepreneur in Dubai—those kinds of things.

Durov’s portfolio includes no fewer than four passports: he is a citizen of Russia (born in the Soviet Union), Saint Kitts and Nevis (islands in the Caribbean Sea where he supported the sugar industry with a quarter million dollars), the United Arab Emirates, and France. According to Paul du Rove, as he calls himself in that country, the application for the latter passport was an April Fool’s joke that was accidentally approved via a special procedure. But it did make him an EU citizen as well.

All these facts (and many more) can also be found on Wikipedia, but why am I bringing them up here? Because Pavel Valeryevich Durov is also the spiritual father and founder of Telegram, the messaging service akin to WhatsApp and Signal. And Telegram is not exactly a service beloved by security and privacy experts. I’ll explain why. Keep in mind that the term cryptography, as used here, has nothing to do with cryptocurrencies like Bitcoin.

When you exchange messages with someone, you generally don’t want others—people, companies, or governments—to be able to read along. That’s why messages are encrypted. This encryption ensures that only you and your conversation partner can read the messages, because only you two have the corresponding keys (this is called end-to-end encryption). The mechanism that handles the encryption is called a cryptographic protocol, which in turn uses cryptographic algorithms. Typically, internationally recognized standards are used, which have been extensively reviewed by many different experts. That makes them reliable. At Telegram, they thought it better to create their own crypto protocol. In cryptography, that’s considered a cardinal sin, because it’s likely you’ll overlook your own mistakes. Their protocol is also not fully public, making it difficult to scrutinize. Moreover, end-to-end encryption is not enabled by default; with other messaging apps, it is.
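To see why rolling your own crypto is considered a cardinal sin, consider this deliberately broken toy "cipher" (a repeating-key XOR, purely illustrative and emphatically not something anyone should use): identical plaintext blocks that line up with the key produce identical ciphertext blocks, so patterns in the message survive the "encryption" for any attacker to spot.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """A naive repeating-key XOR. Trivially breakable; for demonstration only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# A message with repetition, as real messages often have.
msg = b"ATTACK AT DAWN  " * 2
ct = xor_cipher(msg, b"KEY!")

# The two identical 16-byte plaintext blocks encrypt to identical
# ciphertext blocks, leaking the message's structure:
print(ct[:16] == ct[16:32])  # True
```

Flaws like this are precisely what decades of public review of standard algorithms and protocols are meant to catch, and what a home-made, partly closed design like Telegram's largely forgoes.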

Telegram and its founders have a turbulent history. Durov left Russia after disputes over his previous company, VKontakte (the Russian Facebook). In short: he had refused to hand over personal information about protesters to the authorities. In 2014, he left Russia and founded Telegram. According to Durov, Telegram turned a profit for the first time ten years later, with revenues exceeding one billion dollars. How Telegram was funded in the meantime remains unclear.

Despite the disputes in Russia, we don’t know whether backdoors have been built into Telegram. From what I can tell, Durov has a decent track record of resisting grasping authorities. On the global stage of espionage, however, you can never be sure whether that’s just for show and whether deals have been made behind the scenes. The platform is popular among criminals for conducting business, perhaps because Durov and co. don’t stand in their way. In this context, France arrested him last year (and released him on bail). In any case, the lack of transparency and, frankly, the Russian roots make Telegram a platform I strongly advise against using. Use Signal instead—it has a strong reputation in both cryptography and privacy. That said, it’s an American product and thus subject to U.S. law, which gives law enforcement various powers to demand data. However, they can only hand over data they have; the content of your messages is end-to-end encrypted and therefore reasonably safe. WhatsApp works the same way but has a poorer reputation for privacy because it monetizes your profile and behavior.

Even if you have a hundred children and an above-average bank balance, that doesn’t make you a diligent father. I see too many red flags to trust Durov and his Telegram.


And in the big bad world…

 

2025-06-20

At the theatre

Picture from author

The Red Hall of the Meervaart Theatre in Amsterdam looks empty in the photo. Just a few minutes later, it was filled with around three hundred employees from the National Collection Centre (LIC) of the Dutch Tax Administration. And that laptop in the picture? That’s mine.

A few months ago, the organizers of this annual event got excited about my blog posts. Probably under the slightly risky assumption that “if he can write in an engaging way, he can probably speak that way too,” they invited me to take part in the program. So, on Tuesday, I braved the railway strike and headed to the capital. I had three missions: a presentation in the breakout program before lunch, a plenary talk in that big hall after lunch, and at the end of the day, the same story from the morning, but for a different group of about forty people. The colleagues who came to hear me in Room 9 were 92% women. Someone like me, from IT and security, rarely sees that many women together in a work setting. They were a fantastic, engaged audience and gave me a great glimpse into their world.

I mainly owed the invitation to my blog about Girls’ Day. (Quick recap: for a presentation to high school girls, I googled their names and showed them what I — an amateur in that field — had managed to find out.) The LIC folks wanted to hear that story too. There was one difference: on Girls’ Day, my talk was about the girls in the room, while at the Meervaart, it was about those same girls — so, not about the actual audience itself (and of course, I didn’t mention any names or overly sensitive details in either presentation). Still, the tension was visible on the faces in the Red Hall. Especially the revelation that presentations made with the free version of PowerPoint alternative Prezi are publicly available online triggered an audible “Oh!” from the audience. A video showing a ‘psychic’ effortlessly uncovering personal details about his clients wrapped it up nicely.

My other presentation was titled Phish & Chats and covered phishing, chat apps, and artificial intelligence. The first part was a nostalgia trip for many: “Who of you has never received a phishing email?” No hands. “Hey Dad, this is my new phone number.” Murmurs in the room. English, with an Indian accent: “Hello, this is the Microsoft Helpdesk.” Nods all around. Naturally, I also gave them some tools to recognize phishing — because on a bad day, any individual employee might be the organization’s last line of defense when a phishing email lands in their inbox. And in that moment, you really want your colleague to respond appropriately.

The chat apps segment covered the pros and cons of various platforms. In short: don’t use WhatsApp for work due to privacy concerns, and don’t use Telegram at all. For internal government communication in the Netherlands, Webex is available. Signal is also an excellent choice.

Artificial intelligence (AI) also fell under the “Chats” part of Phish & Chats, because all those handy tools like ChatGPT, Gemini, and Copilot are smart chatbots — you can literally chat with them. I discussed how they work, how I view them from a professional standpoint, and what our organization does and doesn’t allow (allowed: Copilot Chat; not allowed: all others).

For me, the day was a warm bath of thumbs-ups, compliments, and thank-yous. And I hope that those who haven’t yet started reading the Security (b)log will now begin — not for me, but to become familiar with what’s happening in information security and their own role in it. Soon, I’ll be visiting a team closer to home, and after the summer, I’ll be back at our IT auditors’ annual conference. Yesterday, we discussed potential topics, and I’ll be working on finding a connecting thread in the coming weeks. In the meantime, I’ll also be a guest on a podcast. But more on that later.


And in the big bad world…

2025-06-13

The Hague brought to a standstill

Image from Pixabay

By now, you’ve probably heard, at least, if you live in the Netherlands: in just over a week, the city of The Hague will become an impenetrable fortress.

People living and working anywhere near the World Forum conference center have already been dealing with the disruptions caused by the largest security operation in history. But just like with an iceberg, what you see is only a fraction of the whole picture.

The last event of this scale was the Nuclear Security Summit in 2014, which also brought dozens of world leaders to that same conference center. In the eleven years since, the threat landscape—especially in terms of cybersecurity—has changed dramatically. Attack methods have become more sophisticated, and so have the people behind them. Much more sophisticated. And cunning. Which is troubling, because as an ordinary citizen, there’s little you can do to defend yourself.

“I’m just a regular person—what does this NATO summit have to do with me?” I hear you think. And yes, most of us won’t be directly involved. But that doesn’t mean you won’t be affected. In fact, you might be—without even realizing it.

Here’s why. Major events like this act as a magnet for what we broadly call malicious actors. Just like pickpockets flock to crowded markets, cybercriminals and spies are drawn to high-profile global gatherings. They’re after three things: money, information, and influence. The first is mostly the domain of criminals, though some rogue states aren’t above it either (looking at you, North Korea).

Stealing information is typically associated with state actors from countries like Russia, China, and Iran (plus a few others not on the public list). But don’t underestimate the criminals here either: ransomware attacks not only paralyze organizations but also steal data, which they then threaten to publish unless a ransom is paid. That increases their chances of getting paid.

Influence can be exerted in various ways. One is through disinformation—shaping public opinion, or even swaying the views of summit attendees. Some heads of state are surprisingly susceptible to such manipulation. Another tactic is disrupting the summit itself, throwing off schedules or even derailing the entire event.

Whatever the motive, these activities often start in the same place: phishing. Around events like this, phishing attempts spike—often themed around the event. You might get an email that looks like it’s from the City of The Hague: “Are you experiencing disruptions due to the NATO summit, such as being unable to get to work? Click here to apply for compensation.” Malicious actors know they’re more likely to succeed if they strike a nerve and dangle the promise of money.

Regular phishing is like shooting with a shotgun: blast it out to as many people as possible and see who bites. But there’s also targeted phishing—spearphishing—where a specific individual is the target and the message is custom-crafted. Expect to see more of that in the context of the NATO summit too.

I do wonder how they manage it in the Vatican. The Pope passed away, and five days later his funeral was held—with many dignitaries in attendance, including the U.S. President. Meanwhile, the Netherlands has been preparing for the NATO summit for months. Maybe it’s time for an educational field trip to Rome.

 

And in the big bad world…
