2025-09-26

Red Square

Image from Pixabay

You rent a small plane, fly it to Moscow, and park it on Red Square. Back in 1987, 18-year-old German Mathias Rust embarrassed the Soviet Union in spectacular fashion.


At the time, the Iron Curtain was still firmly in place, and Soviet air defense was ruthless. Four years earlier, Korean Air Flight 007, a Boeing 747 en route from New York to Seoul, had made a navigational error and entered restricted Soviet airspace. It was mercilessly shot down, killing all 269 people on board.

Naturally, the world was outraged. Rust benefited from that outrage, as the Red Army had become more cautious about potentially civilian flights. He was detected by air defense and even shadowed by a MiG fighter jet, but no permission was given to shoot him down. Communication between military units was apparently lacking, because units further along his route had no idea who he was and assumed the radar blip was a student pilot who had forgotten to turn on his transponder (a device that identifies an aircraft). Elsewhere, they thought it was a rescue helicopter or a training aircraft.

And so it happened that Rust circled over the Kremlin on the evening of May 28, 1987, and landed his Cessna in the heart of Russia. He did so as a peace activist, and according to historians, his stunt accelerated the fall of the Soviet Union by giving Soviet leader Gorbachev arguments to dismiss political and especially military opponents. Rust's hero status quickly faded: after fifteen months in prison he returned to Germany, where the media portrayed him as eccentric and mentally unstable, and he repeatedly got into legal trouble.

Let's pause to consider the Soviet defense. Their radar spotted Rust within minutes, yet it took an hour before a fighter jet joined him, and it did nothing. Despite the Cessna being clearly recognizable as a West German aircraft, the pilots simply flew off, allegedly due to confusion caused by a plane crash the day before. At every point where Rust was noticed, incorrect assumptions led to a potential threat being ignored.

And from the Soviet perspective, it certainly was a threat. How would our own defense react if a Russian drone appeared over our parliament buildings? Hopefully, that’s the wrong question—ideally, such a drone would be intercepted long before reaching that point, even far beyond our borders. But if an (armed) drone did make it that far, it would pose a serious threat to national leadership. That’s likely how it felt in the Kremlin, too. No wonder Gorbachev could easily dismiss hundreds of top military officials. They had failed.

This historical tale offers lessons beyond the military domain. First: you need oversight. If a threat is repeatedly detected but each time dismissed as unimportant and left unreported, its true scale remains invisible. An example from my world: a virus on a few computers that gets neutralized by antivirus software is no big deal. But if infections multiply, you're facing an outbreak and need different measures. And that requires visibility.
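The outbreak example can be made concrete. Here is a minimal sketch of such central counting; the threshold, the time window, and the event format are assumptions for illustration, not taken from any real antivirus product:

```python
from datetime import datetime, timedelta

OUTBREAK_THRESHOLD = 5          # assumed: 5+ infected hosts within the window
WINDOW = timedelta(hours=24)    # assumed reporting window

def assess(events, now):
    """Each event is (timestamp, hostname) for a neutralized infection.
    Individually harmless, but counted together they may reveal an outbreak."""
    recent = [host for ts, host in events if now - ts <= WINDOW]
    infected_hosts = set(recent)
    if len(infected_hosts) >= OUTBREAK_THRESHOLD:
        return f"OUTBREAK: {len(infected_hosts)} hosts infected in 24h, escalate"
    return f"routine: {len(infected_hosts)} isolated infection(s), keep monitoring"

now = datetime(2025, 9, 26, 12, 0)
events = [(now - timedelta(hours=h), f"pc-{n}") for h, n in
          [(1, 1), (2, 2), (3, 3), (5, 4), (8, 5), (30, 6)]]
print(assess(events, now))  # five hosts in 24 hours: this flags an outbreak
```

The point is not the code but the architecture: each individual report is boring, and only the place where they all come together can see the pattern.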

Making assumptions (“it’s probably a rescue helicopter”) is also dangerous. Was there a lack of clear instructions, or just indifference? Again, in the realm of cybersecurity: if you receive a suspicious email and yet assume it’s fine, and then click the link or open the attachment, you’re making the same mistake as those Soviet radar operators—you see the threat but choose to ignore it.

If Rust’s stunt truly accelerated the fall of the Soviet Union, it’s a prime example of a small action with massive consequences. Today, we see that with ransomware: one careless click by a single employee can bring down an entire organization.

Let’s make sure the lessons from Rust’s flight don’t, well, rust away. Protect your own Red Square.

And in the big bad world…

 

2025-09-19

Beyond Customs

Image from Pixabay

"Beyond Customs I bought a watch," said Merlijn Kaiser in the novel Magnus by Arjen Lubach. The book is highly recommended, but this sentence deserves some attention.

Merlijn was at Amsterdam Airport Schiphol, taking a flight to Stockholm. On such a flight, you encounter only one authority that performs a check: security. Apologies for the vague term, but that's what the airport itself calls it. It's the inspection of you and your hand luggage, checking whether you're carrying anything that could endanger the flight, such as scissors or explosives, to name just two.

On flights to destinations outside the Schengen area (roughly: outside Europe), you also encounter the Royal Netherlands Marechaussee (military police), who check your passport. But that's not Customs. You almost never encounter Customs when departing the Netherlands; they're only interested in goods traffic. So, dear Merlijn, there is no "beyond Customs" when you leave the Netherlands. You only encounter Customs when returning from abroad. You know, after you've picked up your luggage, just before the sliding doors where people are waiting to welcome you.

It's not uncommon for responsibilities to be confused. In the past, many organizations thought that information security was something the IT department was responsible for. And the IT department, in turn, thought the security team should handle it all. Strangely enough, that was also the time when backups weren't made for certain systems because the client ("the business") hadn't asked for them. One side assumed everything would be taken care of, while the other side strictly followed the assignment, and nothing more.

Now it's the opposite. The business largely realizes that it is responsible for securing its own environment, and that it may and must set requirements. At the same time, many standard measures have been introduced. When you buy a car, you don't need to demand that it comes with brakes, seat belts, and airbags; the law has already arranged that for you. The same applies to information security: there are laws and regulations that describe the minimum requirements a system must meet. Of course, an organization or internal client can set higher requirements, if a risk analysis shows it's necessary. Because you never take measures just for the sake of it.

That doesn't mean ad hoc measures can't be taken. This can happen, for example, when security professionals encounter a dangerous situation. While we're not responsible for "handling everything," we are responsible for ensuring the organization is safe. In doing so, we sometimes apply professional judgment: a nice term that essentially means "this must be done now, because I, in my role, judge it to be necessary." And you can trust that this judgment is based on expertise.

Back to Merlijn Kaiser. Where did he actually buy that watch? Schiphol has two major shopping areas: one in the public part where you enter the airport buildings, and one beyond security. The latter is where he bought the watch, without seeing a single customs officer. But still, it's a great book.


In the big bad world ...

 

2025-09-05

Champions

Photo by author

 

I love this traffic sign. In other European countries, the warning for playing children is a neat triangle, just like all other warning signs. But in Croatia, they literally thought outside the box.

This sign powerfully expresses what it's about: playing children are unpredictable and can suddenly run into the street – breaking through the boundaries of their safe environment. The sign is also large and has a striking background color. You’ll find it in every village and city.

If you look under the sign, you'll see an example of the opposite: a sign that raises questions. It prohibits vehicles over five tons from driving here; that's clear enough. But there's a sub-sign indicating that the rule only applies to trucks. Now I challenge you to name a road vehicle, other than a truck, that weighs more than five thousand kilograms.

But since I felt a bit unsure, I checked with AI: 'Are there road vehicles, not being trucks, that weigh more than 5 tons?' And yes indeed, my view was too narrow: the universe doesn’t consist solely of regular cars and trucks, but also of more exotic vehicles on our roads: heavy SUVs and pickup trucks, large RVs, special vehicles (Copilot mentions mobile medical units, mobile offices, and film production vehicles), and agricultural and construction vehicles. These are not trucks, but they are too heavy for this road. Unless that sub-sign is present.

Then you naturally wonder what the actual issue is. Apparently, the road (or is it the bridge on the left in the photo?) shouldn’t be overloaded, but a heavy load only seems to be a problem if caused by a truck. In the past, you’d have had a good discussion about such matters with colleagues, but well, remote work, right? So I asked AI again and it turns out that the weight itself – or as Copilot correctly calls it: the mass – doesn’t have to be the problem. Maybe they want to reduce noise pollution or improve traffic safety. I’ll leave out other AI arguments here because I find them less convincing.

Two signs, two totally different experiences. One causes a wow-effect and was the reason for taking this photo, the other raises questions and only stood out when I looked closely while writing this blog. Is that a problem? I don’t think so. I’m not the target audience for the second sign; my driver’s license only goes up to 3.5 tons. While driving, I wouldn’t even notice it. The first sign, however, should speak to every driver. No one wants to run over a child.

It works the same way in information security. Some things are important for everyone, like practicing good password hygiene and being alert to phishing. The importance of other matters depends on who you are. A network administrator must ensure no one gets uncontrolled access to the company network, while someone in finance must be careful not to pay fake invoices. That means we need to tailor our awareness efforts to the audience. But unfortunately, information security professionals in many organizations are too busy to differentiate their awareness activities. And so we end up with well-intentioned but sometimes too generic education.

How can we break through that? If hiring extra staff isn’t an option, maybe we can enlist help from the target groups themselves. Often, there are already people who are quite aware of the specific risks their team faces. They’re eager to share their knowledge and skills with their direct colleagues. We can support them by giving them a certain status. In some organizations, they’re called security champions. I think that’s a great title. They are our champions in the field. Let’s cherish and support them.

Will you be our first security champion?

Next week, due to a busy schedule, there may be no Security (b)log.

 

And in the big bad world …

2025-08-29

New friends

Image from Unsplash

It was something between a conference and a summer camp: there were lots of people and it was a bit chaotic. On my way from one presentation to the next workshop, I was harassed by a few teenagers; a small scuffle even broke out, during which I put them in their place.

As I continued on my way, I noticed that the boy who had been the most aggressive had slipped a note into my pocket. I wanted to read it only once I was out of their sight, but that was a big mistake. We will never know what message that boy wanted to convey. Because at that moment, the alarm clock went off. And no matter how hard I tried to pick up the thread again, it didn’t work.

Of course, you start fantasizing about the meaning of such a dream, and especially what would have been written on that note if it had been real. It was probably a cry for help, and the scuffle was only meant to get that note into my pocket. Maybe I’ve watched too many movies like that.

Anyway, let’s continue with the theme that someone needs help but apparently can’t ask for it openly. In my work, I don’t encounter that so directly – people with a security issue simply ask for advice; sometimes even when they suspect the expected answer will mean extra work or require them to abandon their current way of working. Fortunately, colleagues usually understand the importance of security measures.

Outside of my direct work, situations like that can certainly occur. Our organization has a huge impact on what you and I have in our wallets. But also on the profits of legal or illegal trade. And when it comes to money, crime is always lurking. That doesn’t always mean people spontaneously start doing criminal things. But because we manage an insane amount of data about everything and everyone, professional criminals sometimes set their sights on our employees.

Here’s a snippet from an NOS news report dated August 20: "An Amsterdam municipal official who is in custody on suspicion of corruption and complicity in explosions has sold data on a large scale to criminals, according to the Public Prosecution Service. Those criminals then carried out attacks or caused explosions at dozens of addresses he had provided."

It seems this official independently set up a little business selling addresses. However, often it works the other way around: people with access to certain data are approached by criminals. And that doesn’t always happen in a straightforward way. No, often they first try to become friends, and maybe at some point they help you with something. They’re looking for a weak spot in you, something you could really use help with. And you get that help from your new friend. A little while later, a small favor is asked. "Look at this, someone hit my car! Luckily, I just managed to note the license plate. The police are too busy to look into it, but for the insurance I really need to know who that jerk is. Hey, you have access to that kind of information, you’ll help me out, right?"

They’ll probably come back to you more often, and then you can’t get out of it. You’ve done something that you shouldn’t have, and now you’re stuck. You don’t want to pass on information anymore, but your new ‘friend’ won’t accept that. If you don’t help him anymore, your boss might find out what you’ve done…

That situation may seem hopeless, but help is always available. And luckily, you don’t have to appear in my dreams for that. There are various internal channels you can turn to. Look for information on subversive crime. Do something before it’s truly too late. And for the vast majority who are not affected by this: remember that you never know who will cross your path in the future.


And in the big bad world...

2025-07-25

Artificial Integrity

Picture AI-generated (Copilot)

High time for a summery blog, although the inspiration doesn’t come from the current weather. Fortunately, a colleague gave me a great tip.

He showed me two short videos. The first one shows him and his girlfriend sitting next to each other. They turn toward each other and kiss. In the second video, he’s alone on a rock by the sea, and four blonde, long-haired, and rather scantily clad women slide into view and, well, caress him. He lifts his head in delight.

Why does he share that footage? We don’t have a team culture where we brag about such conquests. No, he showed me this because it’s not real. Oh, it starts with a real photo, just a nice vacation snapshot. Then the AI app PixVerse turns it into a video. You can choose from a whole range of templates—far more than the two examples mentioned: you can have someone board a private jet, cuddle with a polar bear or tiger, turn into Batman, have your hair grow explosively, get slapped in the face, and so on. With many of these videos, viewers will immediately realize they’re fake. But with my colleague’s videos, it’s not so obvious.

That’s exactly why the European AI Act requires that content created by artificial intelligence be labeled as such. Imagine if his girlfriend saw the second video without any explanation. Depending on temperament and mutual trust, that could easily lead to a dramatic scene. PixVerse is mainly aimed at having fun, but you can imagine how such tools could be used for very different purposes.

Take blackmail, for instance. You generate a video of someone in a compromising situation, threaten to release it, and hold out your hand. And like any good criminal, they won’t necessarily follow the law and label it as fake. Now, PixVerse’s quality isn’t immediately threatening: if you look closely, you can tell. Fingers remain problematic for AI, and eyes too. But still, if you’re not expecting to be fooled, you won’t notice—and you only see it once you’re looking for it. I see a criminal business model here.

It seems PixVerse mainly targets young people, judging by the free templates available. My colleague’s videos were also made by a child. On the other hand, you can subscribe to various plans, ranging from €64.99 to €649.99 per year. That’s well above pocket money level for most. If you do get a subscription, the watermark disappears from your videos—in other words, no more hint that AI was involved.

One of the pillars of information security is integrity: the accuracy and completeness of data. The concept was originally conceived with databases and other computer files in mind: it would be wrong if a house number or an amount were incorrect, or if data were missing. But you can easily apply the principle to images and audio, too. If you can no longer trust them, their integrity is no longer guaranteed. Not to mention the (personal) integrity of those who abuse such tools.
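For files and database records, integrity is commonly guarded with a cryptographic hash: a fingerprint that changes completely at the slightest modification. A minimal sketch (the invoice text is just an invented example):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 fingerprint of a piece of data as hex."""
    return hashlib.sha256(data).hexdigest()

original = b"invoice: pay EUR 100 to account NL00BANK0123456789"
tampered = b"invoice: pay EUR 900 to account NL00BANK0123456789"

# A single changed byte yields a completely different fingerprint, so a
# stored hash exposes any later modification of the data.
print(sha256_of(original) == sha256_of(original))  # True: unchanged
print(sha256_of(original) == sha256_of(tampered))  # False: integrity lost
```

Techniques like this work fine for detecting that a file changed; the harder, newer problem is AI content that was never "the original" to begin with.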

After this blog, my vacation begins, and I used AI to help plan it. For example, to find nice overnight stops on the way to our final destination. But you have to stay alert: ChatGPT claimed the distance between two stops was over a hundred kilometers less than what Google Maps calculated. When confronted, ChatGPT admitted it had measured as the crow flies. I’d call that artificially dumb rather than intelligent.
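The gap ChatGPT glossed over is the one between road distance and great-circle distance. You can check the "as the crow flies" number yourself with the haversine formula; a small sketch, where Amsterdam to Paris is an illustrative pair with rounded coordinates, not the actual stops of my trip:

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle ('as the crow flies') distance between two coordinates."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Example: Amsterdam to Paris (coordinates rounded)
print(f"straight line: {haversine_km(52.37, 4.90, 48.86, 2.35):.0f} km")
```

For this pair the straight line comes out around 430 km, while the route over roads is roughly 500 km, which is exactly the kind of difference that tripped up my trip planning.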

I hope you encounter something during your own vacation that makes you think: he should write a blog about that. Write it down or take a photo and send it to me! As long as it’s real…

The Security (b)log will return after the summer holidays.


And in the big bad world ...

 

2025-07-18

The reliable criminal

 

Image from Pixabay


Have you ever experienced being unable to work at home or in the office because your computer wouldn’t respond? Or that your children’s school or university had to close for the same reason, or that a store couldn’t sell anything? Welcome to the world of ransomware.

As we often see with technological developments, this phenomenon also started surprisingly long ago — in 1989, with the AIDS Trojan. This malware was distributed via floppy disks to participants of an AIDS conference. Victims had to send $189 by mail to Panama — but received nothing in return. In the early 2000s, there were some amateurish attempts to hide files, but the real game began in 2013 with CryptoLocker. It spread via email attachments, used strong encryption, and demanded payment in bitcoin. That became the market standard.

In the early days, you could never be sure whether, after scraping together your savings, you would actually receive the key to decrypt your files. Law enforcement agencies around the world advised against paying ransom. This affected the criminals’ income. Thus, the “reliable criminal” emerged: increasingly, you could count on being “helped” after payment. According to an estimate by Copilot, the chance of this in 2015 was about 80% (now only 60%).

Again, law enforcement urged people not to pay. Not only was there still no guarantee of receiving the decryption key, but paying also helped sustain the criminal business model — while the goal was to make this trade less profitable.

Criminals responded with double extortion: not only were your files encrypted, but they also made a copy for themselves. If you didn’t pay, your information would be published. And since everyone has something to hide, this was a successful extra incentive to pay. Around that time, there was also a shift from individuals to businesses and governments as targets, because larger sums could be demanded. Publishing customer data or trade secrets could have serious consequences.

Beyond law enforcement’s calls not to pay, there’s also a moral question: is it ethically justifiable to pay? I instinctively lean toward “no”, but I want to explore the nuances — because not paying can have serious consequences beyond the affected organization. Consider the 2021 attack on JBS Foods, the world’s largest meat processing company. The attack led to temporary closures of factories in the U.S., Canada, and Australia and disrupted the food supply. Partly for that reason, the company decided to pay no less than $11 million.

Two years earlier, Jackson County, Georgia, was a victim. Police and other government services were completely paralyzed. The county paid $400,000, but never officially confirmed whether it got what it paid for. That same year, around Christmas, Maastricht University in the Netherlands was hit. The €200,000 it paid turned out to be a good investment: part of it was later recovered and, thanks to the rise in bitcoin's value, was worth €500,000 by then.

Food is a basic necessity, but if you can temporarily eat something other than meat, getting that meat processor back online may not be so urgent. If the local police are digitally blind for a while, perhaps another police force can help. And a paralyzed university — we survived that in 1969 too, when the administration building of the University of Amsterdam (the ‘Maagdenhuis’) was occupied (though that wasn’t about ransom). In short: seek alternatives rather than paying ransom.

There is a collective interest in eradicating ransomware, but everyone must participate. Some countries are working on banning ransom payments or at least requiring mandatory reporting. A ban on insurance coverage can also help discourage payment. But these measures don’t help the affected companies directly. What does help are initiatives like No More Ransom, where police and the private sector collaborate to recover decryption keys and make them freely available. We also regularly see the successes of international police cooperation. And of course, organizations must increase their own resilience by investing in awareness (especially around phishing), good detection tools, and a solid backup strategy. With all these measures, this criminal business should eventually become unprofitable. And then maybe those people can do reliable and honest work instead.

And in the big bad world…

 

2025-07-11

No clock ticks like the one at home

Image from Pixabay

My grandparents had a Frisian tail clock hanging on the wall. In the same village, my in-laws had exactly the same clock. Recently, my wife made an interesting revelation about their version.

According to that clock, time passed more slowly than in reality. They had already taken it to a clockmaker in Belgium. A cleaning didn’t help. Then someone revealed a special trick. He said the clock probably wanted to hang slightly askew. That turned out to be true. But getting it off-level was a matter of millimeters. It took weeks to find the right position. Other clocks in the house served as reference points.

I have a modern desk lamp with a built-in digital clock. My physics teacher once explained that electric clocks always show the correct time because they – if I remember correctly – tick along with the frequency of the alternating current (50 Hertz in Europe). Not so with my desk lamp clock. I have to reset it every few weeks. I usually do that when it’s two minutes fast, because then it gets too annoying. I’m always surprised that in 2025 there are still clocks that don’t keep accurate time.

All these clocks that don’t perform their task well need to be interpreted. “Oh right, it’s that clock, so it’s probably a bit earlier/later.” With clocks in someone else’s house, you often don’t know that. You might think you’re already too late for the train home, while you could have still caught it.

We also interpret security policy. As a security officer, I often get questions like: someone did this or that, is that actually allowed? The answer is rarely stated literally in a policy document. You have to tilt the document a bit, so to speak, to extract the right information. We always find one or more rules that apply to the situation. Sometimes you also have to want to see it. That’s where professional judgment comes in: you’re a security officer for a reason, and if you say something is or isn’t allowed, then that’s how it is – your judgment is based on your professionalism.

Over the years, I’ve seen a parade of colleagues flagged by some security system. Those notifications lead to an assessment. Is it worth taking action? Is the incident serious enough? Or is it immediately clear that it was an accident and the user had no malicious intent? I find the latter especially interesting: if it’s a report about something that could potentially have malicious intent, then you have my attention and can expect a meeting with your supervisor. They know you better than I do and may have other puzzle pieces that together paint the picture of a generally exemplary employee – or not.

In all that time, no one has ever dared to ask: where does it say that this isn’t allowed? No, they feel caught, say sorry, and promise never to do something so stupid again. Fortunately, I’ve rarely encountered anyone with bad intentions. Most of these incidents are the result of well-meaning actions that unfortunately conflict with policy. Everyone is supposed to know the law, the law says, but in practice it’s a bit different. We’re happy to help them stay within the lines.

My grandmother had a special time policy. She set the clock ten minutes ahead. That way, if she had to go somewhere, there was always the reassurance that she should have already left, but luckily still had some extra time. I always found that just as strange as clocks that decide to show a time other than the correct one.


And in the big bad world ...
