2025-07-11

No clock ticks like the one at home

Image from Pixabay

My grandparents had a Frisian tail clock hanging on the wall. In the same village, my in-laws had exactly the same clock. Recently, my wife shared an interesting revelation about their version.

According to that clock, time passed more slowly than in reality. They had already taken it to a clockmaker in Belgium. A cleaning didn’t help. Then someone revealed a special trick. He said the clock probably wanted to hang slightly askew. That turned out to be true. But getting it off-level was a matter of millimeters. It took weeks to find the right position. Other clocks in the house served as reference points.

I have a modern desk lamp with a built-in digital clock. My physics teacher once explained that electric clocks always show the correct time because they – if I remember correctly – tick along with the frequency of the alternating current (50 Hertz in Europe). Not so with my desk lamp clock. I have to reset it every few weeks. I usually do that when it’s two minutes fast, because then it gets too annoying. I’m always surprised that in 2025 there are still clocks that don’t keep accurate time.
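Out of curiosity, the size of that error is easy to estimate. A back-of-the-envelope sketch (assuming the clock gains its two minutes over roughly three weeks, which is my guess, not a measurement):

```python
# Estimate the relative frequency error of a clock that gains
# about two minutes over roughly three weeks (assumed figures).
gain_seconds = 2 * 60            # two minutes fast
period_seconds = 21 * 24 * 3600  # ~three weeks

relative_error = gain_seconds / period_seconds
print(f"relative error: {relative_error:.2e}")           # ~6.6e-05
print(f"parts per million: {relative_error * 1e6:.0f}")  # ~66 ppm
```

That is around 66 parts per million, noticeably worse than the roughly ±20 ppm commonly quoted for cheap quartz watch crystals, let alone a clock disciplined by the mains frequency.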

All these clocks that don’t perform their task well need to be interpreted. “Oh right, it’s that clock, so it’s probably a bit earlier/later.” With clocks in someone else’s house, you often don’t know that. You might think you’re too late for the train home, when in fact you could still have caught it.

We also interpret security policy. As a security officer, I often get questions like: someone did this or that, is that actually allowed? The answer is rarely stated literally in a policy document. You have to tilt the document a bit, so to speak, to extract the right information. We always find one or more rules that apply to the situation. Sometimes you also have to want to see it. That’s where professional judgment comes in: you’re a security officer for a reason, and if you say something is or isn’t allowed, then that’s how it is – your judgment is based on your professionalism.

Over the years, I’ve seen a parade of colleagues flagged by some security system. Those notifications lead to an assessment. Is it worth taking action? Is the incident serious enough? Or is it immediately clear that it was an accident and the user had no malicious intent? I find the latter especially interesting: if it’s a report about something that could potentially have malicious intent, then you have my attention and can expect a meeting with your supervisor. They know you better than I do and may have other puzzle pieces that together paint the picture of a generally exemplary employee – or not.

In all that time, no one has ever dared to ask: where does it say that this isn’t allowed? No, they feel caught, say sorry, and promise never to do something so stupid again. Fortunately, I’ve rarely encountered anyone with bad intentions. Most of these incidents are the result of well-meaning actions that unfortunately conflict with policy. Everyone is supposed to know the law, the law says, but in practice it’s a bit different. We’re happy to help them stay within the lines.

My grandmother had a special time policy. She set the clock ten minutes ahead. That way, if she had to go somewhere, there was always the reassurance that she should have already left, but luckily still had some extra time. I always found that just as strange as clocks that decide to show a time other than the correct one.


And in the big bad world ...

2025-07-04

Your inner self

Image by Copilot

“The best inspiration comes from within.” That’s not a quote from Sun Tzu, the Chinese general from the sixth century BC, whose work The Art of War is quoted at every opportunity. No, we attribute this quote to one Patrick Borsoi from the twentieth century AD. Not Chinese, not a general, but – in all modesty – occasionally clever.

Readers sometimes ask me how I find inspiration for a blog every week. I usually answer that I observe my surroundings and often see something mundane that I can link to information security. Sometimes colleagues give me a tip, whether or not from their own daily lives. Now I’ve discovered something new: listening to myself. Literally.

I was a guest on the podcast of the KNVI, the Royal Dutch Association of Information Professionals. I was there to talk about the Security (b)log and more technical topics like phishing, AI, and quantum computing. The podcast went online on July 1, and of course, I was one of the first to listen to it. That’s quite strange, by the way, but everyone says that when they hear a recording of themselves. The point is that I heard myself say something I had never said before and didn’t even remember saying (the recording was made a month and a half earlier).

Marijn Plomp is the regular host of this podcast, and Sandra de Waart was his sidekick that day. Since my blog has security awareness as its overarching theme, Sandra asked me: “How do you actually make people aware?” Because, as she rightly pointed out, simply saying “be aware!” doesn’t help. I compared it to a traffic sign that gives a general warning of danger (a triangle with a red border and an exclamation mark in the middle). If you only see that sign, you still don’t know anything. Only if there’s an extra sign underneath, explaining what the danger is, will you know what to do or what to avoid. And here it comes. I said: “I try to be that extra sign.” By explaining why something is a risk, by clarifying it, you can make people aware. They need to understand it and even feel it.

Later in the podcast, I made a statement I’ve made more often: “I get paid to think in doom scenarios.” Just as there are people who get paid to play with Lego all day, I get to indulge in the question: what could possibly go wrong? While others revel in what a system, device, or method can do, I get to look at the dark side. That’s not always easy, as it can sometimes dampen others’ enthusiasm. Usually, that perspective on the error path is appreciated after all, because the final product improves by also considering aspects we’d rather ignore. That quote about doom thinking is, of course, a big wink, but it clearly and concisely shows that risk analyses are important – even if it’s just on the back of an envelope.

At the end of the podcast, I hear myself say that I need people as the last line of defense. Because if technology fails to avert disaster, if, for example, that one phishing email still manages to get through all the checks, then the employee whose inbox it lands in can make the difference between a healthy and a crippled organization. And with that last line of defense, we circle back a bit to Sun Tzu, who undoubtedly wrote something about that too.

Listen to the KNVI podcast. [DUTCH]


And in the big bad world...

· airlines have recently attracted a lot of attention from cybercriminals.

· even criminal organizations sometimes shut down.

· Germany wants to ban DeepSeek.

· physical and digital crime sometimes converge.

· the Dutch Ministry of Defence is also investing in AI and cloud services. [DUTCH]

· the police will now also respond to digital crime reports. [DUTCH]

· a civil servant was punished for emailing confidential data to his private address. [DUTCH]

   

2025-06-27

Russian roulette

Image from Pixabay

Sometimes you catch a news item on the radio that makes you think, “Huh? I must have misheard that.” Like the report that Pavel Durov is leaving his fortune to all of his more than one hundred children.

The man turns out to have two 'real' children; for the remaining 104, he was only involved as a sperm donor. Fortunately, those children need not fear missing out, even with that many half-siblings. Each of them can expect over 160 million dollars, based on their father/donor’s current bank balance. They’ll probably have to wait a while, though, as Durov is only forty and very much alive. His name appears on impressive lists: the 120th richest person in the world, the richest expat in the United Arab Emirates, the most powerful entrepreneur in Dubai—those kinds of things.

Durov’s portfolio includes no fewer than four passports: he is a citizen of Russia (born in the Soviet Union), Saint Kitts and Nevis (islands in the Caribbean Sea where he supported the sugar industry with a quarter million dollars), the United Arab Emirates, and France. According to Paul du Rove, as he calls himself in that country, the application for the latter passport was an April Fool’s joke that was accidentally approved via a special procedure. But it did make him an EU citizen as well.

All these facts (and many more) can also be found on Wikipedia, but why am I bringing them up here? Because Pavel Valeryevich Durov is also the spiritual father and founder of Telegram, the messaging service akin to WhatsApp and Signal. And Telegram is not exactly a service beloved by security and privacy experts. I’ll explain why. Keep in mind that the term cryptography, as used here, has nothing to do with cryptocurrencies like Bitcoin.

When you exchange messages with someone, you generally don’t want others—people, companies, or governments—to be able to read along. That’s why messages are encrypted. This encryption ensures that only you and your conversation partner can read the messages, because only you two have the corresponding keys (this is called end-to-end encryption). The mechanism that handles the encryption is called a cryptographic protocol, which in turn uses cryptographic algorithms. Typically, internationally recognized standards are used, which have been extensively reviewed by many different experts. That makes them reliable. At Telegram, they thought it better to create their own crypto protocol. In cryptography, that’s considered a cardinal sin, because it’s likely you’ll overlook your own mistakes. Their protocol is also not fully public, making it difficult to scrutinize. Moreover, encryption is not enabled by default. With other messaging apps, it is.
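To make the "only you two have the keys" idea concrete, here is a deliberately naive sketch of my own (it has nothing to do with Telegram's or Signal's actual protocols). It XORs a message with a random one-time key, purely to show that whoever lacks the key sees only gibberish:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR every byte of data with the corresponding key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"Meet me at the station at nine"
key = secrets.token_bytes(len(message))  # shared only by the two parties

ciphertext = xor_bytes(message, key)     # what an eavesdropper sees: gibberish
recovered = xor_bytes(ciphertext, key)   # what the key holder sees

print(recovered == message)              # True
```

The lesson is in everything this toy lacks: key exchange, authentication, protection against tampering and replay, and much more. Getting all of that right is exactly why standardized, peer-reviewed protocols exist, and why rolling your own is that cardinal sin.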

Telegram and its founders have a turbulent history. Durov left Russia after disputes over his previous company, VKontakte (the Russian Facebook). In short: he had refused to hand over personal information about protesters to the authorities. In 2014, he left Russia and founded Telegram. According to Durov, Telegram turned a profit for the first time ten years later, with revenues exceeding one billion dollars. How Telegram was funded in the meantime remains unclear.

Despite the disputes in Russia, we don’t know whether backdoors have been built into Telegram. From what I can tell, Durov has a decent track record of resisting grasping authorities. On the global stage of espionage, however, you can never be sure whether that’s just for show and whether deals have been made behind the scenes. The platform is popular among criminals for conducting business, perhaps because Durov and co. don’t stand in their way. In this context, France arrested him last year (and released him on bail). In any case, the lack of transparency and, frankly, the Russian roots make Telegram a platform I strongly advise against using. Use Signal instead—it has a strong reputation in both cryptography and privacy. That said, it’s an American product and thus subject to U.S. law, which gives law enforcement various powers to demand data. However, they can only hand over data they have; the content of your messages is end-to-end encrypted and therefore reasonably safe. WhatsApp works the same way but has a poorer reputation for privacy because it monetizes your profile and behavior.

Even if you have a hundred children and an above-average bank balance, that doesn’t make you a diligent father. I see too many red flags to trust Durov and his Telegram.


And in the big bad world…

 

2025-06-20

At the theatre

Picture from author

The Red Hall of the Meervaart Theatre in Amsterdam looks empty in the photo. Just a few minutes later, it was filled with around three hundred employees from the National Collection Centre (LIC) of the Dutch Tax Administration. And that laptop in the picture? That’s mine.

A few months ago, the organizers of this annual event got excited about my blog posts. Probably under the slightly risky assumption that “if he can write in an engaging way, he can probably speak that way too,” they invited me to take part in the program. So, on Tuesday, I braved the railway strike and headed to the capital. I had three missions: a presentation in the breakout program before lunch, a plenary talk in that big hall after lunch, and at the end of the day, the same story from the morning, but for a different group of about forty people. The colleagues who came to hear me in Room 9 were 92% women. Someone like me, from IT and security, rarely sees that many women together in a work setting. They were a fantastic, engaged audience and gave me a great glimpse into their world.

I mainly owed the invitation to my blog about Girl’s Day. (Quick recap: for a presentation to high school girls, I googled their names and showed them what I — an amateur in that field — had managed to find out.) The LIC folks wanted to hear that story too. There was one difference: on Girl’s Day, my talk was about the girls in the room, while at the Meervaart, it was about those same girls — so, not about the actual audience itself (and of course, I didn’t mention any names or overly sensitive details in either presentation). Still, the tension was visible on the faces in the Red Hall. Especially the revelation that presentations made with the free version of PowerPoint alternative Prezi are publicly available online triggered an audible “Oh!” from the audience. A video showing a ‘psychic’ effortlessly uncovering personal details about his clients wrapped it up nicely.

My other presentation was titled Phish & Chats and covered phishing, chat apps, and artificial intelligence. The first part was a nostalgia trip for many: “Who of you has never received a phishing email?” No hands. “Hey Dad, this is my new phone number.” Murmurs in the room. English, with an Indian accent: “Hello, this is the Microsoft Helpdesk.” Nods all around. Naturally, I also gave them some tools to recognize phishing — because on a bad day, any individual employee might be the organization’s last line of defense when a phishing email lands in their inbox. And in that moment, you really want your colleague to respond appropriately.

The chat apps segment covered the pros and cons of various platforms. In short: don’t use WhatsApp for work due to privacy concerns, and don’t use Telegram at all. For internal government communication in the Netherlands, Webex is available. Signal is also an excellent choice.

Artificial intelligence (AI) also fell under the “Chats” part of Phish & Chats, because all those handy tools like ChatGPT, Gemini, and Copilot are smart chatbots — you can literally chat with them. I discussed how they work, how I view them from a professional standpoint, and what our organization does and doesn’t allow (allowed: Copilot Chat; not allowed: all others).

For me, the day was a warm bath of thumbs-ups, compliments, and thank-yous. And I hope that those who haven’t yet started reading the Security (b)log will now begin — not for me, but to become familiar with what’s happening in information security and their own role in it. Soon, I’ll be visiting a team closer to home, and after the summer, I’ll be back at our IT auditors’ annual conference. Yesterday, we discussed potential topics, and I’ll be working on finding a connecting thread in the coming weeks. In the meantime, I’ll also be a guest on a podcast. But more on that later.


And in the big bad world…

2025-06-13

The Hague brought to a standstill

Image from Pixabay

By now, you’ve probably heard, at least if you live in the Netherlands: in just over a week, the city of The Hague will become an impenetrable fortress.

People living and working anywhere near the World Forum conference center have already been dealing with the disruptions caused by the largest security operation in history. But just like with an iceberg, what you see is only a fraction of the whole picture.

The last event of this scale was the Nuclear Security Summit in 2014, which also brought dozens of world leaders to that same conference center. In the eleven years since, the threat landscape—especially in terms of cybersecurity—has changed dramatically. Attack methods have become more sophisticated, and so have the people behind them. Much more sophisticated. And cunning. Which is troubling, because as an ordinary citizen, there’s little you can do to defend yourself.

“I’m just a regular person—what does this NATO summit have to do with me?” I hear you think. And yes, most of us won’t be directly involved. But that doesn’t mean you won’t be affected. In fact, you might be—without even realizing it.

Here’s why. Major events like this act as a magnet for what we broadly call malicious actors. Just like pickpockets flock to crowded markets, cybercriminals and spies are drawn to high-profile global gatherings. They’re after three things: money, information, and influence. The first is mostly the domain of criminals, though some rogue states aren’t above it either (looking at you, North Korea).

Stealing information is typically associated with state actors from countries like Russia, China, and Iran (plus a few others not on the public list). But don’t underestimate the criminals here either: ransomware attacks not only paralyze organizations but also steal data, which they then threaten to publish unless a ransom is paid. That increases their chances of getting paid.

Influence can be exerted in various ways. One is through disinformation—shaping public opinion, or even swaying the views of summit attendees. Some heads of state are surprisingly susceptible to such manipulation. Another tactic is disrupting the summit itself, throwing off schedules or even derailing the entire event.

Whatever the motive, these activities often start in the same place: phishing. Around events like this, phishing attempts spike—often themed around the event. You might get an email that looks like it’s from the City of The Hague: “Are you experiencing disruptions due to the NATO summit, such as being unable to get to work? Click here to apply for compensation.” Malicious actors know they’re more likely to succeed if they strike a nerve and dangle the promise of money.

Regular phishing is like shooting with a shotgun: blast it out to as many people as possible and see who bites. But there’s also targeted phishing—spearphishing—where a specific individual is the target and the message is custom-crafted. Expect to see more of that in the context of the NATO summit too.

I do wonder how they manage it in the Vatican. The Pope passed away, and five days later his funeral was held—with many dignitaries in attendance, including the U.S. President. Meanwhile, the Netherlands has been preparing for the NATO summit for months. Maybe it’s time for an educational field trip to Rome.

 

And in the big bad world…

2025-06-06

From slippers to biometrics

Image from Pixabay

Some nursing homes use facial recognition to keep elderly people with dementia inside, the Dutch TV news reported a few months ago. Because I am always on the lookout for possible topics for this blog, I made a note of it. And now I finally get around to explaining why that report caught my attention.

Facial recognition is a form of biometrics, just like a fingerprint scan or voice recognition. Biometrics means something like 'measuring biological characteristics'. The technology is based on the fact that every person has a number of unique characteristics. Based on these, you can identify someone. And to reassure you: biometrics doesn’t store your complete fingerprint or a photo of your face. Instead, a number of specific characteristics are recorded, such as the distance between your eyes and other proportions. When checking your access rights, a camera or scanner checks whether these characteristics match an entry in its database. That is why the fingerprint scan on your phone suddenly works less well if you have been doing a lot of DIY: your finger is too rough to match.
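A biometric match is never an exact comparison; it checks whether the measured characteristics are close enough to the stored ones. A minimal sketch of that idea (the features, names and threshold here are invented for illustration):

```python
import math

# Enrolled templates: a few invented facial proportions per person.
enrolled = {
    "alice": (0.42, 0.31, 0.18),
    "bob":   (0.39, 0.35, 0.21),
}

def matches(measurement, template, tolerance=0.02):
    """True if the measured features lie within tolerance of the template."""
    return math.dist(measurement, template) <= tolerance

# A clean scan of Alice matches; a noisy one (the 'rough finger' effect) does not.
print(matches((0.421, 0.309, 0.181), enrolled["alice"]))  # True
print(matches((0.45, 0.28, 0.15), enrolled["alice"]))     # False
```

The tolerance is the interesting knob: too tight and a weekend of DIY locks you out of your own phone, too loose and someone else's finger gets in.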

So we use biometrics to gain access to something. Not to be denied access. But that is exactly what those nursing homes do. The front door is always open, but if the camera sees someone approaching who is not allowed outside because it is not safe for them, the door is locked. The nursing homes love it: "Otherwise we have to keep the doors closed for all residents. Now we turn that around: the doors are open."

And what if a smart resident sticks on a fake moustache, I wonder. Or puts on sunglasses. There is a good chance that he will not be recognized and will happily walk outside. Now I don't know if smart and demented can go together, but my position obliges me to assume that things can go wrong. Edward Murphy is my role model (you know, the one with that law: anything that can go wrong, will go wrong).

What we see there is biometrics turned upside down. Why is biometrics not applied in the usual way? Everyone who is allowed to go outside is in the system. If he or she is recognized, the door swings open. If someone comes shuffling along who is not allowed to go outside and therefore is not in the system, the door stays closed. You have to be very clever to fool the system.

Before those nursing homes switched to biometrics, they used wristbands or sensors in their clients' slippers. Even then, they worked with open doors, which were locked only for some. But of course, you could easily work around that: take off your slippers and voilà, you were outside. And a bit of fiddling with the wristband also turned out to work. Incidentally, the switch to biometrics cuts both ways: on the one hand, a wristband that is visible to everyone has a stigmatizing effect; on the other hand, the barely visible biometrics makes it difficult to lodge an official protest – a right that dementia patients also have.

A nursing home is not a prison. Only residents for whom, due to their condition, it is not safe to go outside alone are kept inside – with their own permission or that of their legal representative. Visitors are welcome and must be able to walk in and out freely. Open doors give a relaxed feeling, and thus contribute to a dignified existence. From that perspective, I understand the reverse approach, and I can imagine that there will not be that many clients who know how to hack the system. For most other applications, however, I prefer to stick to biometrics as it is intended.

 

And in the big bad world…

2025-05-23

Miscellaneous

Image from Pixabay

A few weeks ago I was at a conference. I took a lot of notes and I can watch the recorded sessions. What is the best thing to do with all that? After some browsing I made a decision: I am going to treat you to some quotes and let my own thoughts loose on them.

As a warm-up, here’s an obvious one: “If you have only met someone online, then that person is always a stranger.” This comes from a presentation on resilience against scams. You’ll have to agree with this statement, but do you also act accordingly? Or do you still want to believe that this nice person is also honest? That is very difficult. In the last century, when the internet was not yet mean, I met someone in an online forum (does anyone still remember CompuServe?). We had nice conversations about the state of the world and about observations in daily life. Later we started emailing directly, and at my wedding I met him in real life for the first time. If I had taken the above quote to heart, I would have missed out on this friendship. Back then, cybercrime did not exist and online life was a lot easier.

A handy tip to avoid becoming a victim of scammers: never pay to get paid. In other words: if someone promises you the moon but needs your money up front to make that happen, then something is wrong. It started with that Nigerian prince who wanted to share a fortune with you but needed some money to release that fortune, and nowadays you may be offered a job where a little effort will be richly rewarded – but certain costs have to be paid first. Don't fall for it.

Then there’s this nice tip that you can immediately benefit from: change the name of your guest network to “faster wifi”. All your guests – and especially your children’s guests – will want to be on that network. And that is exactly where you want them. Because your guest network is separate from the network that provides access to your private data. At odds with this is the idea of connecting all your Internet of Things (IoT) devices to the guest network. The idea behind this is that IoT devices can be hacked relatively easily and that you would rather not have a hacker have access to your data. But do you want all your guests to have access to your dishwasher, dryer and solar panels? Difficult choices.

Sometimes a statement from one speaker ties in with that of another. Like these two: “8% of the users in your organization cause 80% of the risk” and “New employees are the biggest threat: they easily click on links because they do not understand the risks.” I would mainly link the first quote to employees who are in the “cannot & don’t want to” quadrant: they don’t know how to behave safely and they are also not willing to adjust their behavior, which makes them difficult to reach. But according to the second speaker, the danger lies mainly in new employees. You can do something about that. That is why we have been involved in the onboarding program for new employees for years now. We treat the new colleagues to a presentation in which we playfully guide them through the most important aspects of information security, business continuity and privacy. And we advertise the Security (b)log, so that they will come back to our important message.

If there was one subject that ran through all those hundreds of presentations, it was artificial intelligence. One speaker thought that 90% of so-called AI experts have no idea what they are talking about, and that the other 10% know very little. And that is normal, he argued, because AI consists of many sub-disciplines and it is important that experts know a lot about their own sub-discipline. Just as you wouldn’t go to see a brain surgeon with heart problems, you should also seek out the right specialist in the field of AI.

Finally, a quote that stuck with me because it hits home so well: “Generative AI is autocorrect/type ahead on steroids.” Let me break it down for you. Generative AI is the form of artificial intelligence known to the general public, which generates something on its own; you know it from ChatGPT, for example. You know autocorrect mainly from your phone; on the one hand, it protects you from typing errors, but sometimes it causes embarrassing situations because the “correction” turns out to be annoying (in my case, “Hi Nick” was once replaced by “Hi pig”). Type ahead is its cousin, which you know from your email program jumping in while you’re still typing an address: I know who you mean! Well, all of this on steroids, that is generative AI. With all the conveniences that come with it, but also with an amplification of all the inconveniences. I stopped the message to Nick in time, but if genAI is happily hallucinating and telling us a story that makes no sense, that’s a lot harder to discover.
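To see why the quote is so apt, here is type ahead in its most stripped-down form: a toy model of my own that predicts the next word simply by counting which word most often followed the current one (nothing remotely like a real generative model's scale, but the same statistical spirit):

```python
from collections import Counter, defaultdict

corpus = "the clock ticks and the clock tocks and the cat sleeps".split()

# Count which word follows which: the essence of 'type ahead'.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # 'clock' (seen twice, versus 'cat' once)
print(predict("and"))  # 'the'
```

A generative model does the same kind of statistical continuation, only over vastly more context and vastly more data. That is what makes it so convincing, and also what lets it hallucinate so convincingly.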

There will be no Security (b)log next week.

 

And in the big bad world…

 
