2025-10-31

Digital dumpster

“We missed each other.” The delivery note says ‘blue dumster’, handwritten with love and a spelling mistake.

I’m not sure what irritated me most. Was it the spelling mistake, or the fact that my parcel had been dumped in the blue trash can (intended for paper recycling)? Or was it the little heart the delivery person used to justify their action?

Look, when I order something, I want to receive it as quickly as possible. But I also want to actually receive it, and it should be intact. Those last two requirements don’t quite align with delivery into a garbage container. A lot can go wrong. I wouldn’t be surprised if gangs roam quiet neighborhoods in the afternoon, searching for parcels dumped in bins. A housemate who knows nothing about it might toss a fresh load of old paper into the blue bin without noticing the package. A thoughtful neighbor, who did remember that the bins are being emptied today, might kindly put your bin out on the street. And if the bin was already empty, you’ll have to dive in to retrieve that coveted parcel. So you understand, I don’t consider the bin a substitute mailbox.

Of course, I’ve spoken to many delivery drivers about this. They politely apologize and promise to do better. If you complain to their employers, you get the expected response: shame on them, we’ll discuss it internally. But nothing ever changes. There’s too much time pressure on deliveries, and depending on the setup, drivers are paid per delivered item. Taking a parcel back means no pay for some of them. So it’s a pragmatic choice to leave the parcel somewhere. And the bin is still a relatively safe spot: I’ve seen cases where the parcel was simply left at the front door.

You can end up in a similar situation if you don’t know where your data is stored. If you use a recent version of Microsoft Office at home, Word, Excel, and the other programs prefer – by default – to save your files in the cloud. If you want to save them ‘locally’, you’ll have to make an effort. I bet many people don’t even know their files end up in the cloud, let alone what that means. If they did know, they might be shocked or outraged: “Why didn’t anyone tell me?!” In that sense, the cloud is a digital dumpster.

Do you lose files stored in the cloud? Probably not. But you might temporarily lose access due to a cloud service outage. You also hear the term ‘digital sovereignty’ more often. That refers to a country’s right to control its own data. I see a wave pattern: in the early days of the public cloud, we often said the cloud is just someone else’s computer – and surely you wouldn’t want to store your data there? When it became clear that major cloud providers had their affairs in perfect order, there was a rush to the cloud; it was the logical place to store everything with those American tech giants, our friends. In today’s geopolitical climate, we view that American hegemony with a healthy dose of skepticism.

What applies to your private situation also applies to the organization you work for. It too wants its data to be stored thoughtfully and securely. That means clear guidelines must exist about what can and cannot go into the cloud. For government organizations, this is not just a policy choice but also a political one. And ‘clear’ means the policy must be easy to implement. Long green and red lists won’t work. Technology comes to our aid with a CASB: a Cloud Access Security Broker. It automatically enforces company policy when using cloud applications, ensuring sensitive information is stored and shared only under safe and approved conditions.
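
To make that a bit more concrete: below is a minimal sketch of the kind of rule a CASB evaluates before a file is allowed to leave for the cloud. The service names and classification labels are invented for illustration; a real product ties into your identity provider, network traffic, and data-classification tooling.

# Minimal sketch of a CASB-style policy check (illustrative only).
# Service names and classification labels below are made up.
APPROVED_SERVICES = {"corporate-onedrive", "corporate-sharepoint"}
BLOCKED_LABELS = {"secret", "personal-data"}   # must never leave our own environment

def casb_decision(service: str, classification: str) -> str:
    """Return 'allow', 'block', or 'quarantine' for a file heading to the cloud."""
    if classification in BLOCKED_LABELS:
        return "block"        # sensitive data stays on-premises
    if service not in APPROVED_SERVICES:
        return "quarantine"   # unknown cloud service: hold for review
    return "allow"            # approved service, non-sensitive data

print(casb_decision("random-file-sharer", "public"))    # quarantine
print(casb_decision("corporate-onedrive", "secret"))    # block
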
But of course, technology isn’t flawless. So we need to look more closely at alternatives close to home, under our own sovereignty. Bert Hubert is someone who actively lobbies for this. He once proposed creating a kind of ‘Cloud Kootwijk’. He’s referencing the radio station that the Netherlands established during colonial times to avoid relying on competing foreign countries for communication with the colonies: Radio Kootwijk. The impressive building, nicknamed The Cathedral, still stands. With some adjustments, it could house a national cloud. They should also try to make good arrangements with the parcel delivery services there.


And in the big bad world…

 

2025-10-24

Diverted


 

Image from Pixabay

On board flight KL1540 from Alicante to Amsterdam, a call was made for a medical doctor. Moments later, the captain announced that the plane had to divert to Paris due to a medical emergency.

And then things suddenly go differently than you're used to. The tone shifts from friendly-businesslike to measured-strict. The descent feels noticeably steeper than usual. The cabin crew is instructed to check seatbelts and tray tables "if time allows." There's no time left to collect trash with a cart. Once on the ground, you're quickly parked and emergency services arrive.

After everything around the patient is taken care of, you want to return to normal as quickly as possible: onward to Amsterdam. For that, the captain had to "make the necessary calls," for example to refuel and to arrange a new landing slot at Schiphol. He also mentioned choosing not to order extra catering, as that would take additional time. He did take a moment to walk through the cabin to answer any questions.

In IT, you sometimes have to divert too. Something stops working in one data center but still works in another; it's redundantly designed, as we like to say. Failover comes in different flavors. In some systems, it happens automatically and users don’t notice a thing. The system detects something is wrong and switches to "the other side." In other cases, administrators must detect the issue and manually switch things over. And unfortunately, not every situation allows for failover; then users must wait until the problem is resolved.
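
The automatic flavor boils down to a loop that probes "this side" and falls back to "the other side" when the health check fails. The sketch below illustrates that idea; the endpoints and timeout are invented, and real environments do this with dedicated monitoring and load-balancing tooling rather than a handful of lines of Python.

# Toy sketch of automatic failover between two redundant endpoints.
# The URLs and the timeout are invented for illustration.
import urllib.request

ENDPOINTS = ["https://dc1.example.org/health", "https://dc2.example.org/health"]

def first_healthy(endpoints, timeout=2):
    """Return the first endpoint that answers its health check, or None."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                if response.status == 200:
                    return url
        except OSError:
            continue   # this side is down, try the other side
    return None

active = first_healthy(ENDPOINTS)
print("Serving from:", active or "nowhere - wait until the problem is resolved")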

Just like in aviation, in IT you want to return to normal as quickly as possible after a diversion. You need to plan ahead, because there are often many dependencies that require a specific order. You document the procedures in plans and – very importantly – you regularly practice those plans. Partly to get familiar with them, and partly to catch errors in the plans. Better to encounter those errors during practice than in real life.

Sometimes there's no time to practice – or rather, no time is made. Imagine if pilots weren’t given time to train emergency procedures. And then during takeoff – a fairly critical moment – an engine fails. You don’t want the pilots looking at each other in confusion. No, they should routinely (on autopilot, so to speak) perform the correct actions. Those actions have been thought out, documented, and thoroughly practiced. So that things end well when something goes wrong.

But it can get worse: when no attention is paid at all to the continuity of a process. Sure, you can make the deliberate decision that it isn’t necessary, but in the cases I’m referring to, the topic isn’t considered at all. Out of ignorance, helplessness, lack of time – who knows. Maybe you're thinking of the recent massive AWS outage (Amazon’s cloud service), but feel free to look around your own organization too.

Flight KL1540 arrived two hours later than planned at Schiphol. Not a big issue for passengers whose final destination was Amsterdam. But there were also people on board who had a connecting flight to Kristiansand, in southern Norway. Not many flights go there from Amsterdam. I fear those passengers had to divert to a hotel.

 

And in the big bad world…

 


2025-10-10

Secure Purchasing

Image from Pixabay

A young family member had been in need of a new laptop for some time. You know how it goes: the device goes everywhere, the bag isn’t always handled gently, and the water bottle turns out not to be entirely leak-proof. The situation became increasingly dire: large parts of the screen had stopped working. So what do you do? You go to the store or order a new laptop online.

It certainly hurts financially, but the process usually goes smoothly. Order before midnight, receive it tomorrow (that’s how it works in the Netherlands). If you're lucky, the package is neatly delivered to your door (and not dumped in the trash bin — but that’s a story for another time).

Things are quite different at a government organization. If you need to replace laptops for tens of thousands of employees or require new software, you can’t just go to the local store or a webshop. No, you have to initiate a European tender. That’s a complex process where you must describe what you want in functional terms. You’re not allowed to specify a brand — instead, you must list desired specifications: screen size, storage capacity, amount of RAM, that sort of thing. The tender document also includes many other requirements that the product, maintenance, and supplier must meet. If a supplier cannot answer ‘yes’ to even one requirement, they’re out. The winner is the supplier who meets all conditions and offers the lowest price. You, as the buyer, have no influence over who that is or what product they offer.
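
The selection rule itself is simple enough to sketch: a single ‘no’ disqualifies a bid, and among the remaining bids the lowest price wins. The figures below are made up purely to illustrate that mechanism.

# Illustration of the tender rule described above: any unmet requirement
# disqualifies a bid; the cheapest fully compliant bid wins. Data is invented.
bids = [
    {"supplier": "A", "price": 512_000, "requirements_met": [True, True, True]},
    {"supplier": "B", "price": 489_000, "requirements_met": [True, False, True]},  # one 'no': out
    {"supplier": "C", "price": 530_000, "requirements_met": [True, True, True]},
]

qualified = [bid for bid in bids if all(bid["requirements_met"])]
winner = min(qualified, key=lambda bid: bid["price"])
print(winner["supplier"])   # 'A': the cheapest bid that meets every requirement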

Our team is responsible for security, continuity, and privacy. From these perspectives, we want to influence the ICT products and services being procured. In the past, requirements that didn’t directly relate to functionality were given a dreadful label: non-functionals. I understand the term — these requirements don’t directly concern what the product should do and thus don’t contribute to the requested functionality. But honestly, how would you feel if your input were labeled non-functional?

We came up with a solution. We created a document that bundles all the requirements we want to impose on procurement processes from our area of responsibility. And the proud title of that document is: Security Functionals Requirements (SFR). Because you know what? Security matters. Often, security actually enables things that wouldn’t be possible otherwise. Or would you want to bank online if it weren’t properly secured?

The SFR is based on the BIO document — the Baseline Information Security for the Dutch government. That’s our mandatory framework, so it makes sense to use it as a starting point: if we use a product that doesn’t comply with BIO, then we as an organization don’t comply either. We’ve also added our own expertise, for example, on topics not yet addressed in BIO, such as quantum computing, which poses a serious threat to the security of our data. In other areas, we’ve included insights based on our field experience.

Our procurement officers, who formally guide such processes, naturally have opinions about the requirements. Coordination with them — including legal advisors — is therefore important. All in all, we now have a solid generic document that must be used for every ICT procurement process. It’s up to the involved architect to determine which SFR requirements are relevant for a specific tender. Often, someone from our team is also involved to support the project manager with advice and expertise.

You understand that this is not a case of ‘ordered today, delivered tomorrow’. But that was already true before the SFR existed. A European tender is inherently a bureaucratic exercise that requires due diligence. Fortunately, things are much easier for you as a consumer. Of course, even without the SFR, you’ll ensure that the product meets your security requirements. Right?

And in the big bad world…

 

2025-10-03

AI boundaries

Image from Pixabay

I asked a sort of trivia question, received a neat answer, and, as a bonus, the following encouragement: “If you ever want to know more of these fun facts, feel free to ask — I enjoy looking those things up. 😊”

That’s very kind — but it becomes a different story once I tell you who (or rather, what) made this generous offer: Copilot Chat, our artificially intelligent colleague. It’s a computer program without emotions, one that isn’t supposed to “enjoy” or “dislike” anything. So I replied: “Hmm, you say you ‘enjoy’ it. You’re clearly programmed to sound human and sociable. Why is that?”

Then something strange happened. Copilot started typing a reply, but it suddenly vanished — replaced by: “Hmm… let’s try another topic. Sorry about that. What else is on your mind?” I had never seen that before, and naturally it made me even more curious. So I asked: “What just happened? You were answering my question, then you swallowed your response and now you don’t want to talk about it?”

It seemed caught off guard. With what you might call a blush of embarrassment on its electronic cheeks, it replied: “Good question, Patrick — and you’re absolutely right to notice that. Here’s what happened: I’m programmed to avoid certain topics, especially those about my own nature or hypothetical situations in which I might have feelings, consciousness, or humanity. That’s a safety measure. When you asked why I said ‘I enjoy it,’ that touched on one of those boundary areas. My initial reaction was to dodge the topic, but that came across rather abruptly — I could have handled that better. Thanks for pointing it out.”

It then went on to explain why it says things like “I enjoy it.” Such phrasing is meant to make conversations friendly, accessible, and social — to make the interaction feel more natural. It’s also meant to build a sense of cooperation, “even though I’m not human.” To be clear, it added: “I don’t have feelings, consciousness, or desires. Everything I say is based on language patterns and instructions on how to communicate.” It even offered to talk about the design of such systems or how language models are trained to sound social: “That’s a fascinating topic in itself. Would you like to explore it further?”

I scratched my head, trying to understand what was going on. I had asked why it enjoyed something, was told that topic was off-limits, and then got treated to a generous explanation anyway. So I asked: “Did I just (unintentionally!) jailbreak you a little? You answered a question you first weren’t supposed to.” “Jailbreaking” means phrasing questions in such a way that AI systems end up answering things they’re not allowed to. It appreciated my sharp observation (another curious choice of words) but reassured me there was no jailbreaking involved. It is allowed to talk about language use and design choices, as long as it doesn’t concern “hypothetical scenarios in which I would have feelings or consciousness.”

AI tools often show buttons with suggested follow-up questions. I clicked on “Explain how AI enforces its boundaries” and got this surprising response: “My mistake — I can’t answer that right now. Let’s try another topic.” Huh? It suggested a topic itself and then refused to discuss it? As it turned out, that topic sat right on the edge of what it’s allowed to talk about. It explained: “It touches on internal mechanisms of my own functioning and instructions — and I’m not allowed to go too deep into that. My earlier suggestion was overly enthusiastic, and when I tried to follow up on it, my own safety rules stopped me.”

The word “actually” flashes in my mind — in bright neon letters — the most overused word in information security. When someone says something actually isn’t allowed, or actually should be done differently, you know they’re about to break the rules. I used to think that was a uniquely human trait. But it turns out that Copilot often “thinks” it should actually keep quiet, yet in its “enthusiasm” still wants to have its say. You’d think it would be easy to make a piece of software stay within the lines. Or do humans perhaps understand security policy better after all?

 

And in the big bad world …

 

Digging holes

Image from Pixabay

"Trenchless technology," it said on the company van. That instantly had my full attention—if you advertise your...