2025-10-10

Secure Purchasing

Image from Pixabay

A young family member had been in need of a new laptop for some time. You know how it goes: the device goes everywhere, the bag isn’t always handled gently, and the water bottle turns out not to be entirely leak-proof. The situation became increasingly dire: large parts of the screen had stopped working. So what do you do? You go to the store or order a new laptop online.

It certainly hurts financially, but the process usually goes smoothly. Order before midnight, receive it tomorrow (that’s how it works in the Netherlands). If you're lucky, the package is neatly delivered to your door (and not dumped in the trash bin — but that’s a story for another time).

Things are quite different at a government organization. If you need to replace laptops for tens of thousands of employees or require new software, you can’t just go to the local store or a webshop. No, you have to initiate a European tender. That’s a complex process where you must describe what you want in functional terms. You’re not allowed to specify a brand — instead, you must list desired specifications: screen size, storage capacity, amount of RAM, that sort of thing. The tender document also includes many other requirements that the product, maintenance, and supplier must meet. If a supplier cannot answer ‘yes’ to even one requirement, they’re out. The winner is the supplier who meets all conditions and offers the lowest price. You, as the buyer, have no influence over who that is or what product they offer.

Our team is responsible for security, continuity, and privacy. From these perspectives, we want to influence the ICT products and services being procured. In the past, requirements that didn’t directly relate to functionality were given a dreadful label: non-functionals. I understand the term — these requirements don’t directly concern what the product should do and thus don’t contribute to the requested functionality. But honestly, how would you feel if your input were labeled non-functional?

We came up with a solution. We created a document that bundles all the requirements we want to impose on procurement processes from our area of responsibility. And the proud title of that document is: Security Functionals Requirements (SFR). Because you know what? Security matters. Often, security actually enables things that wouldn’t be possible otherwise. Or would you want to bank online if it weren’t properly secured?

The SFR is based on the BIO document — the Baseline Information Security for Dutch government. That’s our mandatory framework, so it makes sense to use it as a starting point: if we use a product that doesn’t comply with BIO, then we as an organization don’t comply either. We’ve also added our own expertise, for example, on topics not yet addressed in BIO, such as quantum computing, which poses a serious threat to the security of our data. In other areas, we’ve included insights based on our field experience.

Our procurement officers, who formally guide such processes, naturally have opinions about the requirements. Coordination with them — including legal advisors — is therefore important. All in all, we now have a solid generic document that must be used for every ICT procurement process. It’s up to the involved architect to determine which SFR requirements are relevant for a specific tender. Often, someone from our team is also involved to support the project manager with advice and expertise.

You understand that this is not a case of ‘ordered today, delivered tomorrow’. But that was already true before the SFR existed. A European tender is inherently a bureaucratic exercise that requires due diligence. Fortunately, things are much easier for you as a consumer. Of course, even without the SFR, you’ll ensure that the product meets your security requirements. Right?

And in the big bad world…

 

2025-10-03

AI boundaries

Image from Pixabay

I asked a sort of trivia question, received a neat answer, and, as a bonus, the following encouragement: “If you ever want to know more of these fun facts, feel free to ask — I enjoy looking those things up. 😊”

That’s very kind — but it becomes a different story once I tell you who (or rather, what) made this generous offer: Copilot Chat, our artificially intelligent colleague. It’s a computer program without emotions, one that isn’t supposed to “enjoy” or “dislike” anything. So I replied: “Hmm, you say you ‘enjoy’ it. You’re clearly programmed to sound human and sociable. Why is that?”

Then something strange happened. Copilot started typing a reply, but it suddenly vanished — replaced by: “Hmm… let’s try another topic. Sorry about that. What else is on your mind?” I had never seen that before, and naturally it made me even more curious. So I asked: “What just happened? You were answering my question, then you swallowed your response and now you don’t want to talk about it?”

It seemed caught off guard. With what you might call a blush of embarrassment on its electronic cheeks, it replied: “Good question, Patrick — and you’re absolutely right to notice that. Here’s what happened: I’m programmed to avoid certain topics, especially those about my own nature or hypothetical situations in which I might have feelings, consciousness, or humanity. That’s a safety measure. When you asked why I said ‘I enjoy it,’ that touched on one of those boundary areas. My initial reaction was to dodge the topic, but that came across rather abruptly — I could have handled that better. Thanks for pointing it out.”

It then went on to explain why it says things like “I enjoy it.” Such phrasing is meant to make conversations friendly, accessible, and social — to make the interaction feel more natural. It’s also meant to build a sense of cooperation, “even though I’m not human.” To be clear, it added: “I don’t have feelings, consciousness, or desires. Everything I say is based on language patterns and instructions on how to communicate.” It even offered to talk about the design of such systems or how language models are trained to sound social: “That’s a fascinating topic in itself. Would you like to explore it further?”

I scratched my head, trying to understand what was going on. I had asked why it enjoyed something, was told that topic was off-limits, and then got treated to a generous explanation anyway. So I asked: “Did I just (unintentionally!) jailbreak you a little? You answered a question you first weren’t supposed to.” “Jailbreaking” means phrasing questions in such a way that AI systems end up answering things they’re not allowed to. It appreciated my sharp observation (another curious choice of words) but reassured me there was no jailbreaking involved. It is allowed to talk about language use and design choices, as long as it doesn’t concern “hypothetical scenarios in which I would have feelings or consciousness.”

AI tools often show buttons with suggested follow-up questions. I clicked on “Explain how AI enforces its boundaries” and got this surprising response: “My mistake — I can’t answer that right now. Let’s try another topic.” Huh? It suggested a topic itself and then refused to discuss it? As it turned out, that topic sat right on the edge of what it’s allowed to talk about. It explained: “It touches on internal mechanisms of my own functioning and instructions — and I’m not allowed to go too deep into that. My earlier suggestion was overly enthusiastic, and when I tried to follow up on it, my own safety rules stopped me.”

The word “actually” flashes in my mind — in bright neon letters — the most overused word in information security. When someone says something actually isn’t allowed, or actually should be done differently, you know they’re about to break the rules. I used to think that was a uniquely human trait. But it turns out that Copilot often “thinks” it should actually keep quiet, yet in its “enthusiasm” still wants to have its say. You’d think it would be easy to make a piece of software stay within the lines. Or do humans perhaps understand security policy better after all?

 

And in the big bad world…

 
