2023-11-17

Kafka's Castle

 

Image from Pixabay

Remember that castle I wrote about last week? Where they didn't trust anyone, because they assumed the enemy was already within the walls of the castle? I went for a walk around the area, and guess what? There is another castle just down the road. And they do things in a completely different way there.

Not so long ago I heard this statement at a conference: we must move from 'low trust, high tolerance' to 'high trust, low tolerance'. That’s one of those statements to which the audience mumbles in agreement, without yet understanding exactly what it means. I make a note of those kinds of statements to think about them later. Writing a blog is an excellent way to hatch an egg like that one. Buckle up, dear reader, because at this point I don't know yet where the story is going.

The statement contains the assumption that many organizations work from a kind of non-trust (which is different from distrust), much like in last week's castle. There are many rules that you have to adhere to, because you probably won't do the right thing on your own. Not because you don't want to, but because you can't possibly know them all. And with so many rules, it is very difficult to adhere to all of them: some rules are not feasible, and some are simply inconvenient at times. You know, that word 'actually'. Whenever someone says that something shouldn’t actually be done in that way, you already know that a rule will be worked around. The lord of the castle knows this too, and therefore turns a blind eye to many things: he is very tolerant, as long as the rules are not broken deliberately and with malicious intent.

The statement from the second paragraph implies that that attitude is not good, because well, we 'must' move towards that other model: high trust, low tolerance. This lord of the castle assumes that everyone who works for him understands very well what is and is not possible, because many things are obvious. When you enter somewhere, you close the door behind you. Not only because otherwise it would be draughty, but also because someone might slip in who shouldn't be there. If you’re in charge of the lady's jewelry, you probably understand that you are not supposed to lend it to your girlfriend for an evening. So there are far fewer formal rules, but woe betide you if you betray trust and they find out. Then you'll be in the dungeon on bread and water in no time. There is little tolerance.

Do you know Franz Kafka's novel Der Prozess (The Trial)? That story revolves around Josef K., who is arrested and ultimately convicted without ever knowing why. Apparently he sinned against rules that he did not know – even could not have known. We could easily end up in such a Kafkaesque situation if we work on the basis of 'high trust, low tolerance'. Not a nice place to live, that castle.

What about a middle ground? I call it 'some trust, some tolerance'. It is probably true that we have too many rules, which no one knows anyway. Every citizen is supposed to know the law, they say. But how realistic is that, if taken literally? Even without knowledge of the law, you know that you are not allowed to puncture car tires, right? Likewise, there are numerous security rules that people adhere to anyway. Or where a little more tolerance wouldn't hurt. It annoys me every time that the app in which I can see my daughter's class schedule kicks me out if I haven't opened it for a few weeks. Then I have to log in again, and I always have to figure out how that works, because it works differently than elsewhere. How exciting is what's in that app? Let it piggyback on the security of my phone. Even my bank's app is easier (after an initial strict admission procedure).

So we can probably get by with fewer rules, but we also have to learn to be less tolerant. Still too often someone does something in a way that they know perfectly well is not the way it should be, but – of course with the best intentions, no doubt – they still manage to do it in that way. It works, but there are too many risks involved that may have been overlooked. Tolerance should not be taken, it should be given. From the person who is responsible.

So we need a new castle, at an appropriate distance from Kafka's. With residents who reasonably adhere to rules that mainly regulate what is not obvious to everyone. That model will only work for people, by the way. Let’s stick to zero trust for systems.

Next week there will be no Security (b)log.

 

And in the big bad world...

This section contains a selection of news articles I came across in the past week. Because the original version of this blog post is aimed at readers in the Netherlands, it contains some links to articles in Dutch. Where no language is indicated, the article is in English.

 

2023-11-10

The leaking castle

 

Image from Pixabay

Assume breach – you can safely assume that your systems have been compromised; hackers have already managed to gain access to your IT resources without you noticing. Of course this isn’t a very joyful assumption. It means something like: my security will fail and I can't stop it. That sounds like hanging your head, like a capitulation. However, it is not meant that way. No, the assume breach mindset points out that your opponents have so many opportunities to penetrate your castle that it is simply impossible to always adequately protect every hole.

Let me take the castle metaphor a little deeper, using the age-old parable as we know it in information security, with the castle moat, the drawbridge and the crown jewels in the robust keep. That comparison emphasizes how well we are doing with our layered security. What I want to talk about is that all those layers have their weaknesses.

Let's start with the moat. That’s easy: in winter you can sometimes just walk over it (yes, you young people, it used to get so cold in winter that all bodies of water in the country would freeze). I think many proud medieval castle lords were surprised when it turned out that their ingenious water barrier could easily be overcome without boats, as long as the enemy waited for the right moment. We have the drawbridge for normal crossing of that water. What happens if the chains or ropes used to raise the bridge snap? Then the bridge deck falls down and everyone can cross it. From a security perspective, if something is broken you don't want the unsafe situation to become the default.

But fortunately we still have the portcullis, which closes the opening in the castle wall. If its chains snap, it falls and access is blocked. That is, unless the uncontrolled fall makes it go askew and jam partway. Then the passage remains open and the enemy can still enter.

Finally, there is the donjon, or keep, the sturdy residential tower of the lord of the castle. It has thick walls and narrow windows. Valuables and important people would stay on the top floor, I imagine, furthest away from an intruder. I'm just afraid they wouldn't have anywhere to go if the enemy started a fire.

The onion model is based on the hope that if one layer is broken, the next layers will still stop the attacker. But is it really so inconceivable that all layers are leaking at the same time? The moat is frozen, the portcullis is rusted and the enemy, who marches in unhindered, smokes out the lord of the castle. But you forgot the archers! Well, that is a matter of attacking with a sufficiently strong and well-equipped army.

So assume that the attacker is already inside, the assume breach mindset tells us. Maybe he isn't at the top of the keep yet, but he is already walking around within the walls of your castle. He is in disguise, waiting for a good moment to make his move. What do you do when you think you know that a disguised enemy is already inside? Then you don't trust anyone anymore. In security terms: zero trust. You assume that no one can be trusted and that every time someone wants something, you check whether that is allowed. Not: “Hi Pete, come in,” but: “Hi Pete, let's check whether you are still allowed in.” This in turn presupposes that it is perfectly clear what is and isn't allowed. Can it really be right that so many employees have access to that important system? Or can you reduce that attack surface through a better authorization structure? The more people who can do something, the more people an attacker can try to deceive, through phishing for example. Another important measure in this context is two-factor authentication: you claim that this is your user ID and password, but that alone is not good enough to gain access.
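The idea can be sketched in a few lines of Python. Everything here is hypothetical – the users, the policy table and the actions are made up – but it shows the shift from "come in" to "let's check whether you are still allowed in": every single request is evaluated against the current policy, and a password alone is never enough.

```python
# A minimal zero-trust sketch: no session is trusted just because it was
# trusted before; every request is checked against the current policy.
# The users, actions and policy table are made up for illustration.

POLICY = {
    # user -> set of actions currently allowed
    "pete": {"read_reports"},
    "anna": {"read_reports", "approve_payments"},
}

def is_allowed(user: str, action: str, has_second_factor: bool) -> bool:
    """Check every request anew: identity, authorization AND a second factor."""
    if not has_second_factor:
        # User ID and password alone are not good enough to gain access.
        return False
    return action in POLICY.get(user, set())

# Pete was let in yesterday, but that grants him nothing today:
print(is_allowed("pete", "approve_payments", has_second_factor=True))   # False
print(is_allowed("anna", "approve_payments", has_second_factor=True))   # True
print(is_allowed("anna", "approve_payments", has_second_factor=False))  # False
```

Note that shrinking the `POLICY` table is exactly the attack-surface reduction mentioned above: every entry removed is one less person a phisher can usefully deceive.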

In the physical castle, zero trust only works up to a point. Ultimately, the lord of the castle will have to be able to trust his bodyguards and his cook. He can take extra measures: remove the jewelry from the display cabinet and store it in a locked chest, for example. Thus making it a bit more difficult for an attacker. And that is what our profession is all about. 

 

And in the big bad world...


 

 

2023-11-03

Betrayed by your phone

 

Image from Pixabay

Last Tuesday I was in the auditorium of a hotel in Venlo. Standing on the presenter’s side in a lecture hall is a bit intimidating, but after four presentations to groups of colleagues about the risks of their online existence, it fit me like a glove.

An important part of those risks has to do with your privacy. While you can use all kinds of apps for free, most apps also do something on their own: they collect data about you. And they sell that information to advertising companies, who use this information to create profiles. Your name is not necessarily linked to this: mobile devices work with an advertising ID that is linked to your device. Is your privacy well protected by this feature? Meh.

As is often the case in information security, it is all about who you are, or sometimes also what you are. Take phishing for example. This can be done in two ways: the criminals use a dragnet and are fine with whatever they catch, or they use a spear to catch exactly the one fish they want. For example, because they know that that person has access to the company's money and is therefore a good target to receive an email 'from the CEO', stating that he must immediately transfer a nice amount of money to a certain bank account. This form of phishing is called spear phishing; you now understand why.

Back to the advertising world. As we saw, profiles are created for advertising purposes, but who says those profiles can only be used for that purpose? Suppose you have a collection of profiles. You could then create a map showing all the devices in a certain area. You don't know who they belong to, you just see the advertising IDs. Then you could single out one of those IDs and turn the question around, so to speak: where has this device been? That may provide a clue of places where the device is often found. And that in turn offers the opportunity to find out where someone works and where he lives.
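Turning the question around is easy to sketch: the same pile of observations answers both "which devices were in this area?" and "where has this device been?". The advertising IDs and places below are of course made up for illustration.

```python
# Sketch of how advertising-ID sightings can be queried in both directions.
# The records are fabricated; real brokers sell far larger data sets.
from collections import Counter

# (advertising_id, place) observations
sightings = [
    ("ad-123", "office"), ("ad-123", "office"), ("ad-123", "home"),
    ("ad-456", "office"), ("ad-456", "gym"),
]

def devices_in(place: str) -> set:
    """The map view: all advertising IDs seen at a given place."""
    return {ad_id for ad_id, p in sightings if p == place}

def places_of(ad_id: str) -> Counter:
    """The question turned around: where is this device often found?"""
    return Counter(p for i, p in sightings if i == ad_id)

print(devices_in("office"))   # both devices were seen there
print(places_of("ad-123"))    # 'office' appears most: likely where they work
```

The frequency count is the whole trick: the place where an ID shows up every weekday morning is probably a workplace, and the place where it spends every night is probably a home.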

For most of us, that's not a threat – we're not interesting enough for that. But what if you’re a criminal and therefore the police are looking for you? By using information that is actually intended for placing advertisements, they may be able to get close to you. Unfortunately, it also works the other way: what if you’re in law enforcement and you have to deal with criminals who also have access to that kind of information? Of course either side also needs specialized software for this. Reputable companies that could build something like this would probably only supply such a product to law enforcement. Unfortunately, organized crime is also becoming smarter and, moreover, has plenty of money to have something like that built. That could be a serious threat. As part of its duty of care for personnel, the Dutch financial crimes unit kindly requested this blog post on the matter. But of course it can also be relevant for other colleagues and for people outside our organization.

You can do something about this quite easily. The advertising ID of your device can be turned off. This makes you invisible on the map, and your device will not appear if someone asks the question: which devices are present around this office building around eight in the morning and five in the afternoon? Advertising companies such as Google and Meta will inform you that you will then see 'less relevant' advertising. So what! I brush aside the advertising for strollers as easily as I would the advertising for running shoes. And remember, if you also have your private phone in your pocket while at work, you want to kill the advertising ID on that device as well. Here is a brief description of how to do this in iOS/iPadOS and in Android. And in this video, John Oliver explains again how trading your data works. The entire video is interesting; fast forward to 10:10 if you just want to see the part about phone location.

The above tips are of course only intended for people on the right side of the law. It is advisable for criminals not to follow the tips, because that could have all kinds of unpleasant consequences.

 

And in the big bad world...


 

 
