2024-07-19

Playing with toilet paper

 

Image from Pixabay

If you are eating right now and have a bit of a delicate soul, you'd better save this blog for after dinner. It has an, uh, sanitary angle. I'll try to keep it as tidy as possible.

About five years ago, Dutch comedian Kasper van der Laan appeared on a couple of TV shows, where he made an interesting suggestion for saving toilet paper. His argument starts like this: when you wipe your bottom, you keep going until you see a clean piece of paper. But, Van der Laan reasons, by then your buttocks were already clean, so you could have stopped one wiping round earlier. The big question is: do you dare to gamble on it? Like, "I think I'm done here," in the comedian's words. Do you remember the sudden, unfounded fear that toilet paper would run out during the COVID pandemic? Perhaps more people put Van der Laan's thought experiment into practice back then.

Using toilet paper is – if I may generalize a bit – a kind of security measure. It protects you against skid marks, skin irritation and unpleasant odors. This immediately raises the question of how other animal species deal with this, especially with the second risk (the first does not apply to them, and the third bothers them far less). But come on, let me not digress. What Van der Laan did here was a genuine risk analysis. And with the summer holidays in view, in which many people will trudge to a campsite toilet, like it or not, with a roll of toilet paper under their arm, while others will encounter hotel paper of varying quality, it seems worth working this out in a bit more detail.

The basic formula for risk analysis is: risk = likelihood × impact. In this toilet case we play with the likelihood: if you keep going until you produce a clean piece of paper, the likelihood of skid marks is practically zero. If you put the idea described above into practice, the likelihood will always be greater than zero. But how much greater? That is difficult to determine, because you are dealing with another variable: the, er, output. If it were always of the same quality, you would know after a while: after so many wipes it's done, so I can switch to that many wipes minus one. But we all know that our biological output can vary over time – because of what you have eaten, because of a different climate, or because you are ill or nervous. The chance of a wrong assessment, and therefore also the chance that the risk becomes reality, is variable.
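To make that a bit more tangible: the formula is simple enough to play with. Here is a minimal Python sketch – purely for illustration, with made-up numbers that come from my keyboard rather than from any real analysis – showing how the risk estimate shifts when the likelihood varies from day to day while the impact stays fixed.

```python
# Purely illustrative: made-up numbers, not from any real analysis.
# Basic formula: risk = likelihood x impact, with a likelihood that varies.

import random

IMPACT = 100  # hypothetical damage score if the risk materializes


def risk(likelihood: float, impact: float = IMPACT) -> float:
    """Basic risk formula: risk = likelihood * impact."""
    return likelihood * impact


# Keep wiping until the paper is clean: the likelihood is practically zero.
print(risk(likelihood=0.0))  # -> 0.0

# Stop one round earlier: the likelihood is greater than zero, but it varies
# with the quality of the, er, output.
for day in range(5):
    todays_likelihood = random.uniform(0.05, 0.4)  # made-up daily variation
    print(f"day {day}: risk = {risk(todays_likelihood):.1f}")
```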

Estimating the probability of an event that damages information security is difficult in itself. We usually do a qualitative risk analysis, which means we use terms such as low, medium and high. The counterpart is quantitative risk analysis, which involves calculating with numbers – for example with statistical data for the likelihood and with amounts for the damage. In all these analyses the probability is not a fixed factor, just as it isn't in the sanitary example. Yet most of the time we pretend that it is. With qualitative analyses I don't think that is much of a problem, because the necessary margin is already built into terms such as high and low.
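For the sake of illustration, here is a small sketch of the difference – with assumed, hypothetical values – showing the same risk assessed once qualitatively (labels combined via a simple matrix) and once quantitatively (a probability multiplied by a damage amount).

```python
# Purely illustrative, with assumed values: the same risk assessed
# qualitatively (labels) and quantitatively (numbers).

# Qualitative: combine likelihood and impact labels via a simple matrix.
QUALITATIVE_MATRIX = {
    ("low", "low"): "low",
    ("low", "high"): "medium",
    ("high", "low"): "medium",
    ("high", "high"): "high",
}
print(QUALITATIVE_MATRIX[("high", "low")])  # -> medium

# Quantitative: a statistical likelihood times a damage amount.
annual_likelihood = 0.15      # assumed: the event occurs in 15% of years
damage_in_euros = 20_000      # assumed damage if it happens
expected_annual_loss = annual_likelihood * damage_in_euros
print(f"expected annual loss: {expected_annual_loss:.0f} euros")  # -> 3000
```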

However, if you are in a situation where the odds can go either way, you will have to assume the worst case. This may mean that the measures you take to deal with the risk are 'too good' some of the time. After all, we do not strive for maximum but for optimal security – not too little, but not too much either. If you are only allowed to drive 30 km/h somewhere because of road works, that is fine, but if no one is working at that moment, the measure feels unnecessarily strict.

So what to do? Do you tailor your measures to the average situation? Then you run the risk that the measures are too weak at times. How bad that is depends on the impact. If the expected impact is acceptable, you can make do with a bit less. But if that temporary speed limit is not only there to protect road workers, but also because there is a large hole in the road, things are different.

In Japan they have toilets that make toilet paper redundant. You will be sprayed clean and blown dry from within the bowl. And sometimes you can even play a sound via the control panel to disguise certain typical bathroom sounds. They have taken all risks into account and implemented smart measures.

The Security (b)log will return after the summer holidays.

And in the big bad world...

 

2024-07-12

I see, I see what you don't see

 

Image from Pixabay

It was a warm Tuesday afternoon in one of those summers that just won't get going. You take what you can get, and so they sat in their Utrecht backyard enjoying that one beautiful day. Suddenly the peace was rudely disturbed by shouting and banging on the garden gate. They jumped up in alarm.

Through the cracks in the gate they caught glimpses of a woman with a wild look in her eyes. "Let me in, this is my house!" she screamed. Well, they weren't going to do that. Explaining that the woman was at the wrong house, even in the wrong street, had no effect on this lady, who was clearly under the influence of something. She kept banging on the gate. Now, 'gate' sounds very solid, but in fact it was a construction of flimsy wood hanging on inferior hinges, and the rightful owners feared it would not hold out much longer.

Time to call the police. They arrived quickly, and they soon realized that it was best to take the person with them, because in her current state reasoning with her was impossible. They stuffed her into the back of the car and drove away. The street regained its calm.

The local residents were of course both shocked and curious. Most were not at home at the time, or they were vacuuming, so they hadn't heard a thing. The neighbors across the street had a security camera. Maybe it had recorded something? Bingo! It was all there. When the woman walked up, she even looked straight into the camera. The police intervention was also captured nicely. The video was shared in the neighborhood app group – not for sensation, but because everybody knew that the lady would be walking around freely again in no time, and the neighborhood wanted to be prepared.

If I lived on that street, I would want that information too. You want to protect your family and your property, don't you? As an ordinary citizen, I would not hesitate to share the images with neighbors. But at the same time, from a professional point of view, I wonder: is that actually allowed? What about privacy? People who do something wrong are also entitled to their privacy. The General Data Protection Regulation (GDPR) is the European legislation that governs our privacy. Every country has a GDPR supervisory authority; in the Netherlands this is the Dutch Data Protection Authority (AP). The AP is the obvious place to look for the answer to my question.

I read there that you may not share images in which people are recognizable without their permission. So do not put them on the internet and do not share them via social media. But there is an exception for personal or household use: "The condition here is that this person keeps the photos and videos private or at most shares them in a very limited circle. For example in a small app group." That 'small app group' is a bit odd, because any member of that group could distribute the images further.

There's more going on. The GDPR states that you are not allowed to film public roads, because that would infringe on the privacy of every passer-by. The AP understands that sometimes there is no other option than for your camera to capture part of the street. But even then there are rules. The most obvious: zoom in on your own property as much as possible – in other words, keep the infringement as small as possible. There is actually no need to keep the images, but there appear to be no concrete rules for this, because the AP says: "Delete the images as soon as you no longer need them. For example, after 24 hours." You also have to inform people about your camera and secure the images properly – because if you are hacked, that constitutes a data breach.

There is a double standard in the rule that you are not allowed to film public spaces, because if something happens on your street, the police would love to have the images from your 'illegal' camera. They can even demand those images – in other words, you are obliged to hand them over. So it's not allowed, but if you do it anyway, it might help in fighting crime.

 

And in the big bad world...

 

2024-07-04

Crime from the holodeck

 

Image from Pixabay

You walk through a corridor that looks like all the other corridors, but eventually you stand in front of that one door. It whizzes open with that typical sound and you enter the room behind it. But no, you are no longer in a room at all. You are in a lush forest, hearing birds chirping and a stream babbling. And yet you really haven't walked outside, for the simple reason that you are on board a spaceship.

Some readers will know exactly what I am talking about; the others will hopefully read on with curiosity anyway. For the latter group, an explanation: you are on board a spaceship from Star Trek, the still popular science fiction series from deep in the last century, in which 24th-century ships have a holodeck: a room in which holograms and force fields generate simulations of people, objects and environments. It all looks, feels, sounds and smells completely realistic, and you can even touch things. The holodecks are mainly used for recreation and training. The simulated environment can appear much larger than the space the holodeck actually occupies, which is why you can walk through that forest for hours. But you could just as easily sit in a virtual cafe or play a game of tennis.

In the 1980s, when the holodeck appeared in Star Trek, this was an example of virtual reality avant la lettre. It took a few more decades before consumer VR headsets became widely available – you know, those ski goggles with built-in screens and preferably speakers on the side, which immerse you in a sometimes frighteningly realistic illusion. You have to experience it to understand it.

As so often happens with inventions that advance humanity, the technology for creating virtual realities (a contradiction in terms if you ask me) has also been put to bad use. Because nowadays we have artificial intelligence (also a term with a built-in contradiction). AI is used by cybercriminals to present a false reality to their victims. Like that mother I wrote about a while ago, who really thought she heard her son on the phone saying he had had an accident. You don't always have to build a complete environment like a holodeck to get someone to believe something. Sometimes it's enough to show – or let someone hear, feel or smell – whatever fits a certain context. And criminals are particularly good at this.
I call that AI crime.

If you regularly read the articles in the And in the big bad world... section below, you will have seen plenty of news lately that plays into AI crime: a Brit who – if elected to the House of Commons – would let himself be controlled by AI, a student who used AI to cheat, 'intelligent' toothbrushes and other household appliances, and, not to forget, the AI functions that are increasingly being built into everyday software.

Will you still be able to tell fake from real? Is your perception complete? Even in the era of chemical photography (film, darkroom, chemicals) the truth was already being bent by retouching photos. Often to make them more attractive, but there are also group photos of Soviet dignitaries from which disgraced comrades have been erased. Cancelled, we would say nowadays. With digital photos, photoshopping is a piece of cake. And you have probably seen portraits that claim to be AI-generated; had you not been given that information, you would probably have thought you were looking at a real human being. The same goes for sound: a criminal obtains a recording of someone saying something, and his AI application can then make that same voice say something else. It can also be done the analog way: in presentations I often show a video in which you think you see and hear the actor Morgan Freeman – the visuals are indeed made with AI, but the voice is 'simply' imitated by a voice actor.

Virtual reality and artificial intelligence form a fertile couple. If you put their abbreviations together, you get vrai. That is the French word for truth, or reality. Isn’t that bizarre?

 

And in the big bad world...

... unfortunately I didn't have time to fill this section this time due to a day off.
