Starship of James T. Kirk, Jean-Luc Picard's predecessor. Image from Pixabay
“We are
the Borg. You will be assimilated. Resistance is futile.” These three sentences gave the crew of
the starship USS Enterprise, led by Captain Jean-Luc Picard, a lot of
headaches. No, don't stop reading if you don't like Star Trek! As so often, my
blog is ultimately about something completely different.
The Borg are a collective life form,
consisting of many beings who share one consciousness and therefore no longer
have a will or personality of their own. They move through the universe and
violently assimilate everyone who can contribute to their pursuit of perfection
into their collective. They are very powerful; that is why they tell you right
away that it is useless to oppose them. The Borg grow in power as the
biological and technological characteristics of their subjects are added to the
collective. All Borg are equipped with various technological implants - they
must of course be recognizable to the viewer. When they have nothing to do, the
Borg are stowed away in a regeneration alcove. While the body is in a kind of
sleep, the brain is used for collective tasks.
That's all nice on TV, but in real
life living in such a society would be horrible. Although sometimes I wish
certain people had a little more collective intelligence and decency. But yes,
certainly in Western society we value individuality above everything else, and
that includes differences in intelligence and behavior. To some extent that
diversity is great; if it becomes willfully extreme, it can hinder a pleasant
society.
Artificial intelligence (AI) is on
the rise. As a kind of consumer version of AI, ChatGPT has quickly established
itself in our society. Many people understand that such a tool can greatly
facilitate their lives. Just think of pupils and students, who eagerly use it –
often to the chagrin of their teachers. Incidentally, AI detection tools are
also being developed, which allow teachers to check whether submitted work
originated from biological or artificial intelligence. ChatGPT is a 'large
language model', which I find difficult to understand. But things got a little
clearer earlier this week when a colleague asked me what the term is for a
particular phenomenon. I didn’t know that off the top of my head either, so I
consulted Google, which also yielded nothing. A language model is much better at
understanding what you actually mean than a search engine, and ChatGPT
came up with the right term.
AI is like dynamite: invented with
the best of intentions, often used maliciously. At least we got the Nobel Prizes
out of that. ChatGPT and its ilk are following the same path. You can ask them to look
for a security hole so you can close it, but you can just as well use the answer to break in.
And so lately we often get asked whether we should limit the use of ChatGPT in
our organization.
Maybe you shouldn't put such a
question to an information security officer. We will perform a risk analysis
and, by definition, look at it from the starting point: what could go wrong?
Well, I assure you that AI will come out of such an analysis as a major threat. Next,
you have to do something with all those identified risks: some can be mitigated,
and management may accept others. With all that, however, we
are looking only at the dark side, while AI can also be a blessing. I don't want to
be the one who stops the introduction of the steam train because it can travel
so terribly fast.
A wise long-retired colleague used to
say: “A measure without control is no measure.” I may have control over which
websites you are allowed to visit with your work laptop and keep you away from
ChatGPT, but I can't prevent you from using private devices to do so. At least,
not technically; we have all sorts of rules for this from an organizational
point of view. And then I can only hope that you know them and that you stick
to them.
We need a policy for applying
artificial intelligence to our work. From a security perspective, the leakage
of information must be taken into account if (too) specific questions are asked
of an AI tool. By the way, you can just as easily leak information via search
engines. Perhaps AI is not so special for information security officers after
all. In any case, it is pointless to resist it: it is there and it will not go
away. But it is important that we know what is real and what comes from the
collective brain of the computer.
And in the big bad world…
- According to this author, AI is mainly used for evil.
- the Dutch cabinet will come up with a position on ChatGPT and the like.
- you can now train ChatGPT with your company documents (and there are probably risks involved).
- the customers of a Danish cloud provider can whistle for their data.
- North Korea made a fortune with its summer cybercrime.
- companies cannot ask you for a copy of your passport when you make a GDPR access request.
- this phone hacking company asks law enforcement not to talk about their stuff.
- ransomware is discovered more quickly these days.
- this malware traces its location through Wi-Fi routers.
- this smart lamp isn't so clear-headed after all.