Image from deperfectepodcast.nl
Two weeks ago I read with fascination the (internally published) story of fellow
blogger Guido about ChatGPT, the artificially intelligent chatbot that anyone
can use. Until now I knew chatbots mainly from companies that want to keep you
from calling them, offering instead to let you exchange thoughts with their
ever-cheerful canned employee via the keyboard. But if I do have a question
for a company at all, it rarely falls into a category that Clippy's
grandchildren can handle; otherwise I would have found the answer myself in the
FAQ.
Do you remember Professor Barnabas's computer from the Spike and Suzy (UK) / Willy
and Wanda (US) comics? It spat out a card with the answer to the question
asked. I wanted to experience for myself how this works in practice. January 28
is European Privacy Day, and I promised the privacy colleagues in my team that
I would dedicate a blog to it. As an experiment, I outsourced the writing for
once. However, not entirely: the following paragraphs are partly written by
ChatGPT, partly by me (in the ChatGPT pieces I only intervened grammatically
here and there). Can you recognize who wrote which paragraph? I will give the
solution next week.
As a European, I am proud of the European Union and its role in protecting our
personal data. With the introduction of the General Data Protection Regulation
(GDPR) in 2018, the EU has introduced one of the strictest privacy laws in the
world. The GDPR gives consumers more control over their own data and makes
companies responsible for protecting that data.
While the GDPR is an important step forward, the battle for our privacy will
never end. Technological developments are moving faster than ever, and with the
rise of artificial intelligence and the internet of things we face new
challenges. Businesses and governments must continue to adapt to the changing
landscape to protect our data.
Let's zoom in on artificial intelligence as a threat to our privacy. Sometimes you can
benefit from a company or government agency not knowing everything about you;
in countries with an oppressive regime this applies even more than in our
western democracy. As long as only people and traditional information systems
are involved, things are sometimes overlooked. But what if artificial
intelligence were able to link all kinds of sources together in an
unprecedented way and make connections that you, as a human being, or a 'dumb'
computer, would never have thought of? Is that still fair? Aren't we entitled
to a lucky break from time to time, because 'the system' overlooked something?
It is not only the responsibility of companies and
governments to protect our privacy. As consumers, we also have a role to play.
We need to be aware of our own data and how we share it. We must ask questions
about how companies use our data and we must oppose inappropriate practices.
Let's work together to protect our privacy so we can feel safe and protected in
our digital world.
Another important step is the use of privacy-enhancing
technologies, such as Virtual Private Networks (VPNs) and encryption tools.
These can help secure our online communications and keep our personal
information from being intercepted by hackers and other malicious actors. It is
also important to keep abreast of privacy issues and to advocate for stronger
privacy protections on a national and international level. For example, by
supporting organizations that are committed to defending privacy rights, and by
speaking out against government surveillance and data collection practices that
threaten our civil liberties.
In 2009, the scientists Murphy and Woods formulated the Three
Laws for Responsible Robotics, following the Three Laws of Robotics by
the science fiction author Isaac Asimov. The first of their laws reads: “A
human may not deploy a robot without the human-robot work system meeting the
highest legal and professional standards of safety and ethics.” Perhaps we
should apply this law to artificial intelligence as well, at least with regard
to ethics, to prevent chatbots from threatening our privacy.
And in the big bad world…
This section contains a selection of news articles I came
across in the past week. Because the original version of this blog post is
aimed at readers in the Netherlands, it contains some links to articles in
Dutch. Where no language is indicated, the article is in English.
- ChatGPT itself also believes that it can threaten our privacy. [DUTCH]
- teachers want to prevent the use of ChatGPT for homework and reports. [DUTCH]
- a four-year-old version of the US no-fly list leaked through a regional airline.
- tens of millions of accounts have been stolen from T-Mobile in the US.
- WhatsApp has received a hefty privacy fine. [DUTCH]
- both governments and companies monitor your social media. [DUTCH]
- the LockBit ransomware is explained here.
- the growing use of multi-factor authentication leads to more attacks on telecom companies.
- the Russian Kaspersky company trashed the Russian chat app Telegram.
- the Army illegally collected information. [DUTCH]
- the police distinguish between cybercrime and digitized crime, both of which, incidentally, decreased last year in the Netherlands. [DUTCH]
- websites must offer a button to refuse cookies right away. [DUTCH]