Image from Pixabay
I’ve said before that you shouldn’t ask an information security officer whether you can use AI for your work, because that will lead to a risk analysis that will undoubtedly conclude: don’t do it. No, decisions about the application of certain forms of technology should be made by ‘the business’ or, to use a better term, by the decision makers. They may well be influenced by our risk analyses, but there are more factors that decision makers should, or want to, take into account.
Sometimes the decision has to be made at the political level, as with AI. Enter the European AI Act, a regulation on artificial intelligence (an EU regulation is legislation that applies throughout the European Union, without country-specific interpretations). The aim of the AI Act is to ensure that we get safe AI systems that respect our fundamental rights, including transparency, traceability, non-discrimination and environmental friendliness. The systems must also be under human supervision to prevent harmful consequences.
The regulation divides the AI landscape into four risk levels. The highest level contains systems that pose an unacceptable risk to people’s safety, livelihood and rights, and are therefore prohibited. Examples mentioned by the EU are voice-controlled toys that encourage dangerous behavior and real-time biometric identification (think of facial recognition at traffic lights in China: cross on red and you’ll find a ticket in your mail).
The next category contains systems that pose a high but acceptable risk. They may have a negative impact on our safety and fundamental rights, and they fall into two subcategories: systems covered by EU product safety legislation, such as toys, cars, aviation, medical devices and lifts; and systems in certain areas, such as critical infrastructure, education, employment, law enforcement and migration. Such systems are assessed before they are allowed to be put on the market, and throughout their life cycle. National regulators must also set up a complaints procedure.
One risk level lower are systems that pose a risk of deception. This includes generative AI, which creates content itself, such as ChatGPT and Gemini. Artificially generated content must be labelled as such: if you chat with an AI chatbot on a website, the site must clearly tell you so. Deepfakes – videos, photos and audio clips that are manipulated to make it seem like someone is doing or saying something they never did – must also be labelled. AI systems that pose a minimal risk are not regulated; examples include games and spam filters. According to the EU, the vast majority of AI systems currently in use fall into this category.
The AI Act will be implemented in phases. In February next year, systems that pose an unacceptable risk will be banned. Six months later, the national supervisory authorities should be in place. Next year, the transparency rules for general-purpose AI (such as ChatGPT) will also come into force, and a year after that, the rules for high-risk systems will follow.
It is good to see that the EU is taking the bull by the horns in a timely manner. But you should have no illusions that everyone will comply with the regulations. Criminals in particular have a knack for breaking the law. They will certainly continue to use deepfakes to make people believe that a loved one is in trouble and urgently needs money.
And in the big bad world…
- the military also needs AI regulations.
- ChatGPT is not a formal source.
- the Dutch tax administration took down air traffic control.
- sometimes you have to fall back on old-fashioned systems.
- this website collects European alternatives for digital products and services.
- ethical hackers are legally protected in Germany.
- you should be alert to phishing from booking.com.
- Australia wants to ban children and teenagers from social media to protect their mental health.