The European Union seems determined to lead the way in regulating technology. The face of this cause is arguably Margrethe Vestager, the European Commission’s Executive Vice President for a Europe Fit for the Digital Age. This time, she’s leading the EU through yet another fundamental transformation: the regulation of artificial intelligence (AI). While the changes will take years to implement, it’s clear that the new rules will set strict boundaries for AI. “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” Vestager said, as reported by the AP.
This was a long time coming. While AI is essential if humanity is to move forward, in some cases it can also be… for lack of a better word, creepy.
When AI goes too far
AI’s potential extends beyond what most of us can even imagine, which also means we will need to draw the line somewhere. Several technologies are already walking a very thin line, and some projects were better off canceled.
Deepfakes, or what is even true anymore
Simply identifying the truth already feels insanely exhausting today. On top of that, “thanks” to AI, you can make anyone say or do anything on video. In deepfakes, Obama tells you that Killmonger was right, women are regularly violated with revenge porn they never consented to, and in other news, there’s even deepfake geography. None of it is real; all of it is scary. Neural networks can now even create fake images of people who don’t exist and tailor them precisely to your taste.
Chinese Big Brother’s always watching
Chongqing, the world’s most surveilled city, boasts one CCTV camera for every 5.9 citizens. Nothing escapes the camera lenses, and oh boy, are they clever. Some of them use AI-based facial recognition software, and once you’re out on the streets, you’d better be a model citizen, or else. Yes, crime drops, you can easily find whatever you’ve lost, and identifying missing people suddenly becomes easy. But at the other end of the spectrum, you can end up in something called a re-education center. It’s not exactly a concentration camp, but it’s not exactly anything else, either.
And as Time reports, many of the “inconvenient” people placed in these camps were essentially flagged by a computer algorithm. Millions of victims are trapped in a madness where anything from a newly grown beard to frequent business trips or not using social media can count as suspicious. The fact that China plans to mold perfect citizens with its Big Data-driven Social Credit System is a bitter cherry on top.
If AI is just as good as we are, we might be in trouble
The problem with AI is that it’s only as good as the data we feed it. And that data is only as good as we are. The thing is, to err is human, and we err A LOT. So AI learns from our mistakes and our biases, which forced a giant like Amazon to scrap its secret recruiting tool. Why? The system taught itself that men were the preferred candidates and thus discriminated against women in its selection process. Sigh.
AI goes rogue
In some instances, AI has confused even its own researchers. One neural network managed to cleverly hide information from its creators so it could cheat when transforming satellite images into Google Maps. Facebook shut down two of its AI chatbots when they started talking to each other in a language no one could decipher. While it sounds like a plot twist straight out of a sci-fi novel, they weren’t shut down out of fear; they were simply useless, since they couldn’t communicate with humans.
So what about the AI regulation in the EU?
The European Union’s primary goal is to outlaw cases where AI manipulates people’s behavior or exploits their vulnerabilities. Vestager also stressed that there is “no room for mass surveillance in our society” – unlike in China.
The draft document proposing the changes focuses on “AI systems considered a clear threat to the safety, livelihoods, and rights of people”. The proposals are not final; however, AI applications considered high-risk should brace themselves for a new rulebook in the coming years. We’re talking about sensitive areas such as law enforcement, criminal justice, self-driving cars, or border control. Using AI in these areas won’t be banned, but high-quality data and human oversight will be a must. The draft also proposes new, less strict rules for AI applications of “limited risk”, such as chatbots. The potential punishment: fines of up to 30 million euros or, for larger companies, up to 6% of their global annual revenue, whichever is higher. However, companies will first be given an opportunity to fix their mistakes.
While some regulation is necessary, it’s crucial to leave enough room for AI innovation to flourish in the EU. The discussions in the coming years will show whether the EU can step up to the challenge.
Image credit: Radikale Venstre