
We Need a Fourth Law of Robotics for AI



In 1942, the legendary science fiction author Isaac Asimov introduced his Three Laws of Robotics in his short story “Runaround.” The laws were later popularized in his seminal story collection I, Robot.

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While drawn from works of fiction, these laws have shaped discussions of robot ethics for decades. And as AI systems—which can be considered virtual robots—have become more sophisticated and pervasive, some technologists have found Asimov’s framework useful for considering the potential safeguards needed for AI that interacts with humans.

But the existing three laws are not enough. Today, we are entering an era of unprecedented human-AI collaboration that Asimov could hardly have envisioned. The rapid advancement of generative AI capabilities, particularly in language and image generation, has created challenges beyond Asimov’s original concerns about physical harm and obedience.

Deepfakes, Misinformation, and Scams

The proliferation of AI-enabled deception is particularly concerning. According to the FBI’s 2024 Internet Crime Report, cybercrime involving digital manipulation and social engineering resulted in losses exceeding US $10.3 billion. The European Union Agency for Cybersecurity’s 2023 Threat Landscape report specifically highlighted deepfakes, synthetic media that appears genuine, as an emerging threat to digital identity and trust.

Social media misinformation is spreading like wildfire. I studied it extensively during the pandemic and can only say that the proliferation of generative AI tools has made its detection increasingly difficult. To make matters worse, AI-generated articles can be as persuasive as, or even more persuasive than, traditional propaganda, and using AI to create convincing content requires very little effort.

Deepfakes are on the rise throughout society. Botnets can use AI-generated text, speech, and video to create false perceptions of widespread support for any political issue. Bots are now capable of making and receiving phone calls while impersonating people. AI scam calls that imitate familiar voices are increasingly common, and any day now we can expect a boom in video-call scams based on AI-rendered overlay avatars, allowing scammers to impersonate loved ones and target the most vulnerable populations. Anecdotally, my own father was surprised when he saw a video of me speaking fluent Spanish, as he knew that I’m a proud beginner in the language (400 days strong on Duolingo!). Suffice it to say that the video was AI-edited.

Even more alarming, children and teenagers are forming emotional attachments to AI agents and are sometimes unable to distinguish between interactions with real friends and with bots online. Already, there have been suicides attributed to interactions with AI chatbots.

In his 2019 book Human Compatible, the eminent computer scientist Stuart Russell argues that AI systems’ ability to deceive humans represents a fundamental challenge to social trust. This concern is reflected in recent policy initiatives, most notably the European Union’s AI Act, which includes provisions requiring transparency in AI interactions and disclosure of AI-generated content. In Asimov’s time, people couldn’t have imagined how artificial agents could use online communication tools and avatars to deceive humans.

Therefore, we must make an addition to Asimov’s laws.

Fourth Law: A robot or AI must not deceive a human by impersonating a human being.

The Way Toward Trusted AI

We need clear boundaries. While human-AI collaboration can be constructive, AI deception undermines trust and leads to wasted time, emotional distress, and misuse of resources. Artificial agents must identify themselves to ensure our interactions with them are transparent and productive. AI-generated content should be clearly marked unless it has been significantly edited and adapted by a human.

Implementation of this Fourth Law would require:

Mandatory AI disclosure in direct interactions,
Clear labeling of AI-generated content (a minimal sketch of machine-readable labeling follows this list),
Technical standards for AI identification,
Legal frameworks for enforcement,
Educational initiatives to improve AI literacy.
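To make the labeling requirement concrete, here is a minimal Python sketch of what a machine-readable disclosure record could look like. The schema and field names are illustrative assumptions of mine, loosely inspired by content-credential efforts such as C2PA, not an implementation of any real standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_ai_content(content: str, model_id: str) -> dict:
    """Wrap AI-generated content in a machine-readable disclosure record.

    Hypothetical schema, loosely inspired by content-credential efforts
    such as C2PA; a real deployment would use a signed, standardized manifest.
    """
    return {
        "content": content,
        "disclosure": {
            "ai_generated": True,   # the Fourth Law's core requirement
            "model_id": model_id,   # which system produced the content
            "created_at": datetime.now(timezone.utc).isoformat(),
            # The hash binds the disclosure to this exact content, so the
            # label can be checked for tampering after the fact.
            "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        },
    }

if __name__ == "__main__":
    record = label_ai_content("Hello! I can help with that.", "example-model-v1")
    print(json.dumps(record, indent=2))
```

In practice, a cryptographic signature from the generating service, rather than a bare hash, would be needed for such labels to be trustworthy, since a hash alone proves only integrity, not origin.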

Of course, all this is easier said than done. Enormous research efforts are already underway to find reliable ways to watermark or detect AI-generated text, audio, images, and videos. Creating the transparency I’m calling for is far from a solved problem.
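To give a flavor of one research direction, here is a toy sketch of the statistical “green list” detection idea described by Kirchenbauer and colleagues in 2023: generation biases each token toward a pseudorandom subset of the vocabulary keyed on the preceding token, and detection simply counts how many tokens land in that subset. The whitespace tokenization and unkeyed hash below are simplified assumptions; real detectors operate on model token IDs with a secret key.

```python
import hashlib
import math

GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the green list, seeded by `prev_token`.

    Toy stand-in for the keyed hash used in published schemes; real
    detectors hash model token IDs with a secret key.
    """
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < GREEN_FRACTION

def watermark_z_score(tokens: list[str]) -> float:
    """z-score of the observed green-token count against the no-watermark null.

    Without a watermark, each token is green with probability GREEN_FRACTION,
    so the count is approximately binomial.
    """
    n = len(tokens) - 1  # number of (previous token, token) pairs
    if n <= 0:
        return 0.0
    greens = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std

if __name__ == "__main__":
    text = "the quick brown fox jumps over the lazy dog".split()
    # A large z-score (say, above 4) would suggest watermarked text;
    # ordinary text like this should score near zero.
    print(f"z = {watermark_z_score(text):.2f}")
```

Even schemes along these lines remain brittle against paraphrasing and translation, which is part of why detection is still an open problem.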

But the future of human-AI collaboration depends on maintaining clear distinctions between human and artificial agents. As noted in the IEEE’s 2022 “Ethically Aligned Design” framework, transparency in AI systems is fundamental to building public trust and ensuring the responsible development of artificial intelligence.

Asimov’s complex stories showed that even robots that tried to follow the rules often discovered the unintended consequences of their actions. Still, having AI systems that are trying to follow Asimov’s ethical guidelines would be a very good start.
