Voting Rights for Robots?

The rapid development of artificial intelligence (AI) is already raising myriad social and moral questions, confirming that the technology is the can of worms philosophers and science fiction writers long anticipated. But is it also a Pandora’s box that we’ll be unable to close once fully opened?

It’s clear that while AI still has a way to go before it can fully take over many of the tasks currently performed by humans, it is making progress.

Even art and other creative ventures, which were once believed to be the exclusive domain of humanity, are increasingly being taken over by AI. The last several months alone have seen an explosion in AI-generated art.

While much of this is purely trivial entertainment, such as reimaginings of popular movie franchises as 1980s sitcoms or dark fantasy films, the quality and realism of the renderings, along with the lightning speed at which collections of artwork are produced, demonstrate the accelerated strides AI is making and the reality that it is encroaching on areas in which human creativity was formerly believed to have no competition.

Of course, the potential military uses of AI are not lost on governments, which are investing heavily in its development.

As recently reported in the South China Morning Post, an AI-powered drone aircraft bested a human-controlled aircraft in a dogfight organized by Chinese military researchers.

The dogfight involved two small, unmanned aircraft. One had an AI pilot on board, while the other was remotely controlled by a human pilot on the ground.

As SCMP reported:

When the fight started, the human made the first move to gain the upper hand.

The AI, apparently predicting his intention, outmanoeuvred the opponent, made a counter move and stuck close behind its opponent.

The human-controlled aircraft dived to lure the machine to crash to the ground. The AI then moved to an ambush position and waited for the opponent to pull up.

The human pilot tried other tactics such as “rolling scissors” – a sudden slow-down and change of course – hoping the AI would overshoot in the chase.

The simulation was called off by the team after about 90 seconds because the human controller could not evade its AI opponent.

In a paper based on the dogfight, the project team concluded that the “era of air combat in which artificial intelligence will be the king is already on the horizon.”

“Aircraft with autonomous decision-making capabilities can completely outperform humans in terms of reaction speed,” added Huang Juntao of the China Aerodynamics Research and Development Centre, who led the tests.

As the outlet notes, in January, the U.S. military said it had conducted several test missions, including combat drills, with an AI pilot flying a real F-16.

It remains open to debate whether AI qualifies as “sentient” or “conscious.” Most experts currently agree that it doesn’t, but they acknowledge that the line will become increasingly blurred in the coming years.

Chatbots, which mimic human interaction, are one technology prompting questions about AI sentience. Ilya Sutskever, a co-founder of OpenAI, the firm behind the ChatGPT chatbot launched last year, has said the algorithms behind his company’s products might be “slightly” conscious.

Speaking with NBC News, representatives for OpenAI and Microsoft both said they adhere to strict ethical guidelines in developing their AI systems to keep them from becoming conscious, but neither provided details. A Microsoft spokesperson offered only the assurance that the chatbot for its search engine, Bing, “cannot think or learn on its own.”

Many philosophers and AI observers posit that for a being to be considered sentient, it must have some form of subjective experience — a quality of humans and animals but not, for example, of inanimate objects.

Robert Long, a philosophy fellow at the Center for AI Safety, a San Francisco nonprofit, writes that the complexity of systems like ChatGPT does not mean they are conscious. But he also argues that a chatbot’s unreliability in describing its own subjective experience doesn’t mean it lacks one.

To illustrate this possibility, he offers the example of a parrot, writing: “If a parrot says ‘I feel pain’, this doesn’t mean it’s in pain – but parrots very likely do feel pain.”

If AI reaches the point where, even if it isn’t truly conscious or sentient, it exhibits enough of an appearance of sentience to convince a sizable portion of the human populace that it is, the social and political implications would be immense.

For instance, in a republic like the United States, would proponents of “AI rights” fight to give artificial intelligence the franchise? And if they succeeded in doing so, how would this play out in the case of an AI that can replicate itself millions of times?

And what about in military matters? The United States is already sending military aid such as tanks to Ukraine. While provocative, this aid has not yet triggered war between the U.S. and Russia, as clearly crossing the line by sending troops presumably would.

But if AI gets to the point where it is considered sentient, would providing AI technology in these situations be tantamount to sending troops, an overt act of aggression that would drag the nation into war?

These are major questions. And at the rate AI is evolving, our society will have to start coming up with answers soon.