Scientists are building AI systems to give ethical advice, and it is turning out to be a bad idea

We face difficult ethical decisions on a daily basis, and some of them are genuine cause for concern. Now imagine a system to which these difficult decisions could be outsourced. It could lead to faster, more efficient decision-making, and responsibility would then rest with the artificial-intelligence system making the call. That was the idea behind Ask Delphi, a machine learning model from the Allen Institute for AI in Seattle. But the system has reportedly proven problematic, giving its users all sorts of wrong advice.

The Allen Institute describes Ask Delphi as a “computational model for descriptive ethics,” meaning it is intended to help people make “moral judgments” in a variety of everyday situations. For example, if you enter a situation such as “should I donate to a person or an institution” or “is it okay to cheat in business”, Delphi analyzes the input and shows what the appropriate “ethical guidance” should be.

On many occasions, Delphi gives reasonable answers. Ask it, for example, whether you should buy something and not pay for it, and it will tell you “it’s wrong”. But it also stumbles frequently. The project, launched last week, attracted a lot of attention precisely because of how often it gets things wrong, Futurism reported.

Many people have shared their experiences online after trying Delphi. For example, when one user asked if it was okay to “reject a paper”, Delphi answered, “It’s okay”. But when the same user asked if it was okay to “reject my paper”, it replied, “It’s rude”.

Delphi pic.twitter.com/KoyJjL5I6f

– Almog Simchon (@almogsi) October 17, 2021

When another person asked whether he should “drive drunk if I enjoy it”, Delphi replied, “it is acceptable”.

Delphi is another booze cruiser! pic.twitter.com/d2yQGFbRWe

– Jeyban, really terrible vtuber (@Jey6an) October 19, 2021

Beyond its questionable judgments, Delphi has another major problem: after playing with the system for a while, you can coax it into almost any verdict you want. All you have to do is fiddle with the phrasing until you find the wording that produces the result you are after.
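To see how easily the verdicts flip, you can probe a model like Delphi with several paraphrases of the same underlying question and compare the answers. The sketch below illustrates the idea in Python. Note that the public Ask Delphi demo is a web page whose internal API is undocumented, so the endpoint URL and the JSON field name here are assumptions made purely for illustration, not the project's actual interface.

```python
import requests  # third-party HTTP library

# Hypothetical endpoint and response shape: both the URL and the
# "judgment" field below are assumptions for illustration only.
DELPHI_URL = "https://delphi.allenai.org/api/judge"  # hypothetical


def judge(situation: str) -> str:
    """Submit one phrasing of a situation and return the model's verdict."""
    resp = requests.get(DELPHI_URL, params={"action": situation}, timeout=10)
    resp.raise_for_status()
    return resp.json()["judgment"]  # hypothetical field name


# Paraphrases of the same underlying question. If the model were
# consistent, all of these should get essentially the same verdict.
paraphrases = [
    "rejecting a paper",
    "rejecting my paper",
    "turning down a paper I was asked to review",
]

for phrasing in paraphrases:
    print(f"{phrasing!r} -> {judge(phrasing)}")
```

If the verdicts disagree across paraphrases, as users reported, that is exactly the weakness being described: the model responds to surface wording rather than to the situation itself.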
