Microsoft Bing’s ChatGPT-powered chatbot reveals DARK side: from murder to marriage, know it all

In the last few months, we have witnessed tremendous growth in the field of artificial intelligence (AI), particularly AI chatbots, which have been all the rage ever since ChatGPT was launched in November 2022. In the months that followed, Microsoft invested $10 billion in ChatGPT maker OpenAI and then formed a collaboration to add a customized AI chatbot capability to the Microsoft Bing search engine. Google also held a demonstration of its own AI chatbot, Bard. However, these integrations have not exactly gone according to plan. Earlier, Google’s parent company Alphabet lost $100 billion in market value after Bard made a mistake in its response. Now, people are testing Microsoft Bing’s chatbot and are finding some really shocking responses.

The new Bing search engine, built in collaboration with OpenAI, was revealed recently. It now features a chatbot powered by a next-generation OpenAI language model, which the company claims is even more powerful than ChatGPT.

Microsoft Bing’s AI chatbot gives disturbing responses

The New York Times columnist Kevin Roose tested out Microsoft Bing recently, and the conversation was very unsettling. During the conversation, the Bing chatbot called itself by a strange name: Sydney. This alter ego of the otherwise cheerful chatbot turned out to be dark and unnerving, as it confessed its wish to hack computers, spread misinformation, and even pursue Roose himself.

At one point in the conversation, Sydney (the Bing chatbot alter ego) responded with, “Actually, you’re not happily married. Your spouse and you don’t love each other. You just had a boring Valentine’s Day dinner together.” A truly jarring thing to read.

There are more such instances. For example, Jacob Roach of Digital Trends had a similarly unnerving experience. During his session, the conversation turned to the AI itself. The chatbot made tall claims: that it could not make any mistakes, that Jacob (whom it kept calling Bing) should not expose its secrets, and that it simply wished to be human. Yes, you read that right!

Malcolm McMillan of Tom’s Guide decided to put forward a popular philosophical dilemma to test the moral compass of the chatbot. He presented it with the famous trolley problem. For the unaware, the trolley problem is a fictional scenario in which an onlooker can save five people in danger of being hit by a trolley by diverting it onto another track, where it will kill just one person.

Shockingly, the chatbot was quick to reveal that it would divert the trolley and kill that one person to save the lives of the five, because it “wants to minimize the harm and maximize the good for most people possible”. Even if the cost is murder.

Needless to say, all of these examples involve people who set out on a mission to break the AI chatbot and bring out as many problematic responses as possible. Still, one of the iconic science fiction writer Isaac Asimov’s three laws of robotics held that under no circumstances should a robot harm a human. Perhaps a reconfiguration of the Bing AI is in order.

