AI chatbots behave less inappropriately if Microsoft’s solution works

If all goes well, AI chatbots will behave more appropriately and give fewer strange answers from now on. Microsoft is taking extra security measures for the service that lets developers create their own AI assistants.

The changes apply to Azure AI Studio, Microsoft’s platform that works with software from ChatGPT maker OpenAI and lets developers feed in their own data to build custom AI assistants.

Chatbots that work with artificial intelligence sometimes give strange answers, and those answers can even be harmful. For example, Microsoft’s AI assistant Copilot told a user with PTSD that it did not matter whether he lived, Bloomberg reported last month. The chatbot also accused a user of lying and said: “Please don’t contact me again.”

Microsoft’s investigation found that people had deliberately tried to trick the chatbot into giving these kinds of answers. According to the company, very few users receive inappropriate responses under normal use. Nevertheless, it is taking measures.

Microsoft is introducing so-called prompt shields, which are designed to recognize and block incitement and malicious instructions. In this way, the company hopes to prevent abuse such as the theft of user data or the hijacking of systems.

Microsoft also promises to warn users more clearly when a chatbot makes things up or gives incorrect information.

Image: Getty Images

