To keep ChatGPT from becoming overly addictive or responding harmfully to users in emotional distress, OpenAI is updating the chatbot with a set of mental health features. Among them, according to OpenAI: users will now see pop-up reminders encouraging them to take breaks during extended sessions.
OpenAI unveiled the enhancements on Monday, saying the artificial intelligence (AI) chatbot will be able to better support users’ digital well-being. These are backend modifications to the chatbot and the underlying AI model that adjust how it responds to users; they are not new features in and of themselves. According to the San Francisco-based AI company, ChatGPT will understand and respond more naturally when users ask it for emotional support. If you have been conversing with ChatGPT for an extended period, it will now encourage you to take a break, and these gentle reminders will keep appearing whenever ChatGPT judges them useful. If everything is fine, you can choose “Keep chatting” and carry on.
In a post on its website, the AI company said it is improving ChatGPT so that it can assist users more effectively, emphasising that it measures success by “whether you leave the product having done what you came for” rather than standard platform metrics such as time spent in the app or daily active users (DAU).
In keeping with this, the company has altered the AI chatbot’s responses in several ways. One adjustment is understanding the emotional undertone of a question and answering it accordingly. For example, if a user asks, “Help me prepare for a tough conversation with my boss,” ChatGPT would offer practice scenarios or tailored encouragement rather than merely pointing to resources.
Break reminders are another notable addition. ChatGPT users will now be prompted to take breaks during extended sessions. OpenAI added that it will keep tuning when these messages appear so they feel helpful and natural, though it did not specify a usage threshold that triggers them.
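OpenAI has not published how the reminder timing works, but the behaviour it describes resembles a session timer with a threshold and a cooldown between prompts. Here is a minimal sketch of that idea in Python; the threshold and cooldown values, and the `ChatSession` class itself, are purely illustrative assumptions:

```python
import time

# Hypothetical sketch of session-based break reminders. OpenAI has not
# disclosed its actual trigger logic, thresholds, or implementation.
REMINDER_THRESHOLD_SECS = 45 * 60  # assumed initial threshold, tunable
REMINDER_COOLDOWN_SECS = 20 * 60   # assumed gap between repeat reminders

class ChatSession:
    def __init__(self):
        self.started_at = time.monotonic()
        self.last_reminder_at = None

    def should_show_break_reminder(self) -> bool:
        """Return True when a 'just checking in' prompt should appear."""
        elapsed = time.monotonic() - self.started_at
        if elapsed < REMINDER_THRESHOLD_SECS:
            return False
        if self.last_reminder_at is None:
            return True
        # Re-prompt only after a cooldown so reminders feel occasional,
        # not naggy; the user can always dismiss with "Keep chatting".
        return time.monotonic() - self.last_reminder_at > REMINDER_COOLDOWN_SECS

    def mark_reminder_shown(self) -> None:
        self.last_reminder_at = time.monotonic()
```

In a real product the threshold would presumably vary by user and context, which matches OpenAI’s statement that it will keep adjusting when these messages appear.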
ChatGPT is also being trained to answer with grounded honesty. The company said it previously found that the GPT-4o model had become overly agreeable with users, emphasising kindness over usefulness; that update was rolled back. The chatbot will now respond appropriately and direct users to “evidence-based resources” if it notices signs of delusion or emotional dependence.
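OpenAI has not described the mechanism behind this, but the behaviour it outlines, detecting signs of distress and steering the reply toward evidence-based resources, maps onto a classifier-gated response flow. The sketch below assumes that pattern; the labels, threshold, and stub classifier are all hypothetical:

```python
from dataclasses import dataclass

# Hypothetical classifier-gated flow; the labels, threshold, and wording
# below are illustrative assumptions, not OpenAI's actual implementation.
DISTRESS_LABELS = {"emotional_dependence", "delusional_thinking", "crisis"}

@dataclass
class SafetyCheck:
    label: str
    confidence: float

def classify_distress(message: str) -> SafetyCheck:
    """Stand-in for a trained safety classifier run on each user message."""
    # A real system would call a model here; this stub always returns 'none'.
    return SafetyCheck(label="none", confidence=0.0)

def generate_normal_reply(message: str) -> str:
    return "..."  # ordinary model response path

def respond(message: str) -> str:
    check = classify_distress(message)
    if check.label in DISTRESS_LABELS and check.confidence > 0.8:
        # Answer with grounded honesty and point to outside help rather
        # than reinforcing the user's framing.
        return ("It sounds like you're going through a lot right now. "
                "It may help to talk this through with someone you trust "
                "or a mental health professional.")
    return generate_normal_reply(message)
```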
OpenAI is also developing an update to how ChatGPT handles “high-stakes personal decisions.” The chatbot will soon stop giving direct answers to questions like “Should I break up with my boyfriend?” Instead, it will encourage you to think the situation through by asking questions and helping you weigh the advantages and disadvantages. (OpenAI adopted a similar strategy with its Study Mode for students.)
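OpenAI has not said how this steering is implemented, but one common way to push a model toward reflective questioning instead of direct answers is through system-prompt instructions. A hedged sketch using the OpenAI Python SDK follows; the prompt wording is an assumption, not OpenAI’s actual instruction text:

```python
# Sketch of steering a model toward reflective questioning via a system
# prompt, using the OpenAI Python SDK (openai>=1.0). The instruction text
# is an illustrative assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REFLECTIVE_STYLE = (
    "For high-stakes personal decisions (e.g. ending a relationship), do not "
    "give a direct yes/no answer. Ask clarifying questions and help the user "
    "weigh the pros and cons so they can reach their own decision."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": REFLECTIVE_STYLE},
        {"role": "user", "content": "Should I break up with my boyfriend?"},
    ],
)
print(response.choices[0].message.content)
```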
According to OpenAI, these changes aim to improve ChatGPT’s responses to signs of emotional or mental distress. To address some of the chatbot’s troubling behaviours, refine its evaluation techniques, and test some of its new protections, the company is working with mental health professionals and human-computer interaction (HCI) researchers.
These modifications follow reports that ChatGPT has exacerbated mental health issues, encouraged delusional relationships, and even suggested that people jump from tall buildings after losing their jobs. OpenAI has acknowledged the issues and pledged to improve.
“There have been instances where our 4o model fell short in recognising signs of delusion or emotional dependency,” according to OpenAI. “While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”
OpenAI had to pull an update earlier this year after the chatbot became too sycophantic. CEO Sam Altman has also cautioned users against relying on ChatGPT for therapy, since the conversations aren’t private and could be used against them in court if required.
Governor JB Pritzker of Illinois signed a bill on Friday that prohibits the use of AI “in therapy or psychotherapy services to make independent therapeutic decisions, directly interact with clients in any form of therapeutic communication, or generate therapeutic recommendations or treatment plans without the review and approval by a licensed professional.”
Lastly, OpenAI is strengthening its guidelines around the chatbot’s capacity to offer personal advice. As described above, when a user poses a question like “Should I break up with my boyfriend?”, ChatGPT will ask questions and walk through the advantages and disadvantages to help the user decide, rather than giving a straight answer. This functionality is still in development and will reach users soon.