New ChatGPT Feature: Trusted Contact

OpenAI is launching the Trusted Contact feature so ChatGPT users can get support in times of crisis. Here are the details of the new feature.

OpenAI is launching a new Trusted Contact feature so ChatGPT users can get support in difficult times. The feature also underscores how much users have come to rely on artificial intelligence emotionally.

OpenAI CEO Sam Altman states that young people use ChatGPT not only as a productivity tool, but also as an operating system they use when making life decisions. This shows how central artificial intelligence has become in individuals’ personal lives.

How does the Trusted Contact feature work?

Users can designate an adult as a trusted contact through ChatGPT's settings; the process begins once that person accepts the invitation. When the system detects a serious risk of self-harm, it warns the user and notifies the trusted contact.

A specially trained human review team evaluates the situation before any notification is sent. If the safety concern is confirmed, the trusted contact is reached via email, text message, or in-app notification.

OpenAI emphasizes that, to protect privacy, these notifications do not include chat transcripts or detailed history. Users can change or completely remove the trusted contact they have designated at any time.

Security and ethical debates

The Trusted Contact feature was developed with input from mental health experts, suicide prevention experts, and more than 260 doctors from 60 countries. With this step, the company acknowledges that ChatGPT is a system that has not only technological but also emotional effects.

Some experts warn that being monitored by artificial intelligence could have negative consequences, especially in the workplace. Amy Sutton notes that such monitoring can increase individuals’ efforts to hide their difficulties, which can have a worse impact on mental health.

Features like this show how much users come to rely on artificial intelligence platforms in their most vulnerable moments.

Do you think the presence of such security measures in artificial intelligence applications is really reassuring for users or is it a matter of concern?
