But the question remains: should Facebook change the way it handles users who may be at risk of suicide?
'People ... need to know that they may be experimented on'
If the artificial intelligence tool flags potential self-harm, the post goes through the same human review as posts reported directly by Facebook users.
Whether artificial intelligence or another Facebook user flags a post, the company reviews it. If that review determines that immediate intervention is required, Facebook can work with first responders, such as police departments, to get the person help.
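Facebook has not published how this pipeline is implemented. As a purely illustrative sketch of the flow described above (an AI flag or a user report, then human review, then possible escalation), it might look roughly like the following, where the Post type, the classifier stand-in, the threshold, and the outcome labels are all hypothetical:

```python
# Illustrative sketch only: Facebook's actual classifier, thresholds,
# and escalation hooks are not public. Everything here is hypothetical.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str


def classifier_score(post: Post) -> float:
    """Stand-in for a trained self-harm text classifier (hypothetical)."""
    risk_phrases = ("want to die", "end it all", "kill myself")
    return 1.0 if any(p in post.text.lower() for p in risk_phrases) else 0.0


FLAG_THRESHOLD = 0.8  # hypothetical cutoff for routing a post to human review


def human_review(post: Post) -> str:
    """Placeholder for the human-review step described in the article."""
    # A reviewer would judge whether the post signals imminent risk;
    # here that judgment is simulated with the classifier score.
    if classifier_score(post) >= FLAG_THRESHOLD:
        return "escalate_to_first_responders"
    return "offer_support_resources"


def triage(post: Post, user_reported: bool = False) -> str:
    """AI flags and direct user reports converge on the same review queue."""
    if user_reported or classifier_score(post) >= FLAG_THRESHOLD:
        return human_review(post)
    return "no_action"


if __name__ == "__main__":
    print(triage(Post("p1", "I just want to end it all")))  # escalated
    print(triage(Post("p2", "great game last night")))      # no_action
```

The one detail the sketch tries to capture from the article is that AI-flagged posts and user-reported posts land in the same human-review step before any escalation happens.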
The paper argues that Facebook's suicide prevention efforts should be held to the same standards and ethics as clinical research, including review by outside experts and informed consent from the people whose data is collected.
"There is a need for discussion and transparency about innovation in the field of mental health. I think there are many possibilities for technology to improve suicide prevention to help mental health in general, but people think that these things happen, , They may be experimented, "Torous said.
"We agree that we want suicide prevention innovation, we want a new way of approaching people and helping people, but we want it to be done in an ethical way, which is transparent and cooperative." The average user of the system will claim that they do not even know it's happening, so they do not even know it. "
Adam D.I. Kramer, the Facebook researcher who designed that experiment, said the study was part of an effort to improve the service, not to upset users. Facebook has since made other efforts to improve its service.
"Suicide prevention experts say that one of the best ways to prevent suicide is for people in distress to hear from friends and family who care about them. Facebook is in a unique position to help because of the friendships people have on our platform," Antigone Davis, Facebook's global head of safety, wrote in an email Monday in response to questions about the new commentary.
Experts have pressed for more transparency around those efforts precisely because they use technology to proactively detect content that may express someone's thoughts of suicide.
Davis also pointed out that using technology to proactively detect such content does not amount to collecting health data. The technology does not measure overall suicide risk for an individual, she said, or anything about a person's mental health.
What health professionals want from a technology company
"Private commercial companies are another area where we start programs for promotional purposes, but we are willing to keep the information we collect, no matter how trustworthy they can be or how we can personally keep it, or whether they are Facebook or someone else This is not clear. " Caplan, who was not involved in the paper, said.
"We have enough of the big social media regulatory glance, and when we try to do good things, we get a general question that it does not mean it's right," he said.
"This private organization, which is not usually considered a health care organization or institution, is in a position to have a lot of medical information, especially using machine learning technology." "At the same time, they are almost outside of the existing regulatory systems to deal with those kinds of institutions."
"The information they collect – especially if they can predict health care using machine learning once and have health care insight for these people – all of them are protected in the clinical area, such as HIPAA. I'm getting health care, "Magnus said.
"But Facebook is not a target, Amazon is not a target, and Google is not a target," he said. "There is no need to meet the confidentiality requirements of how we deal with health information."
The only privacy protection social media users typically have is the consent language in a company's policy documents: the terms they sign or click "Accept" on when setting up an account.
"It is strange to implement public health screening programs through these companies outside of the regulatory structure we talked about, because the research and algorithms themselves are completely opaque," he said.
'The problem is all this is too secret'
"In theory, I would like to be able to better manage the patient by using the data collected by all systems, which would be great. I do not want the book to become a closed book. I want to be publicly released … I like being in the form of informed consent, "Schlozman said.
"The problem is, with Facebook, all of this is too secret, and Facebook is a multimillion-dollar for-profit company. So the possibility that this data will be used for anything other than the apparently benevolent purpose for which it seems to be gathered is hard to ignore," he said. "I think it crosses a lot of pre-established ethical lines."