
Experts question Facebook's suicide prevention efforts



But the question remains: does Facebook need to change how it handles users at risk of suicide?

'People … need to know that they may be experimented on'

In 2011, Facebook launched a suicide prevention initiative in partnership with the National Suicide Prevention Lifeline that allowed users to report suicidal content posted by a Facebook friend. The person who posted the content would then receive an email from Facebook encouraging them to call the National Suicide Prevention Lifeline or chat with a crisis counselor.
In 2017, Facebook expanded those efforts to include artificial intelligence that identifies posts, videos, and Facebook Live streams containing suicidal thoughts or content. That year, the National Suicide Prevention Lifeline said it was proud to partner with Facebook and that the social media company's innovations made it easier for people in distress to get support.
"Whether your community members are online or offline, you should not feel as if you are an unhelpful viewer if you do something dangerous," said National Dicer, director of National Suicide Prevention Lifeline, in a press release in 2017. Facebook's approach is a unique tool that allows community members to actively take care of themselves, provide support, and report concerns when they are needed. "

If the artificial intelligence tool flags possible self-harm, the post undergoes the same human review as posts reported directly by Facebook users.

The move to use AI was part of an effort to reach more at-risk users. The company had faced criticism over its Facebook Live feature after some users live-streamed harmful events, including suicides.
In a blog post, Facebook detailed how the AI looks for patterns in posts and in comments that may contain references to suicide or self-harm. According to Facebook, comments such as "Are you OK?" and "Can I help?" can be indicators of suicidal thoughts.

If artificial intelligence or another Facebook user flags a post, the company reviews it. If reviewers determine that immediate intervention is required, Facebook may work with first responders, such as a police department, to get help to the person.
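To make that workflow concrete, below is a minimal, hypothetical sketch of phrase-based flagging and escalation of the kind the article describes. The phrase list, threshold, and function names are invented for illustration; Facebook's actual system relies on machine-learned classifiers and trained human reviewers rather than a simple keyword list.

# Illustrative sketch only, not Facebook's actual system. The phrases,
# threshold, and routing labels below are hypothetical examples.
from dataclasses import dataclass, field

# Comments such as "Are you OK?" and "Can I help?" are treated as signals,
# per the pattern described in the article.
CONCERNING_PHRASES = {"are you ok", "are you okay", "can i help"}

@dataclass
class Post:
    text: str
    comments: list = field(default_factory=list)

def count_signals(post):
    """Count comments that contain any concerning phrase."""
    return sum(
        any(phrase in comment.lower() for phrase in CONCERNING_PHRASES)
        for comment in post.comments
    )

def route_post(post, review_threshold=1):
    """Return 'human_review' when signals reach the (hypothetical) threshold,
    otherwise 'no_action'. A human reviewer, not the code, would decide
    whether first responders should be contacted."""
    if count_signals(post) >= review_threshold:
        return "human_review"
    return "no_action"

# Example: worried comments push the post to human review.
post = Post(text="...", comments=["Are you OK?", "Can I help?"])
print(route_post(post))  # human_review

A real pipeline would replace the keyword check with a trained classifier and, as the article notes, keep the final escalation decision with human reviewers.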

A new opinion paper published in a journal, however, argues that Facebook lacks transparency and ethical oversight as it scans users' posts, identifies those at risk of suicide, and alerts emergency services to that risk.

The paper argues that Facebook's suicide prevention efforts should be held to the same standards and ethics as clinical research, including review by outside experts and informed consent from the people whose data is collected.

Dr. John Torous, who leads digital psychiatry work in the Department of Psychiatry at Beth Israel Deaconess Medical Center in Boston, and Ian Barnett, an assistant professor of biostatistics at the University of Pennsylvania's Perelman School of Medicine, co-authored the new paper.

"There is a need for discussion and transparency about innovation in the field of mental health. I think there are many possibilities for technology to improve suicide prevention to help mental health in general, but people think that these things happen, , They may be experimented, "Torous said.

"We agree that we want suicide prevention innovation, we want a new way of approaching people and helping people, but we want it to be done in an ethical way, which is transparent and cooperative." The average user of the system will claim that they do not even know it's happening, so they do not even know it. "

In 2014, Facebook researchers ran an experiment in which users were shown more negative or more positive content in their feeds to see whether they would then produce more negative or positive posts of their own. The study triggered outrage because users did not know they were part of it.

Adam D.I. Kramer, the Facebook researcher who designed that experiment, said the research was part of an effort to improve the service, not to upset users. Facebook has since made other efforts to improve its services.

Last week, the company announced a partnership with experts to protect users from self-harm and suicide. The announcement came amid attention to the suicide of a girl in the UK whose Instagram account contained distressing content about suicide. Facebook owns Instagram.

"Suicide prevention experts say that one of the best ways to prevent suicide is for people in distress to hear from friends and family who care about them. Facebook is in a unique position to help because of the friendships people have on our platform," Antigone Davis, Facebook's global head of safety, wrote in an email Monday in response to questions about the new paper.

Davis said the company works with suicide prevention experts and has pledged to be more transparent about its suicide prevention efforts, which include using technology to proactively detect content that may express thoughts of suicide.

Facebook also argued that using technology to proactively detect content in which someone might be expressing thoughts of suicide does not amount to collecting health data; the technology does not measure a person's overall suicide risk or anything about their mental health, the company said.

What health professionals want from a technology company

Arthur Caplan, a bioethics professor and founding head of the division of medical ethics at NYU Langone Health in New York, applauded Facebook for wanting to help prevent suicide, but said the effort shows why the company needs a better approach to privacy and ethics.

"Private commercial companies are another area where we start programs for promotional purposes, but we are willing to keep the information we collect, no matter how trustworthy they can be or how we can personally keep it, or whether they are Facebook or someone else This is not clear. " Caplan, who was not involved in the paper, said.

"We have enough of the big social media regulatory glance, and when we try to do good things, we get a general question that it does not mean it's right," he said.

Other technology companies, including Amazon and Google, likely have access to large amounts of health data, or will in the future, said David Magnus, a professor of medicine and biomedical ethics at Stanford University.

"This private organization, which is not usually considered a health care organization or institution, is in a position to have a lot of medical information, especially using machine learning technology." "At the same time, they are almost outside of the existing regulatory systems to deal with those kinds of institutions."

For example, Magnus pointed out that most technology companies fall outside the "Common Rule," the federal policy for the protection of human subjects that governs research on people.

"The information they collect – especially if they can predict health care using machine learning once and have health care insight for these people – all of them are protected in the clinical area, such as HIPAA. I'm getting health care, "Magnus said.

"But Facebook is not a target, Amazon is not a target, and Google is not a target," he said. "There is no need to meet the confidentiality requirements of how we deal with health information."

The Health Insurance Portability and Accountability Act, or HIPAA, requires the security and confidentiality of a person's protected health information and regulates when that information may be disclosed.

Often, the only privacy protection social media users have is the consent language in a company's policy documents, which they sign or click "Accept" on when setting up their accounts.

"It is strange to implement public health screening programs through these companies outside of the regulatory structure we talked about, because the research and algorithms themselves are completely opaque," he said.

'The problem is all of this is so secret'

Dr. Steven Schlozman, co-director of the Clay Center for Young Healthy Minds at Massachusetts General Hospital, echoed the new opinion paper's concern that Facebook's suicide prevention efforts are not held to the same ethical standards as medical research.

"In theory, I would like to be able to better manage the patient by using the data collected by all systems, which would be great. I do not want the book to become a closed book. I want to be publicly released … I like being in the form of informed consent, "Schlozman said.

"The problem is, all of this is so secret, and Facebook is a multimillion-dollar for-profit company. So the possibility that this data will be used for something other than the apparently benevolent purpose for which it seems to be gathered is hard to ignore," he said. "I think it crosses a lot of pre-established ethical lines."

