In 1994, a 19-year-old customer service representative for Netcom, the Internet service provider, received a disturbing phone call: One customer had seen another threaten suicide in an Internet chat room. "At that point, our customer service person pulled up the file on the customer by the user name," remembers Laura Crowley, Netcom's public relations manager, "and was able to call the local police department within that location and disperse them out to that location ... They were able to go in and prevent that from happening."
It's rare, Crowley says -- but it does happen. Ed Hansen faced a similar situation in the spring of 1995 when he was doing technical support for MindSpring. Back when the company had just 20 employees -- all working in the same room -- he took a call about a MindSpring customer who had been playing a game of chess using Internet chat facilities. "He began to suffer chest pains and he noted that to the person he was playing with -- and he stopped typing." Hansen, who now works as MindSpring's public relations manager, says, "We determined it was probably a good thing to contact emergency services in the town the gentleman lived in."
When sudden crises arise on the Net -- for instance, when people threaten to kill themselves -- what should a service provider do? And how, in such situations, can companies do the right thing without running roughshod over their commitment to their customers' privacy?
Crowley points out that Netcom gave the police only an address, because "we have a fine line that we walk here due to the privacy issues associated with customer data."
"I don't think there's any one right answer to this -- but as long as MindSpring and Netcom are communicating to their clients that they're going to do something like this, that's fine," says Mike Godwin, staff counsel for the Electronic Frontier Foundation, though he emphasizes that EFF itself hasn't taken a formal position on the issue. "When you protect your client's privacy, the risk is that you're not going to prevent them from committing suicide. But there's a larger good here that I think you have to pursue. The flip side is, what happens when a harasser or some stalker says, 'I can have the police at your door within an hour' and does so -- by calling America Online and saying, 'There should be a suicide watch on that woman!'"
Stories about dramatic rescues make it harder to take that position. In 1994 the Washington Post reported that police in Miami, Ind. -- responding to a CompuServe user's warning about a bulletin board post -- rescued a man as he was inhaling carbon monoxide fumes in his garage. CompuServe officials were unavailable for comment, but at the Microsoft Network, spokespeople pointed out that their member agreement specifically grants MSN the right to notify authorities in emergency situations. ("MSN believes this is an appropriate thing to do as a good Internet citizen," they added.)
Services that don't respond ultimately risk condemnation. That's what happened to America Online in July, when the Denver Post reported an incident in which an AOL member had e-mailed several chat-room friends announcing her intention to commit suicide. Recipients of the e-mail forwarded it to AOL and tried to get the service to contact the police in the woman's town -- but, the paper reported, the local police had no record of a call. (AOL did not return calls, but the company told the newspaper that its policy is to contact the "appropriate authorities.")
The Denver Post editorialized about the July incident, "The woman, no thanks to AOL, was eventually found unconscious but alive." But it went on to say, "We are not so sure that AOL should be held to 911 standards" and acknowledged that a service's responsibility remains a tricky question.
"I don't think you can hold AOL or any other institution to a set of simple rules on this," says Howard Rheingold, whose 1993 book "The Virtual Community" included a moving account of the response to the suicide of an online regular on the Well. "Who do you contact? The police? Is it really a good idea for AOL to get into a loco parentis kind of situation?"
Ultimately there are two separate questions here: What can an online service do -- and what ought it do? Online services like AOL and Internet service providers typically have real-world information about users -- like billing addresses and credit-card numbers -- that operators of Web-based chat sites and bulletin boards often don't have access to. Does a service's ability to link a user's screen name or pseudonym to a name and address impose any kind of burden on it to try to help a user in trouble?
"We're in that interesting gray area where what is legal is often not what is ethical," says Marc Rotenberg, executive director for the Electronic Privacy Information Center. "But that's not a new problem. The critical thing here is that the online companies are trying to avoid the liability that might result once they intervene in the activities or efforts of what members say using the service."
One former remote staffer for AOL shared a copy of the company's policy for "Community Leaders" who patrol chat rooms. It reads: "We cannot stress strongly enough that even if you have a doctoral degree in a mental health field and a full license to practice, you can NOT do any type of counseling or therapy online, as ANY kind of representative of America Online. In fact, professionals should be the folks who most realize the risks and liabilities. So bottom line is, DON'T TRY!"
Noting that crisis-center volunteers undergo extensive training -- and that many cues are missing in an online situation -- the AOL document also notes that false threats are abusive of other members, and puts forward a policy of urging chatters threatening suicide to seek help offline. If a member continues discussing suicide in the chat room, it suggests the following response: "I know you are very upset this evening, but topics of conversation that are disruptive and disturbing to other members are a violation of the Terms of Service (at keyword TOS). There is professional help available offline, and I must ask that you seek that and not discuss it in this room anymore. Thank you."
The solution isn't simple -- and even a more activist policy wouldn't necessarily help save everyone. "At least three members of my acquaintance online have committed suicide to date," one of AOL's former Community Leaders remembers. "All three in ill health, and all three without clear warning of anything other than depression or stress or illness. The youngest was 19 years old."
Rotenberg summarizes the current climate in the aftermath of last spring's ruling in a federal court that online services like America Online couldn't be held liable for content that they did not originate. "What's interesting, particularly in light of the recent opinion of the court in the Drudge vs. Blumenthal case, is in many ways AOL can have it both ways. They can intervene when they wish to, which they oftentimes do to control discussion in chat rooms or to regulate activities of members who are violating TOS -- but they have no concomitant responsibility in terms of a duty to intervene where someone's life may be at risk."
Like Godwin, Rotenberg sees hypothetical arguments on both sides. "You could imagine a husband who was abusing a spouse contacting AOL, saying, 'I really need to find my wife, she's in a lot of trouble.' AOL can respond to that request in good faith and subsequently learn they've caused harm. There's always a risk that when they intervene, it's not going to have the desired result."
Other Internet service providers echo these sentiments when it comes to online suicide threats. "For situations like that we'd have to evaluate each case accordingly, and to the degree of sensitivity necessary," says Jonathan Varman, public relations manager for AT&T WorldNet. "It's a balance between privacy and the laws that protect privacy and also the person's personal safety."
Varman says, "If the police called us, we'd take it very seriously" -- and adds, gratefully, that "as far as we know, we haven't had the scenario come up." Sooner or later, though, it will.