I fall hard for coming-of-age stories, and my list of favorite books and movies contains many in this genre, from "Pride and Prejudice" to "The Catcher in the Rye." The movie "Garden State," which starred Zach Braff and Natalie Portman, also struck a chord with me when it came out in 2004. It dramatizes a few days in the life of Andrew Largeman, a twenty-six-year-old struggling actor in Los Angeles who returns to his native New Jersey for his mother’s funeral. Andrew is nothing if not alienated: he feels disconnected from celebrity-studded Hollywood as well as from his old hometown, which he hasn’t visited since leaving for boarding school nearly a decade earlier.
For the first time in sixteen years, Andrew has stopped taking the psychotropic medications his psychiatrist father prescribed after ten-year-old Andrew caused an accident that rendered his mother a paraplegic. Like the illegal drugs his high school buddies take, Andrew’s meds serve as a metaphor for the feelings of inadequacy, disappointment and rootlessness endemic to my generation of twenty-somethings. Judging from the film’s cult-hit success, its target audience of my peers apparently found the metaphor apt. When Andrew falls in love with a quirky, vibrant girl he meets in a doctor’s waiting room, she shows him how to reengage with his feelings—and the world. Presumably, he leaves the medications behind.
For several years, "Garden State" remained my favorite movie about my generation. It spoke to me as a young person growing up in turn-of-the-millennium America—though not as a young medicated person. In fact, I had completely forgotten that psychiatric drugs were even mentioned. Funny, because I myself have been taking medication since high school, and "Garden State" is one of just a couple of films I know of that allude to the psychological impact of growing up taking psychotropic drugs. Although it touches on this important phenomenon, the film never really examines its underlying assumptions: that medications numbed Andrew’s pain and guilt, and that getting off them allows him once again to experience the agony and ecstasy of life.
For the first time in history, millions of young Americans are in a position not unlike Andrew’s: they have grown up taking psychotropic medications that have shaped their experiences and relationships, their emotions and personalities and, perhaps most fundamentally, their very sense of themselves. In "Listening to Prozac," his best-selling meditation on the drug’s wide-ranging impact on personality, psychiatrist Peter Kramer observed that “medication rewrites history.” He was referring to the way people reinterpret their personal histories once they have begun medication: what they thought was set in stone becomes open to reevaluation. What, then, is medication’s effect on young people, who have so much less history to rewrite? Kramer published his book in 1993, at a time of feverish — and, I think, somewhat excessive — excitement about Prozac and the other selective serotonin reuptake inhibitor antidepressants, or SSRIs, that quickly followed on its heels and were heralded as revolutionary treatments for a variety of psychiatric problems.
For most people, I suspect, medications are less a total rewriting of the past than a palimpsest: they reshape some of one’s interpretations of oneself and one’s life but allow traces of experience and markers of identity to remain. The earlier in life the drugs are begun, the fewer and fainter those traces and markers are likely to be. All told, the psychopharmacological revolution of the last quarter century has had a vast impact on the lives and outlook of my generation — the first generation to grow up taking psychotropic medications. It is therefore vital for us to look at how medication has changed what it feels like to grow up and to become an adult.
Our society is not used to thinking about the fact that so many young people have already spent their formative years on pharmaceutical treatment for mental illness. Rather, we focus on the here-and-now, wringing our hands about “overmedicated kids.” We debate whether doctors, parents, and teachers rely too heavily on meds to pacify or normalize or manage the ordinary trials of childhood and adolescence.
Often, the debate has a socioeconomic dimension that attributes overmedication either to the striving middle and upper-middle classes or to the social mechanisms used to control poor children and foster children. We question the effectiveness and safety of treating our youth with these drugs, most of which have not been tested extensively in children and are not government-approved for people under eighteen. We worry about what the drugs will do to developing brains and bodies, both in the short and the long term. The omnipresent subtext to all this: what does the widespread “drugging” of minors say about our society and our values?
Certainly, these questions are worth debating—even agonizing over. But they ought not to constitute the be-all and end-all of our society’s conversation about young people and psychiatric drugs, particularly with millions of medicated teens transitioning into adulthood. Too much of the discussion occurs in the abstract, and drugs too easily become a metaphor, as in "Garden State," for a variety of modern society’s perceived ills: the fast pace of life and the breakdown of close social and family ties; a heavy emphasis on particular kinds of academic and professional achievement; a growing intolerance and impatience with discomfort of any sort. Far too rarely, though, do we consult young people themselves. How do they feel about taking medication? How do they think it has shaped their attitudes, their sense of themselves, their academic and career paths, their lives? How do they envision medication affecting their futures?
My cohort lives with some powerful contradictions. On the one hand, we have grown up with the idea that prolonged sadness, attention problems, obsessions and compulsions, and even shyness are brain diseases that can—and ought to—be treated with medication, just as a bodily disease like diabetes ought to be treated with insulin. The 1990s, sometimes called “the Decade of the Brain,” saw unprecedented growth in understanding how the brain works, which generated enormous enthusiasm about the prospects for discovering the underlying mechanisms behind mental illness, enthusiasm that many now say was overwrought and premature.
Direct-to-consumer pharmaceutical advertising on TV, which the U.S. Food and Drug Administration authorized in 1997, has allowed drug companies to define the public’s understanding of mental illness and psychiatric medications — and this is especially true, I think, for young people who knew no other paradigm. Yet even as we grew up immersed in the idea of an “imbalance” of particular brain chemicals — a theory that has not held up to scientific scrutiny — we also inherited the American ideal of self-sufficiency, of solving one’s problems through one’s own resourcefulness. As we’ve sought to forge our identities, we have often struggled to reconcile the two.
My peers and I lived through — had indeed been the vanguard of — the psychopharmacological revolution. Prozac was not the first of the SSRI antidepressants, but it was the first to hit the U.S. market, gaining FDA approval at the end of 1987. Thanks to national education campaigns trumpeting depression as a major public health issue, and to the scarcity of other new psychiatric drugs at the time, Prozac made a huge splash. Other SSRIs, such as Zoloft and Paxil, followed a few years later.
Starting in the early 1990s, new kinds of antipsychotic medications were released. Originally used for schizophrenia, these “atypical antipsychotics” were increasingly prescribed to stabilize the mood swings of childhood bipolar disorder and to quell irritability associated with autism and behavior disorders. Longer-acting formulations of the stimulant Ritalin, which had been used in children since the 1950s, appeared, as did other drugs for attention deficit/hyperactivity disorder. By the mid-1990s, the prescribing of psychotropic drugs to children was front-page news in major newspapers. When I entered college in 2001, college counseling centers were reporting an overwhelming influx of patients, including growing numbers who arrived at school with a long history of mental illness and medication.
Because reliable statistical analyses lag years behind shifts in medical practice, figures on the number of kids being prescribed medication began to hit the media only when those children had already entered adolescence, or even adulthood. When the data did emerge, it confirmed what people already sensed: a massive increase in the number and percentage of children being treated with psychiatric drugs. Although children and teenagers were — and still are — prescribed such drugs less frequently than adults (with the exception of stimulants), the rate of growth is remarkable. Between 1987 and 1996, the percentage of people under twenty taking at least one such drug tripled, from about 2 percent of the youth population to 6 percent, at minimum an increase of more than a million children. Between 1994 and 2001, the percentage of visits to doctors in which psychotropics were prescribed to teenagers more than doubled: to one in ten visits by teenage boys and one in fifteen visits by teenage girls between the ages of fourteen and eighteen. In 2009, 25 percent of college students were taking psychotropic meds, up from 20 percent in 2003, 17 percent in 2000, and just 9 percent in 1994. Prescribing more than one medication at a time has also become far more common among child psychiatric patients in recent decades—even though, as the National Institute of Mental Health’s head of child and adolescent psychiatry research noted in 2005, there was “little empirical evidence of efficacy and safety from well-designed studies.” Still, although the statistics show a clear increase in medication use, nationally representative data remains severely lacking.
Many children and teenagers were also facing the prospect of taking medication for far longer than people who first encountered psychotropic drugs in adulthood. In the 1980s and 1990s, doctors tended to prescribe drugs for a limited period of time for both adults and children, except in cases of bipolar disorder and schizophrenia, long considered intractable, lifelong conditions. But it became increasingly clear to doctors that ADHD persisted into adulthood in about two-thirds of people, and that an early bout of anxiety or depression often portended more frequent and more severe episodes later in life. And so my peers and I found that a drug initially prescribed by a pediatrician as a stopgap measure for some presumably temporary hormonal or developmental problem often became a long-term, perhaps indefinite, commitment.
The medical profession wasn’t the only force driving the increase in prescriptions. Our parents, the ubiquitous baby boomers, are notorious for seeking medical solutions to every ailment (one book on the subject, by journalist Greg Critser, cheekily dubbed them “Generation Rx”). The boomers also tend to be portrayed as overly indulgent parents, obsessing endlessly over their children’s fragile self-esteem and all-important academic performance. They wanted us, their children, to be not just happy, fulfilled, and confident, but also high achievers from a young age. They worried that their children could come under the influence of — or be outright harmed by — the unhappy, disaffected kids who captured headlines in the 1990s for their dramatic suicides or school shootings.
The boomers tried to be cooler and hipper than their own parents, but most of them were far from “anything goes” when it came to their offspring. As one of the subjects of my book put it, describing his parents’ and teachers’ expectations, “You can’t not function. You can’t wake up in the morning and not be able to function.” The goal, he said, was to strike a magical balance between being “happy-go-lucky” and “efficient.” These conflicting expectations and aspirations produced some rather stressed-out children—and some parents, teachers, and doctors readily inclined toward pills to help manage the effects of that stress.
My peers and I also came of age in a time when the economics of health insurance were changing drastically. In the 1980s and 1990s, most employer-based health insurance moved toward a managed-care model, and state- and federally funded health coverage for children expanded. The government and the HMOs were both eager to keep costs down, and therefore preferred relatively cheap psychiatric drugs to long-term talk therapy (despite a growing medical consensus that the most effective treatment for most psychiatric conditions was a combination of medication and therapy).
Meanwhile, a shortage of child psychiatrists, especially in poor and rural areas, meant that many troubled children could not see a specialist. As a result, already-busy pediatricians shouldered more of the burden of treatment: in the late 1970s, about 7 percent of all visits to pediatricians involved a child with emotional or behavioral problems, but by the mid-1990s, that rate had nearly tripled. Increasingly, those visits involved writing a prescription: prescribing data collected between 1992 and 1996 showed that pediatricians wrote 85 percent of the psychotropic prescriptions for children.
As psychiatrists switched from forty-five-minute visits with time for psychotherapy to fifteen-minute “med checks,” prescriptions often came with little ongoing discussion of how kids felt about taking medication, or how it was affecting them. That was fine by some, but not by others. One young woman I interviewed, whose time-crunched psychiatrists scheduled fifteen-minute appointments and often cut even those short, wished there had been time to “talk about feelings, not just symptoms.” One young man told me that even though psychiatric medication is “so much a part of our culture,” he could “probably count on one hand” the conversations he’d had about his medication use, or anyone else’s.
Most psychotropic drugs, it’s important to note, were and still are prescribed to children and teens without official FDA approval for the relevant condition and age group. As long as clinical trials have shown a given medication to be safe and, under certain narrow requirements, effective for some condition, and as long as the pharmaceutical companies don’t advertise a drug for a nonapproved use in kids, doctors can legally prescribe it “off-label.” Off-label prescribing to children is nothing new, in large part because concerns about the ethics and legality of conducting drug trials on minors have plagued medical research for decades. (As the pediatrician and pharmacologist Henry Shirkey observed in a 1968 article in the Journal of Pediatrics, children were becoming “the therapeutic orphans of our expanding pharmacopoeia,” and people calling for drugs to be tested in children were still reiterating Shirkey’s formulation three decades later.) Historically cautious, doctors used to wait a number of years after a new medication came on the market before prescribing it to children. But with all the hoopla surrounding the introduction of new psychotropic drugs in the late 1980s and 1990s and an influx of young patients seeking treatment, fewer doctors bothered to wait.
This sharp increase in prescribing made the lack of research all the more acute. Controversies brewed. Did antipsychotic drugs cause dangerous obesity and early-onset diabetes? Did stimulants stunt growth or increase the risk of drug abuse later in life? Did taking antidepressants or stimulants before puberty predispose children to bipolar disorder in adulthood? Did SSRIs like Prozac increase the risk of a teenager attempting suicide?
In the late 1990s, responding to the precipitous rise in prescribing and what researchers called a shameful lack of safety and efficacy data, the National Institute of Mental Health began funding a series of major, multisite medication trials in children and teenagers.
They led to some important findings. Trials for major depressive disorder and obsessive-compulsive disorder concluded, for example, that combined medication and cognitive-behavioral therapy (CBT), a short-term treatment aimed at changing patterns of thought and behavior, was the optimal regimen for the greatest number of children, compared to a placebo or to either medication or CBT alone. These so-called “multimodal” studies were groundbreaking because they were among the first to compare the efficacy of different treatments, measuring one drug against another, drugs against therapy, and standardized, carefully managed treatment against “community care,” the treatment a child would ordinarily receive in his or her local area. They also ran comparatively long, which produced certain notable findings. For example, children in the government’s major ADHD trial at first seemed to do best on medication alone, compared with various other combinations of treatments. But when the same children were assessed two years after the study ended, there were no differences among the treatment groups in either ADHD symptom reduction or improved functioning at school and in family relationships. Medication’s superior effects, in other words, did not last. The vast majority of studies are not this wide-ranging or long-term.
As a result, myriad issues remain unsettled and hotly debated today and continue to bedevil parents, doctors, and young people as they weigh the relative risks and benefits of embarking on psychopharmaceutical treatment. Recently, some studies have questioned the efficacy of antidepressants for mild and moderate depression in adults, which has generated considerable public interest and raised questions in many people’s minds about the wisdom of taking the drugs at all, let alone long term.
In fact, as psychiatrist Peter Kramer pointed out in a column in the New York Times, this particular new evidence applies only to episodic depression, not to chronic and chronically recurring depression, and it says nothing about the drugs’ efficacy for many other conditions, including numerous anxiety disorders, severe depression, menstrual-related mood disorder, and the depressive phase of bipolar disorder. Other studies have raised troubling questions about how some psychotropics may affect the brain over the long term: the drugs are effective at preventing relapse for as long as they are continued, but some evidence suggests that the changes they cause may set patients up for withdrawal symptoms that look very much like relapses, fostering, some have argued, a kind of psychological dependence on the drugs. These studies have generated considerable controversy in the mental health profession and in the popular media.
Overall, decades-long longitudinal studies tracking the outcomes of medication treatment in kids remain close to nonexistent, because of the great expense and effort involved in following people over many years. As a result, the first cohort of medicated children has entered adulthood with very little information about the lasting physical, emotional, and cognitive effects of using psychiatric medication during childhood and beyond. They are left with the legacies of using medication, though in many cases they’re not quite sure what those legacies are, or will be.
I don’t mean to suggest that every young adult who spent his or her childhood or adolescence taking medications is preoccupied with the existential implications of that treatment. Undoubtedly, many would contend that the effects were simple: the drug either worked to resolve their symptoms, or it didn’t. But I suspect that taking a little time to reflect on a drug’s role in their own coming-of-age stories would allow them to see just how wide-ranging and complicated the impact of medication has been.
Excerpted from "Dosed: The Medication Generation Grows Up," by Kaitlin Bell Barnett (Beacon Press, 2012). Reprinted with permission from Beacon Press.