The Scientific Review of Mental Health Practice

Objective Investigations of Controversial and Unorthodox Claims in Clinical Psychology, Psychiatry, and Social Work

Interchange:

The Safety and Efficacy of Psychotherapy

The Central Question:
Should new types of psychotherapy be subject to a Food and Drug Administration–style system of testing and approval? Or would such testing lead to a lifeless color-by-numbers model of psychotherapy?

THE TOPIC

Some academic psychologists charge that American clinical psychology does a shamefully poor job of gathering and disseminating information about the safety and efficacy of new therapies. The result, they say, is that highly effective therapies take years to reach patients, while dubious techniques, like “thought-field” therapy and rebirthing, linger in clinical practice for decades. Many other psychologists, however, warn that the search for “empirically supported therapies” is based on a hyperscientific approach that misunderstands the very nature of psychotherapy.

THE GUESTS

Scott O. Lilienfeld is an associate professor of psychology at Emory University and an editor of the recent book Science and Pseudoscience in Clinical Psychology. He is a past president of the Society for a Science of Clinical Psychology.

John C. Norcross is a professor of psychology at the University of Scranton and a practicing clinical psychologist. He is a member of the American Psychological Association’s Council of Representatives, and sits on the editorial boards of Psychotherapy Research and The American Journal of Psychotherapy.

David Glenn (Moderator):
Welcome to The Chronicle’s live colloquy on psychotherapy research. Thanks very much to both of our guests for taking time to be here.

In general, we’ll be looking at two basic questions:


Question from David Glenn:
One person I interviewed said, “People don’t generally go around doing studies on therapies they think aren’t going to work.” In other words, the vast majority of controlled trials involve techniques that are close to the mainstream of psychotherapy.
It seems that that might have two implications: On the one hand, there may be highly effective unorthodox therapies floating around out there, of which scholarly researchers may be largely unaware.
On the other hand, there may also be a few harmful techniques floating around in clinical practice, of which researchers are likewise largely unaware.
Is it difficult to find money to study the safety and efficacy of unorthodox therapies? Do some researchers avoid such studies because they aren’t part of the normal tenure-and-publication routine?

Scott O. Lilienfeld:
I believe that it’s important to keep an open mind regarding the efficacy of all novel and untested therapies, no matter how superficially bizarre or implausible they may appear.
Nevertheless, funding agencies have every right to place their bets on treatments that have at least a reasonable track record of preliminary success, a cogent theoretical rationale, or both. The downside of this, as you note, is that some unorthodox therapies may get short shrift in funding decisions, so that investigations of such therapies may often lag behind those of more established therapies.
If such therapies are widely used, they do need to be investigated for both of the reasons you mentioned. If they in fact prove to work better than expected, that’s important to know for pragmatic reasons and perhaps also for theoretical reasons (a novel technique that proves to be efficacious could point us in the direction of yet undiscovered or unappreciated mechanisms of therapeutic change). If they do not work at all or even prove to be harmful, that’s of course also important to know.
I don’t know the answer to your last question, but my hunch is that the answer in many cases may be yes. Academic departments may regard such unorthodox methods as too “unscientific” or outside of the mainstream to merit investigation. But if such methods are widely administered by therapists and if the researcher intends to examine them using rigorous scientific methodology, departments should recognize that such investigations often perform a valuable public and scientific service.


Question from Barbara, LMHC, university:
My concern would be the influence of drug companies and other big-business concerns, and their effects on the process of researching and approving therapeutic techniques. Please comment.

John C. Norcross:
Yes, Barbara, I harbor the same concerns. The business of managed care is to reduce costs and make profits, not necessarily to improve psychotherapy outcomes.
In fact, just yesterday, the Los Angeles Daily News reported that, “Managed-care companies are poised for another year of solid gains, with profits expected to rise 16 percent in 2004, driven in part by a double-digit increase in premiums, according to the preliminary findings of a report released Tuesday. . . . Nationally, health care companies are expected to generate about $6 billion in net income and revenue of about $225 billion in 2004.”
Drug companies—“Big Pharma”—are aggressively promoting psychotropic medications at the expense of psychotherapy. Psychotropic medications are obviously indicated and effective for many disorders, but the huge expenditures for drug advertising are skewing our practices. Big business, not science, is increasingly determining the treatment of choice.
All the more reason, to my mind, that organized psychology should be publicly proclaiming the effectiveness of psychotherapy for 75% to 80% of its recipients. Let the best of humanistic science—not managed care, not big business—guide psychotherapists and clients in selecting the best psychotherapies for them.


Question from David Glenn:
How do you reply to Bruce Wampold’s statement that “we’re spending millions and millions of dollars studying treatment variance, when there are so many other important factors”?
He would like to see, for example, studies about how to match particular patients with particular therapists, based on the patient’s temperament and cultural beliefs.

Scott O. Lilienfeld:
I agree with Wampold that we have typically been more interested in the so-called “specific factors” that appear to make different psychotherapies differ in their efficacy. I also agree with him that we have often accorded insufficient attention to the nonspecific factors that are shared across many or most therapies, and that account for the lion’s share of the variance in therapeutic efficacy.
I also support his call for additional “matching studies” that involve matching specific patients with specific kinds of therapists. It’s worth noting, however, that such studies have often yielded disappointing results, as seen recently in the alcoholism literature, among others. It’s been difficult to find well-replicated cases of what psychologists call “interactions” between patient and therapist characteristics, a point noted in a somewhat different context by educational psychologist Lee J. Cronbach almost 30 years ago. Still, I concur with Wampold that they are certainly worth looking for, especially if one has a coherent theoretical rationale for such interactions.
My only point of disagreement with Wampold is that I believe he understates the magnitude of specific effects that differentiate psychotherapies. He has often argued that such specific effects are weak or even absent (this is the so-called Dodo Bird verdict that all therapies are about equal in efficacy, named after the Dodo Bird in Alice in Wonderland who argued that “all have won and all must have prizes”). Nevertheless, there is now ample research disconfirming this claim. Many or most anxiety disorders (e.g., phobias, obsessive-compulsive disorder) respond better to behavioral and cognitive-behavioral therapies than to supportive therapies or other therapies that do not rely on behavioral techniques, and most childhood disorders respond better to behavioral than to nonbehavioral therapies.
Moreover, studies indicate that certain psychotherapies (e.g., crisis debriefing for trauma, peer group interventions for conduct problems, perhaps grief therapies for people with relatively normal grief reactions) may actually produce negative effects. So we shouldn’t minimize the existence or importance of specific factors that differentiate therapies from one another, as the differences among therapies can often have significant clinical implications. But Wampold is right that we shouldn’t study such factors to the relative exclusion of nonspecific or “common factors” that explain why most therapies are efficacious.


Question from Danny Wedding, University of Missouri–Columbia:
What has happened to all the “giants” in psychotherapy (people like Carl Rogers, Albert Ellis, Joseph Wolpe, and Aaron Beck)? Rogers and Wolpe are dead, and Ellis and Beck are very old, and it seems there is not a new generation of psychotherapy researchers and innovators of their stature to replace them.

John C. Norcross:
Greetings, Danny. Yes, most of the founders of the traditional schools of psychotherapy are dead or quite old. And yes, there is not a new generation of “giants” replacing them. Instead, we are entering a second or third generation of psychotherapies, more integrative and empirically based than the traditional schools.
My friends who are philosophers of science reassure me that this is the typical evolution of a practice-science field. The “great figures” slowly die off, replaced by scores of lesser luminaries and more science.


John C. Norcross:
We are all in fundamental agreement here on several points. The therapy relationship and other so-called common factors account for a sizable percentage of psychotherapy success. For some disorders and for some patients—such as those suffering from mild depression and transient relationship conflicts—the therapy relationship and the common factors are the major determinants of success. For other disorders and patients—particularly those suffering from the severe anxiety disorders of panic disorder, obsessive-compulsive disorder, and PTSD—the specific treatment method seems to be the major determinant of psychotherapy success.
One place where we respectfully disagree is the research base on matching psychotherapy to the individual patient beyond diagnosis. Scott finds the research base disappointing; I, on the contrary, find it robust and convincing. An APA Division of Psychotherapy Task Force recently compiled this research and published summaries (and in the interest of full disclosure, I was centrally involved in the project).
The accumulated research indicates that adapting or tailoring the therapy relationship to specific patient needs and characteristics (in addition to diagnosis) enhances the effectiveness of treatment. For example, clients presenting with high resistance have been found, in 80% of the studies, to respond better to self-control methods and minimal therapist directiveness, whereas patients with low resistance experience improved outcomes with therapist directiveness and explicit guidance. There are many other empirically supported matches. The point is that psychotherapy must be tailored to the individual person, not simply diagnosis. And research tells us how that can be done in a systematic manner that improves the effectiveness of psychotherapy.


Question from Jill, University of Oklahoma:
What about the concerns regarding the use of empirically validated treatments with multicultural populations?

Scott O. Lilienfeld:
For me, the biggest concern is the question of generalizability. Can we readily apply an empirically supported treatment (EST) that has been found to be efficacious with one cultural group to a quite different cultural group?
I’m not aware of any well-replicated examples of what psychologists might term “culture (or race) by treatment interactions,” but such interactions would be very important to be cognizant of (such interactions appear to exist in the psychopharmacology literature, but here I’m focusing on psychotherapy). Such an interaction would indicate, for example, that a psychological treatment that is efficacious for Whites is markedly less efficacious (or not efficacious at all) for African-Americans, or that a treatment that is only mildly efficacious for African-Americans works very well in Latinos. But if such interactions could be demonstrated, they would need to be worked into the EST list and criteria in some fashion.
It’s also important, of course, not to apply ESTs in a rigid, “cookbook-like” fashion. One concern that is sometimes raised with ESTs that are manualized (and incidentally, one common misconception is that the current EST list mandates that a treatment be manualized; it only needs to be described explicitly) is that some therapists may not take client-specific (and culture-specific) variables into account. This is a legitimate concern, although it need not be if therapists are well trained. Therapists must always be attuned to culturally specific expectations as well as potentially culture-specific manifestations of psychological distress.


Scott O. Lilienfeld:
Regarding Mr. Wedding’s question on the absence of giants in the field—I think that this is a very good question. I suppose one can adopt either a pessimistic or an optimistic take on it. On the pessimistic side, one can argue that we’ve run out of paradigm builders in the psychotherapy field (if we can even argue that certain schools of psychotherapy constitute paradigms, which most Kuhnians would dispute). On the more optimistic side, perhaps it means that we’re beginning to converge on the major techniques (and perhaps soon processes) of psychotherapeutic change, so that what is left will primarily be refinements rather than major theoretical advances. At this point, it’s too early to tell.


Question from David Glenn:
The National Institutes of Health very occasionally issue “consensus statements” on issues related to mental health. In 1991, for example, they released a statement on the treatment of panic disorder. What do you see as the strengths and weaknesses of the NIH’s consensus conferences? Why don’t they more frequently take up questions related to mental health care? Does the field of psychology do a good job of disseminating NIH reports to practitioners?

John C. Norcross:
The National Institutes of Health, specifically the National Institute of Mental Health (NIMH), more than occasionally issue consensus statements about the prevention and treatment of behavioral disorders. In addition, NIMH publishes literally dozens of informative booklets and research compilations on diagnosis and treatment. Thus, I believe the National Institutes do regularly address questions related to mental health care.
The strengths of consensus statements lie in their high credibility, balanced conclusions, and strong scientific support. At the same time, the consensus panels are typically overrepresented by academicians and those with a vested interest in the eventual conclusions.
No, in my opinion, organized psychology does NOT do a good job of disseminating NIH reports to practitioners. It is part of the chronic gap between practice and research in mental health.


Question from David Glenn:
Last week I spoke with Katherine Newbold, a psychologist who worked for many years at the FBI’s employee-assistance program, helping field agents deal with trauma on the job.
She said that her colleagues were extremely resistant to her efforts to discuss the studies that cast doubt on the safety and efficacy of critical incident stress management.
She described CISM proponents as behaving more like a “social movement” than a scientifically based therapeutic project. Do you see that phenomenon generally among proponents of certain therapies?

Scott O. Lilienfeld:
Yes, in certain cases one does see some fringe forms of psychotherapy behaving more like “social movements” than scientific research programs. This phenomenon is in no way unique to crisis debriefing, a technique that (although widely used to ward off posttraumatic stress reactions) has been found in most controlled studies to be ineffective and perhaps even harmful. The difference between a social movement and a scientific research program is not invariably clear-cut, of course, and some scientists can similarly be closed to contradicting evidence. But the primary difference, as I see it, is one of self-correction and a long-term openness to change. In the long run, scientific research programs tend to self-correct, even if the individual scientists themselves may be reluctant to acknowledge evidence that contradicts their cherished views. To give them their due, some crisis debriefing programs, including one in my home city of Atlanta, have recently come to acknowledge the negative evidence for this technique and are beginning to change their practices in accord with new research findings. This kind of openness to new evidence is welcome indeed and should be applauded, even as (or perhaps because) it is exceedingly rare.


David Glenn (Moderator):
We’re just about halfway finished. Please, keep your questions and comments coming.


Question from David Glenn:
Should psychotherapy-research journals be reformed to make them more “user-friendly” to practicing clinicians?

John C. Norcross:
In a word, yes. The traditional journal format of reporting disconnected scientific articles emphasizing methodological detail is not user-friendly to practitioners. There have been many suggestions to increase the transportability of basic science into daily practice, and a few journals have tried to implement these suggestions.
Here are 3 ways of narrowing the science-practice gap in journals. First, present practice-friendly reviews of the research on specific disorders and common clinical dilemmas. Second, present dialogues and roundtables on the same theme. And third, ask practitioners and researchers to collaborate on articles that combine the best of both endeavors.
Not coincidentally, Scott edits a journal dedicated to research-informed practice, as do I (In Session: Journal of Clinical Psychology).


Question from David Glenn:
Why are certain people with PhDs in clinical psychology occasionally attracted to therapeutic concepts or techniques that seem obviously pseudoscientific?

Scott O. Lilienfeld:
There are certainly many reasons. Many of these concepts and techniques are understandably appealing because they offer the promise of quick solutions to difficult or longstanding problems. Moreover, many of the proponents of these techniques cloak their claims in seemingly scientific language, rendering them superficially similar to established scientific claims. In addition, there are many reasons why even entirely bogus treatments can appear to be efficacious, as my friend Barry Beyerstein has noted (his writings on this topic should probably be required readings for all clinical students— and faculty!). Such phenomena as placebo effects, regression to the mean, spontaneous remission, effort justification, and the like, can lead the unwary into concluding that methods that are ineffective are in fact effective. This is why randomized controlled trials, for all of their problems, are an essential safeguard against bogus techniques. To some degree, at least, they help to control for such artifacts.
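To make one of these artifacts concrete, here is a minimal simulation in Python, with invented numbers, of regression to the mean: people enrolled because their symptom scores are extreme will, on average, score closer to the mean at retest even when the “treatment” does nothing, and only a randomized control arm exposes the artifact.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_severity = rng.normal(50, 10, n)            # stable component of each person's symptoms
baseline = true_severity + rng.normal(0, 8, n)   # noisy baseline measurement

enrolled = baseline >= 65                        # enroll only those who look severe at intake
# Retest scores assume a treatment with ZERO effect.
retest = true_severity[enrolled] + rng.normal(0, 8, enrolled.sum())

print(f"mean at intake: {baseline[enrolled].mean():.1f}")
print(f"mean at retest: {retest.mean():.1f}  <- 'improvement' from a treatment that does nothing")

# Randomizing enrollees into sham-treatment and control arms shows both arms
# 'improving' by the same amount, so the between-arm difference is ~0.
arm = rng.random(enrolled.sum()) < 0.5
print(f"sham-arm change:    {(retest[arm] - baseline[enrolled][arm]).mean():+.1f}")
print(f"control-arm change: {(retest[~arm] - baseline[enrolled][~arm]).mean():+.1f}")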
The key, in my view, is better training and a better integration of science with practice in clinical training. Many highly intelligent individuals graduate with PhDs and PsyDs from clinical programs without a good understanding of the seductive appeal of pseudoscience, and without a solid grasp of the factors that can lead us to conclude erroneously that ineffective therapies are effective. This training must be accomplished not merely in the classroom, but throughout all aspects of students’ clinical training and clinical work. Critical thinking takes effort. But the payoff in client care and welfare will be more than worth it.


Question from David Glenn:
Bruce Wampold argues that, in a best-case scenario, a reformed managed-care system could promote effective psychotherapy by continually measuring patient outcomes. What do you think of that notion?

John C. Norcross:
I am extremely skeptical of that notion.
The managed-care system is largely about managing costs, not improving care. The traditional managed-care steps are to limit patient choice of psychotherapists, reduce outlays for mental health services, limit the number of therapy sessions, and so on. There is absolutely no evidence, to my knowledge, that managed care systems have improved the quality of mental health care in the United States. However, there is overwhelming evidence that managed care has reduced costs.
Having said that, I do know that selected administrators of a few managed care companies are genuinely dedicated to improving care (even if expenditures increase a bit). Bruce Wampold informs me that he is working with one such company. I have no reason to doubt his report. But there is considerable evidence that managed care is all about the money; to think otherwise is to inappropriately generalize from a few positive experiences or to be naive about the economics of the health care system in this country.
Finally, several studies have demonstrated that measuring patient outcomes and feeding those data immediately back to the psychotherapist does indeed improve the effectiveness of psychotherapy. It is an exciting and promising area of research. However, it is an exceedingly complex matter to decide who determines which outcomes are to be measured and what counts as a satisfactory outcome. If left to a managed-care company, the probable answer will be that short-term symptom improvement in a few sessions suffices. Again, we are back to the prime motivator of managed care: reducing costs.


Question from David Glenn:
When I spoke to Robert DeRubeis of the University of Pennsylvania, he suggested that psychotherapy might move toward a system of specialized licensure: “It might be that in order to maintain a license, someone might have to identify which types of conditions they’re allowing themselves to treat.” They would be required to do intensive continuing education each year in their particular subfield. What do you think of that proposal?

John C. Norcross:
His proposal leads to 3 reactions. First, the ethical code for psychologists already includes a provision that psychologists only treat those people and disorders for which they have obtained appropriate training and supervision. To do otherwise (except in emergency situations) is to practice unethically.
Second, I believe his proposal is very unlikely to ever be enacted by a legislature or licensing body.
And third, despite the foregoing, I believe competency-based credentialing/licensure should be enacted—although, as indicated above, I think it unlikely to occur.


Question from David Hopkinson, Ph.D., private practice:
Is the movement to identify “empirically validated therapies” (EVTs) an agenda to stifle therapy which explores how childhood experience of abuse may have an impact upon adults? Put another way, does the EVT movement express a need to ignore the messy, painful issues of incest and other forms of childhood abuse, for which parents and others may be liable?

Scott O. Lilienfeld:
I don’t see anything in this movement (actually, now termed the movement toward empirically “supported” therapies, to indicate that no treatment is ever fully “validated” in the sense of being strictly proven to work) that precludes an examination of such complex and (as you note) at times “messy” issues.
For one thing, if a clinician or researcher were to develop an efficacious method for ameliorating the long-term psychopathological effects of early trauma, I see no reason why it could not be added to the EST list if controlled studies demonstrated its worth. But more important, there is nothing in the EST criteria or list that should discourage clinicians from examining the potential role of early trauma in a given client’s current problems.
All the EST list implies is that if the client suffers from a psychological condition for which a treatment has been shown to be efficacious in controlled studies, the clinician should use this treatment (or another EST for that condition) unless there is some compelling reason not to. The EST list does not imply that the clinician cannot also explore the implications of early trauma (e.g., child sexual abuse) in a given client if such trauma clearly appears to be relevant to his or her presenting difficulties. For example, for a depressed client with an abuse history, such an exploration could readily be either added to or even potentially integrated into the cognitive interventions that are a major component of cognitive-behavioral therapy (which is an EST for clinical depression).


Question from Danny Wedding, University of Missouri–Columbia:
If many if not most ESTs can be manualized, is it really necessary to train clinicians at the doctoral level?

Scott O. Lilienfeld:
That’s an excellent question, and it’s one that Robyn Dawes (as I understand it) has taken a stand on. Before answering it, I should address one common misconception (not present in your question, though) that I’ve seen in some recent Internet postings, namely, the misconception that ESTs must be manualized. As you probably know, this isn’t the case. The EST criteria mandate only that the treatments be described explicitly and clearly. A manual is one way of doing this, but not the only way.
I actually remain open about the “manualization” debate. I haven’t seen much good evidence that the use of manuals degrades therapeutic efficacy, although I share some people’s concerns that an overly rigid adherence to manuals can stifle the flexibility necessary for effective therapy. Of course, this may be a matter of making the manuals themselves more flexible rather than eliminating them entirely. In any case, I think that the jury is still out on the question of whether the use of manuals can sometimes be counterproductive.
But what if it eventually turns out that most ESTs can be manualized, and that intelligent BA-level individuals will be able to administer such treatments as effectively and as competently as people with PhDs and PsyDs? Well, if so, we’re going to have to face the facts, and this may necessitate some changes in our priorities. For example, if this turns out to be the case, we may need to focus much more on training PhDs and PsyDs to be effective (that is, scientifically informed) therapy supervisors: individuals who can in turn effectively train competent BA-level (or post-BA-level) individuals to administer scientifically sound treatments.


Question from David Glenn:
Bruce Wampold argues that many clinical trials of psychotherapies are clouded by the phenomenon of “allegiance.” That is, the studies are often conducted by researchers who are zealous proponents of (and highly familiar with) a particular therapeutic technique, and that zeal generates better results for the treatment group than would probably be seen in the real world.
Is that a serious concern? Have researchers found successful ways to prevent allegiance from distorting their results?

John C. Norcross:
Multiple, independent studies confirm that the researcher’s own therapy allegiance impacts the results of treatment comparison studies. It is indeed a serious problem in interpreting the reported “superiority” of one treatment over another. Professor Luborsky and colleagues found that almost two-thirds of the variance in reported outcome differences between different therapies was due to the researcher’s allegiance. While I think this is a high estimate, the well-documented allegiance effect is one reason to temper any claims of the superiority of one therapy over another, unless the studies have been conducted by dispassionate researchers.
More broadly, such findings should also remind us that our personal biases and emotional allegiances affect psychotherapy research.
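For readers unfamiliar with the statistic Norcross cites, a schematic sketch in Python, with invented numbers chosen to mimic the reported figure, shows what “two-thirds of the variance due to allegiance” means: regress each study’s between-treatment outcome difference on a rating of the researcher’s allegiance and examine R-squared.

import numpy as np

rng = np.random.default_rng(1)
n_studies = 200
allegiance = rng.normal(0, 1, n_studies)   # rated allegiance toward treatment A, per study
# Each study's A-minus-B effect size: driven partly by allegiance, partly by noise.
effect_diff = 0.30 * allegiance + rng.normal(0, 0.21, n_studies)

r = np.corrcoef(allegiance, effect_diff)[0, 1]
print(f"R^2 = {r**2:.2f}")   # about .67 with these made-up parameters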


Question from David Glenn:
What about the debate over the Wellstone Mental Health Parity Act, which would require federal medical insurance programs to treat mental-health conditions on an equal basis with physical conditions? Have members of Congress raised concerns about the general effectiveness of psychotherapy or the quality of research in clinical psychology?

Scott O. Lilienfeld:
I don’t know the answer to your second question, although certainly such issues need to be raised. It’s clear that (a) there are a variety of efficacious psychotherapies available to treat mental disorders and (b) many clinicians don’t use such psychotherapies (we know this from a good deal of survey data on both clients and therapists). So it’s clear that this issue needs to be entered into the mix.
For me, the biggest question about the parity legislation is what to give parity for. Do we want to give parity to every condition in the DSM, including adjustment disorders? Or do we instead want to ensure parity for a subset of conditions in the DSM, namely those that are clearly disabling and/or that produce intense subjective distress (e.g., schizophrenia, major depression, bipolar disorder, obsessive-compulsive disorder, panic disorder)? I lean toward the latter approach, although there are reasonable arguments on both sides. The latter approach is far messier, because it necessitates difficult and contentious decisions about which conditions merit reimbursement. But adopting this approach may also ensure that adequate help goes to those who most need it. Our profession needs to become more involved in the debate concerning this issue.


Question from Geof Gray, PhD:
It is of interest that the more severe disorders (e.g., panic, OCD, PTSD) also have medication as a first-line treatment. One might infer that the more severe a psychological disturbance, the more it fits the medical paradigm. Perhaps, then, the field has hit an asymptote, rather like, say, physical therapy or occupational therapy: the intellectual frontier is closing because we have learned most of what there is to learn.

John C. Norcross:
Well, Geof, we agree on several points and disagree on a few others. The first-line treatment for OCD is both medication and psychotherapy. The treatment of choice for PTSD, as I read the literature, is psychotherapy, not medication. I find it quite disconcerting that the evidence for the superior or equal effectiveness of psychotherapy (as compared to medication) is routinely neglected.
At the same time—and in no way contradictory—it is quite clear from the research that medication is indicated for the more severe behavioral disorders. Combined treatments (medication and psychotherapy) are generally more effective than either alone for the severe disorders. And indeed we are increasingly learning that it is all about the brain; but brain functioning is also altered by psychotherapy in many cases.


Question from David Glenn:
In general, what do you think of the quality of doctoral programs in clinical psychology?

In Science and Pseudoscience in Clinical Psychology, you and your coauthors argue that the APA should withdraw accreditation from programs that do not offer extensive formal training in:

Scott O. Lilienfeld:
Admittedly, I may well be in a minority here, but I believe the quality of doctoral programs in clinical psychology— both PhD and PsyD—is still quite variable.
There are certainly some excellent clinical programs out there (e.g., Minnesota, Arizona, UCLA, USC, Wisconsin, and Indiana come immediately to mind, although there are certainly a number of others) that value scientific training, that encourage students to think critically about research, that effectively integrate a scientific mindset into students’ clinical practica, and so on.
But in my view these programs are still in the minority. Many, perhaps most, clinical programs place insufficient emphasis on teaching students how to think clearly and critically about either psychological research or their clinical cases. For example, the APA does not require that students obtain any formal training in clinical judgment and prediction, specifically education concerning the psychological factors (e.g., heuristics and biases) that can lead even highly intelligent clinicians to err in their judgments.

Every clinical student should be exposed extensively to the literature on “illusory correlation,” which shows that all of us are prone to seeing certain statistical associations (namely, those that we expect to see) even when they do not exist. Illusory correlation can lead individuals to become convinced that entirely invalid psychological instruments are valid. Yet I’ve encountered graduates from clinical programs accredited by the APA who have never heard of illusory correlation or do not understand it. Nor does the APA require that clinical programs teach students about the research literature on their strengths and limitations as information processors.

Much of good scientific training in clinical psychology involves inculcating in clinical students a healthy sense of humility and a realistic sense of what they can and cannot accomplish as practitioners. Such training is sorely lacking in many clinical programs. As a consequence, many students emerge from such programs without a good understanding of their capacities and limitations.
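To make the phenomenon concrete, here is a toy demonstration in Python (the sign and symptom labels are hypothetical): when two common but independent events co-occur often, observers who fixate on the “both present” cell of the contingency table perceive an association that the full table shows is essentially zero.

import numpy as np

rng = np.random.default_rng(2)
n = 1_000
sign = rng.random(n) < 0.6     # e.g., a salient test sign, present in 60% of protocols
symptom = rng.random(n) < 0.6  # e.g., a symptom, present in 60% of clients; independent of the sign

a = int(np.sum(sign & symptom))    # sign present, symptom present (the memorable cell)
b = int(np.sum(sign & ~symptom))
c = int(np.sum(~sign & symptom))
d = int(np.sum(~sign & ~symptom))
print(f"cells: a={a}, b={b}, c={c}, d={d}")   # cell a is the largest, so the pairing feels real

# The phi coefficient uses all four cells, and is ~0 here by construction.
phi = (a * d - b * c) / np.sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(f"phi = {phi:+.3f}")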
Incidentally, some people express the view that the problems to which I’ve referred are limited mostly or almost exclusively to PsyD (Doctor of Psychology) programs, which tend to be less research-oriented than PhD programs. I’m not at all sure that this is true. At the very least, I believe that both PhD and PsyD programs are in need of an educational upgrade.


Question from David Glenn:
In 1999, the APA chose not to join the Practice Guidelines Coalition, a project led by Steven Hayes of the University of Nevada at Reno. Mr. Hayes hoped that the coalition, which included both scholarly researchers and representatives of managed-care companies, would establish widely agreed-upon principles for clinical practice.
The APA said that it chose not to join (after attending a couple of meetings) because of concerns that the managed-care members would drive the agenda. “Our position has been that the development of guidelines should be conducted independently of health-system cost issues,” said one APA official in the pages of the Monitor on Psychology.
Was the APA’s decision wise? Why or why not?


John C. Norcross:
Many years ago the American Psychological Association (APA) decided NOT to promulgate or endorse specific psychological treatments for specific disorders. Instead, the APA issued and subsequently revised a template, a set of criteria, for evaluating treatment guidelines.
When APA Divisions or other groups issue guidelines, APA policy requires that the guidelines note explicitly that they are not intended to be mandatory, exhaustive or definitive. “APA’s official approach to guidelines strongly emphasizes professional judgment in individual patient encounters and is therefore at variance with that of more ardent adherents to evidence-based practice” (Reed, McLaughlin, & Newman, 2002, p. 1042).
As a side note, APA policy distinguishes between practice guidelines and treatment guidelines: the former consist of recommendations to professionals concerning their conduct, whereas the latter provide specific recommendations about treatments to be offered to patients. The evidence-based movement addresses both types, but primarily treatment guidelines.
In the context of APA policy, it is logical that APA has not joined any of the multiple efforts to compile and promulgate practice guidelines.
Is APA’s policy wise? In my view, no. APA should have been at the forefront of promulgating empirically informed and clinically grounded diagnostic systems, psychological treatments, and primary preventions. But that train has now passed. . . .


Question from David Glenn:
Do you believe the APA should change its continuing-education system so that it approves specific curricula, and no longer gives general approval to the providers who offer the courses?

Scott O. Lilienfeld:
Yes, and I’ve argued this many times before. Fortunately, the times they are a-changin’. The current APA committee that examines continuing-education (CE) curricula has a number of good, scientifically minded people on it (e.g., Gerald Davison, Jon Weinand), and they are committed to ensuring that CE offerings are grounded in at least a modicum of science.
Let me address one potential misunderstanding here. I am not arguing that CE offerings must focus exclusively on ESTs. In fact, I would oppose such a requirement. I am arguing only that CE offerings have a solid scientific grounding. Thus, if one wants to offer a CE course on a novel and largely untested therapy, that’s generally acceptable to me just so long as the educators involved acknowledge explicitly the absence of scientific evidence for their therapy and place their technique within a broader scientific context (e.g., What does the extant scientific evidence say about methods similar to this technique?). It’s also crucial that educators involved in CE courses explicitly state the potential harms, if any, that may result from their methods. One thing we’ve learned in recent years is that the default assumption that “doing something is always better than doing nothing” is wrong. Some therapies can indeed be harmful, and CE attendees need to know whether the therapies they are learning can have adverse effects in some cases.
Until recently, the APA often disclaimed responsibility for problematic CE courses on the grounds that it approves only sponsors, not specific courses. I’ve never found this reasoning to be terribly compelling. If a sponsor consistently offers CE courses that are not based in adequate science, that sponsor should be cut off from APA approval.


Question from David Glenn:
In his recent book Remembering Trauma, Richard McNally of Harvard University writes:

In 1993, the American Psychological Association formed a six-member working group to evaluate the evidence about recovered memory. This group comprised three eminent psychotherapists experienced in the treatment of survivors of sexual abuse, Judith Alpert, Laura Brown, and Christine Courtois, and three eminent experimental psychologists experienced in the study of memory, Stephen Ceci, Elizabeth Loftus, and Peter Ornstein. Despite several years’ effort, the members were unable to reach consensus, except on several uncontroversial points. For example, they agreed that it is possible to forget and then later remember being abused, and that it is possible to develop ‘memories’ for abuse that never occurred. But the three clinicians and the three experimentalists remained sharply divided on the most important issues, forcing the two sides in 1998 to issue their conclusions in different publications in a point-counterpoint exchange.

What lessons should be drawn from that experience?
Despite the frustrations faced by this particular group, should the APA be more aggressive about establishing diverse task forces to look at other controversial questions in clinical practice?

Scott O. Lilienfeld:
I’m not entirely certain what lessons one can draw from that experience, although it clearly indicates that our field remains badly divided over certain fundamental scientific questions.
It’s of course a shame that this working group could not find a constructive middle ground (which doesn’t necessarily mean, incidentally, that the true answer must lie squarely in the middle between two extremes—a common error that logicians term the “fallacy of the golden mean”), although that sometimes happens when contentious scientific questions are at stake. I’m fairly certain that had I been in this working group, I would have sided with Ceci, Loftus, and Ornstein on most issues, and I honestly don’t know whether I would have found sufficient agreement with the Alpert team to forge any kind of consensus.
If this kind of stalemate were to occur in the future, it would at least be ideal for the two differing sides to come to some basic agreement about what kinds of research evidence might help to settle the issue. That is, even if two groups of individuals cannot agree on the present state of the scientific evidence, perhaps they might be able to agree (in at least some cases) on what kinds of future studies (and research designs) might help to resolve the scientific questions involved. I don’t know whether this approach would have proven fruitful in this case (or whether it was attempted), although I’m inclined to think that it would have been difficult.
Despite the frustrations of this case, I agree that the APA should continue to establish task forces with an eye toward other controversial scientific questions (e.g., the extent to which antidepressant efficacy is attributable to the placebo effect, the relative role of specific vs. common factors in therapeutic efficacy, the validity of projective techniques).
Again, however, given the inevitable disagreements that will often result among knowledgeable individuals with strong points of view, it may prove more useful for such task forces to focus less on the “scientific verdict” than on the kinds of research evidence (both presently available and not yet collected) that could ultimately prove informative in deciding the issue. In this way, such task forces may be able to influence the direction of future research in a constructive fashion.


Question from William M. Epstein, U of Nevada, Las Vegas:
I dispute the claim that any psychotherapeutic intervention has been credibly demonstrated to be effective. The controlled trials have been routinely subverted by sampling problems, measurement bias, inappropriate controls, and a variety of demand characteristics. The biggest problem, perhaps, is that those with the greatest stake in successful outcomes conduct the research. There is the mainstream and there are the margins, but there is no difference in their effectiveness. Why does the discussion so deeply assume that mainstream psychotherapy is effective?

John C. Norcross:
My answers assume that mainstream psychotherapies are generally effective because literally hundreds of scientific studies have demonstrated that they are so, to my satisfaction anyway. But obviously not to your satisfaction. We agree that all studies are invariably limited and imperfect. But by any reasonable scientific standard—those we apply to education, medicine, and other health care interventions—the mass of studies indicates that those psychotherapies subjected to scientific scrutiny do work for 70% to 80% of the population. I am deeply concerned about those psychotherapies that have not yet been empirically evaluated. And yes, we certainly agree that the researcher’s therapy allegiance is a wild card in dispassionately interpreting the purported superiority of some treatments over others.


Question from David Glenn:
Some of the techniques you and your colleagues criticize in Science and Pseudoscience in Clinical Psychology are practiced mostly by nonpsychologists.
That is, they’re practiced largely by therapists with degrees in social work or family counseling, not people with full-blown PhDs in clinical psychology.
Couldn’t the APA legitimately say something like: What these people do is none of our business. Why should we be expected to monitor and criticize the clinical practices of people who are not psychologists and therefore not eligible for APA membership?

Scott O. Lilienfeld:
I’ll answer this question in two ways. First, a number of techniques with which we take issue in our book actually are practiced by a surprisingly large number of psychologists. For example, published surveys indicate that about 25% of doctoral-level clinical and counseling psychologists in the US make regular use of suggestive techniques (e.g., hypnosis, guided imagery, “body work”) to recover memories of past trauma. This figure is worrisome given that these techniques have been found in laboratory studies to place individuals at heightened risk for false memories without increasing the probability of genuine memories.
Similarly, recent surveys suggest that about 30% to 40% of clinical psychologists in the US make regular use of the Rorschach Inkblot Test and human figure drawings in their clinical practice, even though research shows that the substantial majority of scores derived from these techniques are of questionable validity. Thus, psychologists are by no means immune from scientifically questionable clinical practices. Incidentally, these figures contradict a letter recently published in The Chronicle by current APA President Robert Sternberg, who argued that such practices are limited to a very small number of APA members. They are not.
Second, it’s all too easy for the APA to claim that the nonscientific practices of individuals outside of its organization are outside of its purview. For one thing, APA has typically made little effort to combat nonscientific practices even within its own house (that is, among its membership), so this argument is not terribly convincing.
More important, the APA should recognize that, as the world’s largest organization of mental health professionals, it must lead the way in terms of basic standards of practice. After all, even if some or many of these questionable techniques are practiced by non-APA members, these techniques are being administered to clients with mental health problems, the very individuals whom APA should be concerned about. Even if APA cannot formally sanction individuals who administer blatantly nonscientific or even potentially dangerous psychotherapies and assessment techniques, it can blaze the trail by being considerably more assertive in its public statements and its standards of continuing education for mental health professionals.
To give the APA its due, it recently took a public stand on the use (and misuse) of rebirthing for individuals with attachment problems. Let’s hope that we see more of such public statements in the future.


Question from Bruce Wampold, University of Wisconsin:
David, Scott and John, thank you for an extremely interesting conversation. I am always struck when we have such conversations that there are many areas of agreement. Although I beg to differ with Scott and John on some points, it is clear that we are all dedicated to improving the mental health of patients through the application of knowledge.
Having said that, I want to focus on one result that appears to be robust: much more of the variability in outcomes is due to the therapist than to the treatment. The implication is that therapists should monitor their outcomes; regardless of the treatment delivered, if the outcomes of individual therapists are demonstrating that the treatment is not effective, then some intervention is required. Therapists must be willing to be accountable for their outcomes, whether they are delivering a treatment that is empirically supported or one that John, Scott, and I think is not modal. Therapists, it seems to me, cannot have it both ways: “We don’t want to be told what therapy to deliver and we don’t want to document our individual outcomes.”
John, I tend to characterize managed care in a manner similar to yours, but it is a fact of life, and I think we need to work with such organizations to maximize patient outcomes given the resources available. My work with PacifiCare has led me to believe that effective services can be delivered economically without mandating the type or length of treatment.
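A minimal simulation in Python, with made-up magnitudes, of the decomposition Wampold describes: generate outcomes in which who the therapist is matters more than which of two treatments is delivered, then recover the variance shares.

import numpy as np

rng = np.random.default_rng(3)
n_therapists, n_clients = 200, 50
treatment = np.tile([0, 1], n_therapists // 2)        # two treatments, 100 therapists each
treatment_effect = 0.2 * treatment                    # small treatment effect (d = 0.2)
therapist_effect = rng.normal(0, 0.5, n_therapists)   # larger therapist effect (SD = 0.5)

# Each therapist sees n_clients clients; client-level residual noise has SD = 1.
outcomes = (treatment_effect[:, None] + therapist_effect[:, None]
            + rng.normal(0, 1.0, (n_therapists, n_clients)))

total_var = outcomes.var()
# Therapist share uses the generating SD (0.5) for simplicity.
print(f"variance share due to treatment: {treatment_effect.var() / total_var:.1%}")
print(f"variance share due to therapist: {0.5**2 / total_var:.1%}")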


Question from William M. Epstein, UNLV:
You are holding a very self-congratulatory conversation, stimulated by questions that share the assumptions of the respondents. How about addressing the poverty of even the best research in psychotherapy, and the appearance that the guild is as interested in its own prerogatives as managed care is in money? Psychotherapy is social ideology, not scientific practice. If patients recover, it is customarily due to “spontaneous remission,” the seasonality of their complaints, or structural changes in their lives (e.g., moving out, marriage, employment). Psychotherapy is American myth, a fable of personal responsibility and individualism.

Scott O. Lilienfeld:
Although I’ve been quite critical of certain trends in modern clinical psychology (and I’ve been especially critical of some of the guild influences you decry), I don’t accept the fundamental premise of your question. To say that psychotherapy is “not scientific practice” vastly oversimplifies a complex set of issues. We have to be careful not to fall prey to what logicians term the “false dichotomy” fallacy. Certainly, psychotherapy is influenced by social ideology. But at least some forms of therapy, especially behavioral, cognitive-behavioral, and interpersonal therapies, have been shown to be efficacious in well-controlled studies (and I wouldn’t even rule out the possibility that short-term psychodynamic therapies will prove useful for some conditions, like depression). Moreover, the effect sizes for such treatments are often far from trivial, either statistically or in terms of their implications for real-world clinical functioning. Thus, although you are correct that spontaneous remission or what Cook and Campbell termed “historical factors” can lead to therapeutic improvement, it’s just not the case that such artifacts account for (or come close to accounting for) all of the variance in clients’ improvement following psychotherapy. To paraphrase Seymour Kety on schizophrenia, if psychotherapy is a myth, it is a myth with strong research support.


Scott O. Lilienfeld:
I agree strongly with Bruce Wampold that therapists should and must monitor their outcomes on a regular basis. And, as Bruce knows, there is now quite promising research evidence from Lambert’s lab that doing so in fact enhances therapeutic efficacy. So this is a recommendation based on solid scientific evidence that all good therapists should be implementing.


Question from David Glenn:
What are the strengths and weaknesses of the various statements on “empirically supported relationships” that some of the APA’s divisions have promulgated in response to the list of empirically supported treatments?

John C. Norcross:
The APA Division of Psychotherapy Task Force compiled the extensive research on psychotherapist behaviors that contribute to an effective therapy relationship. Two strengths of the statement were to remind us of the curative value of the human relationship and to show that research can tell us specifically how to craft and maintain such a relationship. The primary weakness is that much of the research is correlational, as opposed to causal.


Comment from William M. Epstein, UNLV:
Thank you for your response. But most of the large effect sizes are generated by comparisons with wait-list controls, a very, very problematic control. The issue of true placebos has been ignored, as have the problems of self-report.


David Glenn (Moderator):
We’ve reached the end of our allotted time. Thanks very much to both of our guests. I hope readers have found this discussion useful.


Copyright 2003, The Chronicle of Higher Education. Reprinted with permission.


References

Reed, G. M., McLaughlin, C. J., & Newman, R. (2002). American Psychological Association policy in context: The development and evaluation of guidelines for professional practice. American Psychologist, 57, 1041–1047.
