Interviewer: I am Dr. Jackie Persons. I am Director of the Cognitive Behavior Therapy and Science Center, which is a small group private practice in Oakland, California, and I am also a clinical professor at UC Berkeley in the psychology department. I am here today with Dr. Michael Lambert. Dr. Lambert is a professor of psychology at Brigham Young University, where he teaches in the clinical psychology program, and he is a very active researcher, as you will hear. Dr. Lambert is also a clinician, so he has been in private practice as a psychotherapist throughout his career, which is now more than 40 years long. And we are talking with him in particular because he is unusual, I think, in doing research that makes contributions that are really important to clinicians. He is a prolific researcher; he has edited, authored, or co-authored nine research-based books, 50 book chapters, and over 150 articles. His research focuses, in particular, recently, on reducing treatment failure and non-response. We are certainly going to talk about one of his articles today, and I would also like to highlight his book, published in 2010 by the American Psychological Association. It’s called “Prevention of Treatment Failure: The Use of Measuring, Monitoring, and Feedback in Clinical Practice.” Thank you very much, Dr. Lambert, for being here with us today.
Dr. Lambert: Thank you for that nice introduction, Jackie, and thank you for inviting me to participate.
Interviewer: Well, thank you. Now this interview that I am doing here today is part of a series titled “Translating Science to Practice” that is hosted by the Society for a Science of Clinical Psychology. The goal of our series is to help clinicians access and use findings from basic science to guide their clinical work. In my mind, the research of Dr. Lambert over the many years of his career is some of the most important research that clinicians need to pay attention to if they want to be doing science based practice. So I am really delighted to have you here today, Michael, to talk to us.
Dr. Lambert: Thank you, Jackie, and that goal is what has guided my research career, that is, trying to gather scientific findings that would make a difference to clinical practice. So I have always been an applied psychotherapy researcher.
Interviewer: To my listeners, after you listen to this interview, I hope you will go to a survey that’s located on the same webpage as the interview and give us some feedback. As you will hear, we’re going to be talking today to Dr. Lambert about feedback, and we would also like some feedback about this interview! Now our discussion today I am sure is going to range widely over the topics of progress monitoring and feedback, but we are also going to focus on this article, published in 2011. It is titled “Collecting Client Feedback.” Michael Lambert of course was one of the authors, the first author; his co-author is Kenichi Shimokawa. Could I ask you to help us out, Dr. Lambert, by describing to us -- this article is a little intimidating, it’s got a lot of technical things in it -- could I ask you to tell us in your words what would you say is the main finding that we should try to digest as we read it?
Dr. Lambert: This article is actually a simplified version. So if it’s hard to read, the original version is even harder to read. So the basic article discusses the problem, which is that about 5-10% of adult clients are worse off when they leave therapy than when they started, and there’s a substantial number who neither improve nor deteriorate; they just don't change one way or the other. So they’re an additional part of the problem. And therapists don't, on their own, have the intuition to recognize such cases before they depart from therapy, or even after. So some kind of assessment, like a sort of mental health ‘lab test,’ can be applied that will improve clinicians’ ability to recognize cases that are on track to fail, to have a negative outcome in psychotherapy. And then a major part of it is the presentation of just how helpful it is, so we did six clinical trials that are summarized that try to estimate if there is an effect, and if so, how large it is. How able are we to reduce the deterioration rates and how much can we improve positive treatment response rates if we give this lab test data to clinicians? So that’s the essence of what we are trying to do.
Interviewer: Okay. So if you were going to offer clinicians one or two or three main take home messages for their clinical practice based on the findings you report in your paper here, what recommendations would you make to clinicians?
Dr. Lambert: I would say the first lesson to be learned from our line of research is that clinicians themselves are lousy at identifying patients who are going to end up deteriorated, and so they cannot rely on their clinical intuition because it is way over optimistic. So if they continue to practice with their intuition for the problem of identifying failing cases, they are going to miss almost all of the patients who deteriorate. So they cannot continue to practice that way if they want to solve the problem of the failing case.
Interviewer: Okay.
Dr. Lambert: That is the major thing.
Interviewer: So the major thing is that clinicians cannot continue to, or they cannot assume their intuition is going to allow them to anticipate and identify which cases are failing, because if they do that, they’re going to be making a mistake.
Dr. Lambert: That’s right. So for example, when we asked our clinicians at our university clinic, immediately after each session with a client, to identify whether this client was going to be one of the 10% that worsens, out of 550 clients that they made ratings on they only predicted three would worsen, whereas 40 did, and they were only correct once out of the 40. The one clinician who identified the one case that deteriorated was a trainee. So the licensed professionals, with an average of 10 years’ experience, did not identify a single case out of the 40 that eventually deteriorated. So we didn't have to do statistics to find a significant effect. There is essentially zero ability. It is not just a little off, it’s way off. We think this is because clinicians have to remain optimistic, and so they never make the prediction. They never think that anybody they see will be among the 10% that get worse. Maybe their colleagues will have worsening cases, but they won't, that is what it looks like.
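The counts Dr. Lambert cites lend themselves to a quick back-of-the-envelope calculation. This sketch uses only the numbers mentioned above; the variable names are our own:

```python
# Clinicians' accuracy at predicting deterioration, using the counts
# from the study Dr. Lambert describes: 550 clients rated, 40 of whom
# eventually deteriorated; clinicians flagged only 3 as likely to
# worsen, and only 1 of those flags was correct.
total_clients = 550
deteriorated = 40
flagged = 3
true_positives = 1

sensitivity = true_positives / deteriorated            # hit rate
base_rate = deteriorated / total_clients               # how common deterioration is

print(f"sensitivity: {sensitivity:.1%}")               # 2.5%
print(f"base rate of deterioration: {base_rate:.1%}")  # 7.3%
```

In other words, clinicians caught 1 of 40 deteriorating cases, which is the "essentially zero ability" Dr. Lambert describes.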
Interviewer: That is fascinating. Okay, and so then, if intuition doesn't do the job, I assume there is another take-home message having to do with using tools and measures to monitor outcome to try to identify those cases, but I don't want to get ahead of you. But would that be your next take home message?
Dr. Lambert: That would be the next take home message. We developed a brief, 45-item measure of depression, anxiety, interpersonal problems, and social role problems, like problems at work, or problems as a homemaker, or problems as a student, so whatever your job is, we tried to measure that as well. So we would call that psychological functioning or mental health functioning. So you can get that down to about 45 items, which means the patient can complete it in 5 minutes. So we developed a measure of mental health functioning that can be taken before every session using about 5 minutes of patient time. And so you can measure it, and of course we took our measure, called the OQ-45, and validated it against other measures like the Beck Depression Inventory and the Beck Anxiety Inventory and the Symptom Checklist-90 and other measures of psychological disturbance. So we know it’s a valid measure. So it can be done taking very little clinician time, and even nowadays, if people want, we do it online so they can take it outside of the office; otherwise they come to the office and either use the office computer or a handheld device to enter their answers to these 45 questions. The computer program, called the OQ-Analyst, will score that measure in about half a second, and it will apply the algorithms for predicting treatment failure, and the clinician can access the patient’s predictive report within about 18 seconds of calling up the patient's name on their office computer. So we can have the patient come in 5 minutes early and we can get the information into the hands of the therapist about 20 seconds after the patient finishes the measure. So we can use a scale to measure it, and we developed very sophisticated and quite accurate predictions of treatment failure, so that we can identify 100%, or close to it, maybe at the worst 85%, of the cases who are going to deteriorate before they do so.
So whereas clinicians can't do it with any accuracy, we can do it very well with a psychological measure. So those algorithms that predict treatment failure are the most important aspect of feedback. Feedback amounts to an alarm signal that comes up on the report on the clinician’s screen with a red colored marker that says “alarm.” And so we are not giving people test scores, well, we give them test scores, but that’s not the critical thing. The critical thing takes half a second to look at: it’s going well, or it’s a predicted treatment failure. That is the information that we think makes the difference. That is the essence of feedback: this is an at risk case.
Interviewer: So I want to hear more about your thoughts on the importance of the feedback signal, because I don't actually use the OQ-45, but I can see that I am missing out on having the signal. I am not getting a clear signal, because what I have is a plot in front of me, so I can see the trajectory of my patient’s progress. But what you’re saying, in your view, and maybe you have data to support this idea, is that it’s actually the signal -- yes, this patient is on track, or no, this patient is not -- it is that piece of information that is the most vital piece of feedback. Is that what you are saying?
Dr. Lambert: That’s correct. So when you look at a graph of your patient’s scores on the measure you use, they fluctuate from week to week, so some weeks they are feeling better, other weeks they’re feeling a little worse, so it’s not a steady improvement, it’s a fluctuating improvement that trends towards greater mental health, but what human beings can't do is they cannot calculate accurately what’s a normal fluctuation in a person’s mental health and what’s an alarming fluctuation.
So if everybody used what you are using, everybody would use different clinical judgment to say, ‘that is worrisome.’ So what we need to do is to eliminate the human judgment, because you just cannot calculate it accurately. You can get a vague impression, but to do it accurately, you have to know how disturbed the patient was when they started, and how much they should change after one session, after two sessions, after three sessions, after four sessions. And when I say how much they should change, what I mean is how much have they changed in comparison to the average client. So essentially we have an expected recovery graph that plots a line in terms of how the average patient proceeds through therapy based on how disturbed they were to begin with, and then what we have is a system of identifying people that deviate so far from the ordinary expected recovery that only 10% of patients are that much worse off at that session of care compared to when they entered treatment. So that’s quite complex for a human mind to take into account, because some people come in quite disturbed, some moderately disturbed, some mildly disturbed, and the expectation of what will happen to them in psychotherapy depends a lot on where they start.
Interviewer: Yup.
Dr. Lambert: And it depends a lot on how many sessions they’ve had. So you can't do that math in your head, only your computer can do that math. And it can take into account thousands of patients to make that judgement. So it’s essentially using big data to estimate what’s expected and acceptable and what’s a deviation that is too far out.
Interviewer: Right. And if I’m looking at the plot of the data I’m looking at from my patient, part of the problem is my mind can't do the necessary calculations because they are too complicated and another part of the problem is your tool provides certain data that I do not have in my system, because all I know is how is my patient looking this week in comparison to previous weeks. What your tool in addition presents is how does this patient look in comparison to a very large sample of other patients who started treatment in the same situation. So that’s another whole piece of data that allows the clinician to draw conclusions about the progress of the patient who is in his office at that moment.
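The logic described above can be sketched roughly as follows. This is a toy illustration only, not the actual OQ-Analyst algorithm: the expected-recovery model, the healthy-range score, and the tolerance cutoff are all invented for the example, whereas the real system derives session- and severity-specific cutoffs from data on thousands of patients.

```python
# Toy sketch of session-by-session alarm logic in the spirit of what
# Dr. Lambert describes. All numeric values below are invented for
# illustration; they are NOT the OQ-Analyst's actual parameters.

def expected_score(intake_score: float, session: int) -> float:
    """Expected score for the average patient with this intake
    severity after `session` sessions (toy model: the average patient
    recovers ~40% of the way toward a healthy score of 45 over the
    first 10 sessions, then plateaus)."""
    healthy = 45.0
    fraction = min(session, 10) / 10 * 0.4
    return intake_score - (intake_score - healthy) * fraction

def alarm_status(intake_score: float, session: int,
                 current_score: float, tolerance: float = 12.0) -> str:
    """Flag the case if the patient scores far enough above (worse
    than) the expected trajectory that, in the reference data, only
    ~10% of patients deviate that much. A real system would use a
    session- and severity-specific tolerance, not a fixed constant."""
    if current_score > expected_score(intake_score, session) + tolerance:
        return "alarm"      # predicted treatment failure
    return "on track"

# A patient who started at 80 and has worsened to 95 by session 4:
print(alarm_status(80, 4, 95))   # alarm
# The same patient improving to 70 would be on track:
print(alarm_status(80, 4, 70))   # on track
```

The point of the sketch is the shape of the computation: the comparison is against an expected trajectory conditioned on intake severity and session number, which is exactly the calculation a clinician eyeballing a single patient's graph cannot do in their head.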
Dr. Lambert: Yes, and so then what we do is test the accuracy of the algorithms, and we know we can identify 100% of deteriorators with our algorithms. Our problem, or our weakness, is that we over-predict. So we send false alarms. And the problem with clinicians is that they under-predict, massively, totally under-predict. So there are inaccuracies in our predictions, but they are false alarm signals, rather than indications that a client is well off when they are not.
Interviewer: So you have a tool that has been shown in research to improve outcomes in patients who have an initial poor outcome, and yet most clinicians are not using it. And most clinicians are not doing any kind of progress monitoring or formal feedback collection from their patients, and I'm interested in your thoughts about why that is.
Dr. Lambert: Yeah. We also developed further tools to assist clinicians, and they seem to add to simply giving an alarm. So they’re a diagnostic tool for focusing on why this patient is off track for a good outcome. And I think the reason that clinicians don’t use this methodology much is that we clinicians have a great deal of confidence in ourselves. So in our survey of clinicians, if you ask clinicians where they would place themselves among all clinicians, 90% of us, so virtually all of us clinicians, regard our patient outcomes as superior to those of our peers. So 90% of us think that we are at or above the 75th percentile of therapists. So we all see ourselves in the top quartile. And this misperception of ourselves and our effectiveness is not unique to psychotherapy. If you ask engineers at GE how they think they do as engineers compared to their fellow engineers, virtually every engineer ranks themselves in the top quartile. If you ask cabinet makers how they are doing in making cabinets, quality-wise, compared to their fellow cabinet makers, the same phenomenon exists. Every cabinet maker sees themselves as the top cabinet maker. If you look at policemen and their belief in their ability to detect lies in people they’re interviewing, they see themselves as better than other policemen and also as very good, and yet their performance is 50/50. They could just as easily flip a coin, in terms of their accuracy. So it's this hubris that’s natural in all of us to make ourselves happy. Like, I'm really happy to not go around thinking I'm average. I didn’t come into the field to be average. I came in to be great. And according to my own self-delusions I am great. It's not that my outcomes have been measured and I have been found to be average; it's that I have to assume I'm average, since almost every clinician is, in terms of producing good outcomes in their clients.
So in the absence of feedback information about how effective you are as a therapist and your belief that you’re fantastic, why would you adopt anything new?
Interviewer: So are you suggesting that clinicians, like other professionals, view themselves as having a high quality skill, so they don’t need to collect outcome data?
Dr. Lambert: That’s right.
Interviewer: They already know the answer, partly because they have excellent intuition skills.
Dr. Lambert: Yes.
Interviewer: Oh my goodness.
Dr. Lambert: And I’d further say that at some level they know that they’re kidding themselves, that they’re in a defensive position of overestimating their value. And they don’t want to have actual measured outcomes; they don’t want to see how a self-report measure, for example, would estimate the effects of their treatments. So we are all just happy living in a bubble.
Interviewer: Yes, we are.
Dr. Lambert: And so I would say that maybe one reason we’re a little bit stuck at becoming more and more effective at helping patients improve is that we find experienced therapists don’t appear to get better outcomes than graduate students. So the supervisors don’t actually have better outcomes. As appalling as that sounds.
Interviewer: Well, I understand that.
Dr. Lambert: There are some really good trainees out there. And maybe therapists are born and not made. Know what I mean? They have characteristics that are sharpened a bit by training but sometimes even dulled by training. They are actually a little worse off for their training.
Interviewer: That’s interesting. So what ideas could you offer us about, what can we do as a field, or those of us who are clinicians who are listening to this interview who are also teachers or trainers, what can we do to increase clinicians’ use of progress monitoring in their work? What thoughts do you have? I am sure you have thought about that a lot.
Dr. Lambert: Yeah, I have. I think it has to start in grad school. Trainees are not going to get into the habit of monitoring their patients’ progress if they’re not asked or required to do it in graduate school. And actually we find trainees are quite open to this methodology. They love to have graphs, and they like to have feedback, and it makes them feel less anxious because they're not in the dark. And the majority of our patients are getting some improvement, so often what the trainees are seeing is the improvement of their patients. And so I would say generally students are eager to have the information. But nobody is teaching it.
Interviewer: Okay.
Dr. Lambert: And so if it starts in grad school, I am sure it will become common practice, because it's actually really nice to see a graph. As you know. I mean, when you look at, don’t you like your data graphed?
Interviewer: Of course I do, I love my data. I love my data. I don’t see how I could do my clinical work without my data. It’s true, I learned how to do this when I was a practicum student, so I think this idea of getting people started early is important and it makes a lot of sense to me.
Dr. Lambert: Yeah. I think by the time people have practiced ten years they’re pretty pleased. They always like going to a workshop where they can get a new idea or a new skill, but it is certainly not proven that those workshops produce better patient outcomes. I think it’s fun to get together with your peers, and it’s fun to see new ideas presented and to adopt them as we wish. But that’s not the same as adopting an idea like this that actually has a scientific basis and actually does something we can't do ourselves. I know therapists who have used this stuff for years, and they don’t learn how to predict treatment failure on their own, just like MDs cannot manage chronic illness without blood work. Who wants a physician managing high blood pressure without measuring it? You know, I don’t want my MD to use their intuition about what my blood pressure is or my A1c level. They’ve got to look at the lab data. So partly it’s that you don’t learn anything. It's not like learning a new intervention skill that you can practice and internalize. This methodology is quite different, because you actually just practice relying on lab test data to add something to your clinical judgment that you can't do yourself. Does that make sense?
Interviewer: Yes, it makes a lot of sense.
Dr. Lambert: But older clinicians have been practicing for years without it and think they’re having great outcomes, so they don’t want to bother. And it does require a little bit different way of practicing, like taking advantage of information technology, computers. So older clinicians are not always the most up to date. People joke about this, but I'm quite serious: my grandkids are more familiar with using information technology than I am. I’m slow. So it's partly generational, I think. We’ve just not really relied on electronics to improve our clinical practice.
Interviewer: I see. That makes sense to me. So these are two impediments: one is we didn’t learn how to do it in graduate school, and it's not so easy to be receptive to learning this kind of thing later in your career; and number two, for many of the more senior clinicians, people your age and my age, you know, it's not so easy to start using web-based tools to monitor progress. So I do appreciate that.
Dr. Lambert: And then the third impediment, which may be the most important, comes from surveys: if I pass a survey out to clinicians asking how many of their patients benefit from psychotherapy, the average answer is typically about 85%. So we already think we’re having great outcomes. If you look at clinical trials, which presumably use the best therapists available, who are committed to a therapeutic approach, trained, supervised, and monitored in delivery, and where the clients are not too sick and not too well, so we have the ideal clients who presumably have a disorder, recovery rates are typically around 60-65%, depending on the disorder. So we are not getting 85%; in 14 sessions of a manualized treatment, it's 65%.
Interviewer: Yup.
Dr. Lambert: And so I imagine we remember well all the people who benefitted the most and then assume that’s true of everybody we saw. Of course, if you talk to a clinician who works in a community mental health center, their estimate wouldn’t be so high.
Interviewer: I see.
Dr. Lambert: You know, there are some settings where, you know, if you go to the VA and half your clients are just trying to get their compensation from the government assured, and they have more interest and advantage in having a disability than in getting help, you may not think you have that great a success rate, even though you’re preventing a lot of suicides. But not every one of your clients is suicidal. I just think there's a defensive cover on our work. I hate to be Freudian, but there are some defenses, psychological defenses, that help us bolster our self-esteem. So I think it's very threatening. So why don’t people do it? It's threatening; it's not going to give them the rosy picture that they can give themselves. And part of it is that measures like the ones we use are asking people to report on their weekly psychological functioning as measured by 45 distinct items. So we ask people how are your headaches, how is your sleep, how’s your sex life, how’s your loneliness, etcetera. And clinicians can't ask people 45 questions at the beginning of every session. So clinicians can't really tap into mental health functioning, because they have to do therapy; they can't do assessment. That would take half of the whole session, so they're not getting the information.
Interviewer: Alright. So let’s talk about, I just want to make sure, I think you summarized, in a way, four different obstacles or impediments to why clinicians do not pick up these outcome monitoring systems and use them in their work. One is they weren’t trained to do it in graduate school, and so then it's not so easy to adopt a new thing like this later. Another is that often making the best use of these systems involves using computers and technology that are not always so easy for clinicians. Another is clinicians have an inflated view of how well they are doing and they think they have all the skills needed to predict outcome and take care of their patients and they don’t need this additional tool. And then a fourth issue is that they feel a little bit defensive or threatened by the idea of actually collecting the data because it's kind of scary to find out that maybe your patients aren’t doing as well as you like to think they are. So that sounds like a pretty compelling list and helps us understand why the rates of use of outcome monitoring systems, both yours and others, are in my mind surprisingly low, but there we are. Is that a good summary of what you are saying?
Dr. Lambert: Very good summary.
Interviewer: So let me ask you another question, which is: what is your view of the relationship between the use of a progress monitoring tool like yours -- which, especially in your hands, given how much data you have collected, is an evidence based practice, right, the practice of using a progress monitoring tool -- and another approach to evidence based treatment, which is the empirically supported treatments? How are these things related? And one of the questions is, if clinicians read this paper that we are talking about here today, and your work more generally, might they take home the message, and would you be suggesting they take home the message, that whatever work I am doing, it doesn’t matter what kind of treatment I am doing as long as I am incorporating outcome monitoring? Is that how you would see it, or would you see it differently?
Dr. Lambert: I would probably lean in that direction. So I think if you look at the literature comparing a specific treatment for a specific disorder, compared to, let’s say, treatment as usual, the effect size is about 0.2.
Interviewer: Really?
Dr. Lambert: Yeah.
Interviewer: Which is about the effect size of your tool.
Dr. Lambert: No, our effect size is about 0.5 for progress monitoring, and then if you add this diagnostic tool for problem solving, the clinical support tool, the effect size is about 0.7. And even in the clinical trials literature, comparing one psychotherapy with another, if you take into account researcher allegiance, because the studies are usually done by one group of people interested in the particular form of treatment they’re developing, the effect size is closer to 0.10. So 0.10 versus 0.70. So one way I have of saying it is that nobody needs the right treatment for the right disorder if it is not working for them. So for me, it’s like, if Barlow’s panic treatment is the right treatment for panic disorder, it’s still not working in a portion of the cases. So it is not right for a particular patient.
Interviewer: Right. Or another way I think about it, which I think is the same point, is the randomized control trials tell us on average, you know, which treatment is more effective for which problem than another. Whereas what progress monitoring data tell us is the answer to a different question, which is, is this treatment helping this person who is in my office right now at this moment. And that last question is the question that clinician really wants to know the answer to, and the patient really wants to know the answer to.
Dr. Lambert: Yes, you are exactly right. That’s how I’d put it, and I’d say the idea that you can give the right treatment for the right disorder and get the same bang for the buck that you get from monitoring -- I mean, it’s just too easy a solution, that we can get the right treatment for the right disorder. If that were true, we would all be forced to switch to interpersonal psychotherapy for depression.
Interviewer: We would? I don’t know about that.
Dr. Lambert: Yeah, I mean, there are some meta-analyses that show, if you compare the two, there is a small advantage for IPT. But even so, it’s not like the right treatment is 100% right. It has a slight advantage generally, if any.
Interviewer: Very interesting.
Dr. Lambert: There is not a world of difference between one and the other. I mean, if you combine ACT with cognitive therapy for depression, you probably get about the same results as if you just use cognitive therapy.
Interviewer: Well that I believe. So if I listen to what you’re telling me, what I’m hearing from you is you’re suggesting that if a clinician wants to do evidence-based practice, really the most important thing is to be doing progress monitoring using a tool like yours. And you are not seeing empirically supported treatment protocols as very important to that clinician for a couple of different reasons, and one of them I’m hearing from you is that if you read the literature and look at the effect sizes comparing empirically supported treatment protocols to treatment as usual you’re seeing an effect size of like 0.2, and if you compare that to the effect size of your tool, you’ve collected a lot of beautiful data to support that and you found an effect size of your tool of like 0.5 to 0.7, and so then we look at those comparison numbers and you think, okay so I’ll use the progress monitoring tool. The question I would ask you is it seems like when you set it up that way it becomes like a choice, one or the other. Couldn’t the clinician do both? Couldn’t the clinician get the benefits of like an additive, even larger, effect size of doing both the empirically supported treatment protocol and adding a tool like yours into what they’re doing? What’s your thought about that?
Dr. Lambert: Yeah, it definitely doesn’t have to be a choice of one or the other. I think that progress monitoring is compatible with every type of psychotherapy.
Interviewer: Ok.
Dr. Lambert: And so whether you’re doing the right psychotherapy or a psychotherapy whose effects are largely unknown, you add progress monitoring onto it, just because people fail in empirically supported psychotherapies just as they fail in treatment as usual. And if they fail at higher rates in treatment as usual, that may make it even more necessary to monitor treatment as usual, treatment that’s not really following a protocol for evidence based practice very carefully. Even in evidence based practices, maybe 30-40% of people fail to respond. So you still can pick up quite a few people who are predicted treatment failures.
Interviewer: So your point of view is doing progress monitoring is necessary to be doing the evidence based practice and largely sufficient, whereas (you’ll tell me if I’m summarizing you accurately) whereas my point of view would be doing progress monitoring is necessary to be doing evidence based practice but it’s not sufficient, and in my view it would also be important to be using empirically supported protocols to guide the work.
Dr. Lambert: Yeah. Well, so let’s, I’m not sure if we disagree with each other or not.
Interviewer: Ah, okay.
Dr. Lambert: I don’t know if it’s essential to do an evidence based practice, or necessary. It’s definitely not sufficient in the sense that we can improve outcomes if we monitor patients who are at risk for treatment failure. So I would say that using an evidence based practice might be a first step but it’s not all that can be done to maximize patient benefit.
Interviewer: But I also see that collecting progress monitoring data from your patient and using it to guide your treatment is essential and it’s very curious to me that the empirically supported protocols themselves do not include that element. Have you thought about that or noticed that?
Dr. Lambert: Yeah, I think partly it’s because it’s relatively new on the scene, meaning like 10 years old. And the number of studies supporting the practice is just growing in the last 10 years. So I think it just hasn’t become a standard of care, so people are busy comparing CBT for depression with IPT for depression and that’s just the focus of their study, and they don’t really think about monitoring as, you know, that’s a different experiment.
Interviewer: It is a different experiment. And it’s an experiment you’ve been doing, Michael.
Dr. Lambert: Yeah. And I think, I think there’s a pretty convincing body of evidence. So one could argue that there ought to be a standard of practice.
Interviewer: Oh, I absolutely agree with you.
Dr. Lambert: But it’s not yet.
Interviewer: It’s not?
Dr. Lambert: Well, I don’t think so. I mean, not that many people are doing it and it's not common in training programs.
Interviewer: I see, so in that sense it’s not a standard practice.
Dr. Lambert: Yeah. And those effect sizes, the 0.5 to 0.7 effect size, that’s limited to a subset of the patients who go off track.
Interviewer: I hear you.
Dr. Lambert: So if you look at 100% of the patients the effect size isn’t nearly that big.
Interviewer: I hear you. Thank you.
Dr. Lambert: So it’s more or less an intervention for off-track cases rather than everybody.
Interviewer: Yes.
Dr. Lambert: But we just don’t know who the off track cases are unless we’re monitoring everybody.
Interviewer: Yes. So that’s a good thing for me to summarize at this point, which is that one of the major goals of your developing this tool and your whole research program is to help the clinicians identify the off track cases, the patients who are failing early, be able to notice who those patients are, and be able to step in and take action to prevent failures.
Dr. Lambert: Exactly.
Interviewer: Beautiful. This is a very interesting discussion, but I’m starting to run out of time here, so let me just take a pause and say that what we’ve been talking about here is use of progress monitoring to monitor outcomes, to correct for clinicians’ tendency to think they’re doing a great job, and for their inability to collect all the needed assessment data that would help them. Then we’ve been talking about why clinicians have trouble doing this and highlighting how it can help them make better judgments for their patients. Any final word that you would want?
Dr. Lambert: One thing that’s important in the article, if people are going to read it, is that in all six of the research studies presented there, in every study, therapists were their own control.
Interviewer: Yes.
Dr. Lambert: So, we split each therapist's caseload, so they see half their cases with feedback and half just doing their own thing without feedback. So whatever effect size we are getting, it’s within a therapist. It’s not like we have a special set of people trained to use this feedback stuff.
Interviewer: Yes. No, I agree.
Dr. Lambert: We have lots of people who don’t believe in it doing it, and they’re surprised when they see how much better their patients do when they have the feedback. And they don’t know how to account for it. Because they’re always doing their best.
Interviewer: Right.
Dr. Lambert: The other thing is, if you alert therapists that their patient is not progressing, they find ways to solve the problem. They solve it better when we give them our clinical support tool, but they do a pretty good job of solving the problem without any further assistance. So it’s really a problem of awareness, not a…
Interviewer: Lack of problem solving skill.
Dr. Lambert: Yes. So it’s not like this takes a long time to learn. It takes an hour.
Interviewer: Yup.
Dr. Lambert: You know? And so, it’s not like we are trying to dictate any kind of psychotherapy.
Interviewer: Yes, I see that.
Dr. Lambert: We just stay out of that.
Interviewer: Clinicians can use a system like this no matter what kind of treatment they provide.
Dr. Lambert: Yeah.
Interviewer: Well, thank you so much, Dr. Lambert. It has been very illuminating to talk with you and hear your thoughts. I know that you have been thinking about this issue for many, many years, and I appreciate your wisdom and the benefit of your experience, and thank you so much for your time today.
Dr. Lambert: Thanks a lot for noticing my work, Jackie.
Interviewer: Oh, that was easy. Thank you so much.
Dr. Lambert: Take care of yourself.
Interviewer: Will do. Okay, bye bye.
Dr. Lambert: Bye bye.