Steele et al., 2013 spokesperson Nicole Prause conducted several interviews about her July 2013 EEG study on people complaining of difficulty controlling their porn use. Commenting under the Psychology Today interview of Prause, senior psychology professor emeritus John A. Johnson said:
A gap in logical inference
{https://www.psychologytoday.com/comment/542939#comment-542939}
Submitted by John A. Johnson Ph.D. on July 19, 2013 – 2:35pm
Mustanski asks, “What was the purpose of the study?” And Prause replies, “Our study tested whether people who report such problems [problems with regulating their viewing of online erotica] look like other addicts from their brain responses to sexual images.”
But the study did not compare brain recordings from persons having problems regulating their viewing of online erotica to brain recordings from drug addicts and brain recordings from a non-addict control group, which would have been the obvious way to see if brain responses from the troubled group look more like the brain responses of addicts or non-addicts.
Instead, Prause claims that their within-subject design was a better method, where research subjects serve as their own control group. With this design, they found that the EEG response of their subjects (as a group) to erotic pictures was stronger than their EEG responses to other kinds of pictures. This is shown in the inline waveform graph (although for some reason the graph differs considerably from the actual graph in the published article).
So this group who reports having trouble regulating their viewing of online erotica has a stronger EEG response to erotic pictures than other kinds of pictures. Do addicts show a similarly strong EEG response when presented with their drug of choice? We don’t know. Do normal, non-addicts show a response as strong as the troubled group to erotica? Again, we do not know. We don’t know whether this EEG pattern is more similar to the brain patterns of addicts or non-addicts.
The Prause research team claims to be able to demonstrate whether the elevated EEG response of their subjects to erotica is an addictive brain response or just a high-libido brain response by correlating a set of questionnaire scores with individual differences in EEG response. But explaining differences in EEG response is a different question from exploring whether the overall group’s response looks addictive or not. The Prause group reported that the only statistically significant correlation with the EEG response was a negative correlation (r=-.33) with desire for sex with a partner. In other words, there was a slight tendency for subjects with strong EEG responses to erotica to have lower desire for sex with a partner. How does that say anything about whether the brain responses of people who have trouble regulating their viewing of erotica are similar to addicts or non-addicts with a high libido?
Two months later, Johnson published the following Psychology Today blog post, which he then linked to in a comment under Prause’s interview.
Perhaps Prause’s preconceptions led to a conclusion opposite of the results {https://www.psychologytoday.com/comment/556448#comment-556448}
Submitted by John A. Johnson Ph.D. on September 22, 2013 – 9:00pm
My mind still boggles at the Prause claim that her subjects’ brains did not respond to sexual images like drug addicts’ brains respond to their drug, given that she reports higher P300 readings for the sexual images. Just like addicts who show P300 spikes when presented with their drug of choice.
How could she draw a conclusion that is the opposite of the actual results? I think it could be due to her preconceptions – what she expected to find. I wrote about this elsewhere.
http://www.psychologytoday.com/blog/cui-bono/201308/preconceptions-may-color-conclusions-about-sex-addiction
Johnson’s Psychology Today post: Preconceptions May Color Conclusions about Sex Addiction. Key take-away: In his post, Johnson describes Prause’s behind-the-scenes behavior, such as legal threats (as she had made against Wilson) and battering Psychology Today editors with threats, forcing them to remove two blog posts critical of Prause’s unsupported assertions (1 – Gary Wilson’s critique of “Steele et al., 2013”; 2 – a critique by Robert Weiss, LCSW and Stefanie Carnes, PhD). He also describes receiving disturbing and threatening emails from Prause:
When I first conceived this blog post and began to compose it about a month ago, my original intention was to describe in exquisite detail the specific ways in which I saw the proponents of opposite sides of the debate exaggerating or overextending their arguments beyond the actual data in the study. I subsequently changed my mind when I observed a firestorm of emotionally-charged rhetoric erupting among the debate participants. Not arguments about what the data logically implied, but ad hominem threats, including threats of legal action. I saw a PT blog post disappear, apparently because one of the parties demanded that it be taken down. I even received a couple of angry emails myself because one of the parties had heard that I had raised questions about the proper interpretation of the research in question in a scientific forum.
So, I have decided to quietly tip-toe out of the room. I have also decided to go ahead and post here what I had already composed a month ago, simply to present an example of my empirical claim that science is not a purely objective enterprise, and that actual scientists can become very personally and emotionally involved in their work. The controversy in question is also an excellent example of a common trend among U.S. researchers to overestimate soft-science results.
This angered Prause, who argued (using fake names) with Johnson in the comments section of his Psychology Today blog post about her 2013 EEG study (note that Johnson doesn’t really have an opinion on sex addiction). It’s certain that “Anonymous” is Nicole Prause; perhaps “Jen H.” is too.
PRAUSE & JOHNSON “DEBATE”
LOL {https://www.psychologytoday.com/comment/556243#comment-556243}
Submitted by Jen on September 21, 2013 – 5:44pm
Thanks Dr. Johnson,
I too have been tip toeing around these, ahem, most passionate, sex addiction addicts.
Best of luck should you decide to throw yourself into the pit. I’ll be hoping for some good empirical work on the subject in the near future.
Regards
Jen H., CSW
Passionate is the word for it! {https://www.psychologytoday.com/comment/556450#comment-556450}
Submitted by John A. Johnson Ph.D. on September 22, 2013 – 9:10pm
Thanks for your comment, Jen.
It seems to me that passion is a double-edged sword. On the good side, passion for a subject means the person is willing to invest a lot of time and energy on that subject. Why would someone study something unless he or she had a passion for it?
On the other hand, if the passionate person already has his/her mind made up, all of that passionate energy is going to be directed toward one possibility, right or wrong. And when wrong, passion leads to blindness to the truth.
I am likely to stay out of these debates and let the empirical researchers decide.
Website to a fraud? {https://www.psychologytoday.com/comment/565636#comment-565636}
Submitted by Anonymous on November 2, 2013 – 6:26pm
As you mention, this debate is rife with agendas. However, linking a science debate to some random dude trying to sell books? How is this an improvement? I also think you missed the point of the study…all people show the pattern. This group (1) looks exactly like everyone else, and (2) just to be sure, the brain measure wasn’t related to any measure of hypersexuality (though it was to desire for sex with a partner). I’m not sure why it wasn’t related to desire to masturbate too, although the authors administered the whole scale and do talk about why that might be.
Perhaps I did miss the point {https://www.psychologytoday.com/comment/565666#comment-565666}
Submitted by John A. Johnson Ph.D. on November 2, 2013 – 9:39pm
If the point of the study was to show that “all people” (not just alleged sex addicts) show a spike in P300 amplitude when viewing sexual images, you are correct–I do not get the point, because the study employed only alleged sex addicts. If the study *had* employed a non-addict comparison group and found that they also showed the P300 spike, then the researchers would have had a case for their claim that the brains of so-called sex addicts react the same as non-addicts, so maybe there is no difference between alleged addicts and non-addicts. Instead, the study showed that the self-described addicts showed the P300 spike in response to their self-described addictive “substance” (sexual images), just like cocaine addicts show a P300 spike when presented with cocaine, alcoholics show a P300 spike when presented with alcohol, etc.
As for what the correlations between P300 amplitude and other scores show, the only significant correlation was a *negative* correlation with desire for sex with a partner. In other words, the stronger the brain response to the sexual image, the *less* desire the person had for sex with a real person. This sounds to me like the profile of someone who is so fixated on images that s/he has trouble connecting sexually with people in real life. I would say that this person has a problem. Whether we want to call this problem an “addiction” is still arguable. But I do not see how this finding demonstrates the *lack* of addiction in this sample.
To my knowledge, my post did not contain links to a random dude trying to sell books. The Porn Study Critiques site contains contributions by a number of individuals interested in the debate, and I invited readers to judge for themselves which arguments might have merit. I did not notice any book advertisements on that site.
Okay, I’m going to be {https://www.psychologytoday.com/comment/565897#comment-565897}
Submitted by Anonymous on November 3, 2013 – 8:37pm
Okay, I’m going to be optimistic and assume neither the author of this PT post nor the authors of the research article are intentionally biased. On the one hand, that change (sexual pics having the highest change) I would estimate has been replicated by at least 100 laboratories in controls. It’s extremely stable. Also, controls are exactly people who are on the low/absent end of the construct of interest. The regressions (not correlations) conducted, could be critiqued for not having the low end well-represented, but the range of the construct appears represented. Finally, we don’t know that a control wasn’t collected. Science is slow. It could be coming before you throw the scientist out with the biohazard (ha!)
That said, there are many questions this study raises:
1. How would a person with other sexual problems respond?
2. What will change with different kinds of pictures?
3. What about films?
The bigger question, though, is…why did it take so long to get a study like this done in the first place? Really, both the pro and con crowd should be embarrassed by the poor level of science in this area.
There are actual scientists blogging about this topic if you need better links. This is a blogger who appears to have no credentials and made many mistakes in their “review”. I’ll even give you the pro-addiction science links. PT shouldn’t be relying on crappy reviews like that. Perhaps it was meant to be a commentary on bias that the PT author chose a pro-addiction link only from a non-scientist blogger?
Your optimism about me is warranted {https://www.psychologytoday.com/comment/556243#comment-556243}
Submitted by John A. Johnson Ph.D. on November 3, 2013 – 9:50pm
I may have biases on this topic, but if I do, I am not aware of them, and I certainly am not intentionally trying to skew the debate one way or another. So you are right to assume that any bias in my writing is not intentional. Whether the authors of the study are intentionally biased, I cannot say. I do suspect that they wanted their study to demonstrate that the neural responses of alleged sex addicts are indistinguishable from the responses of non-addicts in order to discredit the concept of sex addiction. They certainly were willing to report in the popular media that their study cast grave doubt on the concept of sex addiction. But of course without a control group of non-addicts to show that neural responses between the two groups are indistinguishable, the claim of discrediting the concept of sex addiction is premature.
You say we don’t know if a control group was run. In response to this question in a scientific forum, the researchers said they did not have a control group because none was needed, that their subjects served as their own control in their within-subjects design. I found that response unintelligible because the only comparisons made with their within-subjects design were the P300 responses to the different types of photographic stimuli. This demonstrated that the P300 spike was higher for the erotic images than for the other images. But whether the relative magnitude is similar to or different from self-described non-addicts, we don’t know. If there are findings from hundreds of laboratories on this one, the authors could have made that comparison. But they didn’t.
If the researchers had included self-described non-addicts in their study, the statistically significant negative correlation between P300 amplitude and desire for sex with a partner could have been even stronger than the coefficient they reported. The correlation they found was probably reduced due to restriction of range in P300 amplitude. So they did themselves a disservice by not including a more diverse sample that included people who did not report problems regulating their online viewing of erotica.
I use the terms regression and correlation interchangeably. Whether one conducts a simple bivariate regression or one of the forms of multiple regression, it’s all a version of the general linear model. We abbreviate the Pearson correlation coefficient with the small letter r, which stands for regression. Let’s not get side-tracked on irrelevancies.
Because I do not have a stake in the sex addiction debate, I don’t want to pick only on this anti-addiction research study and not the pro-addiction critics of the study. The blog to which I linked contains reviews which are certainly biased in their own way, although again I do not want to speculate on whether the bias is intentional or not. I was asked by the author of one of the reviews on this site to look at his critique before it was published, so I did, and I described what I thought was correct and incorrect in the critique. He followed some, but not all, of my suggestions for revising his critique. So, yes, there are mistakes in the review because not all of my suggestions were followed. I pointed to this blog merely as a starting place for the issues that are being debated. If you could provide links to higher-quality commentary (either pro-addiction or anti-addiction), that would be a great service to those in the audience who are interested in the concept of sex addiction.
As I said, my major interest is in the psychological factors that affect the conduct and interpretation of scientific research, more than the concept of sex addiction per se. Perhaps it was easier for me to point to the site of a true believer in the concept of sex addiction to illustrate possible psychological factors affecting the interpretation of research than to a more staid, neutral site maintained by professional sex researchers. If there is such an allegedly non-biased site (pro- or anti-addiction), I’d love to get the URL to see for myself whether it is indeed unbiased. Finding a non-biased discussion of sex addiction would be a first for me.
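Johnson’s restriction-of-range point above can be illustrated with a quick simulation (hypothetical numbers, not the study’s data): when a sample is truncated to a narrow slice of one variable, the observed correlation between that variable and another shrinks toward zero.

```python
import numpy as np

# Restriction-of-range illustration (hypothetical data, not Steele et al.'s):
# correlating y with x over a narrowed slice of x attenuates |r|.
rng = np.random.default_rng(42)
x = rng.normal(size=5000)
y = -0.5 * x + rng.normal(size=5000)   # true negative relationship

r_full = np.corrcoef(x, y)[0, 1]       # r over the whole range of x

mask = x > 0                           # keep only the upper half of x
r_restricted = np.corrcoef(x[mask], y[mask])[0, 1]

print(abs(r_restricted) < abs(r_full))  # restricted-range |r| is smaller
```

This is why, as Johnson argues, a sample that also included non-problem viewers (widening the range of P300 amplitudes) could well have yielded a stronger coefficient than the reported r = −.33.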
Craptastic {https://www.psychologytoday.com/comment/566091#comment-566091}
Submitted by Jen on November 4, 2013 – 4:02pm
Indeed. Sounds to me like the author perhaps ought to have paid greater heed to your feedback, prior to publishing.
Hate to point out what is so painfully obvious here, buuuut, it can be safely said that if the major debate surrounding one’s publishing is its validity, rather than its content, there is a definite problem.
A problem for psychology as a whole {https://www.psychologytoday.com/comment/566277#comment-566277}
Submitted by John A. Johnson Ph.D. on November 5, 2013 – 11:14am
Yes, if the problem is not obvious, it should be. This problem is not unique, though, to this particular topic. It runs rampant in academic psychology.
Psychologists get so much training in critical thinking, by which I mean looking for flaws in research studies and generating alternative interpretations of results, that most of us have developed hypertrophy of our critical function and atrophy of our constructive, creative function. Psychologists will endlessly pick at flaws in the methodology of studies that do not support the conclusions they already believe. This is an indication of a problem with the discipline of psychology as a whole. No study is absolutely methodologically perfect, even published studies that have undergone thorough review. It’s one thing to be able to find flaws in studies that draw conclusions you don’t like; it is another to design and conduct a study that produces unequivocal support for an alternative view.
Eh, not to get sidetracked {https://www.psychologytoday.com/comment/566638#comment-566638}
Submitted by Anonymous on November 6, 2013 – 6:58pm
Eh, not to get sidetracked but “We abbreviate the Pearson correlation coefficient with the small letter r, which stands for regression” definitely not. Regression locates the error differently than correlation. You can easily tell who actually read the study reviewed…if they say “correlation” they did not know what was done statistically (guy in your link made the same mistake). Don’t be that guy!
Anyway, I didn’t find a ton of scientific bloggers talking about this issue, but there were some really nice, more balanced reviews you could reference:
Other PT blogger and academic addictions guy:
http://www.psychologytoday.com/blog/addiction-in-society/201307/the-apocryphal-debate-about-sex-addiction
From the main guy trying to get hypersexuality into the DSM:
https://web.archive.org/web/20160313043414/http://rory.net/pages/prausecritque.html
A guy who publishes on addiction, though not about this study:
https://web.archive.org/web/20150128192512/http://www.sexologytoday.org/2012/03/steve-mcqueens-shame-valid-portrayal-of.html
Sure beats a random massage therapist in Oregon for their ability to more evenly critique. I don’t agree with all these either, of course, but that’s the point. These at least highlight the good and the bad, whereas the critique cited is actually factually false (e.g., the SNP authors collected and reported the entire SDI scale). It’s always better not to promote patently false information!
Quoting from the study {https://www.psychologytoday.com/comment/566673#comment-566673}
Submitted by John A. Johnson Ph.D. on November 6, 2013 – 10:29pm
Let me quote from the study, which I actually did read prior to writing my post. From http://www.socioaffectiveneuroscipsychol.net/index.php/snp/article/view/20770/28995:
“Pearson’s correlations were calculated among the mean amplitudes measured in the P300 window and the self-report questionnaire data. The only correlation reaching significance was the difference score calculated between neutral and pleasant–sexual conditions in the P300 window with the desire for sex with a partner measure, r(52) = − 0.332, p =0.016.”
Yes, the researchers also conducted some multiple regression analyses, but you can see from the above quote that they computed Pearson correlation coefficients.
Furthermore, I maintain that regression and correlation are not two different things. I am aware that some people say that the correlation coefficient, r, is “merely” a quantitative index of the strength of the linear relationship between x and y, whereas regression refers to estimating either x or y in terms of the best-fitting line, either y’ = bx + a or x’ = by + a. But if we regress y on x, the optimal value for the slope, b, is r * Sy/Sx. Pick up any textbook on psychological statistics (e.g., Quinn McNemar) and read its discussion of correlation and regression.
Thank you for adding the additional references. I was familiar with Peele’s position (Stanton Peele is indeed a legitimate expert on the topic), and I had read Rory Reid’s piece, but not the James Cantor post (although I am familiar with and respect his thinking). These additional references are a service to those who want more information.
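As an aside on the regression-versus-correlation dispute: Johnson’s algebraic claim, that for a simple bivariate regression of y on x the least-squares slope b equals r · Sy/Sx, is easy to check numerically. This sketch uses made-up data (none of it from the study):

```python
import numpy as np

# Check Johnson's identity: simple-regression slope b = r * (Sy / Sx).
# Data here are arbitrary, generated only for the demonstration.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(size=100)

r = np.corrcoef(x, y)[0, 1]            # Pearson correlation coefficient
b = np.polyfit(x, y, 1)[0]             # least-squares slope of y on x
sx, sy = np.std(x, ddof=1), np.std(y, ddof=1)

print(np.isclose(b, r * sy / sx))      # the two quantities agree
```

So in the bivariate case the two analyses carry the same information up to scaling, which is Johnson’s point; the anonymous commenter is right only in the narrower sense that regression additionally fits an intercept and models the error in y.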
Analyses misrepresented again {https://www.psychologytoday.com/comment/566683#comment-566683}
Submitted by Anonymous on November 6, 2013 – 11:15pm
“To directly assess the relationship between condition amplitude differences in the P300, two-step hierarchical regressions were calculated.”
I’m frequently a statistical consultant, and you’re embarrassing yourself. The error term is different between regression and correlation…they are, in fact, “two different things”. How on earth are you employed in a psych dept? At least stay away from my students!
I am not sure why you {https://www.psychologytoday.com/comment/566750#comment-566750}
Submitted by John A. Johnson Ph.D. on November 7, 2013 – 9:32am
I am not sure why you provided the quote from the study, “two-step hierarchical regressions were calculated,” when I already acknowledged that the researchers’ analyses included both multiple regression and the computation of Pearson correlations.
As I said, “Yes, the researchers also conducted some multiple regression analyses, but you can see from the above quote that they computed Pearson correlation coefficients.”
The reason that I pulled out the quote, “Pearson’s correlations were calculated . . . ” was because you implied that I and the critic did not read the study. You said, “You can easily tell who actually read the study reviewed…if they say ‘correlation’ they did not know what was done statistically (guy in your link made the same mistake).”
If you want to maintain that regression and correlation are two different things, be my guest. I have no idea who your students are because you are anonymous. Even if I did, I would not bother them. I am not embarrassed about my career as a psychologist; I hope that you find your career satisfying.