Pornography Consumption Effect Scale: Useful or Not?
PCES yields peculiar results measuring self-perceived effects of porn
This post addresses a psychometric tool (questionnaire) known as the Pornography Consumption Effect Scale (PCES). Several studies have employed it, the most well-known of which concluded that "Young Danish adults [18-30] believe that pornography has had primarily a positive effect on various aspects of their lives."
The study only measures "self-perceived" effects of porn. This is like asking a fish what it thinks of water, or like asking someone how her life has been altered by growing up in Minnesota. Indeed, asking young adults about porn's effects is not unlike walking into a bar at 10pm and asking all the patrons how beer is affecting their Friday night. Such an approach doesn't isolate porn's effects. In contrast, comparing users' reports with the reports of non-users or following people who quit porn would do more to reveal porn's actual effects.
On its face, the outcome that young Danes liked porn is not shocking (although upon closer inspection, some of the study's conclusions are highly suspect). The study came out in 2007, and the data was gathered a decade ago, in 2003—before streaming porn videos on tube sites, before wireless was universal, and before smartphones. Reports of severe porn-related symptoms (especially among younger users) have increasingly been surfacing for the last half dozen years. A decade ago, it's quite possible that young Danish adults using porn weren't noticing much in the way of problems. Internet porn could well have been looked upon as a welcome masturbation aid, or at least an innocuous one.
Because the finding that young Danes deemed porn use beneficial seemed plausible for its era, we hadn't bothered to read the entire study or look at the PCES questionnaire—until it was employed in a more recent study. When we actually looked at the PCES we were dumbfounded. It appears to measure little but its creators' enthusiasm for demonstrating that porn use is "positive," and some of its conclusions are beyond belief. Consider the following:
1. First, the study "found that both men and women generally reported small to moderate positive effects of hardcore pornography consumption and little, if any, negative effects of such consumption."
In other words, porn use was always beneficial with few, if any, drawbacks.
2. Further, "After all the variables were entered in the equation, three sexual background variables made statistically significant contributions to the positive effects: Greater pornography consumption, more perceived realism of pornography and higher frequency of masturbation."
- In other words, the more pornography you use, the more real you believe it is, and the more you masturbate to it, the more positive the effects in every area of your life.
- Applying the researchers' conclusions, if you are a 30-year-old shut-in who masturbates to hardcore porn 5 times a day, porn is making a particularly positive contribution to your life.
- By the way, the PCES results actually did not support the statement that perceiving porn as real is beneficial. Quite the contrary, as you can see from the in-depth analysis of the study's data below this post.
3. Most remarkably of all, "The report of overall positive effect of consumption generally was found to be strongly and positively correlated in a linear fashion with amount of hardcore pornography consumption."
- So, the more hardcore porn you consume, the greater its positive effects in your life. Attention 15-year-olds: Watch the most extreme, violent porn you can find so you, too, can experience benefits.
- Notice that the researchers are not even saying there's a bell curve, where too much would be detrimental as compared with moderate use. Their finding is that, "More is always better." Astounding, no?
How could three variables—the more porn you use, the more real you believe it is, and the more you masturbate to it—always be associated with greater benefits?
First, nowhere else in nature does "more is always better" show up. More food, more water, a higher concentration of oxygen, more vitamins, more minerals, more sun, more sleep, more exercise... there comes a point in all things at which more causes negative effects, or even death. So how could this single stimulus be a radical exception? It can't.
Second, if all you have ever known is porn use, you have no idea how it is affecting you until you quit (and usually not for months afterward).
Third, the PCES questions are geared to find that "more is always better."
Applying the PCES questions to life
Put yourself in the position of many young, male porn users of today. You have seen every kind of porn imaginable in high-resolution video, and vanilla genres no longer arouse you. You are also suffering from one or more of these widely reported symptoms: loss of attraction to real potential mates, erectile sluggishness or delayed ejaculation with real partners, escalation to confusing porn tastes, and perhaps even uncharacteristic social anxiety and lack of motivation. But you've never quit using porn for long enough to find out, or even suspect, whether any of those symptoms are related to your porn use.
Given your circumstances, could you end up with anything less than a positive score on the PCES? We don't think so. The maximum score for any question is 7. Of the 47 PCES questions, 27 (the majority) are "positive." This imbalance occurs because the researchers assume that "sexual knowledge" can only be positive; thus, the 7 "extra" sexual knowledge questions have no negative counterparts. This is an interesting assumption, as we've seen many porn users report that they have seen and learned things from porn that they fervently wish they could forget.
In any case, how might the young hypothetical porn user described above score these sample "positive" questions?
14. ____ Has added to your knowledge of anal sex? "Hell yes! =7"
15. ____ Has positively affected your view of the opposite gender? "I guess so. Porn stars are hot. =6"
28. ____ Overall, has been a positive supplement to your sex life? "Yes, I never masturbate without it. =7"
45. ____ Has made you more sexually liberal? "Absolutely. =7"
Here are some of the 20 "negative" questions:
2. ____ Has made you less tolerant towards sex? "Are you kidding? I watch sex for hours every week. =1"
25. ____ Has reduced your quality of life? "I can't imagine life without my porn, so no. =1"
40. ____ Has led to problems in your sex life? "No, I'm a virgin. =1"
46. ____ Generally, has given you performance anxiety when you are sexually active on your own (e.g., during masturbation)? "Are you kidding? 'Course not. =1"
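To make the arithmetic concrete, here is a minimal Python sketch of how such a response pattern scores. The answer values are invented for the hypothetical user above; we assume only the post's description of the scale (47 items answered on a 1-7 range, 27 scored "positive" and 20 scored "negative"):

```python
# Hypothetical PCES-style tally (1 = "not at all", 7 = "to an extremely large extent").
# Item counts follow the post: 27 "positive" items, 20 "negative" items.
# All answer values below are invented for illustration.

positive_answers = [7, 6, 7, 7] + [6] * 23   # e.g., "sexual knowledge" items score near the ceiling
negative_answers = [1, 1, 1, 1] + [2] * 16   # symptoms never linked to porn, so near the floor

pos_mean = sum(positive_answers) / len(positive_answers)
neg_mean = sum(negative_answers) / len(negative_answers)

print(f"positive mean: {pos_mean:.2f}")                 # 6.11, far above the 4.0 midpoint
print(f"negative mean: {neg_mean:.2f}")                 # 1.80, near the 1.0 floor
print(f"'net effect' (pos - neg): {pos_mean - neg_mean:.2f}")
```

Even though this user is a virgin with possible porn-related symptoms, his "positive" mean sits near the ceiling and his "negative" mean near the floor, so the scale can hardly return anything but a porn-positive result.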
The researchers then divided users' answers into five categories: 1) Sex Life, 2) Attitudes Toward Sex, 3) Sexual Knowledge, 4) Perception/Attitudes Toward Women, 5) Life in General. Unlike the Sexual Knowledge category, the other four categories had both "positive" and "negative" questions. For these categories, the researchers reported whether the positive average was higher than the negative average. In fact, they gave us only the differences between "positive" and "negative" question averages for the four categories, without showing us the young Danes' actual averages. In other words, for all we know the responses to some "positive" questions could have been lukewarm, while the associated "negative" scores were so low that the spread between them was still wide. Such a spread gives a false picture that the Danes felt quite positive about porn, when in fact they may not have found porn all that beneficial; they simply didn't see much downside to its use.
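This reporting problem can be illustrated with a short Python sketch. The two response patterns below are invented; we assume only a 4-item, 7-step category scale as described in the study, so category scores range from 4 to 28 with a midpoint of 16:

```python
# Two invented response patterns on a 4-item, 7-step category scale (scores 4-28, midpoint 16).
# Both produce the same positive-minus-negative spread, yet mean very different things.

scenarios = {
    "enthusiastic": {"positive": 24.0, "negative": 18.5},  # genuinely positive view of porn
    "lukewarm":     {"positive": 11.5, "negative": 6.0},   # below midpoint; just sees little downside
}

for name, s in scenarios.items():
    spread = s["positive"] - s["negative"]
    print(f"{name}: positive={s['positive']}, negative={s['negative']}, spread={spread}")
# Both spreads equal 5.5. Reporting only the spread, as the study did,
# cannot distinguish a genuinely positive view from a lukewarm one.
```

A reader shown only the spread of 5.5 cannot tell which pattern produced it, which is why mean differences without the underlying means are uninformative.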
If this is hard to follow, see the explanation below—supplied by a senior professor who frequently peer reviews psychology research. He also points out that, in contradiction to the researchers' theory that men perceive fewer negative effects from porn use than women, men actually reported significantly higher negative effects than women in two areas: Sex Life and Life in General. The researchers don't discuss these findings, which evidently didn't influence their porn-positive conclusions. Yet we find them interesting, because in the intervening years male high-speed porn users have increasingly reported sexual performance problems and other symptoms that make life less enjoyable.
Apart from the technical issues alluded to above, here are some of the conceptual problems that concern us about the PCES:
- Reduced quality of life, damage to relationships, and a nonexistent sex life are on equal footing in the PCES with learning more about sexual practices and holding more liberal attitudes toward sex.
- Many guys have been using porn since puberty (or even before) but have never had real sex. They can't possibly know how it has affected their views of the opposite gender or their sex lives. Compared with what? For these guys, many PCES questions are the equivalent of asking how being your mother's child affected your life.
- Most guys don't fully realize what symptoms were associated with their porn use until months after they stop using it, so even if they are having severe symptoms (delayed ejaculation, erectile dysfunction, morphing sexual tastes, loss of attraction to real partners, severe uncharacteristic anxiety, concentration problems, or depression), few current users would connect such symptoms with Internet porn use—especially given the vague terms the PCES employs: "harm," "quality of life."
In other words, your marriage could be destroyed and you could have chronic ED, but your PCES score can still show that porn has been just great for you. In fact, if you are one of the vanishing species of human who hasn't used Internet porn, your PCES score could easily imply that not using porn is having detrimental effects on your life because you might only know about vanilla sex practices. As one recovering porn user said after viewing the PCES:
"Yeah, I've dropped out of university, developed problems with other addictions, never had a girlfriend, have lost friends, got into debt, still have ED and never had sex in real life. But at least I know about all the porn star acts and am up to speed on all the different positions. So yeah, basically porn has enriched my life no end."
"I know how to insert a dildo in an anus expertly, but my kids are living in another town because of what my ex found on our computer."
Encourage researchers to ask the important questions
Where are the studies asking the most at-risk group (young men) the questions that would reveal the kinds of symptoms they are increasingly reporting today? Questions such as:
- "Can you masturbate to climax without Internet porn?"
- "Have you become less socially active since you began using Internet porn?"
- "Are you still able to climax to Internet porn genres you began with?"
- "Have you escalated to Internet porn genres that you find disturbing?"
- "Have you begun to question your sexual orientation since you began using Internet porn?"
- "When you compare your erections during Internet porn use with your erections with a real partner, do you notice problems with the latter?"
- "When you compare your ability to climax during Internet porn use with your ability to climax with a real partner, do you notice problems with the latter?"
Fortunately, research coming from neuroscientists is revealing the brain changes that accompany sexual conditioning and overconsumption of Internet stimulation. It's becoming apparent that no matter how many artful questionnaires are constructed to persuade the public that Internet porn use is "positive," if users are reporting sexual performance problems, other severe symptoms, and addictions that resolve when they quit porn, such questionnaires are inadequate in important ways. For many of today's high-speed porn users, porn is proving "sex-negative."
The conflict between authorities is a good reminder that "normative" isn't necessarily a guarantee of "normal." It's a very short step from "normative" to the implication that a common behavior is also "normal," or even "healthy." Yet "normal" actually means within the parameters of healthy functioning. No matter how many people engage in a behavior or how much they like it, if it produces pathology, legitimate medical researchers would not label the result "normal." Think smoking in the 1960s. Today, urologists are reporting surprising numbers of young guys with ED, a pathology that many healthcare providers and ex-porn users are connecting with overconsumption of Internet porn.
Anyone interested in pornography's effects would be wise to read beyond headlines and conclusions based on PCES questionnaire results. Analyze the entire study. Did the researchers ask questions that would have uncovered the severe symptoms some of today's porn users are reporting? Did they compare users to former users, so as to see the effects of removing the porn-use variable? Did they ask only questions that would elicit, for example, porn-positive data? Was the evidence gathered and analyzed responsibly? Did the researchers screen their subjects for addiction, using a test such as the new s-IAT (short-form Internet Addiction Test) developed by this German team?
Just because you like it doesn't make it good for you
Above all, be skeptical of porn studies based on self-perceived effects. These can tell us nothing about porn's actual positive and negative outcomes, yet they make scientific-sounding, reassuring headlines, which heavy porn users often rely on to rationalize continued use despite warning signs and symptoms. See, for example, the more recent "Self-Appraisals of Arousal-Oriented Online Sexual Activities in University and Community Samples." It employed a shortened version of the PCES and, not surprisingly, found that participants reported greater positive than negative outcomes from their porn use.
The danger of such studies is that they subtly promote the mistaken belief that "If I like porn enough, it's having a positive effect on me." This is on a par with creating a study that reassures kids that if they like sugar-coated cereal enough it's good for them.
"The study is a psychometric nightmare"
A senior professor at a major university, who frequently peer reviews psychology research, heightened our concerns about the PCES methodology:
A major problem with this study is that the researchers decided they could create "positive" and "negative" effect scales in a priori fashion simply based on the wording of the items. This led them to conduct factor analyses at the level of their pre-determined positive and negative scales rather than at the level of the individual items. Had they done an item-level factor analysis, they might have found that items addressing the same area (sex life, life in general, etc.) all loaded on the same factor rather than on separate positive and negative factors. If that result had been obtained, it would mean the items are assessing a continuum of negativity-positivity rather than separate positive and negative effects. And if that were the result, it would be impossible to interpret whether the mean score truly indicated more positivity than negativity.
Just because a mean score is above the mid-point (e.g. > 32 on an 8-item, 7-step Likert scale where scores can vary from 8 to 56), this does not mean that the score indicates a genuinely positive effect. Self-reports can't be accepted at face value this way. If they could, and we asked a group of people to rate their own intelligence, we would find that people are generally above average in intelligence. The researchers seem to be aware of this problem, as they discuss the issue of first- versus third-person perceptions of media influence in the introduction of the article. Then they go ahead and take self-perceptions and self-reports at face value.
... Using t-tests to compare the means is problematic. Indeed, you can compute t-tests and get results such as those reported in Table 4. But that doesn't mean that the results make sense. For example, take the 1.15-point difference in mean scores for Life in General for males. The researchers do not report actual means, only mean differences, so let me make up some means. Let's say the sample had a mean score of 24.15 on the positive Life in General scale and 23.00 on the negative Life in General scale (both are 4-item, 7-step Likert scales, so scores can vary from 4 to 28). For this to be a sensible difference, a score of 23 or 24 or whatever on one scale would have to represent the same degree of magnitude on the other scale. But we do not know that, for the same reasons that a score above the midpoint cannot be assumed to be "above average." Furthermore, we do not know if the means were 24.15 versus 23.00 or something like 6.15 versus 5.00, which would surely merit a different interpretation.
In short, if I had been a reviewer on this manuscript, I would have probably rejected it on the basis of inadequate statistical methodology as well as various conceptual problems. ... It is impossible, given the nature of the data, to draw firm conclusions.
[We asked a few follow-up questions]
First, the researchers created a Sexual Knowledge scale as one of their components of the "positive effects dimension" because they assumed that more sexual knowledge is always a good thing. Unlike the other four components of positive effects, there is no corresponding negative version of Sexual Knowledge. As far as I can tell, the only analysis where they left out the Sexual Knowledge scale was when they conducted t-tests between the positive and negative versions of each construct (Table 4). This was out of necessity—there was no negative Sexual Knowledge to compare with positive Sexual Knowledge.
You didn't ask, but I can't help but comment on this Sexual Knowledge scale. Obviously, high scores on the scale reflect only participants' perceptions of obtaining knowledge, which is no guarantee that these perceptions represent accurate knowledge. Good luck to the guy who thinks he has learned what women like by watching pornography. Second, although personally I think that having knowledge is almost always a more positive thing than not having knowledge, who knows whether or not there should be a negative analog to the positive Sexual Knowledge scale? I can even imagine some items, e.g., "I saw some things I wished I had not seen." "I learned some things I wish I hadn't." The researchers made a lot of assumptions about what is "positive," probably based on Danish culture (e.g., being experimental, being sexually liberal).
Concerning your question about scale validity, this is a fundamental concept in psychological measurement, but one that even many professionals have failed to grasp. To say that the PCES was validated by the Hald-Malamuth study is absolutely fatuous. One cannot test the validity of a psychological measure with a single study. Assessing the validity of a psychological measure requires years of programmatic research involving multiple investigations. It is actually a never-ending process, where we learn more and more about a measure's validity, but never establish a final figure for the validity of a psychological test (like "the test is 90% valid").
The definitive explanation of psychological test validation is a 1955 article by Lee Cronbach and Paul Meehl. Read and understand it and you will know more about psychological test validity than most psychologists: http://psychclassics.yorku.ca/Cronbach/construct.htm.
Here's a short summary of the Cronbach-Meehl classic: To say that a measure of a psychological construct possesses validity is to say that differences in scores on the measure correspond to other measurements in a manner predicted by the theory underlying the construct. We therefore assess the validity of a psychological test by administering it to groups of people, gathering other information our theory says is relevant to the construct allegedly represented by the test, and examining whether the scores on the test correspond to the other information as predicted by the theory. Results of validation are usually mixed, with some supporting and some disconfirming findings, which is one reason why we can't establish for all time exactly how valid a test is. It is a matter of the preponderance of confirming versus disconfirming evidence. Even when results are negative, we cannot say for sure whether the psychological test lacks validity or whether there is something wrong with the theory that made the prediction. Test validation is theory-testing as understood generally in science.
In the Hald-Malamuth study, there was actually very little test validation, despite a long section with the heading "Validation of the Pornography Consumption Questionnaire (PCQ)." According to Hald and Malamuth's informal theory of positive and negative effects from pornography, there are different kinds of perceived positive and negative effects, and the different types of positive effects should intercorrelate with each other, as should the different kinds of negative effects. Tables 1 and 2 present results that confirm this prediction, so this can be regarded as some support for the validity of the PCQ. The researchers also claimed that the positive and negative effects are absolutely independent of one another (meaning they should correlate zero), but they do not report correlations between the five positive effects scales and four negative effects scales in Tables 1 and 2. I suspect they are hiding disconfirming information. They do report that the sum of all positive PCQ scales correlates only r = .07 with the sum of all negative PCQ scales, but I wonder why they withheld information on the correlations among the different five kinds of positive effects and four kinds of negative effects.
Hald and Malamuth report, as they should, reliability estimates for their scales, and these numbers are all excellent. But reliability is not validity. A scale can be perfectly reliable but still not have good validity. Reliability and validity are both essential properties of psychological tests, but they are two entirely different things.
Hald and Malamuth then report tests of three hypotheses that are relevant to their theory of perceived positive and negative effects of pornography and therefore have some bearing on the validity of the PCQ. Their first hypothesis is that perceived positive effects are greater than perceived negative effects. I stand by what I wrote previously about these analyses, reported in Table 4: it was inappropriate for the researchers to conduct t-tests comparing the means of each positive effect with the means of the corresponding negative effect, because we cannot assume that a mean of "3" on a positive effect scale has the same meaning as a "3" on the corresponding negative effect scale. Perhaps the participants were more willing to report positive than negative effects because pornography is condoned in Denmark. So maybe a "3" on a negative effects scale is more like a "4" on a positive effects scale. We just do not know, and there is no way to know from the way the data were gathered. So the results reported in Table 4 must be taken with a very large grain of salt, maybe an entire salt shaker.
I noticed the authors played a funny trick in Table 4, comparing the positive and negative effects. Instead of reporting means for both the positive and negative scales (as they do for sex differences in Table 5), they report only mean differences. For example, the mean difference between overall positive and negative effects for men is 1.54. You have to go to Table 5 to see that this 1.54 is the difference between 2.84 for overall positive effect for men and 1.30 for overall negative effect in men. Sure, the difference of 1.54 is statistically significant and substantial according to Cohen's d (but only if we assume that a positive scale 3 = a negative scale 3). However, let's look at the absolute value of the positive effect score, 2.84 on a 1-7 scale. Since 4 is the mid-point, halfway between 1 (not at all) and 7 (to an extremely large extent), 2.84 is not very positive in an absolute sense.
The researchers' second hypothesis was that men would report more positive and fewer negative effects than women. The results supported the prediction about men reporting more positive effects. However, in contradiction to their theory, men also reported significantly higher negative effects [than women] in two areas: sex life and life in general. Either there is a problem with the validity of their scales or with their theory that men perceive fewer negative effects than women. What do you think?
Finally, the researchers reasonably hypothesize that background factors might be related to perceived effects of pornography, and some of these factors did correlate as predicted. The largest correlation for positive effects is with pornography consumption, r = .51. The heaviest users tend to report the most positive effects. As the researchers themselves acknowledge, this correlational finding can't tell us the degree to which consuming more pornography actually creates positive effects versus heavy consumption leading to rationalizing and wanting to believe in positive effects. For the record, although the researchers do not discuss this, Table 6 also shows a positive correlation between consumption and negative effects, r = .10. It is smaller, but statistically significant.
One thing the researchers got completely wrong (backwards, in fact) is the relation between degree of realism in pornography and positive effects. Table 6 shows that it is a negative relation (r = -.25), and this is confirmed by a negative beta weight (β = -.22) in the regression analysis in Table 7. The negative correlation means that the more realistic the porn, the less positive the perceived effect. But the authors of the article go on and on describing the opposite (wrong) interpretation, that realism is related to positive effects. Whoops!
I hope these comments are helpful. I'd be happy to respond to any more questions you have. (Emphasis added)