Wednesday, January 14, 2015

1 in 5 and Lying with Statistics

I think at this point it's fair to say that almost everyone has heard the statistic, "1 in 5 women will be sexually assaulted before graduating from college." So, I was intrigued when I stumbled upon this article from the Washington Examiner that claims that the statistic has been debunked. Really? Interesting! Let's check it out!

I followed the links in that article, which took me to a very official-looking special report from the United States Department of Justice. According to the article from the Washington Examiner, "The survey [analyzed by the Department of Justice] found that between 1995 and 2013, an average of 6.1 for every 1,000 female students were raped or sexually assaulted each year. That's about 0.61 percent annually, or (at most) 2.44 percent over the average four-year period (one in 41). That's way smaller than 20 percent." Naturally, as an evaluator-in-training and a critical thinker, I wondered what those figures were based on, so I looked through the report.
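(For the curious, here is a quick back-of-the-envelope check of the Examiner's arithmetic. The 6.1-per-1,000 figure comes straight from the quote above; the "at most" four-year number is just four times the annual rate, which is why it comes out to exactly 2.44 percent, or about one in 41. Treating each year as an independent risk gives nearly the same answer, so the arithmetic itself isn't the problem; the question is whether the underlying 0.61 percent is measuring what it claims to measure.)

```python
# Back-of-the-envelope check of the Examiner's arithmetic.
# The 6.1-per-1,000 annual figure comes from the quote above;
# everything below is just arithmetic on that number.

annual_rate = 6.1 / 1000                  # 6.1 per 1,000 female students per year
print(f"annual rate: {annual_rate:.2%}")  # ~0.61%

# The Examiner's "at most" four-year figure is the simple 4x upper bound.
naive_four_year = 4 * annual_rate
print(f"four-year (4x upper bound): {naive_four_year:.2%}, or about 1 in {1 / naive_four_year:.0f}")

# Treating each year as an independent 0.61% risk gives nearly the same number.
compounded_four_year = 1 - (1 - annual_rate) ** 4
print(f"four-year (compounded): {compounded_four_year:.2%}")
```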

First off, I'd like to note a few things that most feminists like to stress about rape and sexual assault. For one, these are deeply personal incidents. People who have gone through any type of trauma often need some degree of internal healing before they can talk to strangers about what they have gone through, yet we have this idea that rape victims will immediately go running to the warm, understanding arms of the police for help and comfort. Police are generally not perceived as warm and comforting; even if an officer is the best human being in the world, he or she is still most likely a stranger, and he or she must ask a ton of probing questions that force the survivor to relive an extremely uncomfortable experience that he or she is likely still sorting through internally. And in the end, that officer can and often does decide that a rape or assault did not (or probably did not) occur, or the case is thrown out due to lack of evidence, or a thousand other things can happen that lead to inaction or underreporting.

To make things worse, victim-blaming is a very real issue when it comes to dealing with the legal world of rape and sexual assault. There are steps that would-be victims are expected to take to make themselves as unvictimizable as possible, and even if they take all of them--not drinking, dressing conservatively, not flirting, not dancing, not walking alone, not going out at night, etc.--most people can still think of more precautions that "should" have been taken. Most people are already pretty good at blaming themselves for the things that go wrong in their lives, so this is upsetting and counterproductive for survivors to hear, and it contributes to people not wanting to report their experiences to the authorities.

There are so many other nuances--especially about the experience of surviving sexual assault in a college or university setting--that I would love to bring up here, but I want to bring this back to the Department of Justice report before this becomes a novel.

When one is collecting data about a sensitive, nuanced, highly personal incident in a large population from strangers, it's important to look at the survey instrument being used. For example, what were the questions like? How were they worded, and how were the surveys administered? What populations were covered?

A lot of survivors of sexual assault are reluctant to use words like "rape" or "sexual assault" because those words sound so black and white, and individual experiences rarely feel that way. When the same acts are described in more concrete terms, though, survivors will agree that they were coerced or forced into them. (While it may be easy as a reader to make the jump that being forced or coerced into sex acts is by definition sexual assault, try to keep in mind that most rapes and sexual assaults are committed by perpetrators who know their victims. Most people do not consider the people they know to be rapists or sex offenders, so it really is hard to put that label on the act.)

The Department of Justice report explains that the surveys preface their questions by briefly acknowledging that it can be hard to talk about rape and sexual assault. Then, they immediately dive in and ask, "Have you been forced or coerced to engage in unwanted sexual activity by a) someone you didn't know before, b) a casual acquaintance, or c) someone you know well?" This question is followed by others designed to capture details about the incident(s), "including the type of injury, presence of a weapon, offender characteristics, and reporting to police."

Please note, this survey is administered in person and over the phone, so the respondent is answering these questions while either staring a stranger in the face or speaking to one over the phone. Realistically, how many people are going to discuss a deeply personal experience with a stranger? A stranger who works for the government? (But don't worry guys, they got a 74% overall response rate, so at least they talked to a lot of people, even if no one wanted to actually open up about their experiences.)

To their credit, the authors of the report compared the survey that they used, which is called the National Crime Victimization Survey (NCVS), to two other widely used survey instruments: the National Intimate Partner and Sexual Violence Survey (NISVS) and the Campus Sexual Assault Study (CSA). They acknowledge that the other two instruments obtained substantially higher estimates of victimization rates than the NCVS, but they justified their use of the NCVS by saying that:

  1. Responses were more consistent among groups and subgroups with NCVS than with the other survey instruments.
  2. NCVS is easier to administer in a wider variety of contexts, allowing for easy comparison across groups and time.

Okay, responses across groups and subgroups were more consistent with NCVS...but should they have been? Maybe there are simply large discrepancies between the victimization rates of certain groups and subgroups. I'm not convinced that this is a good reason to choose this instrument. I do think that it is important to choose an instrument that allows for comparison across groups and time, but I am not convinced that the other instruments would not allow for similarly effective comparison. Furthermore, I don't think that a survey that requires people to talk face-to-face to strangers on the street about sexual assault is going to give effective comparisons in any world.
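(To illustrate why "more consistent" is not the same thing as "more accurate," here is a tiny, purely hypothetical simulation. The rates and disclosure percentages below are made up for illustration; they are not from the DOJ report, the NCVS, the NISVS, or the CSA. The point is just that an instrument that only captures a small fraction of incidents can produce numbers that look very uniform across subgroups while underestimating every single one of them.)

```python
import random

random.seed(0)

# Purely hypothetical illustration -- none of these numbers come from the
# DOJ report or any real survey. Assumed "true" annual rates for three
# subgroups, and assumed shares of actual victims who would disclose to
# each kind of instrument.
true_rates = {"group A": 0.04, "group B": 0.07, "group C": 0.10}
disclosure = {
    "crime-framed, face-to-face": 0.15,
    "behaviorally specific, self-administered": 0.70,
}

n = 20_000  # simulated respondents per subgroup

for instrument, d in disclosure.items():
    estimates = []
    for group, p in true_rates.items():
        # A respondent is a victim with probability p and discloses with
        # probability d, so the survey only "sees" about p * d on average.
        seen = sum(1 for _ in range(n) if random.random() < p and random.random() < d)
        estimates.append(seen / n)
    spread = max(estimates) - min(estimates)
    print(f"{instrument}: estimates {[round(e, 3) for e in estimates]}, spread {spread:.3f}")
```

In this toy example, the crime-framed instrument's estimates cluster in a narrow band, which reads as "consistency across subgroups," while the behaviorally specific instrument shows larger differences that actually track the assumed underlying rates. Consistency, in other words, tells you about reliability, not validity.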

Another thing that I found interesting in the comparison of the NCVS to the other instruments was the orientation of the instrument versus the purpose of the report. The authors note that NCVS is presented as a survey primarily about criminal behavior, and responses about sexual victimization often exclude those not seen as criminal. Conversely, the NISVS and CSA are more behaviorally oriented; they give exhaustive lists of behaviors and acts, and they discuss the absence or presence of consent or the capacity to give consent (for example, if one is under the influence of alcohol or drugs, one is not considered able to give consent). While a survivor of sexual assault may not see the perpetrator as a criminal and thus may respond negatively to questions on the NCVS (or may not want to discuss their experiences with a stranger on the street), the more specific questions on the other two instruments may resonate more with that same individual. Moreover, the CSA and NISVS are self-administered and phone only, respectively; neither requires a face-to-face conversation.

So, to sum up: Interesting report, U.S. Department of Justice. I am not at all convinced by your choice of survey instrument, and I think that your statistics are not representative of what you are trying to report on. If you are looking to gather data about rates of sexual assault, you need language that captures that, collected in a way that doesn't scare people off. Please try again. Thank you.

2 comments:

  1. Very well written - can't wait to read some of your program evaluations 😊

  2. Well said :)

    The most frustrating bit for me was how the DOJ justified using that tool because it was most consistent. A bad tool will often produce that same inaccurate result over and over, leaving you with really consistent and wildly unusable data. It's akin to answering 21 for every math problem ever.
