
Just How Squishy is Social Science?

I find psychology and sociology fascinating, and part of that fascination lies in just how full of holes these fields are. A large part of understanding any area of knowledge is understanding its limits. Whether or not a particular field qualifies as scientific doesn't determine whether it's valuable. But it may mean we have to be careful about how we regard and interpret findings within these fields.

Is psychology a science? Is sociology? For that matter, is economics? In universities, graduates in these fields are typically awarded Bachelor of Science degrees. When people talk about their relative veracity, however, or their repeatability, they're generally referred to as the "squishy" sciences. That's when they're referred to as sciences at all.


Dr. Lee Jussim is the chair of the Psychology Department at Rutgers and a field expert in the accuracy of social perceptions. He is also a blogger for Psychology Today. His blog is called the Rabble Rouser – a name that suggests a welcome spring of iconoclasm and skepticism.

Yesterday he published A Scientific Critique of (Mostly) Psychological Science. It's a survey of seven articles (one of them his own) that call for increased skepticism of scientific research results, especially in the social sciences. Jussim's own article, Nonscientific Influences on Social Psychology, lists some of the malevolent influences that tend to cloud scientific publication:

Fads, politics, self-interest, self-serving self-promotion, story-telling, and certain dysfunctions in our norms for everything from methods to statistics to publication processes to middle school-style popularity contests all undermine the quality, validity, generalizability, and replicability of some areas of social psychological research and conclusions.

Landscape of a Scientific Hell…

Another article that he cites, The Nine Circles of Scientific Hell, speaks more in depth about the "scientific sins" perpetrated by well-meaning researchers under enormous pressure to publish significant findings. This is a paraphrase of some of them:

  • Exaggerating importance of findings (particularly the dramatic and irreplicable)
  • Hypothesizing after the result (Texas sharpshooter fallacy)
  • "P-value" fishing (trying every experiment design, sample size, and statistical test until one gives you something under 0.05)
  • Excluding inconvenient outliers
  • Plagiarism
  • Inventing data and/or withholding all or part of it from publication

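The "p-value fishing" sin lends itself to a quick illustration. The toy Monte Carlo below (my own sketch, with made-up parameters – it isn't from any of the cited papers) simulates researchers studying an effect that doesn't exist, each allowed ten independent looks at pure noise. Under the null hypothesis a p-value is uniform on [0, 1], so the chance that at least one look dips under 0.05 is roughly 1 − 0.95¹⁰ ≈ 40%, not 5%:

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

def p_hack_rate(n_researchers=2000, n_tests=10, alpha=0.05):
    """Fraction of researchers who find a 'significant' result
    when no real effect exists, given n_tests tries each."""
    hits = 0
    for _ in range(n_researchers):
        for _ in range(n_tests):
            # Under the null, each p-value is uniform on [0, 1]
            if random.random() < alpha:
                hits += 1
                break  # stop at the first "publishable" result
    return hits / n_researchers
```

Running `p_hack_rate()` lands near 0.40 – eight times the nominal 5% false-positive rate, purely from trying ten designs instead of one.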
These are factors that could affect any scientific field. But the "squishy" sciences are, by their very nature, at an increased disadvantage. They already suffer from a lack of concrete measurability compared with fields like medicine. Many studies have to rely on surveys and self-reporting, which are much less reliable than, say, measuring the presence of a certain chemical with a saliva swab.

Psychology, sociology, and economics also have an inherent lack of repeatability. It's harder in these fields to rerun an experiment that replicates the exact circumstances of the one before. These fields instead tend to speak of "trends" and "tendencies". They also tend to foster certain factions – "schools" of belief – and to operate from hypotheses designed to validate the beliefs of the faction rather than to seek truth. Most research on meditation, for example, is conducted by institutes that promote the benefits of – wait for it – meditation.

Some of the nonscientific influences cited in Jussim's article are particularly influential in the squishy sciences:

1) Politics, particularly a liberal bias in social psychology

I’m not parroting Fox News here. There are a disproportionate number of political liberals in the field of social psychology, and the respected researcher Jonathan Haidt points out that this may result in a prejudice regarding findings.

One real-life example is the mounting research on the behavioral difficulties associated with households where the father is absent. It is very difficult to acknowledge this research without further stigmatizing single mothers – a political rather than a scientific consideration.

2) Questionable research practices, especially from the wide latitude allowed in experiment design

As stated earlier, social science researchers can't always take medical-style measurements. It's not always obvious how to design an experiment to measure an effect, so these fields give wide latitude to acceptable experiment design. This gives rise to false positives from, for example, too small a sample size or surveys that don't really measure what they're designed to measure.

3) Fads can lead to periods of reduced scientific credibility

There's a big push right now towards neuroscience, mostly because of the drama of seeing parts of the brain light up on an fMRI machine. Neuropsychology therefore receives a disproportionate amount of attention, money, and brains. But the field is in its infancy and doesn't yet tell us as much as we think it does. Much of its published research is suspect because of small sample sizes, which reduce what's called the statistical power of an experiment. Many of these experiments fail on attempted replication.
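The statistical-power point can be made concrete with a small simulation. The sketch below (my own illustration with hypothetical numbers, not drawn from any of the cited studies) estimates how often a simple two-group comparison detects a modest, genuinely real effect of half a standard deviation:

```python
import random
import statistics

random.seed(1)  # fixed seed so results are repeatable

def power_estimate(n, effect=0.5, trials=2000):
    """Crude Monte Carlo estimate of statistical power: draw
    control and treatment groups of size n and count how often
    a two-sided z-test at alpha = 0.05 flags the difference."""
    hits = 0
    for _ in range(trials):
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treated = [random.gauss(effect, 1.0) for _ in range(n)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = (2.0 / n) ** 0.5  # standard error of the difference (sigma = 1)
        if abs(diff / se) > 1.96:  # critical value for alpha = 0.05
            hits += 1
    return hits / trials
```

With 16 subjects per group the effect is detected only about 30% of the time; with 64 per group, detection rises to roughly 80%. An underpowered study that does report a hit has likely caught a fluke – or an exaggerated version of the true effect – which is one reason so many dramatic findings shrink or vanish on replication.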

Jussim’s blog is fascinating. Now that you’ve read this, go check it out.

