
Landmark Study Assesses Reproducibility in Social and Behavioural Science Research


via Text Feed

A groundbreaking US study evaluating thousands of social science research papers reveals critical insights into the reproducibility crisis and the overall credibility of findings in the field.

Photograph: Kind courtesy ThisIsEngineering/Pexels.com

Key Points

A seven-year US study found that approximately half of the social science research papers examined were precisely reproducible.


The study highlights a reproducibility crisis in science, where many scientists struggle to replicate results from published studies.

Coding mistakes, transcription errors, and faulty record-keeping can contribute to irreproducible outcomes in research.

Analytical robustness is challenged as different justifiable analyses of the same data can yield varying results.

Replication attempts, involving redoing experiments with fresh data, showed statistically significant results in the original pattern for 55% of claims.

A seven-year-long project in the US that analysed 3,900 claims from research papers in the social sciences has revealed that about half the papers examined for reproducibility were precisely reproducible: they yielded the same result when the same analytical method was applied to the same data.

The findings help provide a picture of scientific credibility in the social and behavioural sciences. A random selection of 600 papers published between 2009 and 2018 in 62 journals spanning the social and behavioural sciences was analysed for reproducibility, explained researchers including those from the US-based Center for Open Science in Charlottesville.

The reproducibility crisis refers to surveys finding that about 60-70 per cent of scientists cannot reproduce results from their own or others' experiments described in peer-reviewed, journal-published studies, especially in economics, political science, cognitive science and psychology, among other fields.

"We assessed 143 out of the 182 available datasets and found that 76.6 papers (53.6 per cent) were rated as precisely reproducible and 105.0 (73.5 per cent) were rated as at least approximately reproducible," the authors wrote.
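The fractional paper counts (76.6, 105.0) presumably arise from averaging multiple reproducibility ratings per paper; the article does not explain them. As a quick sanity check, the quoted percentages follow from the quoted counts, as this minimal sketch shows:

```python
# Sanity check of the reproducibility rates quoted above.
# Counts are as reported; the fractional paper counts are assumed
# to come from averaging several ratings per paper (an assumption).
assessed = 143  # datasets assessed, out of 182 available

precisely_reproducible = 76.6
approximately_reproducible = 105.0

precise_rate = 100 * precisely_reproducible / assessed
approx_rate = 100 * approximately_reproducible / assessed

print(f"precisely reproducible: {precise_rate:.1f}%")               # ≈ 53.6%
print(f"at least approximately reproducible: {approx_rate:.1f}%")   # ≈ 73.4%, matching the
                                                                    # reported 73.5% to within rounding
```

The small gap on the second figure (73.4 vs 73.5 per cent) suggests the paper rounded its counts before printing them.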

"Irreproducible outcomes can occur due to coding mistakes, transcription errors or faulty record-keeping, many of which are unintentional and all of which are unwelcome," they said in one of a series of papers publishing findings from the US SCORE programme in the journal Nature.

The Systematizing Confidence in Open Research and Evidence (SCORE) project is run by the Center for Open Science, a non-profit organisation based in Charlottesville, Virginia.

More than 850 researchers contributed towards evaluating 3,900 claims from social and behavioural sciences papers published between 2009 and 2018, with findings summarised across nine papers, according to the Center for Open Science website.

Results from SCORE provide important insights into the current state of scientific credibility in the social and behavioural sciences, it says.

Analytical Robustness and Alternative Analyses

Another study examined 100 papers for analytical robustness. The same dataset can be analysed in different justifiable ways to answer the same research question, potentially challenging the robustness of empirical science, researchers explained.

For one claim per study, at least five experts independently re-analysed the original data, they said.

Thirty-four per cent of the independent reanalyses yielded the same result as originally reported, indicating that the single-path analyses common in social and behavioural research should not be assumed to be robust to alternative analyses, the authors said.

They recommended using practices that explore and communicate this neglected source of uncertainty.

Replication Attempts and Findings

A third study replicated 274 claims from 164 papers across 54 journals, redoing the experiments to collect fresh data.

A replication attempt involves testing the same research question as a previous investigation with independent evidence, researchers explained.

Replication helps discover regularities in nature — a central aim of science, they said.

They found that for 55 per cent of the claims (151 of 274) and 49 per cent of the papers (80.8 of 164), replications showed a statistically significant result in the original pattern.
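As with the reproducibility figures, the replication rates follow directly from the quoted counts (the fractional paper count of 80.8 again presumably reflects averaging). A minimal arithmetic check:

```python
# Sanity check of the replication rates quoted above.
claims_total, claims_replicated = 274, 151
papers_total, papers_replicated = 164, 80.8  # fractional count as reported

claim_rate = 100 * claims_replicated / claims_total
paper_rate = 100 * papers_replicated / papers_total

print(f"claims replicating in the original pattern: {claim_rate:.0f}%")  # 55%
print(f"papers with a replicating claim: {paper_rate:.0f}%")             # 49%
```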

The authors observe that challenges for replicability extend across social-behavioural sciences, illustrating the importance of identifying conditions that promote or inhibit replicability.

