
The Institute for Ascertaining Scientific Consensus (IASC)

IASC logo (June 2023)

Long-term, 2024-30 and beyond

We aim to set up a hub-and-spoke, international network provisionally titled ‘The Institute for Ascertaining Scientific Consensus’ (IASC). IASC will be capable of emailing >100,000 scientists, asking for an agree/disagree response regarding a specific statement of interest, a candidate scientific fact such as ‘COVID-19 is caused by a virus’. Emails will be sent internally within each participating institution, by a spoke representative, and the time-demand to read the email and respond will be less than two minutes. In this way the response rate will be far higher than in the usual scientific opinion surveys. The responses will be instantly and anonymously recorded in a database, and the strength of consensus calculated. The network will be humanity’s premier means of measuring the strength of scientific consensus regarding a specific statement of interest. It will thus stand as a useful tool for policymakers, especially given the proven[1] ability of consensus announcements to influence opinions and actions. It will also serve to inform laypersons, fighting against ‘fake news’ and misinformation. In other cases, it will serve to illuminate where experts in different countries, or different parts of the world, see things differently.


Pilot project, 2022-23

During 2022-23 a miniature version of IASC is operating out of Durham University, UK, led by Professor Peter Vickers. The hub-and-spoke network currently in place (December 2022) looks as follows:

  1. Durham, UK (Peter Vickers) - HUB
  2. Oxford, UK (Neil Levy)
  3. Cambridge, UK (Jacob Stegenga)
  4. Exeter, UK (Stephan Guttinger)
  5. Birmingham, UK (Henry Taylor)
  6. Leeds, UK (Simon Graf and Ludovica Adamo)
  7. UCL, UK (Siabhainn Russell, Jaspreet Jagdev, and Dolores Iorizzo)
  8. Edinburgh, UK (Nathalie Dupin and Kaja Horn)
  9. Lancaster, UK (Sam Fellowes)
  10. Iceland (Finnur Dellsen)
  11. Linköping University, Sweden (Harald Wiltsche)
  12. Stockholm, Sweden (Henning Strandin)
  13. Uppsala, Sweden (Rebecca Wallbank)
  14. Amsterdam (Hanneke Poorta)
  15. Purdue, USA (Dana Tulodziecki)
  16. Nebraska Omaha, USA (Haixin Dang)
  17. UCI, USA (Kyle Stanford)
  18. UCSD, USA (Gabriel Nyberg)
  19. Pittsburgh, USA (Edouard Machery and Laura Gradowski)
  20. Bucknell, USA (Matthew Slater)
  21. UPenn, USA (Cory Clark)
  22. Michael O’Rourke (Professor of Philosophy, Michigan State, USA)
  23. Toronto, Canada (Andrew Doppenberg, Alice Huang, and Mark Hallap)
  24. Conicet, Buenos Aires, Argentina (Eleonora Cresto)
  25. UNAM, Mexico City (Aline Guevara and Gabriela Frias)
  26. NYCU, Taiwan (Mike T. Stuart)
  27. University of Hyderabad, India (Shinod N. K.)
  28. University of Johannesburg, South Africa (Sean Muller)
  29. University of Pretoria, South Africa (Emma Ruttkamp-Bloem)
  30. Macquarie, Australia (Mark Alfano)

During the pilot project, the method will be fully tested twice: the first statement to be tested will be a rather uncontentious statement, so that we can set a baseline for a strong consensus, and iron out any teething problems with the methodology and ICT architecture. The current front-runner is:

  1. Science has put it beyond reasonable doubt that COVID-19 is caused by a virus.

The second statement to be tested is yet to be decided. Amongst the huge variety of options are the following:

  • Science has put it beyond reasonable doubt that depression can often be effectively treated with SSRIs such as fluoxetine.
  • Science has put it beyond reasonable doubt that global warming is ‘anthropogenic’, in the sense that human activity past-and-present (especially burning of fossil fuels) is the dominant causal factor.


The project has two advisory boards. The first consists of a group of experts who will periodically advise on overall project strategy:

Advisory Board (project strategy)

  1. Steve Lewandowsky (Chair in Cognitive Psychology, Bristol, UK)
  2. Cory Clark (Behavioral Scientist, University of Pennsylvania, USA)
  3. Henry Taylor (Philosopher of Psychology, Birmingham, UK)
  4. Angela Woods (Director of the Institute for Medical Humanities, Durham, UK)
  5. John N Parker (Professor of Sociology and Human Geography, Oslo, Norway)
  6. Sanford Goldberg (Chester D. Tripp Professor in the Humanities, Northwestern, USA)
  7. Michael O’Rourke (Professor of Philosophy, Michigan State, USA)
  8. Peter Heslin (Professor and Data Science Specialist, Durham, UK)
  9. Alan Real (Director of Advanced Research Computing, Durham, UK)
  10. Mariann Hardey (Professor, Durham University Business School, UK)
  11. Mike Bath (Research Commercialisation and Exploitation, Durham University, UK)
  12. Kieran Fernandes (Associate Dean for Internationalisation, Durham University Business School, UK)
  13. Tarun Menon (Azim Premji University, Bangalore, India)
  14. Wei Wang (Tsinghua University, Beijing, China)
  15. Dave Sweet (MD, FRCP(C); ‘Clarity’ Co-President and Co-Founder)
  16. Tom Lamb (JD; Entrepreneur; ‘Clarity’ Co-President and Co-Founder)

The second advisory board consists of scientists from a range of fields, who will participate in a test run prior to any large-scale international rollout:


Advisory Board (scientists)

  1. Jim Al-Khalili (Professor, Theoretical Physics, Surrey, UK) 
  2. Alice Roberts (Professor of Public Engagement in Science, University of Birmingham, UK) 
  3. Steve Brusatte (Chair of Palaeontology and Evolution, Edinburgh, UK) 
  4. Sean McMahon (Chancellor's Fellow in Astrobiology, Edinburgh, UK) 
  5. Erik Svensson (Professor, Evolutionary Ecology, Lund, Sweden) 
  6. Axel Maas (Professor, Theoretical Particle Physics, Graz, Austria) 
  7. Casimir Ludwig (Reader, Experimental Psychology, Bristol, UK) 
  8. Anantha Murthy Sharath (Professor of Physics, Hyderabad, India) 
  9. Joby Joseph (Center for Neural and Cognitive Sciences, Hyderabad, India) 
  10. Þröstur Þorsteinsson (Professor of Geophysics, University of Iceland, Iceland)  
  11. Luke Drury (Professor of Astrophysics, Dublin Institute for Advanced Studies, Ireland)  
  12. Avi Loeb (Frank B. Baird, Jr., Professor of Science, Harvard University, USA) 
  13. Lars-Olof Pålsson (Associate Professor, Department of Chemistry, Durham, UK) 
  14. Aakash Basu (Assistant Professor, Department of Biosciences, Durham, UK) 
  15. Beth Bromley (Professor, Physics Department, Durham, UK) 
  16. Madeleine Humphreys (Associate Professor in Earth Sciences, Durham, UK) 
  17. Arto Maatta (Associate Professor, Department of Biosciences, Durham, UK) 
  18. Dorothy Cowie (Associate Professor, Psychology, Durham, UK) 
  19. Bob Kentridge (Professor, Psychology, Durham, UK) 
  20. Ravi Kumar Kopparapu (Astrobiology, NASA Goddard GSFC, USA) 
  21. Anil Seth (Professor of neuroscience, University of Sussex, UK) 
  22. Marc Knight (Professor, Department of Biosciences, Durham, UK) 
  23. Ehmke Pohl (Professor, Department of Chemistry, Durham, UK) 
  24. David Dryden (Assistant Professor, Department of Biosciences, Durham, UK) 
  25. James Baldini (Professor, Earth Sciences, Durham, UK) 
  26. Soazig Casteau (Assistant Professor, Psychology, Durham, UK) 
  27. Sören Holst (Assistant Professor, Physics, Stockholm, Sweden) 
  28. Evy van Berlo (Institute for Biodiversity and Ecosystem Dynamics, University of Amsterdam) 
  29. Melanie Davies (Professor of Diabetes Medicine, Leicester University, UK) 
  30. Martijn Egas (Associate Professor, Evolution and Behaviour, University of Amsterdam) 
  31. Alejandro Heredia (Professor of Biology, UNAM, Mexico) 
  32. Pedro Quinto (Professor of Physics, UNAM, Mexico) 
  33. Nadia Jacobo (Professor of Biology, National Institute of Nutrition, Mexico) 
  34. Carina Hoorn (Assoc Prof, Institute for Biodiversity and Ecosystem Dynamics, University of Amsterdam) 
  35. Göran Kecklund (Professor, Department of Psychology, Stockholm University) 
  36. Tim Bedding (Professor, School of Physics, University of Sydney) 
  37. Joss Bland-Hawthorn (Professor of Physics, University of Sydney) 
  38. John Hammond (Professor of comparative immunology, The Pirbright Institute, UK) 
  39. Julio Alcayaga (Professor, Department of Biology, University of Chile) 
  40. Rajeev Gupta (Professor, Department of Chemistry, University of Delhi) 
  41. Tim Downing (Head of Genomics, The Pirbright Institute, UK) 
  42. Elizabeth le Roux (Assistant Professor, Department of Biology, Aarhus University) 
  43. Jens-Christian Svenning (Professor, Department of Biology, Aarhus University) 
  44. Felix Riede (Professor of Archaeology, Aarhus University) 
  45. Jane Hutton (Professor, Department of Statistics, University of Warwick, UK) 
  46. Kristine Engemann Jensen (Assistant Professor, Department of Biology, Aarhus University) 
  47. Jose Villadangos (Professor, Microbiology and Immunology, University of Melbourne) 
  48. Juan Carlos Letelier (Professor, Biology, Universidad de Chile) 
  49. Henrik Mouritsen (Professor, Biology and Environmental Sciences, University of Oldenburg, Germany)
  50. Alejandro Ordonez Gloria (Assistant Professor, Department of Biology, Aarhus University, Denmark)


Method, including advantages over currently available state-of-the-art methodologies

The vast majority of methodologies currently being used to survey scientists fail to meet any of the following three criteria, all of which will be met by the new method:

  • Emails are individual and personal.
  • Emails are sent from somebody internal to each scientist’s own institution.
  • Overall time-demand is only a couple of minutes: emails are very concise, and the participant responds with two clicks.

In a pre-pilot project (June 2022), scientists at Durham, UK, were provided with a statement and merely asked to choose between two options, ‘A’ and ‘B’; many responded with nothing more than a letter, either ‘A’ or ‘B’. In the proposed pilot project, scientists will be able to simply click a button (‘agree’ or ‘disagree’) embedded within the email. Their (totally anonymous) answer will be automatically recorded in a database.
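To illustrate the kind of aggregation envisaged, here is a minimal sketch of how anonymously recorded agree/disagree responses could be tallied into a consensus-strength figure. This is purely illustrative, not the project's actual ICT architecture; the function name and the response counts are hypothetical.

```python
from collections import Counter

def consensus_strength(responses):
    """Percentage of 'agree' answers among all recorded agree/disagree
    responses (hypothetical helper, for illustration only)."""
    counts = Counter(responses)
    total = counts["agree"] + counts["disagree"]
    if total == 0:
        raise ValueError("no responses recorded")
    return 100 * counts["agree"] / total

# Hypothetical example: 962 'agree' and 38 'disagree' clicks in the database
responses = ["agree"] * 962 + ["disagree"] * 38
print(f"Consensus strength: {consensus_strength(responses):.1f}%")  # 96.2%
```

A tally of this form would let the strength of consensus (e.g. the >95% threshold discussed below) be computed instantly as responses arrive, with no identifying information stored.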

Brief comparisons with state-of-the-art methodologies for surveying scientists’ opinions follow. First, in the well-known Myers et al. (2021) paper reporting a solid consensus on the question of anthropogenic global warming, an 8-page questionnaire was sent to 10,929 scientists, and 2,780 responses were eventually received – a return rate of 25%. To achieve this return rate, three reminder emails were sent. The timeline between sending out the questionnaire and analysing the results was 36 days: 10th September to 16th October 2019. Whilst this methodology is effective up to a point, it is slow, and the low return rate is a barrier to any significant scale-up. Sending personal, ‘internal’ emails to individual scientists, asking a simple ‘agree or disagree’ question, ensures a much higher return rate, and could in principle be scaled up to quickly access the opinions of many tens of thousands of scientists.

Second, in the Work and Well-Being Survey (Vaidyanathan 2021), only 3,442 of the 22,840 scientists contacted replied – a return rate of 15%. Achieving even that return rate took three months, during which 13 reminder emails were sent (across two waves of solicitation) as well as reminder snail-mail postcards, and incentives were offered such as gift cards and a raffle to win an iPad.

The new methodology envisioned in this pilot project is quicker, simpler, more repeatable, and has the potential to pool a far greater number of opinions. In the pre-pilot study Vickers achieved a 62% return rate on the strength of a single email, indicating that with even just one or two reminder emails a very high return rate (>75%) might well be achieved.
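The return rates quoted above follow directly from the raw counts reported in the two surveys; the small sketch below (illustrative only) reproduces them:

```python
def return_rate(replied, contacted):
    """Survey return rate as a percentage of those contacted."""
    return 100 * replied / contacted

# Myers et al. (2021): 2,780 responses from 10,929 scientists contacted
print(round(return_rate(2780, 10929)))  # 25
# Vaidyanathan (2021): 3,442 responses from 22,840 scientists contacted
print(round(return_rate(3442, 22840)))  # 15
```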


Objections and replies

A few possible objections to the proposed programme, and initial responses, are as follows:

  1. The programme will only appeal to ‘realists’, who are predisposed to accept that science delivers truths/facts about our world. But there are many non-realists, who would reject this.
    • Response: the so-called ‘scientific realism/antirealism debate’ concerns knowledge of unobservables. In fact, nearly all non-realists do accept that there are many established scientific facts. There exists a philosophical consensus that many scientific facts can be identified, including (for example) ‘smoking causes cancer’. This issue is discussed at length in Chapters 1 and 2 of my new monograph, Identifying Future-Proof Science (OUP 2022).

  2. Many scholars reject the link between consensus and truth.
    • First: There are different kinds, and strengths, of ‘consensus’. Few scholars would dispute that truth can be accessed via an absolutely solid scientific consensus (>95%), achieved via bona fide scientific activity, in a large, international, diverse scientific community. This issue is discussed at length in Identifying Future-Proof Science.
    • Second: Undoubtedly, it is sometimes important to quantify scientific opinion. Thus the project developed here may be viewed as an important methodological experiment, even if one rejects any connection between ‘consensus’ and ‘truth’.

  3. Targeted scientists would soon become fatigued, and the process would collapse.
    • The network would constantly be shifting. Fatigue at an institution in the network would be monitored via the return rate of the institution, and spokes would sometimes drop out. But new spokes in the network would perpetually be added.

  4. Often, the pool of ‘relevant scientists’ would be very small.
    • A. Precisely how ‘relevant scientists’ would be selected is something that would be worked out as the project developed. Initially, the plan would be to select scientists by departmental affiliation, ruling out departments that are quite obviously not sufficiently relevant. Thus, for a question about anthropogenic global warming, it would not be unreasonable to leave out Mathematics and Psychology departments. The basic idea is to ask all ‘relevant scientists’, broadly construed. For any given statement to be tested, the issue of ‘relevant scientists’ would be debated within the network and Advisory Board. A high priority would be to give no impression whatsoever of cherry-picking experts. So, for example, for a statement about climate change, we would not merely invite climate change experts.
    • B. If the relevant pool of scientists really was extremely small, then the results would (probably) fail to be reliable. This would (probably) be a limitation of the approach: it could only be applied reliably to some scientific statements.

  5. Any ‘Institute for Scientific Consensus’ wouldn’t influence the public very much.
    • Research shows that consensus announcements have great power to influence; e.g. Bartoš et al. (2022), ‘Communicating doctors’ consensus persistently increases COVID-19 vaccinations’, Nature, Vol. 606.


[1] Bartoš et al. (2022): Communicating doctors’ consensus persistently increases COVID-19 vaccinations. Nature, Vol. 606.


If you are interested and would like to learn more about this project, please contact Prof Peter Vickers.