Seeing Minds Online

Chatbots are artificially intelligent systems that you can talk to online. Recently, chatbots have become smarter and more numerous. Facebook, Apple, Microsoft, and Amazon are betting that chatbots will be a big part of our future, and many jobs currently done by humans may soon be done by chatbots. As Microsoft CEO Satya Nadella declared in 2016: ‘Bots are the new apps’.

How should we prepare for a future with chatbots? Chatbots blur the line between human and non-human in a new way. They prompt us to ask questions about human nature and about how we should interact with non-human agents. In order to prepare for a future with chatbots, we need to answer the fundamental question that runs through this project: How do we see minds online?


Research questions

  • How do humans make judgements about the identity of an unknown entity online?
  • Which psychological assumptions or biases (both explicit and implicit) affect these judgements?
  • How might these judgements be manipulated, for better or worse?
  • How might these judgements vary across demographic groups (age, location, ethnicity, economic background)?
  • How should we educate children and other vulnerable individuals so that they have safe interactions with unknown agents online?



Seeing Minds Online consists of equal parts research and public engagement.

On the research side, we are conducting experiments to unpick which psychological factors affect how humans see minds online. For our experiments, we use variations of the Turing Test, proposed by Alan Turing in his 1950 paper ‘Computing machinery and intelligence’. Turing’s question was: How could we know whether a machine is intelligent? He suggested that a computer should count as intelligent if a human judge, interacting with it, could not reliably distinguish it from a human. So far, the Turing Test has been used to probe the abilities of computers. We use it in a different way: to probe the psychology of humans dealing with unknown agents.
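The logic of a Turing-test trial can be sketched in a few lines. The following is a minimal illustration only, with toy stand-in participants and a deliberately naive judge; it is not our experimental software, and all names in it are hypothetical. A judge converses with a randomly chosen human or bot, guesses which it was, and the judge's accuracy over many trials is recorded. Accuracy near 0.5 (chance) means the judge cannot reliably tell the two apart.

```python
import random

def run_trials(judge, human_reply, bot_reply, n_trials=100, seed=0):
    """Simulate Turing-test trials. Each round, the judge converses with
    either a human or a bot (chosen at random) and must say which it was.
    Returns the judge's accuracy; near 0.5 means indistinguishable."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        is_bot = rng.random() < 0.5          # hidden assignment
        respond = bot_reply if is_bot else human_reply
        transcript = [respond(q) for q in ("Hello!", "What do you enjoy?")]
        guess_bot = judge(transcript)        # judge's verdict: bot or not?
        correct += (guess_bot == is_bot)
    return correct / n_trials

# Toy participants (hypothetical stand-ins for real interlocutors):
human = lambda q: "Long walks, mostly."
bot = lambda q: "I enjoy processing your query."
naive_judge = lambda transcript: any("query" in t for t in transcript)

print(run_trials(naive_judge, human, bot))  # 1.0: this bot is easy to spot
```

In our use of the test, the quantity of interest is not the bot's score but the judge's: the pattern of guesses reveals which cues and biases people rely on when deciding whether an unknown agent has a mind.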

The public engagement strand is essential to the project. The purpose of Seeing Minds Online is to address an issue of pressing public interest that has not received adequate attention. We run many public events (see below) that introduce the public to the problems posed by chatbots and encourage them to share their reflections and views in open discussion. We invite members of the public to participate in live Turing Tests. In a lively and interactive way, this informs individuals about their own abilities and prompts them to reflect on their strengths, weaknesses, and vulnerability to biased judgements. In line with citizen science, we ask members of the public to contribute their reflections and judgements to our research.

Seeing Minds Online aims to learn from the public about the way in which they make judgements about unknown entities online, and to educate the public to be more discerning and careful in those judgements. Of particular interest to us are the most vulnerable groups: children, the elderly, and others requiring support.


Public events





Project team: