About the Dive Lab

In the Diverse-ability Interaction Lab (Dive lab), we aim to transform conceptions of assistive technology, accessibility and inclusion in human society.

We do this by investigating how to design, engineer and evaluate multisensory and cross-sensory interactive technologies in ways that make them inclusive of both disabled and non-disabled people.

We work very closely with partner communities, charities, services and user groups, and we use a mixed-methods approach in our research, combining theory with field work and co-design, and controlled studies with in-situ evaluations.

We apply our research across a range of domains and contexts, from education to work and leisure, and across a wide span of age groups, from pre-schoolers to young people and senior adults.

Examples of current projects include:

  • Inclusive social play technologies for autistic and neurotypical children
  • Inclusive education technologies for blind and sighted children
  • Crossmodal correspondences in HCI
  • Cross-sensory social play for the early development of blind and sighted pre-schoolers
  • Multisensory shape-changing interfaces
  • Joint attention in blind and visually impaired children and their sighted peers
  • Hybrid technologies for community spaces inclusive of people living with dementia and their social groups
  • Olfactory displays in exergames for people with dementia
  • Inclusive VR experiences for people living with limb differences
  • Synaesthesia in audio-visual digital art practice

The Dive lab is part of the Bristol Interaction Group and is based in the School of Computer Science at the University of Bristol, UK.

The Dive lab receives, or has received, funding from:

  • European Research Council (ERC)
  • UK Research and Innovation (UKRI)
  • Engineering & Physical Sciences Research Council (EPSRC)
  • Qatar Foundation (QF)
  • Microsoft Research