Keynote Speeches and Technical/Clinical Presentations

Keynote Speeches

Professor Simon de Lusignan
Professor of Primary Care and Clinical Informatics, University of Oxford
Director of the Royal College of General Practitioners (RCGP) Research and Surveillance Centre (RSC)

Title: 
Why do we need RoboPatient for medical training?
Abstract:
General practitioners, like me, need physical examination skills to help identify the seriousness of a presentation and the urgency of any action that needs to be taken. RoboPatient is a project that helps students to learn how to examine the abdomen. It takes a holistic approach, taking into account the patient’s appearance, particularly facial expression, and the need to time the examination with breathing.

We examine the abdomen in primary care in two main circumstances. Firstly, in acute care, where a tender abdomen combined with the right history might imply an “acute abdomen” that needs to be referred to a surgeon, for example in acute appendicitis. Secondly, in common presentations where, prior to examination, there are no “red flag symptoms” requiring immediate referral; here examination may nevertheless reveal an abnormal finding that requires further investigation, for example findings suggestive of gallstones.

Doctors’ training generally includes starting to examine real people very early on. We are trained in the importance of looking at the hands and face, and of timing examination with breathing. Doctors inspect (look) before palpating (feeling), and may then go on to percuss (tap, for example to compare the note over the liver with that over the bowel) and auscultate (listen, to check whether the gurgling of the bowel – “bowel sounds” – is normal). In the current COVID-19 emergency much is done remotely, with very little feedback. The gap in our learning is how to calibrate arm, wrist and hand tension so that you can feel without causing discomfort. We as clinicians have to know what we are feeling. We need to have confidence in knowing what we have found, as definitive test results are often not available for days or weeks.

What a RoboPatient would add is to help clinicians calibrate how we feel the patient’s abdomen, and to time the examination to fit with the respiratory cycle. Users can practise until they can really feel an abnormality. The RoboPatient can be switched between normal and abnormal states so that these differences can be learned in near real time.

Professor Robert D. Howe,
Director of Harvard Biorobotics Lab,
Harvard University

Title: 
Combining Mechanical Models and Virtual Reality to Train Surgeons in Heart Valve Repair
Abstract:
Heart valve repair is a demanding surgical procedure. Working on a stopped and opened heart, the surgeon must predict the effect of surgical modifications when the heart is beating. The valve’s complex morphology makes this a difficult task. We have developed a system that takes 3D medical images of the patient’s heart and converts them to a mechanical FEM model. The surgeon then uses a visual and haptic interface to investigate a variety of possible surgical repair approaches. The mechanical modeling system then predicts the results of each approach, allowing the surgeon to select the best method before the procedure. We have used this system to investigate its efficacy as a pedagogical tool. Simulation-based training demonstrated increased repair success rates: by simultaneously feeling and observing pathologies during valve analysis, users were able to make a connection between valve morphology and predicted valve function.
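
Although the abstract gives no implementation detail, the core idea of predicting the mechanical outcome of each candidate repair before committing to it can be illustrated with a toy finite-element-style solve. The sketch below is purely illustrative and is not the Harvard system: it treats the tissue as a hypothetical 1D chain of linear springs and compares the predicted deformation under two made-up repair strategies.

```python
# Purely illustrative toy model (not the authors' system): a 1D chain of
# linear springs stands in for valve tissue, and a small linear solve
# K u = f predicts how two hypothetical repairs change tip deformation.
import numpy as np

def assemble_stiffness(n_nodes, k_elems):
    """Assemble the tridiagonal stiffness matrix of a 1D spring chain."""
    K = np.zeros((n_nodes, n_nodes))
    for e, k in enumerate(k_elems):  # element e joins nodes e and e+1
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

def tip_displacement(k_elems, load):
    """Fix node 0, apply a point load at the free end, solve K u = f."""
    n = len(k_elems) + 1
    K = assemble_stiffness(n, k_elems)
    f = np.zeros(n)
    f[-1] = load
    u = np.zeros(n)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])  # eliminate the fixed node
    return u[-1]

baseline = np.full(5, 1.0)                     # hypothetical stiffnesses
repair_a = baseline.copy(); repair_a[2] = 3.0  # e.g. one stiffening suture
repair_b = baseline.copy(); repair_b[3:] = 2.0 # broader, milder stiffening

for name, k in [("baseline", baseline), ("repair A", repair_a),
                ("repair B", repair_b)]:
    print(f"{name}: predicted tip displacement = {tip_displacement(k, 0.5):.3f}")
```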


Technical and Clinical Presentations

Dr Thrishantha Nanayakkara, 
Director of Morph Lab, 
Imperial College London

Title: 
Behavioural Lensing
Abstract:
Physicians choose from a repertoire of physical palpation behaviours when they want to estimate the underlying tissue conditions of a patient. It is interesting to notice that conditioning palpation behaviours involves not only force and movement control of the fingers, but also control of their shape and stiffness. We have been investigating this phenomenon of conditioning the body to improve haptic perception during manual palpation for the past 10 years. So far, our evidence shows that such physical conditioning of the body helps to reduce the uncertainty of perceived states in the tissue while magnifying certain target cues. We use the term “behavioural lensing” to describe this effect. In this talk I will briefly present some recent evidence.
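
As a loose mechanical analogy (not the lab's model), one can picture the finger as a mass-spring-damper whose stiffness is set by muscle co-contraction: choosing the right stiffness tunes the mechanical resonance onto a target haptic cue, amplifying it relative to broadband noise. All parameter values in the sketch below are hypothetical.

```python
# Loose analogy for "behavioural lensing" (hypothetical parameters): the
# finger as a mass-spring-damper whose stiffness k is set by muscle
# co-contraction. Tuning k moves the resonance onto a target cue.
import numpy as np

def gain(freq_hz, m=0.02, k=200.0, c=0.5):
    """Displacement-per-force magnitude |X/F| of a mass-spring-damper."""
    w = 2.0 * np.pi * np.asarray(freq_hz)
    return 1.0 / np.sqrt((k - m * w**2) ** 2 + (c * w) ** 2)

target = 15.0                        # hypothetical target cue (Hz)
noise_band = np.linspace(1.0, 60.0, 120)

for k in (100.0, 200.0, 400.0):      # soft -> stiff finger
    selectivity = gain(target, k=k) / gain(noise_band, k=k).mean()
    print(f"k = {k:5.0f} N/m -> gain at target / mean gain = {selectivity:.2f}")
```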

Dr Fumiya Iida,
Director of Bio-Inspired Robotics Lab, University of Cambridge

Title: 
RoboPatient as a Grand Challenge
Abstract:
RoboPatient is a very interesting and important area of research in robotics because it touches on many fundamental challenges. In this brief presentation, I will give an overview of the theoretical principles that need to be considered in this research.


Professor Rachael E. Jack, 
Professor of Computational Social Cognition, School of Psychology, University of Glasgow

Professor Philippe Schyns
Professor of Psychology,
Centre for Cognitive Neuroimaging, School of Psychology,
University of Glasgow.


Title: 
Designing Social Signals for Artificial Agents Using Psychological Science
Abstract:
Artificial agents are increasingly becoming part of hospitals, educational facilities, and homes to perform various tasks. To engage human users, artificial agents must be equipped with essential social skills such as facial expression communication. However, many agents are limited because they typically have only a few, Western-centric facial expressions that lack naturalistic dynamics. We address this gap by equipping artificial agents with a broader repertoire of nuanced and culturally sensitive facial expressions (e.g., complex emotions, conversational messages, social traits). Using novel, data-driven, psychology-based methodologies, we reverse-engineer dynamic facial expressions directly from human cultural perception and show that our human user-centered approach produces facial expressions that outperform existing signals. Further, objective analysis of these facial expression models reveals latent syntactical signalling structures that can inform the design of generative models for culture-specific and universal social signalling. Our results demonstrate the utility of an interdisciplinary approach that applies data-driven, psychology-based methods to enhance the social signalling capabilities of artificial agents. We anticipate that our methods will broaden the usability and global marketability of artificial agents and highlight the key role that psychology must play in their design.
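
One data-driven method in this spirit is reverse correlation, which the sketch below caricatures under strong simplifying assumptions: random facial action-unit (AU) patterns are judged by a simulated observer, and averaging the patterns judged as expressing a given message recovers the observer's internal expression model. The AU indices and the observer model are invented for illustration; this is not the authors' pipeline.

```python
# Simplified caricature of reverse correlation (hypothetical observer and
# AU indices): random action-unit activation patterns are classified by a
# simulated observer, and averaging accepted minus rejected patterns
# recovers the observer's internal "pain" expression model.
import numpy as np

rng = np.random.default_rng(0)
n_aus, n_trials = 10, 5000

# Hidden template the simulated observer matches against; the choice of
# AUs 3, 5 and 6 is arbitrary and purely for illustration.
template = np.zeros(n_aus)
template[[3, 5, 6]] = 1.0

stimuli = rng.random((n_trials, n_aus))       # random AU activations
match = stimuli @ template                    # observer's internal match
accepted = match > np.percentile(match, 75)   # top quartile -> "pain"

# Classification-image estimate: mean accepted minus mean rejected pattern.
model = stimuli[accepted].mean(axis=0) - stimuli[~accepted].mean(axis=0)
print("Recovered AU weights:", np.round(model, 2))
print("Top AUs:", sorted(np.argsort(model)[-3:]))  # expect [3, 5, 6]
```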

Dr Mazdak Ghajari,
Director of Human Experience, Analysis and Design (HEAD) Lab,
Imperial College London

Title: 
Virtual modelling of soft tissue dynamics
Abstract:
Haptic, visual and auditory feedback are the key sources of information gain in palpation, but learning how to interpret this information requires years of practice. A robotic patient can help to reduce this time by providing a surrogate for the patient which can simulate various conditions and provide enhanced feedback. The fidelity of such surrogates is key to their success. Current human surrogates have major limitations, for instance in terms of their anatomy and mechanical properties. With advances in medical imaging and tissue biomechanics, high-fidelity computational models of the human body can be developed, which provide accurate predictions of the body’s response to mechanical forces.

In this talk, I show an example of a computational model of human head biomechanics for studying traumatic brain injury. The model is built using high-resolution MRI images of a healthy subject and nonlinear, rate-dependent material properties of the soft tissues, e.g. the brain, subarachnoid space and meninges. As a case study, I show the model’s prediction of an American football head collision, which demonstrates that the model can predict the location of brain pathology in chronic traumatic encephalopathy, a dementia-type disease associated with repetitive head impacts. I will then show how this approach has been used by our group to develop a finite element model of abdominal palpation. The model predicts the fingertip forces and the stress distribution within the abdomen during different motions of the finger. We are extending this approach by using a detailed finite element model of the human body. Our final goal is to use the computational surrogate model to determine the optimal depth, motion and frequency of the fingers in palpation of different abnormalities, and to inform the design of the physical RoboPatient.
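
As a rough, order-of-magnitude complement to a full finite element analysis like the one described above, a Hertzian contact approximation gives a first estimate of fingertip force as a function of indentation depth. The tissue modulus and fingertip radius below are hypothetical placeholder values, not parameters from the group's model.

```python
# Order-of-magnitude sketch (not the group's FEM model): Hertzian contact
# between a rigid sphere and an elastic half-space approximates fingertip
# force versus indentation depth. Tissue modulus and fingertip radius are
# hypothetical placeholder values.
import numpy as np

def hertz_force(depth_m, radius_m, E_pa, nu=0.45):
    """F = (4/3) * E_eff * sqrt(R) * d^1.5 for a rigid spherical indenter."""
    E_eff = E_pa / (1.0 - nu**2)  # effective modulus of the half-space
    return (4.0 / 3.0) * E_eff * np.sqrt(radius_m) * depth_m**1.5

for d in np.linspace(0.002, 0.02, 4):  # 2 mm to 20 mm indentation
    f = hertz_force(d, radius_m=0.008, E_pa=5e3)  # ~5 kPa soft tissue
    print(f"depth {d * 1e3:4.1f} mm -> force {f * 1e3:6.1f} mN")
```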


Dr Angela Faragasso,
Assistant Professor, Department of Precision Engineering,
University of Tokyo

Title: 
Replicability and Reproducibility in Medical Robotics: How accurately can we train our doctors?
Abstract:
The advent of robotic technologies has consistently improved the outcomes of medical interventions by providing more effective and precise medical devices. Innovative systems are nowadays used not only in the operating theatre but also for the diagnosis and training of medical practitioners. However, a number of important challenges must be overcome in order to improve the delivery of healthcare and prepare future physicians to deal with medical innovations. Those challenges relate not only to regulations, security and costs but also to product quality and high recall rates.
In this talk I will discuss my personal experience in the area of artificial tactile feedback, an active field of applied research which aims to replicate the human sense of touch and feed back information about external objects by employing sensing mechanisms in robotic devices. I will present the steps toward the realisation of effective sensing mechanisms, underlining the challenges which arise in the Replicability and Reproducibility (RR) of innovative medical systems.

Dr Laurel Riek,
Professor in Computer Science and Engineering, University of California, San Diego

Title: 
Expressive Patient Simulators for Clinical Education
Abstract:
Preventable patient harm is a leading cause of worldwide morbidity and mortality. One way to address this is through career-long clinical education. This is often delivered via robotic patient simulator (RPS) systems, allowing clinicians to practice skills on lifelike robots before treating real patients. However, the majority of commercial RPS systems are inexpressive, leading to a lack of learner immersion and a higher likelihood of incorrect skill transfer. They also lack any degree of autonomy, which can cause clinical educators high cognitive overload. Over the past eight years, my team has been building expressive, autonomous RPS systems to address these gaps, based entirely on data from real patients. We have built models of multiple pathologies, including acute and chronic pain, Bell’s palsy, and stroke, and successfully synthesized them on robotic and virtual patient simulators. This talk will describe our recent efforts in these areas and plans for future work.


Dr Nejra van Zalk,
Director of Design Psychology Lab,
Imperial College London

Title: 
Pain Perception from Animated Faces
Abstract:
Introduction: Little is known about how pain expressed on different types of faces is perceived by different people. As part of a larger study (“RoboPatient – Robot-Assisted Learning of Constrained Haptic Information Gain”), we are conducting a subproject focusing on how individuals with different cultural backgrounds and variations in personality and behavioural traits perceive pain expressions on animated robotic faces. 

Method: Using an online survey, we will ask 250 participants (50% female) from different UK ethnic groups to watch short video clips of animated faces (9 male and 9 female White, Black, and Asian faces expressing pain of low, medium and high intensity). We will also ask them about their own levels of anxiety, personality traits, and demographics. 
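
For illustration, the factorial stimulus design implied by this description (ethnicity × gender × intensity) can be enumerated as follows; the condition counts follow the abstract, while the per-clip details are hypothetical.

```python
# Sketch of the factorial stimulus design implied above; the counts follow
# the abstract (3 ethnicities x 2 genders x 3 intensities = 18 clips),
# everything else is hypothetical.
from itertools import product

ethnicities = ["White", "Black", "Asian"]
genders = ["male", "female"]
intensities = ["low", "medium", "high"]

conditions = list(product(ethnicities, genders, intensities))
print(f"{len(conditions)} video-clip conditions")  # 18
for eth, gen, inten in conditions[:3]:
    print(f"{inten}-intensity pain on a {eth} {gen} face")
```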

Expected Results & Discussion: We expect to find differences in pain perception between people of different ethnicities, and a bias toward perceiving pain on faces from one’s own ethnic group. We also expect that high levels of participant anxiety and personality traits such as emotional instability will interfere with pain perception. These results will help to inform the future design of a robotic patient within the project at large and push the boundaries of research on pain perception.