A new study published in Translational Psychiatry suggests that combining virtual reality, eye tracking, head movement data, and self-reported symptoms may help identify attention-deficit/hyperactivity disorder (ADHD) in adults with improved accuracy. Using a diagnostic task set in a virtual environment designed to mimic real-world distractions, the researchers found that their machine learning model could distinguish adults with ADHD from those without the condition 81% of the time when tested on an independent sample.
ADHD is a neurodevelopmental condition marked by inattention, impulsivity, and hyperactivity. While it is often diagnosed in childhood, it also affects millions of adults. Diagnosing the disorder in adults can be especially difficult because it typically relies on clinical interviews and retrospective self-reports. These methods are prone to error due to biased memory or intentional misreporting. Unlike some medical conditions, there are no established biomarkers or lab tests that can confirm an ADHD diagnosis. As a result, misdiagnosis remains a serious problem.
To address these challenges, the research team aimed to improve diagnostic accuracy by using a multimodal assessment approach that mirrors the real-world experience of people with ADHD. They combined performance on a sustained attention task with eye tracking, head motion measurements, electroencephalography (EEG), and real-time self-reports. Participants completed the task in a simulated seminar room using virtual reality, where distractions like noise or movement were introduced to mimic everyday interruptions.
“ADHD is a complex and heterogeneous disorder and, to date, no cognitive tests or (bio)markers exist that can accurately and reliably detect it. Nonetheless, such objective measures would significantly facilitate the diagnostic process,” said co-first author Benjamin Selaskowski, who is affiliated with the Department of Psychiatry and Psychotherapy at the University Hospital Bonn.
“Preliminary evidence suggests combining multiple assessment modalities may improve diagnostic accuracy. Further, there is evidence that virtual reality (VR) can improve the diagnostic accuracy of cognitive tests in ADHD by providing a realistic, ecologically valid test environment.”
“Therefore, in the present study we aimed to investigate whether integrating a VR-based cognitive test with tracking of head and eye movements, assessment of brain activity (via EEG), and real-time self-assessment of symptoms during the task could yield high accuracy in distinguishing adults with and without ADHD.”
The study was conducted in two phases. In the first phase, the researchers collected training data from 50 adults—25 with ADHD and 25 without. In the second, they tested the predictive accuracy of their model on a separate group of 36 participants—18 with ADHD and 18 without. This step was critical to ensure the model could generalize beyond the group it was trained on. Each participant wore a VR headset and completed a continuous performance task (CPT), which required them to press a button in response to certain letter sequences while ignoring distractors. Their responses, head and eye movements, brain activity, and symptom self-ratings were recorded during the task.
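In schematic terms, this two-phase design amounts to fitting a model on one cohort and then scoring it exactly once on a second cohort it has never seen. The short Python sketch below illustrates that idea with synthetic data and a generic classifier; it is not the authors' model, features, or code.

```python
# Schematic of the two-phase design: fit on one cohort, evaluate once on an
# independent cohort. Illustrative only; the data here are synthetic, not the study's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(50, 76)), rng.integers(0, 2, size=50)  # 50 training participants, 76 features
X_test, y_test = rng.normal(size=(36, 76)), rng.integers(0, 2, size=36)    # 36 independent test participants

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # trained only on the first cohort
print("independent test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```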
The machine learning model was trained to identify patterns across these different types of data that were most predictive of ADHD. To ensure that the model focused on the most informative features, the researchers used a statistical method known as maximal relevance and minimal redundancy (MRMR), which selects variables that are both strongly related to the diagnosis and relatively uncorrelated with each other. Out of 76 total features, the optimal model used only 11 to achieve the highest performance.
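The greedy logic behind MRMR-style selection can be sketched in a few lines of Python. The example below is purely illustrative: it uses mutual information for relevance and absolute correlation as a redundancy proxy, and the data, feature counts, and helper function are hypothetical stand-ins rather than the study's implementation.

```python
# Minimal sketch of greedy MRMR-style feature selection (illustrative only;
# not the study's implementation, and the data below are synthetic).
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mrmr_select(X, y, k):
    """Greedily pick k features that are relevant to y but not redundant with each other."""
    relevance = mutual_info_classif(X, y, random_state=0)   # relevance of each feature to the label
    redundancy = np.abs(np.corrcoef(X, rowvar=False))       # redundancy proxy: |pairwise correlation|
    selected = [int(np.argmax(relevance))]                  # start with the single most relevant feature
    while len(selected) < k:
        candidates = [f for f in range(X.shape[1]) if f not in selected]
        # Score each candidate: relevance minus mean redundancy with features already chosen.
        scores = [relevance[f] - redundancy[f, selected].mean() for f in candidates]
        selected.append(candidates[int(np.argmax(scores))])
    return selected

# Synthetic stand-in for the real data: 86 participants, 76 features, binary diagnosis.
rng = np.random.default_rng(0)
X = rng.normal(size=(86, 76))
y = rng.integers(0, 2, size=86)
print(mrmr_select(X, y, k=11))   # indices of the 11 selected features
```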
These features came from four of the five data sources: self-reported symptoms, eye tracking, task performance, and head movement. Among the most important predictors were how much a participant’s gaze wandered, how variable their reaction times were, and how much they moved their head during the task. Self-reported ratings of inattention, hyperactivity, and impulsivity also played a key role, although the researchers caution that self-assessments have known limitations in people with ADHD.
When applied to the independent test set, the model achieved an overall accuracy of 81%, with a sensitivity of 78% and a specificity of 83%. This means it correctly identified 78% of ADHD cases and 83% of non-ADHD cases. These numbers are similar to those found in earlier machine learning studies of ADHD, but with a key difference: most prior studies did not test their models on separate, independent data. This step is essential to avoid overestimating how well a model will perform in real-world settings.
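As a rough check on how those three figures fit together (assuming whole-participant counts, which the article does not spell out), a sensitivity of 78% on 18 ADHD participants corresponds to about 14 true positives, and a specificity of 83% on 18 controls to about 15 true negatives, which yields (14 + 15) / 36 ≈ 81% overall accuracy:

```python
# Illustrative arithmetic for the reported test-set metrics (counts are inferred,
# not taken from the paper): 18 ADHD participants and 18 controls.
true_positives, false_negatives = 14, 4    # 14/18 ADHD cases flagged   -> sensitivity ≈ 0.78
true_negatives, false_positives = 15, 3    # 15/18 controls cleared     -> specificity ≈ 0.83

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)
accuracy = (true_positives + true_negatives) / 36

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, accuracy={accuracy:.2f}")
# -> sensitivity=0.78, specificity=0.83, accuracy=0.81
```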
“This study shows that combining multiple types of information can effectively help identify adults with ADHD,” explained co-first author Annika Wiebe. “Based on data from a group of adults with and without ADHD, we identified performance in a virtual attention task, eye movements, head motion, and self-reported symptoms during the VR scenario as most relevant for distinguishing individuals with ADHD. These findings highlight the potential of using a multi-method assessment to improve the accuracy and objectivity of ADHD diagnosis in adults.”
The use of a virtual reality setting was especially important. Traditional attention tests are often done in quiet, sterile lab environments, which do not reflect the noisy, distracting situations in which people with ADHD often struggle. By placing participants in a more realistic environment and introducing distractions, the researchers were able to capture behaviors that may not emerge in standard tests. This approach increases what’s known as ecological validity—the extent to which a test resembles real-life situations.
The study also sheds light on the relative value of different data sources. While EEG is often considered a promising avenue for identifying biomarkers of mental health conditions, it did not improve classification accuracy in this case.
“We found it interesting that our investigated EEG parameters did not contribute to the predictive accuracy of our model,” Selaskowski told PsyPost. “Despite EEG’s common use in ADHD research, our results suggest that other measures – such as eye tracking, head movement, and self-reported symptoms during VR tasks – are more informative for distinguishing ADHD in adults.”
Despite its promising findings, the study does have limitations. The sample size was relatively small, with only 86 participants across both the training and test sets. This limits the generalizability of the results, although the use of a separate validation sample does strengthen the conclusions. “Further research with larger and more diverse populations is necessary to validate and refine this diagnostic approach,” Wiebe said.
“We aim to develop a standardized, efficient, and ecologically valid diagnostic tool for adult ADHD that can be easily implemented in clinical settings,” Selaskowski explained. “By refining our VR-based assessment and validating it across larger and more diverse populations, we hope to enhance the accuracy and reliability of ADHD diagnoses and potentially apply this approach to other neurodevelopmental disorders. With our multi-method approach, we hope to be able to capture a more comprehensive picture of an individual’s cognitive and behavioral functioning, leading to more accurate and personalized diagnoses.”
“Our findings highlight the importance of integrating multiple assessment modalities when diagnosing complex conditions such as ADHD,” Wiebe added. “Importantly, unlike most previous machine learning research in ADHD, our study validated the predictive model on an independent test dataset, which strengthens the reliability and potential clinical relevance of our findings.”
The study, “Virtual reality-assisted prediction of adult ADHD based on eye tracking, EEG, actigraphy and behavioral indices: a machine learning analysis of independent training and test samples,” was authored by Annika Wiebe, Benjamin Selaskowski, Martha Paskin, Laura Asché, Julian Pakos, Behrem Aslan, Silke Lux, Alexandra Philipsen, and Niclas Braun.