Academic Research
I received my PhD in experimental psychology from the University of Texas at Arlington in 2017. The research practice that I developed during this time draws on various psychology-rooted frameworks, with a focus on cognitive psychology and behavioral neuroscience. This is the expertise that shapes my UX research work, and why I center everything I do around my 3 Cs: Cognition, Culture, and Creativity.
My research focus was associative memory: the study of how we form memories that link pieces of information together, and how we recall that information later. Our Cognitive Neuroscience of Memory lab ran several fMRI, EEG, and tDCS studies to understand how different neural substrates are involved in these memory processes, and how that activity changes over time and across contexts. (Check out my Publications page for links to our foundational neuroscience studies!)
Throughout my academic career, I became interested in how to approach research with user experience in mind. Happily, I was able to translate the foundational research practices and approaches to research design that I learned in this lab to more user experience-focused projects. Here are some of the UX projects I completed during my academic career!
Projects
Brain-EE: A project investigating user engagement and enjoyment
The goal for this project was to develop a way to use an off-the-shelf EEG recording device (in this case, Muse by Interaxon) to measure user engagement and enjoyment while playing two types of games. This method could be implemented in things like A/B testing!
In this project, grad students Maher Abujelala, Aayush Sharma, and I used statistically derived machine-learning models to determine which of two mobile games a user preferred, based on brain activity measured by an off-the-shelf EEG sensor.
We started with pilot testing to determine the type of brain activity we needed to measure. We also gave surveys to our users to see what they thought of the two games. We then ran regressions to examine which types of brain activity were the strongest predictors of users' responses. Our findings were:
Change in theta activity was the strongest predictor of user preference
Changes in theta, alpha, and beta activity produced consistent measurements of user preference
Users found the overall Brain-EE process to be simple, comfortable, enjoyable, and easy to understand
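If you're curious what this kind of analysis looks like in practice, here is a minimal sketch in Python. It is not our actual pipeline: the data are synthetic, and the band-power features and logistic-regression model are simplifying assumptions standing in for the statistical models we actually used.

```python
# Simplified sketch of the Brain-EE analysis idea (not our actual pipeline).
# Goal: predict which of two games a user preferred from changes in EEG band
# power (theta, alpha, beta). All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_users = 40

# Per-user change in band power between the two games (game A minus game B).
delta_theta = rng.normal(0, 1, n_users)
delta_alpha = rng.normal(0, 1, n_users)
delta_beta = rng.normal(0, 1, n_users)
X = np.column_stack([delta_theta, delta_alpha, delta_beta])

# Survey-reported preference: 1 = preferred game A, 0 = preferred game B.
# The synthetic labels lean most heavily on theta, echoing our finding that
# change in theta activity was the strongest predictor of preference.
signal = 1.5 * delta_theta + 0.4 * delta_alpha + 0.3 * delta_beta
y = (signal + rng.normal(0, 1, n_users) > 0).astype(int)

model = LogisticRegression()
print(f"Cross-validated accuracy: {cross_val_score(model, X, y, cv=5).mean():.2f}")

model.fit(X, y)
for name, coef in zip(["theta", "alpha", "beta"], model.coef_[0]):
    print(f"{name}: coefficient = {coef:+.2f}")
```

In a real study, each feature would come from the headset's band-power readings averaged over a play session, and the survey responses would supply the labels.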
Here is a demo of how Brain-EE works!
Here’s a guide and technical report on Brain-EE!
We also created user personas and published our findings!
Blackboard: A project reviewing end-of-year feedback for a new statistics module!
While working as the Psychology statistics coordinator, I created a new system for students to submit and review their assignments. My goal was to increase efficiency and decrease paper use without sacrificing student productivity and progress.
I created a multi-faceted, online module in Blackboard that consisted of the following:
Assignment uploads with semi-automated grading (TAs could still review assignments to make sure students were being graded fairly and appropriately)
Test reviews that offered hints, showed feedback, and could be accessed multiple times
Project workspaces that centralized student collaboration
Links to additional resources (including YouTube demos and walk-throughs specifically designed for the course)
Surveys that gave students the opportunity to give more detailed feedback about the course, materials, and their instructors
The module is still being used, in conjunction with a statistics manual that I also helped to create!
Here’s a user persona!
Here’s a user experience map!
Here's some user feedback!
This was collected from students who used Blackboard to manage their work!
And here are the YouTube demos/walk-throughs that I created for both statistics courses!
Rewind-Remind: How cognitive games can support memory
My dissertation project focused on older adults’ experiences with memory problems. The goal was to examine whether study techniques learned through cognitive training can help older adults remember more information, and to learn what users thought of our memory game. My teammates Dora Toutountzi and Arnav Garg helped with the game’s visual and practical design, and I served as the subject matter expert and assessed performance on the game in relation to common memory assessments.
This study used an A/B testing design! First, users visited UT Arlington to take a set of memory tests. Then, users played different versions of Rewind-Remind from home. Finally, users returned to UT Arlington to retake the memory tests. This allowed us to compare progress across the game versions.
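For the analytically curious, a pre/post design like this is often summarized by comparing score improvement between the two versions. Here is a minimal sketch with synthetic data and a simple independent-samples t-test; the full analysis in my dissertation was more involved.

```python
# Minimal sketch of a pre/post A/B comparison (synthetic data only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_group = 30

# Memory-test scores before and after playing each version of Rewind-Remind.
# The gains below are made-up numbers for illustration.
pre_a = rng.normal(50, 10, n_per_group)
post_a = pre_a + rng.normal(5, 5, n_per_group)   # version A
pre_b = rng.normal(50, 10, n_per_group)
post_b = pre_b + rng.normal(2, 5, n_per_group)   # version B

# Compare improvement (post minus pre) between the two versions.
gain_a = post_a - pre_a
gain_b = post_b - pre_b
t, p = stats.ttest_ind(gain_a, gain_b)
print(f"Mean gain: A = {gain_a.mean():.1f}, B = {gain_b.mean():.1f}")
print(f"t = {t:.2f}, p = {p:.3f}")
```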
Here’s a user persona and scenario map!
Here’s the Rewind-Remind Usability Report!
And here’s a demo of the game!
We presented some preliminary data on Rewind-Remind's effectiveness at PETRA 2017 -- click here to read our publication! And click here to read my dissertation with full findings!
Reinforcement Learning: How socially-assistive robotics can help with learning
The goal of this project was to examine how socially assistive robotics (SAR) can be used to design and evaluate a cognitive training session. Special thanks to my teammates: Kostas Tsiakas, Maher Abujelala, Michalis Papakostas, and Tasnim Makada!
For this study, a NAO robot served as a personalized cognitive trainer and gave positive and negative feedback to the user based on performance. The cognitive task was sequence learning: users learned 3-letter, 5-letter, 7-letter, and 9-letter sequences and were asked to respond in one of three ways:
Speaking (verbally repeating the sequence)
Pressing buttons (touching the labeled buttons in the same order)
Flanker test (pressing the button corresponding to the letter in the middle of the given sequence)
Though feedback did not have a significant effect on user performance, we did observe several patterns in how users performed:
Users were faster and more accurate on the Flanker test
Users may benefit from feedback most during mid-level difficulty (no help is needed for easy tasks, and robot feedback may have been more distracting than helpful for hard tasks)
These findings helped us learn more about the conditions in which users would most benefit and most prefer assistance.
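To make that idea concrete, here is a toy sketch of how a robot's feedback policy could be framed as a simple bandit-style learning problem: the state is the sequence difficulty, the action is whether to give feedback, and the reward is user performance. This is an illustrative assumption on my part, with simulated users, and not our published method.

```python
# Toy bandit-style sketch: should the robot give feedback at each difficulty?
# Simulated users only; the performance model below encodes the (assumed)
# pattern that feedback helps most at mid-level difficulty.
import numpy as np

rng = np.random.default_rng(2)

difficulties = [3, 5, 7, 9]            # sequence lengths used in the study
actions = ["no_feedback", "feedback"]
q_values = np.zeros((len(difficulties), len(actions)))
alpha, epsilon = 0.1, 0.2              # learning rate, exploration rate

def simulated_accuracy(difficulty_idx, action_idx):
    """Return 1.0 if the simulated user responds correctly, else 0.0."""
    base = [0.95, 0.80, 0.65, 0.50][difficulty_idx]
    boost = [0.00, 0.10, 0.10, 0.02][difficulty_idx] if action_idx == 1 else 0.0
    return float(rng.random() < base + boost)

for _ in range(5000):
    s = int(rng.integers(len(difficulties)))              # random difficulty level
    if rng.random() < epsilon:
        a = int(rng.integers(len(actions)))               # explore
    else:
        a = int(q_values[s].argmax())                     # exploit
    reward = simulated_accuracy(s, a)
    q_values[s, a] += alpha * (reward - q_values[s, a])   # incremental value update

for i, length in enumerate(difficulties):
    print(f"{length}-letter sequences: learned policy = {actions[int(q_values[i].argmax())]}")
```

With these simulated users, the policy reliably learns to give feedback at the mid-level difficulties, where the assumed benefit is largest, mirroring the pattern we saw in the study.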
This research is increasingly relevant, as the field of socially assistive robotics is on the rise. Protocols like ours may be helpful in vocational training. These systems could identify best practices for different types of jobs, especially increasingly automated ones. They could also identify the best placement for workers based on their strengths in different job-related tasks, and help pinpoint pain points in those tasks. An SAR system like ours could answer questions like:
When/under what conditions do users get tired?
When/under what conditions do users make more mistakes?
How heavy of a cognitive load will lead to user stress?
Where is the system inefficient?
What steps are unnecessary?
What kinds of prompts get workers' attention?
How can users interact with SARs to increase productivity and output?
For more information, read our publication from the Human-Robot Interaction 2017 conference!
Summary
The projects above show how foundational research practices, particularly those from traditional neuroscience research, can be applied to industry work. Reach out or check out my Industry Experience page if you’re interested in chatting more about the work I do now, and how to further apply neuroscience methods to UX research!