Student Scholar Symposium Abstracts and Posters

Publication Date

Fall 11-29-2023

Faculty Advisor(s)

Trudi Qi


Understanding human behavior in virtual reality (VR) is a key component of developing intelligent systems that enhance human-focused VR experiences. Annotating human motion data is a valuable way to analyze and understand human behavior. However, because human activity data are complex and multi-dimensional, software is needed that can display the data comprehensibly and support intuitive annotation for developing machine learning models able to recognize and assist human motions in VR (e.g., remote physical therapy). Although past research has improved VR data visualization, little attention has been paid to VR data annotation specifically for future machine learning applications. To fill this gap, we have developed a data annotation tool that displays complex VR data in an expressive 3D animated format and provides an easily understandable user interface that lets users annotate and label human activity efficiently. Specifically, the tool converts multiple motion data files into a watchable 3D video and effectively demonstrates body motion, including animated eye tracking of the player in VR and hand-object interactions rendered with level-of-detail visualization features. The graphical user interface allows users to interact with and annotate VR data just as they would with other video playback tools. Our next step is to develop and integrate machine learning-based clustering to automate data annotation. A user study is being planned to evaluate the tool's user-friendliness, its effectiveness in assisting with visualizing and analyzing human behavior, and its ability to support easy and accurate annotation of real-world datasets.


Presented at the Fall 2023 Student Scholar Symposium at Chapman University.

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.