Student Scholar Symposium Abstracts and Posters
Document Type
Chapman access only poster or presentation
Publication Date
Fall 12-3-2025
Faculty Advisor(s)
Dr. Trudi Qi
Abstract
Human-object interactions, such as translation and rotation, are fundamental to extended reality (XR) applications, which blend physical and virtual environments. XR is increasingly used in immersive design, virtual prototyping, and digital twins, where high-precision object manipulation is crucial. Gesture-based 3D interaction offers a more intuitive and immersive experience than keyboard-and-mouse interfaces but lacks precision. This project introduces Context AI for Object Positioning (CAOP), a gesture-based interface that leverages artificial intelligence (AI) to infer user intent, integrate high-level design principles, and enable precise object manipulation. The system has been integrated into VRMoVi, a virtual reality (VR) environment, to support immersive, real-time 3D interaction by automatically generating aesthetically pleasing and functional room layouts in VR. CAOP uses a Markov Chain Monte Carlo model to iteratively refine room layouts according to 11 high-level interior design principles, such as alignment, symmetry, and balance. The layout is optimized by minimizing a weighted cost function in which each term evaluates how well the layout adheres to a specific design rule; a lower cost indicates better overall compliance with the combined guidelines. The weight of each design principle can be adjusted to prioritize different layout aesthetics. The algorithm iteratively adjusts each object's position and rotation, recalculates the layout cost after each adjustment, and keeps the configuration with the lowest cost. CAOP was developed in C# within Unity to support real-time, interactive 3D layout manipulation. By combining AI-driven layout synthesis with intuitive gesture-based interaction, CAOP allows users to rapidly transform messy furniture arrangements into organized, visually appealing layouts while maintaining precise object control.
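The optimization loop described above can be illustrated with a short sketch. The actual system is implemented in C# within Unity; the Python below is only a simplified stand-in, and all names here (`layout_cost`, `optimize_layout`, the two example cost terms) are hypothetical, not taken from CAOP itself. It shows the general idea of a weighted cost function over design-principle terms and a Markov Chain Monte Carlo (Metropolis-style) loop that perturbs one object's position and rotation per iteration and keeps the lowest-cost configuration found.

```python
import math
import random

def layout_cost(layout, terms, weights):
    # Weighted sum of design-principle costs; lower means better
    # overall compliance with the combined guidelines.
    return sum(w * t(layout) for t, w in zip(terms, weights))

def optimize_layout(layout, terms, weights, iters=1000, temp=1.0, cooling=0.995):
    """layout maps object name -> (x, y, rotation_degrees)."""
    best = dict(layout)
    best_cost = layout_cost(best, terms, weights)
    current, current_cost = dict(best), best_cost
    for _ in range(iters):
        # Perturb one randomly chosen object's position and rotation.
        obj = random.choice(list(current))
        x, y, rot = current[obj]
        candidate = dict(current)
        candidate[obj] = (x + random.gauss(0, 0.1),
                          y + random.gauss(0, 0.1),
                          (rot + random.gauss(0, 5)) % 360)
        c = layout_cost(candidate, terms, weights)
        # Metropolis acceptance: always keep improvements, and
        # occasionally accept worse layouts to escape local minima.
        if c < current_cost or random.random() < math.exp((current_cost - c) / temp):
            current, current_cost = candidate, c
            if c < best_cost:
                best, best_cost = candidate, c
        temp *= cooling
    return best, best_cost
```

In this sketch, each entry in `terms` would encode one design principle (e.g., an alignment term penalizing horizontal offset, a rotation term rewarding axis-aligned orientations), and adjusting `weights` changes which aesthetic the optimizer prioritizes, mirroring the tunable weights described in the abstract.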
After the system organizes the furniture, the gesture recognition currently part of VRMoVi will be integrated so that users can adjust furniture with hand gestures, with the optimal layout recalculated after each adjustment.
Recommended Citation
Brown, Annika, "Developing an Intuitive Gesture-Based Interface and AI for Precise Object Positioning in Extended Reality" (2025). Student Scholar Symposium Abstracts and Posters. 772.
https://digitalcommons.chapman.edu/cusrd_abstracts/772
Comments
Presented at the Fall 2025 Student Scholar Symposium at Chapman University.