Exploring Free-Sketch Stylus-Based Text Input for Mixed Reality Interfaces
This project investigates free-sketch stylus-based text input as a novel interaction technique for mixed reality (MR) environments, addressing the limitations of existing input methods such as virtual keyboards and speech. While physical keyboards can achieve reasonable typing speeds, they are impractical and immersion-breaking in MR, and virtual keyboards remain slow and error-prone. Speech input, although fast, is unreliable in noisy or privacy-sensitive environments. These challenges motivate the exploration of stylus-based handwriting as a more natural, intuitive, and context-appropriate alternative for text entry in immersive systems.
Building on recent advances in hardware and artificial intelligence, the project explores how pen-like stylus devices can be used to capture 3D writing trajectories in space. Unlike prior approaches that rely on constrained input surfaces or simplified gesture systems, this work focuses on free-form, in-air handwriting that aligns with familiar pen-and-paper interaction. The system aims to leverage modern AI models to interpret both spatial and temporal characteristics of handwriting trajectories, enabling accurate and flexible text recognition in dynamic MR environments.
A core contribution of the project is the design and implementation of an MR prototype that integrates stylus input with machine learning-based handwriting recognition. The proposed pipeline captures 3D trajectory data, normalises it onto a writing plane, and converts it into a 2D representation suitable for established handwriting recognition models. At the same time, temporal information is retained to support sequence-aware modelling approaches, potentially improving recognition accuracy and robustness across different writing styles and conditions.
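The plane-normalisation step described above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: it assumes timestamped 3D stylus samples and uses a PCA (SVD) plane fit, where the two leading principal components span the writing plane and the third serves as the plane normal. The function name and data layout are illustrative.

```python
import numpy as np

def project_to_writing_plane(points):
    """Project timestamped 3D stroke samples onto a best-fit 2D writing plane.

    points: (N, 4) array of (x, y, z, t) stylus-tip samples.
    Returns an (N, 3) array of (u, v, t): 2D plane coordinates plus the
    original timestamps, preserved for sequence-aware recognition models.
    """
    xyz = points[:, :3]
    t = points[:, 3]
    centered = xyz - xyz.mean(axis=0)
    # SVD of the centred cloud: rows of vt are principal directions.
    # The first two span the writing plane; the third (least variance)
    # is the plane normal and is discarded by the projection.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    plane_axes = vt[:2]            # (2, 3) orthonormal in-plane basis
    uv = centered @ plane_axes.T   # (N, 2) coordinates on the plane
    return np.column_stack([uv, t])
```

Keeping the timestamp column alongside the projected coordinates is what allows the same data to feed both image-style (offline) and trajectory-style (online) handwriting recognisers.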
To evaluate the effectiveness of the proposed approach, the project includes a user study comparing stylus-based input against baseline methods such as virtual keyboards and voice input. Key performance metrics include text entry speed, error rate, workload, and overall user experience. This evaluation aims to provide empirical evidence on whether stylus-based input can offer a more efficient, natural, and cognitively aligned interaction technique for MR users, particularly in scenarios where traditional methods are impractical.
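The two core metrics above are conventionally computed as follows in text-entry research: speed in words per minute with the five-characters-per-word convention, and error rate as minimum string distance (Levenshtein edit distance) between the presented and transcribed strings. The sketch below shows these standard definitions; it is not taken from the project's study materials.

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn string a into string b (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def wpm(transcribed, seconds):
    """Entry speed in words per minute; one 'word' = 5 characters."""
    return (len(transcribed) - 1) / seconds * 60.0 / 5.0

def msd_error_rate(presented, transcribed):
    """Uncorrected error rate: edit distance normalised by the
    length of the longer of the two strings."""
    return levenshtein(presented, transcribed) / max(len(presented),
                                                     len(transcribed))
```

Workload and user experience, by contrast, are typically measured with standardised questionnaires rather than computed from the input stream.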
Ultimately, this research contributes to the broader field of spatial computing by exploring how emerging input technologies can better support interaction in immersive environments. By bridging advances in hardware, AI-driven recognition, and human-centred design, the project seeks to establish design guidelines and practical insights for next-generation text input systems, with potential applications in productivity, education, and collaborative MR workflows.

Fig: MR Stylus for Meta Quest (website)
