Public Defense of a Doctoral Dissertation in Computer Science - Thibaut Septon
Mixed Reality: Design and Implementation of Multimodal User Interfaces for Pervasive Use
The past decade has seen the release of numerous mixed reality headsets. Some are aimed at casual recreational use (for example, the Meta Quest 3), while others are marketed as next-generation computing platforms (for example, the Apple Vision Pro). As these devices become integrated into our daily lives, they are redefining the way we, as human beings, interact with them.
Their nature differs significantly from that of traditional computing devices (e.g., computers or smartphones), introducing multiple paradigm shifts driven by several factors. On the one hand, they incorporate and popularize sensors that enable gaze, hand gestures, and speech as means of interaction, making them effective vectors for the adoption of multimodal user interfaces. On the other hand, their portable nature implies continuous contextual changes that fundamentally alter interface design and redefine human-computer interaction as their use becomes ubiquitous.
To better understand such systems, this research is divided into three areas. First, we immerse users in a deliberately constructed pervasive environment to explore their perceptions and examine their attitudes toward managing intrusive content through manual interventions, thereby highlighting the needs that emerge in such contexts. Second, we explore new communication channels by leveraging metaphors and designing interaction techniques that combine multiple modalities to enable more natural communication, thereby addressing the needs arising from pervasive use. Finally, having designed these multimodal interaction techniques, we examine their technical requirements, review existing tools that support the development of multimodal user interfaces, identify limitations in some of these tools, and address them by introducing a new tool called Ummi.
Through these three complementary perspectives, this thesis addresses six research questions and contributes to the fields of mixed reality and multimodal interaction.
Free event; registration required.