Title: Multimodal Human Machine Interactions in Virtual and Augmented Reality
Authors: Chollet, Gérard
Esposito, Anna
Gentes, Annie
Horain, Patrick
Karam, Walid 
Li, Zhenbo
Pelachaud, Catherine
Perrot, Patrick
Petrovska-Delacretaz, Dijana
Zhou, Dianle
Zouari, Leila
Affiliations: Department of Computer Engineering 
Keywords: Human Machine Interactions (HMI)
Virtual Worlds
Subjects: Speech
Issue Date: 2008
Publisher: Springer
Part of: Multimodal Signals: Cognitive and Algorithmic Issues
Start page: 1
End page: 23
Conference: COST Action 2102 and euCognition International School, Vietri sul Mare, Italy (21-26 April 2008)
Abstract: Virtual worlds are developing rapidly over the Internet. They are visited by avatars and staffed with Embodied Conversational Agents (ECAs). An avatar is a representation of a physical person. Each person controls one or several avatars and usually receives feedback from the virtual world on an audio-visual display. Ideally, all senses should be engaged so that the user feels fully embedded in a virtual world; in practice, sound, vision, and sometimes touch are the available modalities. This paper reviews the technological developments that enable audio-visual interactions in virtual and augmented reality worlds. Emphasis is placed on speech and gesture interfaces, including talking face analysis and synthesis.
Type: Conference Paper
Appears in Collections: Department of Computer Engineering
