Title: Multimodal human machine interactions in virtual and augmented reality
Authors: Chollet, Gérard
Affiliations: Department of Computer Engineering
Keywords: Human Machine Interactions (HMI)
Issue Date: 2008
Publisher: Springer
Part of: Multimodal Signals: Cognitive and Algorithmic Issues
Pages: 1-23
Conference: COST Action 2102 and euCognition International School, Vietri sul Mare, Italy (21-26 April 2008)
Abstract:
Virtual worlds are developing rapidly over the Internet. They are visited by avatars and staffed with Embodied Conversational Agents (ECAs). An avatar is a representation of a physical person. Each person controls one or several avatars and usually receives feedback from the virtual world on an audio-visual display. Ideally, all senses should be used to feel fully embedded in a virtual world. Sound, vision and sometimes touch are the available modalities. This paper reviews the technological developments which enable audio-visual interactions in virtual and augmented reality worlds. Emphasis is placed on speech and gesture interfaces, including talking face analysis and synthesis.
URI: https://scholarhub.balamand.edu.lb/handle/uob/697
Type: Conference Paper
Appears in Collections: Department of Computer Engineering
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.