Please use this identifier to cite or link to this item:
https://scholarhub.balamand.edu.lb/handle/uob/175
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chollet, Gérard | en_US |
dc.contributor.author | Amehraye, Asmaa | en_US |
dc.contributor.author | Razik, Joseph | en_US |
dc.contributor.author | Zouari, Leila | en_US |
dc.contributor.author | Khemiri, Houssemeddine | en_US |
dc.contributor.author | Mokbel, Chafic | en_US |
dc.date.accessioned | 2020-12-23T08:26:34Z | - |
dc.date.available | 2020-12-23T08:26:34Z | - |
dc.date.issued | 2010 | - |
dc.identifier.uri | https://scholarhub.balamand.edu.lb/handle/uob/175 | - |
dc.description.abstract | Human-computer conversations have attracted a great deal of interest, especially in virtual worlds. Research has given rise to spoken dialogue systems by taking advantage of advances in speech recognition, language understanding, and speech synthesis. This work surveys the state of the art of spoken dialogue systems. Current dialogue system technologies and approaches are first introduced, emphasizing the differences between them; speech recognition, speech synthesis, and language understanding are then introduced as complementary and necessary modules. As the development of spoken dialogue systems becomes more complex, it is also necessary to define processes to evaluate their performance. Wizard-of-Oz techniques play an important role in this task: they yield a suitable dialogue corpus, which is necessary to achieve good performance. A description of this technique is given in this work, together with perspectives on multimodal dialogue systems in virtual worlds. | en_US |
dc.format.extent | 20 p. | en_US |
dc.language.iso | eng | en_US |
dc.subject | Speech recognition | en_US |
dc.subject | Virtual World | en_US |
dc.subject | Dialogue System | en_US |
dc.subject.lcsh | Virtual reality | en_US |
dc.subject.lcsh | Automatic speech recognition | en_US |
dc.title | Spoken dialogue in virtual worlds | en_US |
dc.type | Book Chapter | en_US |
dc.contributor.affiliation | Department of Electrical Engineering | en_US |
dc.description.startpage | 423 | en_US |
dc.description.endpage | 443 | en_US |
dc.date.catalogued | 2019-05-27 | - |
dc.description.status | Published | en_US |
dc.identifier.OlibID | 192109 | - |
dc.relation.ispartoftext | A. Esposito, N. Campbell, C. Vogel, A. Hussain & A. Nijholt (Eds), Development of Multimodal Interfaces: Active Listening and Synchrony. Springer. | en_US |
dc.provenance.recordsource | Olib | en_US |
Appears in Collections: | Department of Electrical Engineering |