Please use this identifier to cite or link to this item: https://scholarhub.balamand.edu.lb/handle/uob/175
DC Field	Value	Language
dc.contributor.author	Chollet, Gérard	en_US
dc.contributor.author	Amehraye, Asmaa	en_US
dc.contributor.author	Razik, Joseph	en_US
dc.contributor.author	Zouari, Leila	en_US
dc.contributor.author	Khemiri, Houssemeddine	en_US
dc.contributor.author	Mokbel, Chafic	en_US
dc.date.accessioned	2020-12-23T08:26:34Z	
dc.date.available	2020-12-23T08:26:34Z	
dc.date.issued	2010	
dc.identifier.uri	https://scholarhub.balamand.edu.lb/handle/uob/175	
dc.description.abstract	Human-computer conversations have attracted a great deal of interest, especially in virtual worlds. Research has given rise to spoken dialogue systems that take advantage of advances in speech recognition, language understanding, and speech synthesis. This work surveys the state of the art of spoken dialogue systems. Current dialogue system technologies and approaches are first introduced, emphasizing the differences between them; speech recognition, speech synthesis, and language understanding are then presented as complementary and necessary modules. As the development of spoken dialogue systems becomes more complex, processes must be defined to evaluate their performance. Wizard-of-Oz techniques play an important role in this task: they yield a suitable dialogue corpus, which is necessary to achieve good performance. A description of this technique is given in this work, together with perspectives on multimodal dialogue systems in virtual worlds.	en_US
dc.format.extent	20 p.	en_US
dc.language.iso	eng	en_US
dc.subject	Speech recognition	en_US
dc.subject	Virtual World	en_US
dc.subject	Dialogue System	en_US
dc.subject.lcsh	Virtual reality	en_US
dc.subject.lcsh	Automatic speech recognition	en_US
dc.title	Spoken dialogue in virtual worlds	en_US
dc.type	Book Chapter	en_US
dc.contributor.affiliation	Department of Electrical Engineering	en_US
dc.description.startpage	423	en_US
dc.description.endpage	443	en_US
dc.date.catalogued	2019-05-27	
dc.description.status	Published	en_US
dc.identifier.OlibID	192109	
dc.relation.ispartoftext	A. Esposito, N. Campbell, C. Vogel, A. Hussain & A. Nijholt (Eds.), Development of Multimodal Interfaces: Active Listening and Synchrony. Springer.	en_US
dc.provenance.recordsource	Olib	en_US
Appears in Collections:Department of Electrical Engineering


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.