Please use this identifier to cite or link to this item: https://scholarhub.balamand.edu.lb/handle/uob/855
Title: A text-dependent speaker-recognition system
Authors: Ishac, Dany
Abche, Antoine 
Karam, Elie 
Callens, Dorothée
Affiliations: Department of Electrical Engineering
Keywords: Feature extraction
Fourier transforms
Piezoelectric transducers
Speaker recognition
Subjects: Signal detection
Issue Date: 2017
Publisher: IEEE
Part of: 2017 IEEE International Instrumentation and Measurement Technology Conference (I2MTC)
Start page: 1
End page: 6
Conference: IEEE International Instrumentation and Measurement Technology Conference (I2MTC) (22-25 May 2017 : Turin, Italy) 
Abstract: 
In this work, a voice recognition approach is developed and presented. It is based on acquiring the vibration signal of a person's vocal cords using a piezoelectric transducer attached to a collar wrapped around the neck. Recognition therefore relies on the pressure of the individuals' vocal cord vibrations rather than on their audible voices. Because the collected signal is non-stationary, the analysis applies the Short-Time Fourier Transform (STFT) to decompose the signal into its frequency components, which correspond to the vocal fold vibration frequencies (100-1000 Hz). Features, in the form of frequency intervals, are extracted from the resulting spectrogram, and a 1-D vector is formed for identification purposes. A person is then identified using the correlation coefficient as a similarity measure. The results show that a high recognition rate is achieved and that the performance compares favorably with many existing techniques in the literature.
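The abstract outlines a simple pipeline: STFT decomposition, spectrogram features restricted to the 100-1000 Hz vocal-fold band, a 1-D feature vector, and correlation-based matching. The following is a minimal Python sketch of that pipeline under stated assumptions; the sampling rate, window length, and the time-averaging used to form the 1-D vector are illustrative choices, not the authors' exact implementation.

```python
# Minimal sketch of the described pipeline, assuming a piezoelectric
# signal sampled at 8 kHz. Window length and time-averaging are
# illustrative assumptions, not the paper's exact parameters.
import numpy as np
from scipy.signal import stft

FS = 8000  # assumed sampling rate (Hz)

def feature_vector(signal, fs=FS, nperseg=256):
    """Form a 1-D feature vector from the 100-1000 Hz band of the spectrogram."""
    f, t, Z = stft(signal, fs=fs, nperseg=nperseg)
    band = (f >= 100) & (f <= 1000)      # vocal-fold vibration range
    magnitudes = np.abs(Z[band, :])      # band-limited magnitude spectrogram
    return magnitudes.mean(axis=1)       # average over time -> 1-D vector

def similarity(sig_a, sig_b):
    """Correlation coefficient between two feature vectors (identification score)."""
    va, vb = feature_vector(sig_a), feature_vector(sig_b)
    return np.corrcoef(va, vb)[0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(FS)          # placeholder 1-second recordings
    y = x + 0.1 * rng.standard_normal(FS)
    print(f"similarity: {similarity(x, y):.3f}")
```

In an identification setting, the enrolled speaker whose stored feature vector yields the highest correlation with the test vector would be selected.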
URI: https://scholarhub.balamand.edu.lb/handle/uob/855
Type: Conference Paper
Appears in Collections:Department of Electrical Engineering


