Title: Transcription of Spanish historical handwritten documents with deep neural networks
Authors: Granell, Emilio
Chammas, Edgar
Likforman-Sulem, Laurence
Martínez-Hinarejos, Carlos-D.
Mokbel, Chafic 
Cîrstea, Bogdan-Ionut
Affiliations: Department of Electrical Engineering 
Keywords: Historical handwritten transcription
Out-of-vocabulary word recognition
Character-level language model
Word structure retrieval
Issue Date: 2018
Part of: Journal of Imaging
Volume: 4
Issue: 1
Abstract: The digitization of historical handwritten document images is important for the preservation of cultural heritage. Moreover, the transcription of the text images obtained from digitization is necessary to provide efficient access to the information these documents contain. Handwritten Text Recognition (HTR), which allows transcriptions to be obtained from text images, has become an important research topic in image and computational language processing. State-of-the-art HTR systems are, however, far from perfect. One difficulty is that they have to cope with image noise and handwriting variability. Another difficulty is the presence of a large number of Out-Of-Vocabulary (OOV) words in ancient historical texts. A solution to this problem is to use external lexical resources, but such resources might be scarce or unavailable given the nature and the age of such documents. This work proposes a solution that avoids this limitation: a powerful optical recognition system that copes with image noise and variability is combined with a language model based on sub-lexical units that models OOV words. Such a language modeling approach reduces the size of the lexicon while increasing the lexicon coverage. Experiments are first conducted on the publicly available Rodrigo dataset, which contains the digitization of an ancient Spanish manuscript, with a recognizer based on Hidden Markov Models (HMMs). They show that sub-lexical units outperform word units in terms of Word Error Rate (WER), Character Error Rate (CER) and OOV word accuracy rate. This approach is then applied to deep net classifiers, namely Bi-directional Long Short-Term Memory networks (BLSTMs) and Convolutional Recurrent Neural Networks (CRNNs). Results show that CRNNs outperform HMMs and BLSTMs, reaching the lowest WER and CER for this image dataset and significantly improving OOV recognition.
DOI: 10.3390/jimaging4010015
Type: Journal Article
Appears in Collections: Department of Electrical Engineering
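
The abstract describes pairing an optical recognizer with a character-level (sub-lexical) language model, and reports that a CRNN optical model gives the best WER and CER. The following is a minimal, illustrative sketch (not the authors' implementation) of such a CRNN trained with CTC loss over character targets in PyTorch; the layer sizes, the 80-character alphabet, and the input dimensions are assumptions made for the example.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Convolutional feature extractor followed by bidirectional LSTMs
    and a per-frame character classifier (plus the CTC blank)."""
    def __init__(self, num_chars: int, img_height: int = 64):
        super().__init__()
        # Convolutional front-end over text-line images of shape (B, 1, H, W).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        feat_height = img_height // 4
        # Bidirectional LSTMs read the feature frames along the width axis.
        self.rnn = nn.LSTM(64 * feat_height, 128, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 128, num_chars + 1)  # +1 for the CTC blank

    def forward(self, x):
        f = self.cnn(x)                                  # (B, C, H', W')
        b, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)   # frames along width
        out, _ = self.rnn(f)
        return self.fc(out).log_softmax(dim=-1)          # (B, W', chars + blank)

# CTC training step on a dummy batch; character-level targets require no
# word lexicon at this stage, which is what makes OOV words recoverable.
model = CRNN(num_chars=80)
ctc = nn.CTCLoss(blank=80, zero_infinity=True)
images = torch.randn(2, 1, 64, 256)
logits = model(images).permute(1, 0, 2)                  # (T, B, C) for CTCLoss
targets = torch.randint(0, 80, (2, 20))
input_lens = torch.full((2,), logits.size(0), dtype=torch.long)
target_lens = torch.full((2,), 20, dtype=torch.long)
loss = ctc(logits, targets, input_lens, target_lens)
loss.backward()
```

In the approach described above, the character (or sub-lexical) language model would then be applied during decoding to rescore these per-frame character probabilities, rather than constraining hypotheses to a closed word vocabulary.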