Investigating combinations of feature extraction and classification for improved image-based multimodal biometric systems at the feature level

dc.contributor.advisor: Bradshaw, Karen
dc.contributor.advisor: Connan, James
dc.contributor.author: Brown, Dane L
dc.date.accessioned: 2026-03-04T15:33:28Z
dc.date.issued: 2018
dc.description.abstract: Multimodal biometrics has become a popular means of overcoming the limitations of unimodal biometric systems. However, the rich information particular to the feature level is of a complex nature, and leveraging its potential without overfitting a classifier is not well studied. This research investigates feature-classifier combinations on the fingerprint, face, palmprint, and iris modalities to effectively fuse their feature vectors for a complementary result. The effects of different feature-classifier combinations are thus isolated to identify novel or improved algorithms. A new face segmentation algorithm is shown to increase consistency in nominal and extreme scenarios. Moreover, two novel feature extraction techniques demonstrate better adaptation to dynamic lighting conditions while reducing feature dimensionality to the benefit of classifiers. A comprehensive set of unimodal experiments is carried out to evaluate both verification and identification performance on a variety of datasets, using four classifiers, namely Eigen, Fisher, Local Binary Pattern Histogram, and linear Support Vector Machine, on various feature extraction methods. The proposed algorithms are shown to outperform those in the vast majority of related studies when using the same dataset under the same test conditions. In the unimodal comparisons presented, the proposed approaches outperform existing systems even when given a handicap, such as fewer training samples or data with a greater number of classes. A separate comprehensive set of experiments on feature fusion shows that combining modality data provides a substantial increase in accuracy, with only a few exceptions that occur when differences in the image data quality of two modalities are substantial. However, when two poor-quality datasets are fused, noticeable gains in recognition performance are realized when using the novel feature extraction approach.
Finally, feature-fusion guidelines are proposed to provide the necessary insight to leverage the rich information effectively when fusing multiple biometric modalities at the feature level. These guidelines serve as the foundation to better understand and construct biometric systems that are effective in a variety of applications.
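The feature-level fusion the abstract describes can be illustrated with a minimal sketch (not the thesis code): each modality's feature vector is min-max normalised so that no modality dominates the scale, then the vectors are concatenated into one fused vector for a downstream classifier such as a linear SVM. The function names and toy vectors below are purely illustrative.

```python
def min_max_normalise(vec):
    """Scale a feature vector to the [0, 1] range."""
    lo, hi = min(vec), max(vec)
    if hi == lo:                      # constant vector: map to all zeros
        return [0.0 for _ in vec]
    return [(v - lo) / (hi - lo) for v in vec]

def fuse_features(*vectors):
    """Normalise each modality's vector, then concatenate them."""
    fused = []
    for vec in vectors:
        fused.extend(min_max_normalise(vec))
    return fused

# Toy example: a face feature vector fused with a fingerprint one.
face = [0.2, 4.0, 1.1]
fingerprint = [10.0, 30.0]
fused = fuse_features(face, fingerprint)
print(len(fused))  # fused dimensionality is the sum of the parts: 5
```

Note that the fused dimensionality grows with each added modality, which is why the abstract highlights dimensionality reduction in the feature extraction stage as a benefit to the classifier.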
dc.description.degree: Doctoral thesis
dc.description.degree: PhD
dc.format.extent: 264 pages
dc.format.mimetype: application/pdf
dc.identifier.other: http://hdl.handle.net/10962/63470
dc.identifier.uri: https://researchrepository.ru.ac.za/handle/123456789/8190
dc.language: English
dc.publisher: Rhodes University, Faculty of Science, Department of Computer Science
dc.rights: Brown, Dane
dc.subject: Uncatalogued
dc.title: Investigating combinations of feature extraction and classification for improved image-based multimodal biometric systems at the feature level
dc.type: Academic thesis

Files

Original bundle

Name: Investigating_combinations_of_feature_extraction_a_vital_28414.pdf
Size: 9.23 MB
Format: Adobe Portable Document Format