Pattern Recognition and Classification: An Introduction
The use of pattern recognition and classification is fundamental to many of the automated electronic systems in use today. However, despite a number of notable books in the field, the subject remains very challenging, especially for the beginner. Pattern Recognition and Classificati...
| Main Author: | |
|---|---|
| Corporate Author: | |
| Format: | Electronic |
| Language: | English |
| Published: | New York, NY : Springer New York : Imprint: Springer, 2013. |
| Subjects: | |
| Online Access: | https://ezaccess.library.uitm.edu.my/login?url=http://dx.doi.org/10.1007/978-1-4614-5323-9 |
Table of Contents:
- Preface
- Acknowledgments
- Chapter 1 Introduction
- 1.1 Overview
- 1.2 Classification
- 1.3 Organization of the Book
- Bibliography
- Exercises
- Chapter 2 Classification
- 2.1 The Classification Process
- 2.2 Features
- 2.3 Training and Learning
- 2.4 Supervised Learning and Algorithm Selection
- 2.5 Approaches to Classification
- 2.6 Examples
- 2.6.1 Classification by Shape
- 2.6.2 Classification by Size
- 2.6.3 More Examples
- 2.6.4 Classification of Letters
- Bibliography
- Exercises
- Chapter 3 Non-Metric Methods
- 3.1 Introduction
- 3.2 Decision Tree Classifier
- 3.2.1 Information, Entropy and Impurity
- 3.2.2 Information Gain
- 3.2.3 Decision Tree Issues
- 3.2.4 Strengths and Weaknesses
- 3.3 Rule-Based Classifier
- 3.4 Other Methods
- Bibliography
- Exercises
- Chapter 4 Statistical Pattern Recognition
- 4.1 Measured Data and Measurement Errors
- 4.2 Probability Theory
- 4.2.1 Simple Probability Theory
- 4.2.2 Conditional Probability and Bayes' Rule
- 4.2.3 Naïve Bayes Classifier
- 4.3 Continuous Random Variables
- 4.3.1 The Multivariate Gaussian
- 4.3.2 The Covariance Matrix
- 4.3.3 The Mahalanobis Distance
- Bibliography
- Exercises
- Chapter 5 Supervised Learning
- 5.1 Parametric and Non-Parametric Learning
- 5.2 Parametric Learning
- 5.2.1 Bayesian Decision Theory
- 5.2.2 Discriminant Functions and Decision Boundaries
- 5.2.3 MAP (Maximum A Posteriori) Estimator
- Bibliography
- Exercises
- Chapter 6 Non-Parametric Learning
- 6.1 Histogram Estimator and Parzen Windows
- 6.2 k-Nearest Neighbor (k-NN) Classification
- 6.3 Artificial Neural Networks (ANNs)
- 6.4 Kernel Machines
- Bibliography
- Exercises
- Chapter 7 Feature Extraction and Selection
- 7.1 Reducing Dimensionality
- 7.1.1 Pre-Processing
- 7.2 Feature Selection
- 7.2.1 Inter/Intra-Class Distance
- 7.2.2 Subset Selection
- 7.3 Feature Extraction
- 7.3.1 Principal Component Analysis (PCA)
- 7.3.2 Linear Discriminant Analysis (LDA)
- Bibliography
- Exercises
- Chapter 8 Unsupervised Learning
- 8.1 Clustering
- 8.2 k-Means Clustering
- 8.2.1 Fuzzy c-Means Clustering
- 8.3 (Agglomerative) Hierarchical Clustering
- Bibliography
- Exercises
- Chapter 9 Estimating and Comparing Classifiers
- 9.1 Comparing Classifiers and the No Free Lunch Theorem
- 9.1.2 Bias and Variance
- 9.2 Cross-Validation and Resampling Methods
- 9.2.1 The Holdout Method
- 9.2.2 k-Fold Cross-Validation
- 9.2.3 Bootstrap
- 9.3 Measuring Classifier Performance
- 9.4 Comparing Classifiers
- 9.4.1 ROC Curves
- 9.4.2 McNemar's Test
- 9.4.3 Other Statistical Tests
- 9.4.4 The Classification Toolbox
- 9.5 Combining Classifiers
- Bibliography
- Chapter 10 Projects
- 10.1 Retinal Tortuosity as an Indicator of Disease
- 10.2 Segmentation by Texture
- 10.3 Biometric Systems
- 10.3.1 Fingerprint Recognition
- 10.3.2 Face Recognition
- Bibliography
- Index.