A New Roadmap for Evaluating Descriptive Handwritten Answer Script
- Publisher:
- World Scientific
- Publication Type:
- Chapter
- Citation:
- Frontiers in Pattern Recognition and Artificial Intelligence, 2019, pp. 83-96
- Issue Date:
- 2019-06-12
Filename | Description | Size
---|---|---
20040566_9796182580005671.pdf | Published version | 18.32 MB
This item is closed access and not available.
In computational pedagogy, a relatively simple Optical Character Recognition (OCR) system can robustly evaluate objective answer types automatically, without human intervention, saving time, cost, and man-hours. The natural next question is whether an automated system can also evaluate descriptive handwritten answers. Experience shows that human evaluation of long examination responses is quite subjective and prone to problems such as evaluator inexperience, negligence, and lack of uniformity when several evaluators are involved. In this work, we present a roadmap for an automated vision system that evaluates descriptive answers by extracting relevant words and relating them according to assigned weights. We introduce context features to handle variations in the words written by different examinees when estimating the final score.
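The scoring idea sketched in the abstract (match extracted words against a weighted keyword list, with context features absorbing wording differences between writers) could be illustrated roughly as follows. This is a hypothetical minimal sketch, not the authors' actual method: the function name, the weight table, and the use of a simple synonym map as a stand-in for "context features" are all illustrative assumptions.

```python
def score_answer(answer, keyword_weights, synonyms=None):
    """Score a transcribed answer by summing weights of matched keywords.

    A synonym map stands in for the 'context features' that absorb
    variations in how different writers phrase the same concept.
    Returns a fraction of the total attainable weight (0.0 to 1.0).
    """
    synonyms = synonyms or {}
    tokens = set(answer.lower().split())
    total = sum(keyword_weights.values())
    matched = 0.0
    for keyword, weight in keyword_weights.items():
        # A keyword counts as present if it or any accepted variant appears.
        variants = {keyword} | set(synonyms.get(keyword, []))
        if tokens & variants:
            matched += weight
    return matched / total if total else 0.0

# Illustrative weights for a hypothetical biology question.
weights = {"photosynthesis": 3.0, "chlorophyll": 2.0, "sunlight": 1.0}
syns = {"sunlight": ["light", "sun"]}
print(score_answer("plants use chlorophyll and light", weights, syns))  # 0.5
```

In practice the input would be the OCR transcription of the handwritten script, and the weights and variant sets would come from the model rather than a hand-written table.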