Comparing automatically detected reflective texts with human judgements

Publication Type:
Conference Proceeding
Citation:
CEUR Workshop Proceedings, 2012, vol. 931, pp. 101-116
Issue Date:
2012-01-01
This paper reports the descriptive results of an experiment comparing automatically detected reflective and non-reflective texts against human judgements. Based on the theory of reflective writing assessment and its operationalisation, five elements of reflection were defined. For each element of reflection, a set of indicators was developed that automatically annotates texts for reflection, parameterised with authoritative texts. From a large blog corpus, 149 texts were retrieved that had been annotated as either reflective or non-reflective. An online survey was then used to gather human judgements for these texts. These two data sets were used to compare the quality of the reflection detection algorithm against the human judgements. The analysis indicates the expected difference between reflective and non-reflective texts.
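The indicator-based annotation described above can be sketched as follows. This is a minimal illustration only: the element names, keyword lists, and threshold are hypothetical assumptions for demonstration, not the paper's actual five elements or indicator sets, which the abstract does not enumerate.

```python
# Hypothetical keyword indicators for two illustrative "elements of
# reflection" (names and phrase lists are assumptions, not the
# paper's operationalisation).
INDICATORS = {
    "self_reference": ["i felt", "i realised", "i learned", "my own"],
    "critical_stance": ["however", "on reflection", "in hindsight", "i wonder"],
}

def annotate(text, threshold=2):
    """Label a text 'reflective' if at least `threshold` indicator
    elements fire; otherwise label it 'not-reflective'."""
    lowered = text.lower()
    fired = [
        element
        for element, phrases in INDICATORS.items()
        if any(phrase in lowered for phrase in phrases)
    ]
    label = "reflective" if len(fired) >= threshold else "not-reflective"
    return label, fired
```

In the experiment, labels produced this way for corpus texts would then be compared with the labels gathered from the human-judgement survey.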