Diane Litman
2020
Automated Topical Component Extraction Using Neural Network Attention Scores from Source-based Essay Scoring
Haoran Zhang | Diane Litman
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
While automated essay scoring (AES) can reliably grade essays at scale, automated writing evaluation (AWE) additionally provides formative feedback to guide essay revision. However, neural AES models typically do not expose feature representations useful for supporting AWE. This paper presents a method for linking AWE and neural AES by extracting Topical Components (TCs), which represent evidence from a source text, from the intermediate output of attention layers. We evaluate performance using a feature-based AES that requires TCs. Results show that performance is comparable whether automatically or manually constructed TCs are used for 1) representing essays as rubric-based features and 2) grading essays.
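As a rough illustration of the extraction idea, the sketch below ranks source-text tokens by the attention mass a scoring model places on them and keeps the top-ranked tokens as candidate TCs. It is a hypothetical, self-contained example: the function name, toy tokens, and scores stand in for a trained model's intermediate attention output; this is not the paper's implementation.

```python
# Minimal sketch: rank source-text tokens by average attention weight
# across essays and keep the top-k as candidate Topical Components.
# All names and the toy scores below are hypothetical stand-ins.
import numpy as np

def extract_topical_components(tokens, attention_scores, top_k=5):
    """Return the top_k tokens by mean attention weight across essays."""
    mean_scores = attention_scores.mean(axis=0)      # average over essays
    ranked = np.argsort(mean_scores)[::-1][:top_k]   # highest attention first
    return [tokens[i] for i in ranked]

# Toy example: attention mass that 3 essays place on 6 source-text tokens.
tokens = ["winter", "resort", "snow", "the", "of", "tourism"]
scores = np.array([
    [0.30, 0.25, 0.20, 0.05, 0.05, 0.15],
    [0.28, 0.22, 0.25, 0.04, 0.06, 0.15],
    [0.35, 0.20, 0.18, 0.05, 0.04, 0.18],
])

print(extract_topical_components(tokens, scores, top_k=3))
# -> ['winter', 'resort', 'snow']  (content words attract the most attention)
```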
The Discussion Tracker Corpus of Collaborative Argumentation
Christopher Olshefski | Luca Lugini | Ravneet Singh | Diane Litman | Amanda Godley
Proceedings of The 12th Language Resources and Evaluation Conference
Although NLP research on argument mining has advanced considerably in recent years, most studies draw on corpora of asynchronous and written texts, often produced by individuals. Few published corpora of synchronous, multi-party argumentation are available. The Discussion Tracker corpus, collected in high school English classes, is an annotated dataset of transcripts of spoken, multi-party argumentation. The corpus consists of 29 multi-party discussions of English literature transcribed from 985 minutes of audio. The transcripts were annotated for three dimensions of collaborative argumentation: argument moves (claims, evidence, and explanations), specificity (low, medium, high), and collaboration (e.g., extensions of and disagreements about others’ ideas). In addition to providing descriptive statistics on the corpus, we provide performance benchmarks and associated code for predicting each dimension separately, illustrate the use of the multiple annotations in the corpus to improve performance via multi-task learning, and finally discuss other ways the corpus might be used to further NLP research.
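The multi-task setup the corpus supports can be sketched as a shared encoder with one classification head per annotated dimension. The model below is a hypothetical minimal example, not the released benchmark code; the 3 argument-move classes and 3 specificity levels follow the annotation scheme above, while the collaboration label count is a placeholder assumption.

```python
# Minimal multi-task sketch: one shared turn encoder, three heads,
# one per annotation dimension. Joint loss sums per-task cross-entropies
# so the shared encoder learns from all three dimensions at once.
import torch
import torch.nn as nn

class DiscussionTaggerMTL(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        # One head per annotation dimension; all share the encoder.
        self.arg_move = nn.Linear(hidden, 3)       # claim / evidence / explanation
        self.specificity = nn.Linear(hidden, 3)    # low / medium / high
        self.collaboration = nn.Linear(hidden, 4)  # placeholder label count

    def forward(self, token_ids):
        embedded = self.embed(token_ids)
        _, (h, _) = self.encoder(embedded)         # final hidden state per turn
        rep = h[-1]
        return self.arg_move(rep), self.specificity(rep), self.collaboration(rep)

model = DiscussionTaggerMTL()
tokens = torch.randint(0, 5000, (8, 20))           # batch of 8 turns, 20 tokens
labels = [torch.randint(0, n, (8,)) for n in (3, 3, 4)]
logits = model(tokens)
loss = sum(nn.functional.cross_entropy(lg, lb) for lg, lb in zip(logits, labels))
loss.backward()
```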
Annotation and Classification of Evidence and Reasoning Revisions in Argumentative Writing
Tazin Afrin | Elaine Lin Wang | Diane Litman | Lindsay Clare Matsumura | Richard Correnti
Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications
Automated writing evaluation systems can improve students’ writing insofar as students attend to the feedback provided and revise their essay drafts in ways aligned with such feedback. Existing research on revision of argumentative writing in such systems, however, has focused on the types of revisions students make (e.g., surface vs. content) rather than the extent to which revisions actually respond to the feedback provided and improve the essay. We introduce an annotation scheme to capture the nature of sentence-level revisions of evidence use and reasoning (the ‘RER’ scheme) and apply it to 5th- and 6th-grade students’ argumentative essays. We show that reliable manual annotation can be achieved and that revision annotations correlate with a holistic assessment of essay improvement in line with the feedback provided. Furthermore, we explore the feasibility of automatically classifying revisions according to our scheme.
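As a hedged illustration of the final feasibility step, the toy classifier below maps an aligned (original, revised) sentence pair to a revision category using simple surface features. The features, example pairs, and labels are invented placeholders, not the paper's feature set or its actual RER categories.

```python
# Toy sketch: classify an aligned (draft, revision) sentence pair into a
# revision category. Hypothetical features and labels, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each example concatenates the original and revised sentence so the model
# can pick up on added evidence or reasoning cues.
pairs = [
    "The author shows pollution rose. ||| The author shows pollution rose by 40% in the study.",
    "This proves my point. ||| This proves my point because the data covers ten years.",
]
labels = ["evidence-revision", "reasoning-revision"]   # placeholder labels

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(pairs, labels)
print(clf.predict(["I think so. ||| I think so because the article cites experts."]))
```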