Panel: Context-Dependent Evaluation of Tools for NL RE Tasks: Recall vs. Precision, and beyond
- Publication Type: Conference Proceeding
- Citation: Proceedings - 2017 IEEE 25th International Requirements Engineering Conference, RE 2017, 2017, pp. 570-573
- Issue Date: 2017-09-22
Closed Access
Filename | Description | Size
---|---|---
08049186.pdf | Published version | 148.89 kB
This item is closed access and not available.
© 2017 IEEE.

Context and Motivation: Natural language processing has been used since the 1980s to construct tools for performing natural language (NL) requirements engineering (RE) tasks. The RE field has often adopted information retrieval (IR) algorithms for use in implementing these NL RE tools.

Problem: Traditionally, the methods for evaluating an NL RE tool have been inherited from the IR field without adapting them to the requirements of the RE context in which the NL RE tool is used.

Principal Ideas: This panel discusses the problem and considers the evaluation of tools for a number of NL RE tasks in a number of contexts.

Contribution: The discussion is aimed at helping the RE field begin to consistently evaluate each of its tools according to the requirements of the tool's task.