Quality in Engineering Education Research: arriving at consensus

Publisher:
Swinburne University of Technology
Publication Type:
Conference Proceeding
Citation:
Proceedings of the 23rd Annual Conference for the Australasian Association for Engineering Education - The Profession of Engineering Education: Advancing Teaching, Research and Careers, 2012, pp. 1 - 8
Issue Date:
2012-01
BACKGROUND
Arguably the most important opportunity to acquire the standards and norms of any discipline and to develop researchers' judgement is the peer review process, and this is probably particularly true in an emerging discipline such as engineering education. Ironically, research in many disciplines has established that the review process is deeply flawed in conception as well as (often) in operation, with the American Medical Association asserting that if peer review were a drug it would never be allowed onto the market. And yet university ranking systems for published research, on which all of our careers depend, rely on this flawed instrument. With this in mind we have been examining how members of our community (AAEE) give and respond to reviews, with a view to making the process more useful.

PURPOSE
Reviewing is an inexact and subjective process, so it would be misguided to think that inter-rater reliability, or some notion of objective "truth", may somehow be attained. Instead, we ask what reviewers need to do to provide helpful advice that can shape norms and standards in the field.

DESIGN/METHOD
Previous work (Willey et al., 2011; Jolly et al., 2011) suggested a need for well-expressed criteria that would guide authors on what a publication should contain and guide reviewers in how judgements should be made. With the help of a Delphi panel made up of 12 international researchers in the field, a set of criteria was developed. Volunteers were then sought to apply the criteria to sample texts in an online tool (SPARKplus). Individual interviews with some respondents were then used to clarify participants' understandings and goals.

RESULTS
The criteria developed by the Delphi panel are those being used for this conference. The members of the panel particularly approved the "comments" accompanying the criteria themselves, which were intended primarily as guidance to authors about acceptable practice. Anecdotal evidence to date suggests that authors find the criteria and comments clarify expectations, but the matter of standards remains. The use of the criteria in the second stage, and the analysis of the discussions in particular, will produce information both about present expectations and practices and about visions of future growth and improvement.

CONCLUSIONS
Our analysis of the stage 2 data will aim to describe consensus on research quality and on how to use the peer review process to help attain it, in the form of recommendations for future application of the criteria in journals as well as at conferences. Our international experts from the Delphi panel have expressed an interest in being involved in stage 2 and in being informed about the outcomes, so the potential also exists for this community to develop best-practice peer review in engineering education through the sharing of their expertise in this way.