Improving diagnostic reliability in Chinese medicine
The aim of this thesis was to assess current levels of inter-rater diagnostic agreement in the Chinese Medicine (CM) profession and to propose strategies that might improve those levels. Researchers have generally used inappropriate statistical constructs to evaluate inter-rater agreement; here, a more appropriate weighted chance-removed statistic is employed to determine inter-rater diagnostic agreement with ordinal data. Further, the largest number of raters used in any previous study was three, and no study was located that assessed inter-rater diagnostic agreement with subjects drawn from an open population. This represents a gap in the understanding of CM inter-rater agreement in a clinical setting. The Diagnostic System of Oriental Medicine (DSOM) format was identified as suitable for use in CM diagnosis by practitioners; this format also enables appropriate statistics to be employed. An experiment was performed in which five experienced CM practitioners diagnosed 42 subjects using the DSOM as the diagnostic format. Each of the sixteen diagnostic descriptors used to describe a diagnosis with the DSOM was scored 0-5. Substantial chance-removed weighted agreement of 0.60 ±0.02 was found. The descriptors of the DSOM format were then edited, after examination of 60,000 clinical records at the UTS CM outpatient clinic, to arrive at the Chinese Medicine Diagnostic Descriptor (CMDD) format. Conventional CM diagnostic formats can be mapped directly to the CMDD, making this system as subtle as conventional systems. A second experiment was performed to compare inter-rater agreement under the CMDD and contemporary CM diagnostic formats. Two groups of CM practitioners, one using the CMDD and the other the CM diagnostic formats, diagnosed 35 subjects over two days. Each of the fifteen CMDD diagnostic descriptors was scored 0-5, while three selected CM patterns were scored 1-5. The subjects were again drawn from an open population.
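The abstract does not specify which weighted chance-removed statistic was used; a common choice for ordinal 0-5 scores of this kind is quadratic-weighted Cohen's kappa, averaged over rater pairs when more than two raters are involved. The sketch below is an illustration of that family of statistics under those assumptions, not the thesis's own definition; the function names are hypothetical.

```python
from itertools import combinations

def quadratic_weighted_kappa(a, b, n_categories=6):
    """Quadratic-weighted Cohen's kappa for two raters' ordinal scores 0..n_categories-1."""
    n = len(a)
    # Observed confusion matrix and marginal totals.
    obs = [[0.0] * n_categories for _ in range(n_categories)]
    for x, y in zip(a, b):
        obs[x][y] += 1
    row = [sum(obs[i]) for i in range(n_categories)]
    col = [sum(obs[i][j] for i in range(n_categories)) for j in range(n_categories)]
    # Quadratic disagreement weights: w_ij = (i - j)^2 / (K - 1)^2.
    denom = (n_categories - 1) ** 2
    w_obs = w_exp = 0.0
    for i in range(n_categories):
        for j in range(n_categories):
            w = (i - j) ** 2 / denom
            w_obs += w * obs[i][j]                  # observed weighted disagreement
            w_exp += w * row[i] * col[j] / n        # chance-expected weighted disagreement
    return 1.0 - w_obs / w_exp

def mean_pairwise_kappa(ratings):
    """Average quadratic-weighted kappa over all pairs of raters.
    `ratings` is a list of score lists, one per rater, aligned by subject."""
    pairs = list(combinations(ratings, 2))
    return sum(quadratic_weighted_kappa(a, b) for a, b in pairs) / len(pairs)
```

The statistic is 1 for perfect agreement, 0 for chance-level agreement, and negative when raters disagree more than chance would predict, which is what makes it a "chance-removed" measure in the sense used above.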
A weighted simple agreement of only 19% was found between practitioners who employed the CM format. This is not an adequate foundation for the application or assessment of treatment. Further, chance-removed statistics and error estimates cannot be evaluated when the CM format is used with unrestricted diagnostic possibilities. The possibility of bias in the raters' scores was also investigated; no significant bias was found. These findings can guide the adoption of appropriate rater training to improve agreement. Guiding questionnaires for each descriptor, used alongside the CMDD format, would also appear to hold potential to improve agreement further. The CMDD clearly facilitates superior inter-rater agreement compared with the CM format: raters using the CMDD achieved substantial chance-removed agreements of 0.67 ±0.03 on both days. Mapping the diagnoses made by raters using the CM format to the CMDD format enabled chance-removed inter-rater agreements of 0.65 ±0.03 on day one and 0.73 ±0.03 on day two to be calculated, significantly larger than under the CM format. This suggests that the structure of the CMDD allows the correct inter-rater agreement to be calculated, something very difficult to achieve with the contemporary CM format. It is therefore suggested that the CMDD format be used in contemporary clinical and research settings, and it is also proposed that it be incorporated into the internationally recognised CONSORT and STRICTA research guidelines.