Multimodal fusion for anticipating human decision performance.
- Publisher: NATURE PORTFOLIO
- Publication Type: Journal Article
- Citation: Sci Rep, 2024, 14, (1), pp. 13217
- Issue Date: 2024-06-08
This item is open access.
Full metadata record
Field | Value | Language |
---|---|---|
dc.contributor.author | Tran, X-T | |
dc.contributor.author | Do, T | |
dc.contributor.author | Pal, NR | |
dc.contributor.author | Jung, T-P | |
dc.contributor.author | Lin, C-T | |
dc.date.accessioned | 2025-01-13T03:26:04Z | |
dc.date.available | 2024-05-30 | |
dc.date.available | 2025-01-13T03:26:04Z | |
dc.date.issued | 2024-06-08 | |
dc.identifier.citation | Sci Rep, 2024, 14, (1), pp. 13217 | |
dc.identifier.issn | 2045-2322 | |
dc.identifier.uri | http://hdl.handle.net/10453/183351 | |
dc.description.abstract | Anticipating human decisions while performing complex tasks remains a formidable challenge. This study proposes a multimodal machine-learning approach that leverages image features and electroencephalography (EEG) data to predict human response correctness in a demanding visual searching task. Notably, we extract a novel set of image features pertaining to object relationships using the Segment Anything Model (SAM), which enhances prediction accuracy compared to traditional features. Additionally, our approach effectively utilizes a combination of EEG signals and image features to streamline the feature set required for the Random Forest Classifier (RFC) while maintaining high accuracy. The findings of this research hold substantial potential for developing advanced fault alert systems, particularly in critical decision-making environments such as the medical and defence sectors. | |
dc.format | Electronic | |
dc.language | eng | |
dc.publisher | NATURE PORTFOLIO | |
dc.relation | http://purl.org/au-research/grants/arc/DP210101093 | |
dc.relation | United States Department of the Navy N629091912058 | |
dc.relation | http://purl.org/au-research/grants/arc/DP220100803 | |
dc.relation | http://purl.org/au-research/grants/nhmrc/APP2021183 | |
dc.relation.ispartof | Sci Rep | |
dc.relation.isbasedon | 10.1038/s41598-024-63651-2 | |
dc.rights | info:eu-repo/semantics/openAccess | |
dc.subject.mesh | Humans | |
dc.subject.mesh | Electroencephalography | |
dc.subject.mesh | Decision Making | |
dc.subject.mesh | Machine Learning | |
dc.subject.mesh | Male | |
dc.subject.mesh | Female | |
dc.subject.mesh | Adult | |
dc.subject.mesh | Young Adult | |
dc.subject.mesh | Algorithms | |
dc.title | Multimodal fusion for anticipating human decision performance. | |
dc.type | Journal Article | |
utslib.citation.volume | 14 | |
utslib.location.activity | England | |
pubs.organisational-group | University of Technology Sydney | |
pubs.organisational-group | University of Technology Sydney/Faculty of Engineering and Information Technology | |
pubs.organisational-group | University of Technology Sydney/Faculty of Engineering and Information Technology/School of Computer Science | |
pubs.organisational-group | University of Technology Sydney/UTS Groups | |
pubs.organisational-group | University of Technology Sydney/UTS Groups/Australian Artificial Intelligence Institute (AAII) | |
pubs.organisational-group | University of Technology Sydney/UTS Groups/Centre for Technology in Water and Wastewater (CTWW) | |
utslib.copyright.status | open_access | * |
dc.rights.license | This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/ | |
dc.date.updated | 2025-01-13T03:26:02Z | |
pubs.issue | 1 | |
pubs.publication-status | Published online | |
pubs.volume | 14 | |
utslib.citation.issue | 1 |
Abstract:
Anticipating human decisions while performing complex tasks remains a formidable challenge. This study proposes a multimodal machine-learning approach that leverages image features and electroencephalography (EEG) data to predict human response correctness in a demanding visual searching task. Notably, we extract a novel set of image features pertaining to object relationships using the Segment Anything Model (SAM), which enhances prediction accuracy compared to traditional features. Additionally, our approach effectively utilizes a combination of EEG signals and image features to streamline the feature set required for the Random Forest Classifier (RFC) while maintaining high accuracy. The findings of this research hold substantial potential for developing advanced fault alert systems, particularly in critical decision-making environments such as the medical and defence sectors.
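The abstract describes an early-fusion pipeline: trial-level EEG features and SAM-derived image features are combined and fed to a Random Forest Classifier to predict response correctness. Below is a minimal sketch of that idea using scikit-learn. The feature names, dimensions, and random data are illustrative assumptions, not the authors' actual feature extraction or model configuration.

```python
# Hedged sketch of multimodal feature fusion + Random Forest, as outlined
# in the abstract. All shapes and data here are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_trials = 500
# Hypothetical per-trial EEG features (e.g. band power per channel) and
# image features (e.g. SAM-derived object-relationship statistics).
eeg_features = rng.normal(size=(n_trials, 64))
image_features = rng.normal(size=(n_trials, 16))
# Binary label: whether the participant's response was correct.
y = rng.integers(0, 2, size=n_trials)

# Early fusion: concatenate both modalities into one feature vector.
X = np.hstack([eeg_features, image_features])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# The abstract mentions streamlining the feature set; one common way to do
# that with a Random Forest is to rank features by importance and retain
# only the top-ranked subset.
top_features = np.argsort(clf.feature_importances_)[::-1][:20]
```

With real data, `eeg_features` and `image_features` would be replaced by the extracted per-trial feature matrices; the fusion and classification steps stay the same.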
Please use this identifier to cite or link to this item: http://hdl.handle.net/10453/183351