Dimension-Prompts Boost Commonsense Consolidation

Publisher:
ACM
Publication Type:
Conference Proceeding
Citation:
SIGIR '23: Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2023, pp. 1934-1938
Issue Date:
2023-01-01
Abstract:
Neural knowledge models have emerged and advanced commonsense-centric knowledge grounding. They parameterize a small, curated seed commonsense knowledge graph (CS-KG) in a language model so that it generalizes beyond the seed. A current trend is to scale the seed up by directly mixing multiple CS-KG sources (e.g., ATOMIC, ConceptNet) into one model. However, such brute-force mixing inevitably hinders effective knowledge consolidation because i) relations across sources are ambiguous, polysemous, and/or inconsistent, and ii) knowledge of distinct types (e.g., causal, temporal) is learned in an entangled manner. To mitigate this, we adopt the concept of commonsense knowledge dimensions and propose a brand-new dimension-disentangled knowledge model (D²KM) learning paradigm over multiple sources. That is, a generative language model with dimension-specific soft prompts is trained to disentangle knowledge acquisition along different dimensions and to facilitate intra-dimension consolidation across CS-KG sources. Experiments show that our knowledge model outperforms its baselines in both standard and zero-shot scenarios.
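The sketch below illustrates the general idea of dimension-specific soft prompts prepended to a generative language model; it is not the authors' released code. The GPT-2 backbone, the dimension inventory, the prompt length, and the example triple are all illustrative assumptions.

```python
# Minimal sketch (assumptions labeled) of dimension-specific soft prompts
# prepended to a generative LM, in the spirit of the D²KM paradigm.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

DIMENSIONS = ["causal", "temporal", "comparative"]  # assumed dimension inventory
PROMPT_LEN = 10  # number of soft-prompt vectors per dimension (assumption)

class DimensionPromptedLM(nn.Module):
    def __init__(self, model_name="gpt2"):
        super().__init__()
        self.lm = GPT2LMHeadModel.from_pretrained(model_name)
        hidden = self.lm.config.n_embd
        # One learnable soft-prompt matrix per commonsense dimension.
        self.prompts = nn.ParameterDict({
            d: nn.Parameter(torch.randn(PROMPT_LEN, hidden) * 0.02)
            for d in DIMENSIONS
        })

    def forward(self, input_ids, attention_mask, dimension, labels=None):
        # Embed the token ids, then prepend the dimension-specific soft prompt.
        tok_embeds = self.lm.transformer.wte(input_ids)              # (B, T, H)
        batch = input_ids.size(0)
        prompt = self.prompts[dimension].unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([prompt, tok_embeds], dim=1)       # (B, P+T, H)
        prompt_mask = torch.ones(batch, PROMPT_LEN, dtype=attention_mask.dtype)
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        if labels is not None:
            # Ignore the prompt positions when computing the LM loss.
            ignore = torch.full((batch, PROMPT_LEN), -100, dtype=labels.dtype)
            labels = torch.cat([ignore, labels], dim=1)
        return self.lm(inputs_embeds=inputs_embeds,
                       attention_mask=attention_mask,
                       labels=labels)

# Toy usage: a (head, relation, tail) triple routed to an assumed dimension.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = DimensionPromptedLM()

text = "PersonX goes to the gym xEffect PersonX feels tired"  # illustrative triple
enc = tokenizer(text, return_tensors="pt")
out = model(enc["input_ids"], enc["attention_mask"],
            dimension="causal", labels=enc["input_ids"].clone())
out.loss.backward()  # gradients flow into both the LM and the causal prompt
```

Routing each training triple through the soft prompt of its dimension keeps the acquisition of causal, temporal, and other knowledge types separated in parameter space, while the shared backbone still allows consolidation of same-dimension knowledge drawn from different CS-KG sources.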