Uncovering Limitations in Text-to-Image Generation: A Contrastive Approach with Structured Semantic Alignment

Publication Type:
Conference Proceeding
Citation:
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023, pp. 8876-8888
Issue Date:
2023-01-01
File: 2023.findings-emnlp.595.pdf (Published version, 7.36 MB, Adobe PDF)
Despite significant advances, text-to-image generation models still struggle to produce highly detailed or complex images from textual descriptions. In this work, we propose a Structured Semantic Alignment (SSA) method for evaluating text-to-image generation models. SSA focuses on learning structured semantic embeddings across different modalities and aligning them in a joint space. The method proceeds in four steps: (i) generating mutated prompts by substituting words with semantically equivalent or non-equivalent alternatives while preserving the original syntax; (ii) representing the sentence structure through parse trees obtained via syntactic parsing; (iii) learning fine-grained structured embeddings that project semantic features from different modalities into a shared embedding space; and (iv) evaluating the semantic consistency between the structured text embeddings and the corresponding visual embeddings. Experiments on various benchmarks demonstrate that SSA provides an improved measure of the semantic consistency of text-to-image generation models. It also uncovers a wide range of generation errors, including under-generation, incorrect constituency, incorrect dependency, and semantic confusion. By exposing the biases and limitations embedded within these models, our method provides valuable insight into their shortcomings in real-world scenarios.
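To make steps (i) and (iv) concrete, the sketch below shows one way the contrastive prompt-mutation and consistency-scoring idea could be prototyped. It is a minimal illustration under stated assumptions, not the paper's released implementation: the substitution tables, the random stand-in embeddings, and the gap-based score are all hypothetical placeholders for the paper's parser-driven mutations and learned multimodal encoders.

```python
# Minimal sketch of contrastive prompt mutation and consistency scoring.
# The substitution lists, encoder stand-ins, and scoring rule are illustrative
# assumptions, not the SSA implementation described in the paper.

import numpy as np

# Hypothetical substitution sets: semantically equivalent vs. non-equivalent swaps.
EQUIVALENT = {"dog": ["puppy", "hound"], "red": ["crimson", "scarlet"]}
NONEQUIVALENT = {"dog": ["cat", "horse"], "red": ["blue", "green"]}


def mutate_prompt(prompt: str, table: dict) -> list:
    """Substitute single words while leaving the surrounding syntax untouched."""
    tokens = prompt.split()
    variants = []
    for i, tok in enumerate(tokens):
        for alt in table.get(tok.lower(), []):
            variants.append(" ".join(tokens[:i] + [alt] + tokens[i + 1:]))
    return variants


def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))


def consistency_gap(text_emb, image_emb, equiv_embs, nonequiv_embs) -> float:
    """Contrastive check: the generated image should stay close to equivalent
    prompts and drift away from non-equivalent ones; a small gap flags a
    potential generation error."""
    base = cosine(text_emb, image_emb)
    equiv = np.mean([cosine(e, image_emb) for e in equiv_embs]) if equiv_embs else base
    noneq = np.mean([cosine(e, image_emb) for e in nonequiv_embs]) if nonequiv_embs else 0.0
    return equiv - noneq


if __name__ == "__main__":
    prompt = "a red dog on the grass"
    print(mutate_prompt(prompt, EQUIVALENT))     # equivalent mutations
    print(mutate_prompt(prompt, NONEQUIVALENT))  # non-equivalent mutations

    # Random stand-in embeddings; in practice these would come from the learned
    # structured text encoder and a visual encoder applied to the generated image.
    rng = np.random.default_rng(0)
    t, img = rng.normal(size=64), rng.normal(size=64)
    eq = [rng.normal(size=64) for _ in range(2)]
    ne = [rng.normal(size=64) for _ in range(2)]
    print(f"consistency gap: {consistency_gap(t, img, eq, ne):.3f}")
```

In this toy form the gap is computed from token-level swaps and flat sentence embeddings; the SSA method instead derives mutations and embeddings from parse-tree structure, which is what lets it attribute failures to specific error types such as incorrect constituency or dependency.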