HEIE: MLLM-Based Hierarchical Explainable AIGC Image Implausibility Evaluator

Tsinghua University, BNRist, OPPO AI Center, Peking University
CVPR 2025

Abstract

AIGC images are prevalent across various fields, yet they frequently suffer from quality issues like artifacts and unnatural textures. Specialized models aim to predict defect region heatmaps but face two primary challenges: (1) lack of explainability, failing to provide reasons and analyses for subtle defects, and (2) inability to leverage common sense and logical reasoning, leading to poor generalization. Multimodal large language models (MLLMs) promise better comprehension and reasoning but face their own challenges: (1) difficulty in fine-grained defect localization due to limitations in capturing tiny details, and (2) constraints in providing the pixel-wise outputs necessary for precise heatmap generation. To address these challenges, we propose HEIE: a novel MLLM-Based Hierarchical Explainable Image Implausibility Evaluator. We introduce the CoT-Driven Explainable Trinity Evaluator, which integrates heatmap, score, and explanation outputs, using CoT to decompose complex tasks into subtasks of increasing difficulty and enhance interpretability. Our Adaptive Hierarchical Implausibility Mapper synergizes low-level image features with high-level mapper tokens from LLMs, enabling precise local-to-global hierarchical heatmap predictions through an uncertainty-based adaptive token approach. Moreover, we propose a new dataset, Expl-AIGI-Eval, designed to facilitate interpretable implausibility evaluation of AIGC images. Extensive experiments demonstrate that our method achieves state-of-the-art performance.
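To make the trinity structure concrete, below is a minimal Python sketch of the three coupled outputs and a CoT-style subtask ordering; the subtask wording, field names, and the TrinityOutput container are illustrative assumptions based on the abstract, not the authors' exact interface.

from dataclasses import dataclass
import numpy as np

# Assumed CoT decomposition: the prompt walks the MLLM from easier subtasks
# (describe, locate) to harder ones (analyze, predict heatmap, score).
COT_SUBTASKS = [
    "Describe the overall content of the image.",
    "Identify regions that look implausible (artifacts, unnatural textures).",
    "Analyze why each identified region is implausible.",
    "Emit [MAP] tokens for heatmap prediction and a [SCORE] token for scoring.",
]

@dataclass
class TrinityOutput:
    """The three coupled outputs of the evaluator (names are illustrative)."""
    heatmap: np.ndarray   # pixel-wise implausibility map, shape (H, W)
    analysis: str         # textual explanation of the detected defects
    score: float          # overall verisimilitude score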


(a) Specialized models lack the ability to explain and analyze subtle implausibility regions, hindering understanding for general users. (b) MLLMs struggle with precise localization of local defects and cannot directly output pixel-level implausibility areas. (c) Our CoT-Driven Explainable Trinity Evaluator can generate heatmaps, analyses, and scores. In our Adaptive Hierarchical Implausibility Mapper, local and global heatmaps are predicted separately, improving the localization of tiny implausibilities.


The Implausibility Mapper processes a dynamic number of special [MAP] tokens together with the image features to enhance detailed implausibility localization. We implement the Adaptive Hierarchical Implausibility Mapper through local and global heatmaps and uncertainty-based adaptive fusion. In the verisimilitude scorer, features from the heatmap and the special [SCORE] token are integrated for score prediction. Furthermore, our CoT-Driven Explainable System guides the LLM to decompose complex issues into progressive subproblems, facilitating the mutual enhancement of heatmap, analysis, and score prediction, thus improving explainability.
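The PyTorch-style sketch below illustrates one way the [MAP]-token/image-feature interaction, the uncertainty-based fusion of local and global heatmaps, and the [SCORE]-token scorer could be wired together. The dimensions, layer choices, and the mean-pooling of the dynamic [MAP] tokens are assumptions made for illustration, not the paper's exact architecture.

import torch
import torch.nn as nn


class AdaptiveHierarchicalMapperSketch(nn.Module):
    # Illustrative sketch only: dimensions and layer choices are assumptions.
    def __init__(self, llm_dim=4096, vis_dim=1024, proj_dim=256):
        super().__init__()
        self.map_proj = nn.Linear(llm_dim, proj_dim)         # project [MAP] token states
        self.pix_proj = nn.Conv2d(vis_dim, proj_dim, 1)      # project image features
        self.fusion_weight = nn.Conv2d(2, 1, 3, padding=1)   # per-pixel fusion weight
        self.score_head = nn.Sequential(
            nn.Linear(llm_dim + proj_dim, 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, map_tokens, score_token, global_feat, local_feat):
        # map_tokens:  (B, N, llm_dim) hidden states of the dynamic [MAP] tokens
        # score_token: (B, llm_dim)    hidden state of the [SCORE] token
        # global_feat: (B, vis_dim, H, W) whole-image features
        # local_feat:  (B, vis_dim, H, W) features of a zoomed-in crop, resized to (H, W)
        q = self.map_proj(map_tokens).mean(dim=1)            # (B, proj_dim)
        g = self.pix_proj(global_feat)                       # (B, proj_dim, H, W)
        loc = self.pix_proj(local_feat)
        # Correlate the [MAP] query with pixel features to get per-pixel implausibility.
        global_map = torch.einsum("bc,bchw->bhw", q, g).sigmoid()
        local_map = torch.einsum("bc,bchw->bhw", q, loc).sigmoid()
        # Uncertainty-based adaptive fusion: a learned per-pixel weight decides how much
        # to trust the local prediction versus the global one.
        w = self.fusion_weight(torch.stack([global_map, local_map], dim=1)).sigmoid().squeeze(1)
        fused = w * local_map + (1.0 - w) * global_map
        # Verisimilitude scorer: heatmap-weighted image features are pooled and combined
        # with the [SCORE] token for score regression.
        pooled = (g * fused.unsqueeze(1)).mean(dim=(2, 3))   # (B, proj_dim)
        score = self.score_head(torch.cat([score_token, pooled], dim=-1))
        return fused, score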


Three stages in Expl-AIGI-Eval dataset construction: Stage 1 - Visual Prompting: Defect regions are circled on images to aid Claude-3.5-sonnet in accurately locating problem areas. Stage 2 - LLM Free-Form Output: Claude-3.5-sonnet generates free-form defect location and analysis. Stage 3 - In-Context Learning-Based Formatting: GPT-4o is used for format standardization.
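As a rough illustration of this pipeline, the sketch below strings the three stages together. The ask_claude and ask_gpt4o callables are hypothetical wrappers for the actual Claude 3.5 Sonnet and GPT-4o API calls, and the drawing helper and prompt wording are placeholders, since the page does not specify them.

from typing import Callable, List, Tuple
from PIL import Image, ImageDraw


def draw_visual_prompts(image_path: str,
                        defect_bboxes: List[Tuple[int, int, int, int]]) -> Image.Image:
    # Stage 1 - Visual Prompting: circle known defect regions so the MLLM
    # attends to the right areas.
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for box in defect_bboxes:
        draw.ellipse(box, outline="red", width=5)
    return img


def build_expl_aigi_eval_sample(
    image_path: str,
    defect_bboxes: List[Tuple[int, int, int, int]],
    ask_claude: Callable[[Image.Image, str], str],  # hypothetical Claude 3.5 Sonnet wrapper
    ask_gpt4o: Callable[[str], str],                # hypothetical GPT-4o wrapper
    format_examples: List[str],                     # in-context examples of the target format
) -> str:
    prompted = draw_visual_prompts(image_path, defect_bboxes)

    # Stage 2 - LLM free-form output: Claude 3.5 Sonnet locates and analyzes defects.
    free_form = ask_claude(
        prompted, "Describe where this image looks implausible and explain why."
    )

    # Stage 3 - In-context-learning-based formatting: GPT-4o rewrites the free-form
    # answer into the dataset's standardized format, guided by a few examples.
    prompt = "\n\n".join(format_examples) + "\n\nRewrite in the same format:\n" + free_form
    return ask_gpt4o(prompt)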


Model outputs of HEIE. HEIE not only predicts implausibility heatmaps but also provides image descriptions, identification of problematic regions, analysis of the issues, and a score, achieving reliable and explainable implausibility evaluation. Note that for the last AIGC image, which has no evident defects, our model avoids false positives.


Comparison with Baselines. Each set of images, from left to right, includes: (a) Input Image, (b) Output of InternViT-300M-448px, (c) Output of CLIP-ViT-Base-Patch16, (d) Output of our HEIE, (e) Ground Truth.


Results of Hierarchical Implausibility Mapper. Columns (c) and (d) are the global and local heatmaps from our Hierarchical Implausibility Mapper. Column (e) illustrates the final heatmap after adaptively fusing the global and local heatmaps.

BibTeX


@article{yang2024heie,
  title={HEIE: MLLM-Based Hierarchical Explainable AIGC Image Implausibility Evaluator},
  author={Yang, Fan and Zhen, Ru and Wang, Jianing and Zhang, Yanhao and Chen, Haoxiang and Lu, Haonan and Zhao, Sicheng and Ding, Guiguang},
  journal={arXiv preprint arXiv:2411.17261},
  year={2024}
}