🤖 AI Summary
This study addresses the limited robustness of current natural language processing models in detecting "greenwashing" in corporate sustainability reports, particularly their difficulty in distinguishing substantive actions from vague or unsubstantiated claims. To this end, the authors propose a parameter-efficient framework that structures the latent representation space of large language models by integrating contrastive learning with an ordinal ranking objective. The approach further incorporates a gated feature modulation mechanism to filter disclosure noise and employs MetaGradNorm to stabilize multi-objective optimization during training. Experimental results show that the method significantly outperforms baseline models in cross-category settings, improving robustness to semantic noise while revealing a critical trade-off between representational rigidity and generalization capability.
📝 Abstract
Sustainability reports are critical for ESG assessment, yet greenwashing and vague claims often undermine their reliability. Existing NLP models lack robustness to these practices, typically relying on surface-level patterns that generalize poorly. We propose a parameter-efficient framework that structures LLM latent spaces by combining contrastive learning with an ordinal ranking objective to capture graded distinctions between concrete actions and ambiguous claims. Our approach incorporates gated feature modulation to filter disclosure noise and utilizes MetaGradNorm to stabilize multi-objective optimization. Experiments in cross-category settings demonstrate superior robustness over standard baselines while revealing a trade-off between representational rigidity and generalization.
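The training objective described above (contrastive separation, graded ordinal ranking, gated noise filtering, and gradient-norm balancing) can be sketched in a toy NumPy form. Everything here is an illustrative assumption: the gate parameterization, the InfoNCE-style contrastive term, and the simplified GradNorm-style weighting stand in for the paper's actual MetaGradNorm and are not its exact formulation.

```python
import numpy as np

def gated_modulation(h, W_g, b_g):
    """Sigmoid gate that down-weights noisy feature dimensions (assumed form)."""
    g = 1.0 / (1.0 + np.exp(-(h @ W_g + b_g)))
    return g * h

def contrastive_loss(z_anchor, z_pos, z_neg, tau=0.1):
    """InfoNCE-style term: pull the anchor toward the positive, push from the negative."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    logits = np.array([cos(z_anchor, z_pos), cos(z_anchor, z_neg)]) / tau
    logits -= logits.max()  # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[0])

def ordinal_ranking_loss(scores, grades, margin=0.5):
    """Pairwise margin loss: claims with a higher concreteness grade
    (e.g. concrete action > vague pledge > unsubstantiated claim)
    should receive higher scores."""
    loss, n = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if grades[i] > grades[j]:
                loss += max(0.0, margin - (scores[i] - scores[j]))
                n += 1
    return loss / max(n, 1)

def balance_weights(grad_norms, alpha=1.0):
    """GradNorm-style balancing: scale each task weight inversely to its
    gradient magnitude so no single objective dominates training.
    A simplified stand-in for the paper's MetaGradNorm (details assumed)."""
    mean = np.mean(grad_norms)
    w = (mean / (np.array(grad_norms) + 1e-8)) ** alpha
    return w / w.sum() * len(w)
```

In a full pipeline these pieces would act on LLM sentence embeddings: the gate filters disclosure noise before both losses, and `balance_weights` rescales the per-objective loss terms each step from their observed gradient norms.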