🤖 AI Summary
Multimodal generative tasks typically rely on large-scale training and implicit neural representations, resulting in high computational costs, poor editability, and limited interruptibility. Method: This paper proposes an explicit symbolic task modeling paradigm. It introduces a modality-agnostic symbolic task description language and a structured reasoning engine that zero-shot parses natural language instructions into symbolic flows comprising three primitives: functions, parameters, and topological logic. No fine-tuning or task-specific training is required; the engine relies solely on pre-trained large language models for instruction understanding and workflow orchestration. Contribution/Results: The approach significantly improves interpretability, real-time editability, and interruption recovery in cross-modal generation. Evaluated on 12 heterogeneous multimodal generation tasks, it matches or surpasses state-of-the-art performance while reducing computational overhead.
📝 Abstract
We propose a symbolic generative task description language and a corresponding inference engine capable of representing arbitrary multimodal tasks as structured symbolic flows. Unlike conventional generative models that rely on large-scale training and implicit neural representations to learn cross-modal mappings, often at high computational cost and with limited flexibility, our framework introduces an explicit symbolic representation comprising three core primitives: functions, parameters, and topological logic. Leveraging a pre-trained language model, our inference engine maps natural language instructions directly to symbolic workflows in a training-free manner. Our framework successfully performs over 12 diverse multimodal generative tasks, demonstrating strong performance and flexibility without the need for task-specific tuning. Experiments show that our method not only matches or outperforms existing state-of-the-art unified models in content quality, but also offers greater efficiency, editability, and interruptibility. We believe that symbolic task representations provide a cost-effective and extensible foundation for advancing the capabilities of generative AI.
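The abstract does not spell out what a symbolic flow looks like concretely, so here is a minimal, hypothetical sketch of the three primitives it names. Everything below is an illustrative assumption, not the paper's actual language: the `Node` structure, the stub `REGISTRY`, and the function names `txt2img`/`upscale` stand in for whatever functions, parameters, and dependency (topological) structure the real system emits.

```python
from dataclasses import dataclass, field
from graphlib import TopologicalSorter

# Hypothetical encoding of the three primitives: a function name,
# its parameters, and topological logic as upstream dependencies.
@dataclass
class Node:
    name: str                                    # which function to invoke
    params: dict                                 # its parameters
    inputs: list = field(default_factory=list)   # upstream node ids

# Stub registry standing in for real generators (e.g. a text-to-image
# model); each stub just returns a string describing its result.
REGISTRY = {
    "txt2img": lambda params, deps: f"image({params['prompt']})",
    "upscale": lambda params, deps: f"upscaled({deps[0]}, x{params['factor']})",
}

def run_workflow(nodes: dict) -> dict:
    """Execute a symbolic flow in dependency order."""
    order = TopologicalSorter({k: v.inputs for k, v in nodes.items()}).static_order()
    results = {}
    for nid in order:
        node = nodes[nid]
        deps = [results[i] for i in node.inputs]
        results[nid] = REGISTRY[node.name](node.params, deps)
    return results

# A flow a language model might emit for
# "generate a picture of a cat and upscale it 4x":
flow = {
    "n1": Node("txt2img", {"prompt": "a cat"}),
    "n2": Node("upscale", {"factor": 4}, inputs=["n1"]),
}
print(run_workflow(flow)["n2"])  # upscaled(image(a cat), x4)
```

An explicit graph like this is what would make the claimed editability and interruptibility cheap: a user can change a parameter or re-run from a single node without retraining or regenerating upstream results.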