HapticLLaMA: A Multimodal Sensory Language Model for Haptic Captioning

📅 2025-08-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the multimodal research gap arising from the absence of natural language descriptions for haptic signals (e.g., vibrations), this work formally defines the *haptic captioning task*, the first such formulation. Methodologically, it discretizes vibration signals into learnable tokens via two tokenizers: one operating in the frequency domain and one based on EnCodec. The resulting discrete sequences are fed into the LLaMA architecture. Training proceeds in two stages: LoRA-based supervised fine-tuning, followed by reinforcement learning from human feedback (RLHF) to improve semantic alignment with human haptic perception. Evaluation shows strong performance: METEOR 59.98 and BLEU-4 32.06 on automatic metrics, while in human evaluation over 61% of generated descriptions scored above 3.5 on a 7-point scale, with RLHF improving the overall rating distribution by 10%. This work establishes the first scalable, semantically aligned multimodal generative framework for haptic understanding, enabling applications in virtual reality, accessible interaction, and rehabilitation.
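As a rough illustration of the frequency-domain tokenizer idea, each frame of a vibration signal can be mapped to a discrete token such as its dominant DFT bin. This is a sketch only; `tokenize_vibration`, the frame length, and the dominant-bin scheme are assumptions for illustration, not the paper's actual tokenizer or vocabulary.

```python
import cmath
import math

def dft_magnitudes(frame):
    """Naive O(N^2) discrete Fourier transform; returns the magnitude
    spectrum for the first N//2 bins (real input is conjugate-symmetric)."""
    n = len(frame)
    mags = []
    for k in range(n // 2):
        s = sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
    return mags

def tokenize_vibration(signal, frame_len=8):
    """Map each non-overlapping frame of a 1-D vibration signal to the
    index of its dominant frequency bin, yielding a token sequence."""
    tokens = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        mags = dft_magnitudes(signal[start:start + frame_len])
        tokens.append(max(range(len(mags)), key=mags.__getitem__))
    return tokens

# Toy signal: a low-frequency segment followed by a higher-frequency one.
low  = [math.sin(2 * math.pi * 1 * t / 8) for t in range(16)]  # 1 cycle/frame
high = [math.sin(2 * math.pi * 3 * t / 8) for t in range(16)]  # 3 cycles/frame
print(tokenize_vibration(low + high))  # → [1, 1, 3, 3]
```

Once vibrations are reduced to such discrete token sequences, they can be embedded and consumed by a language model exactly like text tokens, which is what enables the integration with LLaMA.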

📝 Abstract
Haptic captioning is the task of generating natural language descriptions from haptic signals, such as vibrations, for use in virtual reality, accessibility, and rehabilitation applications. While previous multimodal research has focused primarily on vision and audio, haptic signals for the sense of touch remain underexplored. To address this gap, we formalize the haptic captioning task and propose HapticLLaMA, a multimodal sensory language model that interprets vibration signals into descriptions in a given sensory, emotional, or associative category. We investigate two types of haptic tokenizers, a frequency-based tokenizer and an EnCodec-based tokenizer, that convert haptic signals into sequences of discrete units, enabling their integration with the LLaMA model. HapticLLaMA is trained in two stages: (1) supervised fine-tuning using the LLaMA architecture with LoRA-based adaptation, and (2) fine-tuning via reinforcement learning from human feedback (RLHF). We assess HapticLLaMA's captioning performance using both automated n-gram metrics and human evaluation. HapticLLaMA demonstrates strong capability in interpreting haptic vibration signals, achieving a METEOR score of 59.98 and a BLEU-4 score of 32.06. Additionally, over 61% of the generated captions received human ratings above 3.5 on a 7-point scale, with RLHF yielding a 10% improvement in the overall rating distribution, indicating stronger alignment with human haptic perception. These findings highlight the potential of large language models to process and adapt to sensory data.
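The automated n-gram metrics mentioned in the abstract can be made concrete with a simplified sentence-level BLEU-4 (uniform weights, single reference, no smoothing). `bleu4` here is an illustrative helper, not the paper's evaluation code:

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(candidate, reference):
    """Simplified sentence-level BLEU-4: geometric mean of 1- to 4-gram
    precisions against a single reference, times a brevity penalty."""
    precisions = []
    for n in range(1, 5):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())   # clipped n-gram matches
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / 4
    bp = (1.0 if len(candidate) > len(reference)
          else math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_avg)

ref = "a sharp rapid buzzing vibration".split()
hyp = "a sharp rapid buzzing vibration".split()
print(round(bleu4(hyp, ref), 2))  # → 1.0 (exact match)
```

In practice, production toolkits add smoothing and multi-reference support; the scores reported in the paper (BLEU-4 32.06, scaled to 0–100) come from such standard implementations.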
Problem

Research questions and friction points this paper is trying to address.

Generating natural language descriptions from haptic signals such as vibrations
Addressing the underexplored sense of touch in multimodal research
Integrating discretized haptic signals with the LLaMA model for captioning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal model for haptic signal interpretation
Frequency and EnCodec-based haptic tokenizers
Two-stage training with LoRA and RLHF
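The LoRA-based adaptation in the first training stage can be sketched as a frozen weight matrix plus a trainable low-rank update. This is a minimal pure-Python sketch; the shapes, `alpha`, and rank values are illustrative assumptions, not the paper's configuration:

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(W, A, B, x, alpha=16, r=2):
    """y = Wx + (alpha/r) * B(Ax): the frozen base weight W plus a
    rank-r update B·A, where only A and B are trained."""
    base = matvec(W, x)
    update = matvec(B, matvec(A, x))
    scale = alpha / r
    return [b + scale * u for b, u in zip(base, update)]

# Toy shapes: W is 3x3 (frozen), A is 2x3 and B is 3x2 (trainable, rank 2).
W = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
A = [[0, 0, 0], [0, 0, 0]]       # A starts at zero, so the update starts at zero
B = [[0.1, 0.0], [0.0, 0.1], [0.0, 0.0]]
x = [1.0, 2.0, 3.0]
print(lora_forward(W, A, B, x))  # → [1.0, 2.0, 3.0] (A = 0 ⇒ output = Wx)
```

Because only the small A and B matrices receive gradients, the LLaMA backbone stays frozen, which keeps the supervised fine-tuning stage cheap before the RLHF stage refines alignment with human haptic ratings.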