EyeCoD: Eye Tracking System Acceleration via FlatCam-based Algorithm & Accelerator Co-Design

📅 2022-06-02
🏛️ International Symposium on Computer Architecture
📈 Citations: 9 · Influential: 0
🤖 AI Summary
To address key bottlenecks of VR/AR eye-tracking systems (large form factor, high communication overhead, and poor visual privacy), the paper proposes an algorithm-hardware co-design framework built around a lensless FlatCam. Methodologically, it introduces a two-stage predict-then-focus eye-tracking algorithm that first segments the eye region of interest (ROI) and then estimates gaze only on that ROI; establishes a tightly coupled sensing-processing architecture enabled by lensless imaging; and designs a domain-specific accelerator for the eye-tracking workload, featuring ROI-driven computation, depth-wise layer reuse, feature tiling with block-wise storage, and a sequential-write/parallel-read buffer scheme. Evaluation shows 10.95×, 3.21×, and 12.85× end-to-end speedup over CPU, GPU, and CIS-GEP baselines, respectively, while significantly reducing both communication and computation overhead without compromising tracking accuracy.
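The predict-then-focus flow is easiest to see as code. The sketch below is a minimal illustration, assuming the segmentation and gaze models are given as opaque callables; `segment_model` and `gaze_model` are hypothetical placeholders, not EyeCoD's actual networks. It only shows how the predicted ROI gates the second stage so that gaze estimation never touches the full frame.

```python
import numpy as np

def predict_then_focus(frame, segment_model, gaze_model, pad=4):
    """Illustrative two-stage flow: predict an eye ROI via segmentation,
    then run gaze estimation only on the cropped ROI."""
    # Stage 1 (predict): coarse segmentation over the full frame.
    mask = segment_model(frame)                       # binary mask, same H x W as frame
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                                   # no eye found; skip this frame
    # Bounding box of the predicted region, with a small safety margin.
    top, bottom = max(ys.min() - pad, 0), min(ys.max() + pad, frame.shape[0])
    left, right = max(xs.min() - pad, 0), min(xs.max() + pad, frame.shape[1])
    roi = frame[top:bottom, left:right]
    # Stage 2 (focus): gaze estimation sees only the ROI,
    # cutting redundant computation and data movement.
    return gaze_model(roi)
```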
📝 Abstract
Eye tracking has become an essential human-machine interaction modality for providing immersive experience in numerous virtual and augmented reality (VR/AR) applications desiring high throughput (e.g., 240 FPS), small form-factor, and enhanced visual privacy. However, existing eye tracking systems are still limited by their: (1) large form-factor largely due to the adopted bulky lens-based cameras; (2) high communication cost required between the camera and backend processor; and (3) potential visual privacy concerns, thus prohibiting their more extensive applications. To this end, we propose, develop, and validate a lensless FlatCam-based eye tracking algorithm and accelerator co-design framework dubbed EyeCoD to enable eye tracking systems with a much reduced form-factor and boosted system efficiency without sacrificing the tracking accuracy, paving the way for next-generation eye tracking solutions. On the system level, we advocate the use of lensless FlatCams instead of lens-based cameras to facilitate the small form-factor need in mobile eye tracking systems, which also leaves room for a dedicated sensing-processor co-design to reduce the required camera-processor communication latency. On the algorithm level, EyeCoD integrates a predict-then-focus pipeline that first predicts the region-of-interest (ROI) via segmentation and then only focuses on the ROI parts to estimate gaze directions, greatly reducing redundant computations and data movements. On the hardware level, we further develop a dedicated accelerator that (1) integrates a novel workload orchestration between the aforementioned segmentation and gaze estimation models, (2) leverages intra-channel reuse opportunities for depth-wise layers, (3) utilizes input feature-wise partition to save activation memory size, and (4) develops a sequential-write-parallel-read input buffer to alleviate the bandwidth requirement for the activation global buffer. On-silicon measurement and extensive experiments validate that our EyeCoD consistently reduces both the communication and computation costs, leading to an overall system speedup of 10.95×, 3.21×, and 12.85× over general computing platforms including CPUs and GPUs, and a prior-art eye tracking processor called CIS-GEP, respectively, while maintaining the tracking accuracy. Codes are available at https://github.com/RICE-EIC/EyeCoD.
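For context on why a FlatCam shrinks the form factor yet shifts work to computation: the published FlatCam literature models the coded-mask sensor as a separable linear system, Y ≈ Φ_L X Φ_R^T, with the scene recovered by a regularized inverse applied on each side. The numpy sketch below illustrates that model with random placeholder matrices and sizes, and uses a simple per-side regularized pseudo-inverse as an approximation; it is not EyeCoD's calibrated mask or its reconstruction pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: a 64x64 scene measured by a 128x128 sensor (placeholders, not EyeCoD's).
n, m = 64, 128
phi_L = rng.standard_normal((m, n)) / np.sqrt(n)   # left transfer matrix of the coded mask
phi_R = rng.standard_normal((m, n)) / np.sqrt(n)   # right transfer matrix of the coded mask
scene = rng.random((n, n))

# Separable FlatCam measurement: Y = Phi_L @ X @ Phi_R^T (+ sensor noise).
meas = phi_L @ scene @ phi_R.T + 0.01 * rng.standard_normal((m, m))

# Per-side regularized pseudo-inverse (an approximation; the FlatCam
# literature uses an SVD-based regularized solve for the same model).
lam = 1e-2
A = np.linalg.solve(phi_L.T @ phi_L + lam * np.eye(n), phi_L.T)   # left inverse, n x m
B = np.linalg.solve(phi_R.T @ phi_R + lam * np.eye(n), phi_R.T)   # right inverse, n x m
scene_hat = A @ meas @ B.T

print("relative error:", np.linalg.norm(scene_hat - scene) / np.linalg.norm(scene))
```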
Problem

Research questions and friction points this paper is trying to address.

Large form factor of existing eye tracking systems, caused by bulky lens-based cameras.
High communication cost between the camera and the backend processor, plus redundant computation on full frames.
How to boost system efficiency without sacrificing tracking accuracy.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lensless FlatCam imaging enables a compact eye tracking system and opens room for tight sensing-processor co-design.
Predict-then-focus algorithm segments the eye ROI first, then estimates gaze only on that ROI, cutting redundant computation and data movement.
Dedicated accelerator with segmentation/gaze workload orchestration, intra-channel reuse for depth-wise layers, input feature-wise partitioning, and a sequential-write-parallel-read input buffer (see the depth-wise convolution sketch after this list).
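To make the depth-wise reuse point concrete: in a depth-wise layer, each output channel depends only on its own input channel, so a single channel (or a tile of it) can stay on-chip and be reused across the entire sliding window. The loop nest below is an illustrative numpy sketch of that structure, not the accelerator's actual dataflow.

```python
import numpy as np

def depthwise_conv2d(x, w, stride=1):
    """Naive depth-wise convolution: channel c of x is filtered only by w[c].

    Because output channel c never needs any other input channel, an accelerator
    can keep one input channel (or a tile of it) resident on-chip and reuse it
    across all sliding-window positions before moving on -- the intra-channel
    reuse opportunity highlighted in the abstract (illustrative sketch only).
    """
    C, H, W = x.shape
    _, K, _ = w.shape                       # w has shape (C, K, K)
    Ho, Wo = (H - K) // stride + 1, (W - K) // stride + 1
    y = np.zeros((C, Ho, Wo))
    for c in range(C):                      # one input channel stays resident here
        for i in range(Ho):
            for j in range(Wo):
                patch = x[c, i*stride:i*stride+K, j*stride:j*stride+K]
                y[c, i, j] = np.sum(patch * w[c])
    return y

# Tiny smoke test with random data (shapes are placeholders).
x = np.random.rand(8, 16, 16)
w = np.random.rand(8, 3, 3)
print(depthwise_conv2d(x, w).shape)        # -> (8, 14, 14)
```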