Steven Abreu
Google Scholar ID: CqbIOvMAAAAJ
University of Groningen (Previously: Intel Labs, Google)
Research interests: brain-inspired computing, recurrent neural networks, mechanistic interpretability
Citations & Impact (all-time)
  • Citations: 429
  • H-index: 9
  • i10-index: 8
  • Publications: 20
  • Co-authors: 9
Academic Achievements
  • May 2025: Presented “Large Language Models on a Tiny Power Budget” at ISCAS 2025
  • Apr 2025: Presented four workshop papers at ICLR 2025 in Singapore
  • Dec 2024: Co-authored paper “Steering Large Language Models using Conceptors” presented at the MINT Workshop, NeurIPS 2024
  • Jul 2024: Presented work on quantized SSMs at ICML 2024 (Vienna) and neuromorphic programming at ICONS 2024 (Virginia)
  • Oct 2023: Poster presentation on neuromorphic programming at NNPC 2023 (Hannover)
  • Jun 2023: Presented neuromorphic cytometry paper at CVPR 2023 (event-based vision workshop, Vancouver)
  • Apr 2023: Gave a talk on bio-inspired hardware interfaces at CHI 2023 (Hamburg)
Research Experience
  • Visiting researcher at ETH Zurich and Ghent University during PhD studies
  • Research intern at Google (Waterloo, Canada), working on efficient and adaptive AR user interfaces with multimodal LLMs
  • Research intern at Intel’s Neuromorphic Computing Lab, developing neuromorphic transformer-like architectures based on state space models
  • Three-month research stay at the Institute of Neuroinformatics (ETH/UZH, Zurich) in 2022, focusing on mixed-signal neuromorphic hardware
  • Three-month research stay at the Photonics Research Group, Ghent University (2022), working on photonic computing and neuromorphic flow cytometry
  • Multiple participations in the Telluride Neuromorphic and Cognitive Computing Summer Research Program
Background
  • PhD student in the AI Department at the University of Groningen, working in the MINDS group under Prof. Herbert Jaeger
  • Affiliated with CogniGron and partially funded by Post-Digital
  • Maintains a dual research focus on advancing machine learning and exploring brain-inspired computing paradigms and AI hardware
  • Enjoys interdisciplinary research spanning AI, computer science, mathematics, neuroscience, physics, and cognitive science
  • Within ML, interested in efficiency (e.g., model compression, low-power hardware), continual learning (e.g., brain-inspired neuromodulation or plasticity rules), meta-learning, and automated machine learning
  • Works on representation engineering and mechanistic interpretability of large language models to advance aligned and interpretable AI
  • Researches physical and brain-inspired computing, developing theories that align computation with physics to better utilize neuromorphic chips, photonic devices, and other physical computing systems
  • Aims to develop computational abstractions in physical substrates (e.g., via the Neuromorphic Intermediate Representation, NIR), hardware-compatible efficient learning algorithms, and principled methods for programming novel AI hardware