Currently an Applied Scientist at Amazon Science, working on Large Language Models (LLMs) for Alexa+, with a focus on self-learning and speculative decoding for efficient inference. Previously a Machine Learning Researcher at Brave Software and a Visiting Researcher at the University of Cambridge, focusing on on-device GenAI, privacy-preserving machine learning, and modular LLM architectures. Before that, spent 4.5 years as a Research Scientist at Samsung AI Center in Cambridge, UK, working on distributed, collaborative, and efficient edge AI.
Education
Graduated from the University of Cambridge. Previously worked at CERN, contributing to large-scale distributed storage systems.
Background
Machine Learning Research Scientist specializing in Distributed & Mobile Systems and Efficient ML algorithms. Research interests revolve around dynamic network architectures, federated/collaborative learning, on-device AI, and resource- and energy-efficient deep learning.
Miscellany
In free time, loves travelling, motorsport, and photography. Passionate about open source, AGI, and privacy, and takes part in hackathons from time to time. Also enjoys coffee ☕️.