Primarily works with PyTorch, HuggingFace, and vLLM. Focuses on understanding the mathematical and intuitive foundations of models, the structure of the data, and the demands of pre-training and downstream tasks, and uses that insight to design methods that are more efficient, robust, and interpretable in real-world settings.
Education
PhD in Artificial Intelligence, Universidad Politécnica de Madrid. Supervisors: Alejandro Martín and Javier Huertas-Tato.
MSc in Machine Learning and Big Data, Universidad Politécnica de Madrid.
Double BSc in Mathematics and Computer Science, University of Murcia.
Background
Research interests: Applying deep learning to natural language processing (NLP), including AI-generated text detection, authorship attribution, natural language inference (NLI), and efficient Transformer architectures. Currently interested in how large language models (LLMs) can be adapted to downstream tasks, especially when data are scarce or affected by spurious correlations.
Miscellany
Contact: Email: pablo.miralles [at] upm.es. GitHub: pablomiralles22. X (Twitter): p_miralles_