Pratinav Seth
Google Scholar ID: DwBn1fcAAAAJ
AryaXAI Alignment Lab, Arya.ai (An Aurionpro Company)
Research Interests
Deep Learning
Explainable AI
AI for Risk
AI for Social Good
Generative AI
Links
Homepage
Google Scholar
Citations & Impact (all-time)
Citations: 73
H-index: 5
i10-index: 2
Publications: 20
Co-authors: 17
Contact
Email: seth.pratinav@ieee.org
Twitter
GitHub
LinkedIn
Publications
13 items
AlignTune: Modular Toolkit for Post-Training Alignment of Large Language Models (2026). Cited: 0
$C$-$\Delta\Theta$: Circuit-Restricted Weight Arithmetic for Selective Refusal (2026). Cited: 0
Exploring Fine-Tuning for Tabular Foundation Models (2026). Cited: 0
Orion-Bix: Bi-Axial Attention for Tabular In-Context Learning (2025). Cited: 0
Orion-MSP: Multi-Scale Sparse Attention for Tabular In-Context Learning (2025). Cited: 0
TabTune: A Unified Library for Inference and Fine-Tuning Tabular Foundation Models (2025). Cited: 0
Interpretability as Alignment: Making Internal Understanding a Design Principle (2025). Cited: 0
Interpretability-Aware Pruning for Efficient Medical Image Analysis (2025). Cited: 0
Resume (English only)
Academic Achievements
Selected as AAAI Undergraduate Consortium Scholar in 2023
Paper 'Interpretability-aware pruning for efficient medical image analysis' accepted at MICCAI Workshop 2025
Paper 'SELF-PERCEPT: Mental Manipulation Detection' accepted at ACL 2025
Paper 'Alberta Wells Dataset' accepted at ICML 2025
Paper 'Obscure to Observe: A Lesion-Aware MAE for Glaucoma Detection' accepted at MIDL 2025 (Short Paper Track)
Paper 'DL-Backtrace' accepted at IJCNN 2025
Program Committee member for AAAI 2026
Reviewer for multiple venues: NeurIPS 2025 (RegML Workshop), ICML 2025 (Actionable Interpretability Workshop), ICCV 2025, IJCNN 2025, ICLR 2025 (Advances in Financial AI Workshop), CVPR 2025
Research Experience
Research Scientist at AryaXAI Alignment Lab (Arya.ai, an Aurionpro Company) since July 2024
Works on Explainable AI (XAI), AI alignment, and AI safety
Enhanced DLBacktrace method and developed benchmarking frameworks for XAI evaluation
Investigates alignment and optimization strategies across CNNs, BERT, and LLaMA
Developing foundation models for tabular data with applications in risk modeling and financial safety
Education
Bachelor’s (B.Tech) in Data Science from Manipal Institute of Technology
Interned at Mila Quebec AI Institute under Dr. David Rolnick
Interned at Bosch Research India with Dr. Amit Kale and Mr. Koustav Mullick
Interned at KLIV Lab, IIT Kharagpur (PI: Dr. Debdoot Sheet)
Conducted research with Mars Rover Manipal AI Research alongside Dr. Ujjwal Verma
Active in Research Society MIT
Mentored by Dr. Abhilash K. Pai
Background
Aspiring AI researcher exploring computer vision, NLP, and deep learning
Focuses on Explainable AI (XAI), AI alignment, and AI safety for high-stakes real-world applications
Passionate about building responsible, transparent, and safe AI systems
Strong interest in AI for Social Good, particularly healthcare (medical imaging) and remote sensing, with a focus on resource-efficient models
Co-authors
17 total
Aditya Kasliwal
Undergrad student, Manipal Institute of Technology
Akshat Bhandari
Columbia University
Krish Didwania
Manipal Academy of Higher Education
Ishaan Gakhar
Undergraduate Student at MIT Manipal