I’m a Master’s student in Computer Science (2024–2026) at the University of Massachusetts Amherst, with a research focus on AI security, deep learning, and natural language processing. I hold a B.Tech in Computer Science and Engineering from the Indian Institute of Information Technology Guwahati (2020).
My work centers on enhancing the robustness, efficiency, and generalization of machine learning models. I’ve contributed to projects involving backdoor defenses, model ensembling, and large-scale zero-shot classification, spanning both NLP and computer vision domains.
Professionally, I’ve worked with research and development teams at University College London and MAQ Software, where I gained hands-on experience with LLM-based systems, scalable AI pipelines, and adversarial threat mitigation. I’ve also collaborated with researchers at Google DeepMind. My work has been published at venues like ACL and supported by competitive awards such as the DAAD-WISE Scholarship.
📄 Publications
- Arora, Ansh, et al. (2024). Here’s a Free Lunch: Sanitizing Backdoored Models with Model Merge. Findings of ACL 2024.
🗞 Recent News
- May 2025 — Joined Thales in Pasadena, CA, as a Research Engineer Intern on the Identity & Biometrics R&D team. Currently exploring fingerprint recognition, with a focus on robust representation learning and matching strategies in secure biometric systems.
- April 2025 — Started as a Graduate Researcher at Google DeepMind, contributing to cutting-edge work on meta-optimization and adaptive model ensembling. The research aims to improve training efficiency and generalization across deep learning tasks, with a focus on scalable architectures and long-range task adaptation.
- October 2024 — Published a blog post on Medium breaking down our ACL 2024 paper, covering the inspiration, methodology, and real-world implications of our approach to defending against backdoored NLP models.
  ➡️ Read: “Here’s a Free Lunch: Sanitizing Backdoored Models with Model Merge”
- February 2024 — Our work “Here’s a Free Lunch: Sanitizing Backdoored Models with Model Merge” was accepted to Findings of ACL 2024. The paper introduces a defense that merges multiple models at inference time, cutting the success rate of NLP backdoor attacks by over 75% while preserving clean accuracy. We evaluated the defense on SST-2, QNLI, Amazon, and IMDB with BERT, RoBERTa, and Llama2/Mistral LLMs; a minimal sketch of the merging idea follows below.
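For readers curious what “merging” means concretely, here is a minimal sketch of weight-space averaging, the simplest form of model merging. This is an illustration under assumptions, not the paper’s exact recipe: `merge_state_dicts` is a hypothetical helper, and the toy `nn.Linear` classifiers stand in for independently fine-tuned (possibly backdoored) checkpoints of the same architecture.

```python
# Minimal sketch: sanitize by averaging the weights of several
# same-architecture checkpoints, diluting any single model's backdoor.
# Hypothetical helper for illustration; not the paper's exact procedure.
import torch
import torch.nn as nn

def merge_state_dicts(state_dicts):
    """Element-wise average of parameters across same-architecture models."""
    merged = {}
    for key, ref in state_dicts[0].items():
        if ref.is_floating_point():
            merged[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
        else:
            # Integer buffers (e.g., position ids) are copied, not averaged.
            merged[key] = ref.clone()
    return merged

if __name__ == "__main__":
    # Toy stand-ins for independently fine-tuned classifiers.
    models = [nn.Linear(768, 2) for _ in range(3)]
    merged = merge_state_dicts([m.state_dict() for m in models])
    sanitized = nn.Linear(768, 2)
    sanitized.load_state_dict(merged)  # deploy the averaged model
    print(sanitized.weight.shape)      # torch.Size([2, 768])
```

The intuition: a backdoor is planted in one model’s weights, so averaging with peer checkpoints dilutes the trigger’s effect while the task knowledge the models share survives.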