🧪 Research
I work broadly in the area of Trustworthy and Privacy-Preserving Machine Learning, with a strong focus on making machine learning systems secure, fair, and usable in real-world settings. My research spans both theoretical foundations and practical deployments, and I’m always looking for curious students and motivated collaborators to join in solving open problems.
🔍 Core Research Areas
🧠 Federated Learning (FL)
- Designing adaptive FL algorithms that are robust to data and model poisoning attacks (a toy aggregation sketch follows this list).
- Exploring vulnerabilities in collaborative learning systems and proposing corresponding defences.
- Developing inference attack models to audit information leakage in federated setups.
- Designing verifiable schemes to ensure the integrity of aggregation results.
- Optimising computation and communication costs in real-time deployments.
- Studying both cross-silo (large institutions) and cross-device (edge devices) settings.
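As a toy illustration of the robust-aggregation theme above (not a description of any specific algorithm of ours), the sketch below simulates one poisoned client in a federated round and contrasts plain mean aggregation with a coordinate-wise median. All values and names (e.g. `true_update`, the client count) are invented for the example.

```python
# Toy sketch: one malicious client tries to poison a federated aggregation step.
import numpy as np

rng = np.random.default_rng(0)
dim, n_clients = 5, 10

# Honest clients send small updates scattered around a "true" direction.
true_update = np.ones(dim)
updates = true_update + 0.1 * rng.standard_normal((n_clients, dim))

# A single poisoned client submits a large adversarial update.
updates[0] = -50.0 * np.ones(dim)

mean_agg = updates.mean(axis=0)          # plain FedAvg-style mean: badly skewed
median_agg = np.median(updates, axis=0)  # coordinate-wise median: largely unaffected

print("mean aggregate:  ", np.round(mean_agg, 2))
print("median aggregate:", np.round(median_agg, 2))
```

The median is only one of several robust aggregators (trimmed mean, Krum, and others); the point of the sketch is simply that a single outlier can dominate a naive average.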
🔐 Privacy-Preserving Machine Learning
- Leveraging Differential Privacy, Homomorphic Encryption, and Secure Multi-Party Computation to train models on sensitive data without revealing the underlying inputs (a minimal DP example follows this list).
- Employing Zero-Knowledge Proofs to ensure the integrity of computations.
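For readers new to these tools, here is a minimal, self-contained sketch of the classic Laplace mechanism from differential privacy: a count query over a toy dataset is released with noise calibrated to the query's sensitivity. The dataset, predicate, and epsilon value are all illustrative, not drawn from any real deployment.

```python
# Minimal Laplace-mechanism sketch for a differentially private count query.
import numpy as np

rng = np.random.default_rng(1)

def dp_count(data, predicate, epsilon):
    """Release a noisy count; the sensitivity of a counting query is 1."""
    true_count = sum(1 for x in data if predicate(x))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = rng.integers(18, 90, size=1000)  # toy "sensitive" dataset
print("noisy count of ages over 65:",
      round(dp_count(ages, lambda a: a > 65, epsilon=0.5), 1))
```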
✅ Trustworthy AI
- Investigating bias, fairness, and interpretability of AI systems.
- Creating tools to detect overfitting, distribution shift, and model leakage (see the toy drift check after this list).
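As one small example of what such tooling can look like (assuming NumPy and SciPy are available; the data and threshold are made up for illustration), the snippet below runs a two-sample Kolmogorov-Smirnov test to flag a shift between a training-time feature distribution and the data a deployed model actually sees.

```python
# Toy drift check: compare a reference feature distribution against live data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # training-time reference
live_feature = rng.normal(loc=0.4, scale=1.2, size=5000)   # shifted "production" data

stat, p_value = ks_2samp(train_feature, live_feature)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.2e}")
if p_value < 0.01:  # illustrative threshold
    print("Distribution shift detected: consider auditing or retraining the model.")
```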
🔄 Private Data Sharing & Governance
- Developing practical systems for sharing data across international boundaries while ensuring regulatory compliance.
🎓 Openings for Students
If you’re a student [undergraduate, master’s, or prospective PhD] interested in working with me (and teaching me something along the way) on topics closely aligned with any of the above, please feel free to email me with a brief statement of purpose.
🤝 Looking for Collaborators
I welcome new collaborations, especially with:
- Enthusiasts in the startup ecosystem who are passionate about innovation
- Researchers working in TIPS [trust/identity/privacy/security]
- Industry partners interested in deploying privacy-aware applications
- Interdisciplinary teams combining law, policy, and technology
Let’s build safer and more inclusive AI systems together.