Harsh Kasyap

Federated HPC: Towards Privacy-Preserving and Collaborative High Performance Scientific Computing


Organisers: Dr. Harsh Kasyap, Prof. Ravi Shankar Singh
Department of Computer Science and Engineering,
Indian Institute of Technology (BHU), Varanasi.

📍 Where: Supercomputing India SCI2025 at Manipal Institute of Technology, Bengaluru

🕒 When: December 12, 2025 (02:00 PM – 07:00 PM)

🔗 Register for the workshop as a delegate [HALL 3 on Dec 12, for this Workshop]

📝 Fill the Workshop Participation Form


About the Workshop

Advances in scientific computing increasingly rely on AI-driven analysis of large-scale, distributed datasets generated across data centres, laboratories, and edge infrastructures. However, collaboration is often constrained by privacy regulations, institutional data policies, and the prohibitive cost of centralising data at scale. Federated Learning (FL) offers a paradigm shift by enabling multiple organisations to collaboratively train models without sharing raw data, while High Performance Computing (HPC) platforms provide the computational backbone for large-scale scientific workloads.

This workshop focuses on scalable, communication-efficient, and privacy-preserving federated learning for scientific applications on HPC systems. We will explore challenges in orchestrating FL on distributed clusters, handling heterogeneous data and compute environments, integrating privacy-preserving mechanisms (such as secure aggregation and differential privacy), and optimising communication overhead. Through expert talks, hands-on sessions, and group activities, the workshop aims to chart a roadmap for secure, collaborative, and trustworthy scientific computing at scale.
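To give a flavour of the secure aggregation mentioned above, here is a minimal, illustrative sketch (our own example, not the workshop's code) of pairwise masking in Python with NumPy: each pair of clients agrees on a random mask that one adds and the other subtracts, so the masks cancel in the sum and a server learns only the aggregate of the model updates, never an individual one.

```python
import numpy as np

def mask_updates(updates, rng=None):
    """Pairwise-masking secure aggregation sketch: clients i < j share a
    random mask m_ij; client i adds it, client j subtracts it. The masks
    cancel in the sum, so only the aggregate is revealed to the server."""
    rng = rng or np.random.default_rng(0)
    n = len(updates)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.normal(size=updates[0].shape)
            masked[i] += m
            masked[j] -= m
    return masked
```

In a real deployment the pairwise masks are derived from key agreement rather than a shared generator, and dropout handling adds considerable complexity; the sketch only shows why the sum survives masking.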


Objectives

The workshop aims to introduce federated learning as a route to collaborative model training without sharing raw data, examine the challenges of orchestrating FL on HPC clusters with heterogeneous data and compute environments, discuss privacy-preserving mechanisms such as secure aggregation and differential privacy, and chart a roadmap for secure, collaborative, and trustworthy scientific computing at scale.


🗓️ Tentative Programme (Dec 12, 2025: 11:30 AM – 03:30 PM)

11:30 – 11:40 — Opening and Motivation

11:40 – 12:30 — Talk I: Importance of Tensors in Scientific Computing and Data Science
Speaker: Dr. Ratikanta Behera, Department of CDS, IISc Bangalore

12:30 – 12:45 — Discussion with Speaker and Open Questions

12:45 – 01:30 — Talk II: Recent Advancements in Parallel Algorithms
Speaker: Prof. Ravi Shankar Singh, Department of CSE, IIT (BHU)

01:30 – 02:20 — Lunch Break

02:20 – 03:10 — Talk III: Privacy-Preserving (Collaborative) Machine Learning
Speaker: Dr. Harsh Kasyap, Department of CSE, IIT (BHU)

03:10 – 03:25 — Group Activity: Identifying Open Challenges & Future Collaborations

03:25 – 03:30 — Concluding Remarks

(Exact timings and titles may be updated based on the final SCI 2025 schedule.)


Talk I: Importance of Tensors in Scientific Computing and Data Science

Abstract: In the era of big data, artificial intelligence, and machine learning, we increasingly need to process multiway (tensor-shaped) data. Such data are typically of order three or higher, with sizes that can reach billions of entries. These vast volumes of multidimensional data pose a significant challenge for processing and analysis: a matrix representation is insufficient to capture the complete information content of multiway data across various fields. This talk will discuss tensor factorization expressed as a product of tensors. To support these factorizations, we introduce operations between tensors along with the notions of the transpose, inverse, and identity of a tensor. We will conclude with a few color image applications in a tensor-structured domain.
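One common way to make "a product of tensors" concrete, with natural notions of transpose, inverse, and identity, is the t-product of third-order tensors. The sketch below is our own illustration (not the speaker's material) of the standard FFT-based construction: transform along the tube (third) mode, multiply the frontal faces, and transform back; the identity tensor has the identity matrix as its first frontal slice and zeros elsewhere.

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors: FFT along the tube (third) mode,
    facewise matrix products in the Fourier domain, then inverse FFT."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    # For each frequency k: Ch[:, :, k] = Ah[:, :, k] @ Bh[:, :, k]
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)
    return np.real(np.fft.ifft(Ch, axis=2))

def t_identity(n, n3):
    """Identity tensor: first frontal slice is I, remaining slices are zero."""
    I = np.zeros((n, n, n3))
    I[:, :, 0] = np.eye(n)
    return I
```

Because the FFT of the identity tensor is the identity matrix at every frequency, `t_product(A, t_identity(n, n3))` returns `A` unchanged, mirroring the matrix identity property.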


Talk II: Recent Advancements in Parallel Algorithms

Abstract: Recent advancements in parallel algorithms are driven by the need to handle very large datasets and massive computations more efficiently. Parallel algorithms focus on optimizing performance for heterogeneous and specialized hardware, such as multi-core CPUs and many-core GPUs, while addressing critical challenges like communication overhead, synchronization, and scalability. The Parallel Random Access Machine (PRAM) model remains a foundational theoretical tool for designing and analyzing parallel algorithms, and recent work has applied PRAM principles to develop more practical algorithms for modern hardware. Similarly, the hypercube is a powerful conceptual model, and recent advancements have focused on adapting its principles to real-world applications in areas such as quantum computing, AI, and cloud networking.
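To make the PRAM model concrete, here is a small simulation (our own example, not taken from the talk) of the Hillis–Steele parallel prefix sum. Each iteration of the loop corresponds to one synchronous PRAM round in which every "processor" reads and writes in parallel, so the scan finishes in O(log n) rounds rather than a sequential O(n) pass.

```python
def pram_prefix_sum(x):
    """Hillis-Steele scan: in round with offset d, every 'processor' i
    adds the value d positions to its left (if any), all in lockstep.
    The offset doubles each round, giving O(log n) synchronous rounds."""
    x = list(x)
    n = len(x)
    d = 1
    while d < n:
        # One PRAM round: all reads happen before all writes,
        # simulated here by building the new list in one expression.
        x = [x[i] + (x[i - d] if i >= d else 0) for i in range(n)]
        d *= 2
    return x
```

On real hardware the same doubling pattern underlies GPU scan kernels, where each round is a synchronised pass over the array.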


Talk III: Privacy-Preserving (Collaborative) Machine Learning

Abstract: Machine learning has been adopted across industries, including applications operated directly by end users (individuals). However, users and organisations are often reluctant to share data for training. Data sharing is also restricted by regulations that vary across jurisdictions, such as the GDPR, and may further be limited by privacy concerns and market competition. At the same time, domains such as healthcare and finance demand collaboration to address common challenges and advance research. This calls for integrating privacy-enhancing technologies (PETs) with machine learning. PETs draw on approaches from both machine learning and cryptography. Federated learning (FL) is one promising solution, claiming to provide a privacy-preserving (collaborative) machine learning framework. However, many works have questioned whether FL is truly privacy-preserving. It is therefore time to combine multiple PETs, such as FL, differential privacy, homomorphic encryption (HE), and secure multi-party computation, to achieve a genuinely privacy-preserving (collaborative) machine learning solution.
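As one concrete PET ingredient, the following is a hypothetical sketch (parameter names and values are illustrative, not from the talk) of the Gaussian mechanism applied to a client's model update in FL: clip the update to bound its sensitivity, then add calibrated Gaussian noise before it leaves the device.

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, sigma=0.5, rng=None):
    """Clip a client's model update to bound its L2 sensitivity, then add
    Gaussian noise (the Gaussian mechanism) before it leaves the device."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    # Scale down updates whose norm exceeds the clipping bound.
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Noise scale is proportional to the sensitivity (clip_norm).
    return clipped + rng.normal(scale=sigma * clip_norm, size=update.shape)
```

The privacy guarantee depends on `sigma`, the clipping bound, and how many rounds a client participates in; composing those into an overall (ε, δ) budget is exactly where DP accounting meets FL.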


💻 Hands-On / Demonstration

This session will demonstrate how federated learning can be run in an HPC-style environment.

The goal is to give participants a concrete feel for how FL maps onto HPC workflows and what practical challenges arise in Federated HPC.
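By way of illustration (assumptions ours; the actual demo may differ), the sketch below maps FedAvg onto a pool of concurrent workers, with threads standing in for cluster nodes and each simulated node holding its own private dataset. On a real HPC system, MPI ranks or Slurm tasks would take the workers' place.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def make_client(seed, dim, n=64):
    """Each simulated node holds its own private dataset (never shared)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, dim))
    y = X @ np.arange(1.0, dim + 1)  # shared ground truth for the demo
    def step(weights, lr=0.1):
        # One local gradient step on this node's data.
        grad = X.T @ (X @ weights - y) / n
        return weights - lr * grad
    return step

def run_federated(rounds=200, dim=2, n_nodes=4):
    """Server loop: broadcast weights, run clients concurrently, average."""
    clients = [make_client(s, dim) for s in range(n_nodes)]
    weights = np.zeros(dim)
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:  # one worker per 'node'
        for _ in range(rounds):
            updates = list(pool.map(lambda c: c(weights.copy()), clients))
            weights = np.mean(updates, axis=0)  # only updates cross the 'network'
    return weights
```

The structure makes the HPC mapping visible: the per-round `pool.map` is the scatter/compute phase, and the `np.mean` is the reduce, which is where communication cost and stragglers bite at scale.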


🤝 Group Activities: Open Challenges & Future Collaborations

Participants will engage in two structured group activities: identifying open research challenges in Federated HPC, and mapping opportunities for future collaborations.



Contact

For queries and participation details, please contact:

Dr. Harsh Kasyap
Email: hkasyap.cse@iitbhu.ac.in