Projects
Communication Efficiency and Privacy-preserving Computation in Vertical Federated Learning
Apr. 2022 - Jan. 2024
Developed an innovative Vertical Federated Learning framework that combines multiple optimization techniques to improve convergence rates while preserving data privacy. Provided a theoretical analysis of the framework's convergence and differential privacy guarantees, achieving substantial reductions in communication costs. This series of works was published at NeurIPS 2023 and in the Machine Learning Journal (MLJ).

Event-Driven Online Federated Learning
Jan. 2023 - Oct. 2024
Addressed a critical challenge in Vertical Federated Learning (VFL): the absence of synchronous data streaming across clients in online learning scenarios. Prior research often assumes that all clients receive data for the same entity simultaneously, whereas real-world applications are inherently dynamic, characterized by event-driven and asynchronous data arrivals. To overcome this limitation, proposed a novel event-driven online VFL framework that activates clients asynchronously based on event triggers. By involving only the relevant subset of clients for each event, the framework significantly reduces communication and computation costs, offering a scalable and efficient solution for collaborative learning in dynamic environments. This research expands the practical applicability of online VFL to domains such as IoT networks, sensor systems, and distributed enterprise solutions. Accepted at ICLR 2025.

Kernelized AUC Maximization in Vertical Federated Learning
Jun. 2023 - Jul. 2024
Contributed to the Asynchronous Vertical Federated Kernelized AUC Maximization (AVFKAM) project, designed to enhance model performance on imbalanced datasets, and demonstrated notable improvements in training efficiency for federated systems. Published at KDD 2024.

Black-box Prompt Learning for Cloud-based LLMs in Federated Learning
In Progress
Leading a project exploring prompt learning techniques for cloud-hosted Large Language Models (LLMs), optimizing prompts in a black-box setting via the OpenAI API. The project improves prompt effectiveness without access to model internals, bringing efficiency and adaptability to federated learning contexts. Submitted to ICML 2025.