Extending Graph Condensation to Multi-Label Datasets

Training Graph Neural Networks (GNNs) on large graphs is often hindered by redundancy and high computational demands. Existing graph condensation methods typically target single-label scenarios, but many real-world graphs are multi-label: a node can belong to several classes at once. In this talk, I'll introduce GCond, a graph condensation framework adapted to the multi-label setting, using K-Center initialization and a binary cross-entropy loss. In experiments on eight real-world multi-label graph datasets, GCond consistently achieves state-of-the-art performance, improving both scalability and efficiency for multi-label graph learning.
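To make the two named ingredients concrete, here is a minimal sketch of greedy K-Center selection over node features and a per-label binary cross-entropy loss. This is an illustration of the generic techniques, not the speaker's implementation; all function names and shapes are assumptions.

```python
import numpy as np

def k_center_init(features, k, seed=0):
    """Greedy K-Center: pick k nodes whose features best cover the rest.
    Illustrative stand-in for the condensed-node initialization step."""
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(features.shape[0]))]
    # dist[i] = distance from node i to its nearest chosen center
    dist = np.linalg.norm(features - features[centers[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))  # farthest node becomes the next center
        centers.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
    return np.array(centers)

def multilabel_bce(logits, targets):
    """Binary cross-entropy summed over labels, averaged over nodes:
    each label is treated as an independent yes/no decision, which is
    what makes the loss suitable for multi-label nodes."""
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12
    per_node = -np.sum(targets * np.log(p + eps)
                       + (1 - targets) * np.log(1 - p + eps), axis=1)
    return float(np.mean(per_node))

# Toy usage: pick 3 "condensed" nodes out of 10.
X = np.random.default_rng(1).normal(size=(10, 4))
chosen = k_center_init(X, k=3)
```

Unlike softmax cross-entropy, the BCE above does not force the class probabilities of a node to sum to one, so a node may score high on multiple labels simultaneously.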

Date
Location: Amos Eaton 216
Speaker: Liangliang (Lia) Zhang from Computer Science (RPI)