News & Events
News

Research Achievement of Professor Yu Xianchuan's Team from the School of Artificial Intelligence Selected as an Outstanding Paper at the ACM International Conference on Multimedia

The ACM International Conference on Multimedia (ACM MM), initiated by the Association for Computing Machinery (ACM), is the most influential international conference in the field of multimedia processing, analysis, and computing. ACM MM 2025 was held in Dublin, Ireland, from October 27 to 31, 2025. The conference received 5,330 submissions and accepted 1,250, an acceptance rate of 23.45%. Of these, the Content Theme received over 2,400 submissions, of which only 3 were recognized as Outstanding Papers. The research result of Professor Yu Xianchuan's team from the School of Artificial Intelligence was among them.




In this paper, the team proposes PAUSE, a robust contrastive multi-view clustering framework that mitigates the False Negative Problem (FNP) inherent in traditional contrastive methods. PAUSE integrates pseudo-label-guided universum learning with Mixup to generate generalized negatives that expand class boundaries, reduce misclassification risk, and enhance both intra-class compactness and inter-class separability. The framework employs a two-stage training process. In the warm-up stage, dual contrastive learning generates pseudo labels that capture semantic relationships across views. In the fine-tuning stage, the pseudo labels guide the synthesis of universum samples by mixing anchor instances with out-of-class centroids. This positions the universum samples in neutral boundary regions, proactively preventing FNP without requiring explicit post hoc identification and correction of positive and negative pairs. Extensive experiments on five multi-view datasets demonstrate that PAUSE outperforms 11 state-of-the-art methods and robustly handles complex cross-modal data.
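The fine-tuning step described above can be sketched in a few lines: each anchor is mixed with the centroid of a class other than its pseudo-label, so the synthesized sample lands in a neutral boundary region. This is a minimal illustrative sketch, not the authors' implementation; the function name, the single mixing coefficient `lam`, and the random centroid choice are all assumptions for illustration (the paper's code is linked below).

```python
import numpy as np

def synthesize_universum(anchors, pseudo_labels, centroids, lam=0.5, seed=0):
    """Illustrative pseudo-label-guided universum synthesis via Mixup.

    anchors:       (n, d) array of anchor instance features
    pseudo_labels: (n,) array of cluster assignments from the warm-up stage
    centroids:     (k, d) array of class centroids
    lam:           Mixup coefficient (convex combination weight)
    """
    n, k = anchors.shape[0], centroids.shape[0]
    universum = np.empty_like(anchors)
    rng = np.random.default_rng(seed)
    for i in range(n):
        # Pick a centroid of a class OTHER than this anchor's pseudo-label,
        # so the mixed sample sits between classes, not inside one.
        other = [c for c in range(k) if c != pseudo_labels[i]]
        c = rng.choice(other)
        # Mixup: convex combination of anchor and out-of-class centroid.
        universum[i] = lam * anchors[i] + (1 - lam) * centroids[c]
    return universum
```

Because each synthesized sample is anchored between its own class and a foreign centroid, treating it as a generalized negative widens the decision boundary rather than repelling true intra-class neighbors.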



Abstract:

Recently, contrastive learning has emerged as a promising approach for multi-view clustering (MVC), as it enforces cross-view consistency and leverages complementary information from different views to enhance the analysis of heterogeneous data. However, traditional contrastive MVC methods suffer from an inherent limitation: their one-to-many contrast mechanism induces the False Negative Problem (FNP), where semantically similar intra-class instances are erroneously repelled. This phenomenon compromises intra-class consistency and ultimately degrades clustering performance. To overcome this issue, we propose a novel Pseudo lAbel gUided univerSum lEarning (PAUSE) framework for robust multi-view clustering. Specifically, PAUSE operates in two synergistic stages: (1) A warm-up stage that employs dual contrastive learning to generate reliable pseudo-labels, establishing robust semantic relationships; (2) A fine-tuning stage that synthesizes universum samples via Mixup between anchor instances and out-of-class centroids, guided by the acquired pseudo-labels. This unique mechanism constructs generalized negative classes that expand inter-class margins while preserving intra-class cohesion. Crucially, the widened decision boundaries prevent misclassification of displaced intra-class instances, effectively circumventing FNP without requiring explicit negative pair correction. We further devise a robust universum contrastive loss that explicitly enforces cross-view consistency through adaptive boundary constraints. Extensive experiments on five multi-view benchmarks demonstrate that our PAUSE consistently outperforms 11 state-of-the-art multi-view learning methods. Our code is accessible at: https://github.com/xixi-555/PAUSE_main_code.


Reference: https://dl.acm.org/doi/abs/10.1145/3746027.3755205