diff --git a/_publications/2024v2-Arena-Bench.md b/_publications/2024v2-Arena-Bench.md
index a7a0b8e4ca57c..428da698d954c 100644
--- a/_publications/2024v2-Arena-Bench.md
+++ b/_publications/2024v2-Arena-Bench.md
@@ -4,8 +4,7 @@ collection: publications
 permalink: /publication/2024v2-Arena-Bench
 excerpt: 'Deep multimodal learning has shown remarkable success by leveraging contrastive learning to capture explicit one-to-one relations across modalities. However, real-world data often exhibits shared relations beyond simple pairwise associations. We propose M3CoL, a Multimodal Mixup Contrastive Learning approach to capture nuanced shared relations inherent in multimodal data. Our key contribution is a Mixup-based contrastive loss that learns robust representations by aligning mixed samples from one modality with their corresponding samples from other modalities thereby capturing shared relations between them. For multimodal classification tasks, we introduce a framework that integrates a fusion module with unimodal prediction modules for auxiliary supervision during training, complemented by our proposed Mixup-based contrastive loss. Through extensive experiments on diverse datasets (N24News, ROSMAP, BRCA, and Food-101), we demonstrate that M3CoL effectively captures shared multimodal relations and generalizes across domains. It outperforms state-of-the-art methods on N24News, ROSMAP, and BRCA, while achieving comparable performance on Food-101. Our work highlights the significance of learning shared relations for robust multimodal learning, opening up promising avenues for future research.'
 date: 2024-09-26
-venue: Preprint, an extended abstract of the paper accepted at NeurIPS 2024 Workshop on Unifying Representations in Neural Models (UniReps)
-[Link](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=KQuqOvMAAAAJ&citation_for_view=KQuqOvMAAAAJ:eQOLeE2rZwMC)
+venue: Preprint, an extended abstract of the paper accepted at NeurIPS 2024 Workshop on Unifying Representations in Neural Models (UniReps) [Link](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=KQuqOvMAAAAJ&citation_for_view=KQuqOvMAAAAJ:eQOLeE2rZwMC)
 ---
 
 Deep multimodal learning has shown remarkable success by leveraging contrastive learning to capture explicit one-to-one relations across modalities. However, real-world data often exhibits shared relations beyond simple pairwise associations. We propose M3CoL, a Multimodal Mixup Contrastive Learning approach to capture nuanced shared relations inherent in multimodal data. Our key contribution is a Mixup-based contrastive loss that learns robust representations by aligning mixed samples from one modality with their corresponding samples from other modalities thereby capturing shared relations between them. For multimodal classification tasks, we introduce a framework that integrates a fusion module with unimodal prediction modules for auxiliary supervision during training, complemented by our proposed Mixup-based contrastive loss. Through extensive experiments on diverse datasets (N24News, ROSMAP, BRCA, and Food-101), we demonstrate that M3CoL effectively captures shared multimodal relations and generalizes across domains. It outperforms state-of-the-art methods on N24News, ROSMAP, and BRCA, while achieving comparable performance on Food-101. Our work highlights the significance of learning shared relations for robust multimodal learning, opening up promising avenues for future research.
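
The abstract's key contribution is a Mixup-based contrastive loss that aligns a mixed sample from one modality with both of its corresponding partners in another modality. The sketch below illustrates that idea in PyTorch; it is an assumption-laden reconstruction, not the authors' code: the function name `mixup_contrastive_loss`, embedding-level (rather than input-level) mixing, the Beta(alpha, alpha) mixing coefficient, and the InfoNCE-style soft targets are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def mixup_contrastive_loss(z_a, z_b, alpha=1.0, temperature=0.1):
    """Illustrative Mixup-based contrastive loss (not the paper's code).

    z_a, z_b: paired embeddings from two modalities, shape (batch, dim).
    Samples within modality A are mixed, and each mixed embedding is
    aligned with BOTH of its partners in modality B, weighted by the
    mixing coefficient lam -- a soft, shared-relation variant of InfoNCE.
    """
    batch = z_a.size(0)
    # Draw a mixing coefficient lam ~ Beta(alpha, alpha), as in standard Mixup.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(batch, device=z_a.device)

    # Mix embeddings within modality A (input-level mixing is another option).
    z_mix = lam * z_a + (1.0 - lam) * z_a[perm]

    z_mix = F.normalize(z_mix, dim=1)
    z_b_n = F.normalize(z_b, dim=1)

    # Cosine-similarity logits between mixed A samples and all B samples.
    logits = z_mix @ z_b_n.t() / temperature
    log_probs = F.log_softmax(logits, dim=1)

    idx = torch.arange(batch, device=z_a.device)
    # Soft targets: mixed sample i matches B sample i with weight lam
    # and B sample perm[i] with weight (1 - lam).
    loss = -(lam * log_probs[idx, idx]
             + (1.0 - lam) * log_probs[idx, perm]).mean()
    return loss
```

In this sketch, a plain pairwise contrastive loss is recovered as lam approaches 1; intermediate lam values force the encoder to place mixtures between their constituents, which is one way to read the abstract's claim of capturing shared relations beyond one-to-one pairs.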