
Commit
Update 2024v2-Arena-Bench.md
raja-7-c authored Oct 23, 2024
1 parent 14b6824 commit fb7bf90
Showing 1 changed file with 1 addition and 2 deletions: _publications/2024v2-Arena-Bench.md
@@ -4,8 +4,7 @@ collection: publications
 permalink: /publication/2024v2-Arena-Bench
 excerpt: 'Deep multimodal learning has shown remarkable success by leveraging contrastive learning to capture explicit one-to-one relations across modalities. However, real-world data often exhibits shared relations beyond simple pairwise associations. We propose M3CoL, a Multimodal Mixup Contrastive Learning approach to capture nuanced shared relations inherent in multimodal data. Our key contribution is a Mixup-based contrastive loss that learns robust representations by aligning mixed samples from one modality with their corresponding samples from other modalities thereby capturing shared relations between them. For multimodal classification tasks, we introduce a framework that integrates a fusion module with unimodal prediction modules for auxiliary supervision during training, complemented by our proposed Mixup-based contrastive loss. Through extensive experiments on diverse datasets (N24News, ROSMAP, BRCA, and Food-101), we demonstrate that M3CoL effectively captures shared multimodal relations and generalizes across domains. It outperforms state-of-the-art methods on N24News, ROSMAP, and BRCA, while achieving comparable performance on Food-101. Our work highlights the significance of learning shared relations for robust multimodal learning, opening up promising avenues for future research.'
 date: 2024-09-26
-venue: Preprint, an extended abstract of the paper accepted at NeurIPS 2024 Workshop on Unifying Representations in Neural Models (UniReps)
-[Link](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=KQuqOvMAAAAJ&citation_for_view=KQuqOvMAAAAJ:eQOLeE2rZwMC)
+venue: Preprint, an extended abstract of the paper accepted at NeurIPS 2024 Workshop on Unifying Representations in Neural Models (UniReps) [Link](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=KQuqOvMAAAAJ&citation_for_view=KQuqOvMAAAAJ:eQOLeE2rZwMC)
 
 ---
 Deep multimodal learning has shown remarkable success by leveraging contrastive learning to capture explicit one-to-one relations across modalities. However, real-world data often exhibits shared relations beyond simple pairwise associations. We propose M3CoL, a Multimodal Mixup Contrastive Learning approach to capture nuanced shared relations inherent in multimodal data. Our key contribution is a Mixup-based contrastive loss that learns robust representations by aligning mixed samples from one modality with their corresponding samples from other modalities thereby capturing shared relations between them. For multimodal classification tasks, we introduce a framework that integrates a fusion module with unimodal prediction modules for auxiliary supervision during training, complemented by our proposed Mixup-based contrastive loss. Through extensive experiments on diverse datasets (N24News, ROSMAP, BRCA, and Food-101), we demonstrate that M3CoL effectively captures shared multimodal relations and generalizes across domains. It outperforms state-of-the-art methods on N24News, ROSMAP, and BRCA, while achieving comparable performance on Food-101. Our work highlights the significance of learning shared relations for robust multimodal learning, opening up promising avenues for future research.
