# citation.cff
cff-version: 1.2.0
message: "If you use this software, please cite both of the papers below. In addition, please cite any paper associated with the specific datasets and benchmarks you use."
title: "MTEB: Your toolkit for evaluating embeddings"
type: software
url: "https://arxiv.org/abs/2502.13595"
repository-code: "https://github.com/embeddings-benchmark/mteb"
license: "Apache-2.0"
keywords:
  - text embeddings
  - multilingual
  - benchmark
preferred-citation:
  type: article
  title: "MMTEB: Massive Multilingual Text Embedding Benchmark"
  authors:
    - family-names: Enevoldsen
      given-names: Kenneth
    - family-names: Chung
      given-names: Isaac
    - family-names: Kerboua
      given-names: Imene
    - family-names: Kardos
      given-names: Márton
    - family-names: Mathur
      given-names: Ashwin
    - family-names: Stap
      given-names: David
    - family-names: Gala
      given-names: Jay
    - family-names: Siblini
      given-names: Wissam
    - family-names: Krzemiński
      given-names: Dominik
    - family-names: Winata
      given-names: Genta Indra
    - family-names: Sturua
      given-names: Saba
    - family-names: Utpala
      given-names: Saiteja
    - family-names: Ciancone
      given-names: Mathieu
    - family-names: Schaeffer
      given-names: Marion
    - family-names: Sequeira
      given-names: Gabriel
    - family-names: Misra
      given-names: Diganta
    - family-names: Dhakal
      given-names: Shreeya
    - family-names: Rystrøm
      given-names: Jonathan
    - family-names: Solomatin
      given-names: Roman
    - family-names: Çağatan
      given-names: Ömer
    - family-names: Kundu
      given-names: Akash
    - family-names: Bernstorff
      given-names: Martin
    - family-names: Xiao
      given-names: Shitao
    - family-names: Sukhlecha
      given-names: Akshita
    - family-names: Pahwa
      given-names: Bhavish
    - family-names: Poświata
      given-names: Rafał
    - family-names: GV
      given-names: Kranthi Kiran
    - family-names: Ashraf
      given-names: Shawon
    - family-names: Auras
      given-names: Daniel
    - family-names: Plüster
      given-names: Björn
    - family-names: Harries
      given-names: Jan Philipp
    - family-names: Magne
      given-names: Loïc
    - family-names: Mohr
      given-names: Isabelle
    - family-names: Hendriksen
      given-names: Mariya
    - family-names: Zhu
      given-names: Dawei
    - family-names: Gisserot-Boukhlef
      given-names: Hippolyte
    - family-names: Aarsen
      given-names: Tom
    - family-names: Kostkan
      given-names: Jan
    - family-names: Wojtasik
      given-names: Konrad
    - family-names: Lee
      given-names: Taemin
    - family-names: Šuppa
      given-names: Marek
    - family-names: Zhang
      given-names: Crystina
    - family-names: Rocca
      given-names: Roberta
    - family-names: Hamdy
      given-names: Mohammed
    - family-names: Michail
      given-names: Andrianos
    - family-names: Yang
      given-names: John
    - family-names: Faysse
      given-names: Manuel
    - family-names: Vatolin
      given-names: Aleksei
    - family-names: Thakur
      given-names: Nandan
    - family-names: Dey
      given-names: Manan
    - family-names: Vasani
      given-names: Dipam
    - family-names: Chitale
      given-names: Pranjal
    - family-names: Tedeschi
      given-names: Simone
    - family-names: Tai
      given-names: Nguyen
    - family-names: Snegirev
      given-names: Artem
    - family-names: Günther
      given-names: Michael
    - family-names: Xia
      given-names: Mengzhou
    - family-names: Shi
      given-names: Weijia
    - family-names: Lù
      given-names: Xing Han
    - family-names: Clive
      given-names: Jordan
    - family-names: Krishnakumar
      given-names: Gayatri
    - family-names: Maksimova
      given-names: Anna
    - family-names: Wehrli
      given-names: Silvan
    - family-names: Tikhonova
      given-names: Maria
    - family-names: Panchal
      given-names: Henil
    - family-names: Abramov
      given-names: Aleksandr
    - family-names: Ostendorff
      given-names: Malte
    - family-names: Liu
      given-names: Zheng
    - family-names: Clematide
      given-names: Simon
    - family-names: Miranda
      given-names: Lester James
    - family-names: Fenogenova
      given-names: Alena
    - family-names: Song
      given-names: Guangyu
    - family-names: Safi
      given-names: Ruqiya Bin
    - family-names: Li
      given-names: Wen-Ding
    - family-names: Borghini
      given-names: Alessia
    - family-names: Cassano
      given-names: Federico
    - family-names: Su
      given-names: Hongjin
    - family-names: Lin
      given-names: Jimmy
    - family-names: Yen
      given-names: Howard
    - family-names: Hansen
      given-names: Lasse
    - family-names: Hooker
      given-names: Sara
    - family-names: Xiao
      given-names: Chenghao
    - family-names: Adlakha
      given-names: Vaibhav
    - family-names: Weller
      given-names: Orion
    - family-names: Reddy
      given-names: Siva
    - family-names: Muennighoff
      given-names: Niklas
  year: 2025
  url: "https://arxiv.org/abs/2502.13595"
  date-released: "2025-02-19"
  abstract: "Text embeddings are typically evaluated on a limited set of tasks, which are constrained by language, domain, and task diversity. To address these limitations and provide a more comprehensive evaluation, we introduce the Massive Multilingual Text Embedding Benchmark (MMTEB) - a large-scale, community-driven expansion of MTEB, covering over 500 quality-controlled evaluation tasks across 250+ languages. MMTEB includes a diverse set of challenging, novel tasks such as instruction following, long-document retrieval, and code retrieval, representing the largest multilingual collection of evaluation tasks for embedding models to date. Using this collection, we develop several highly multilingual benchmarks, which we use to evaluate a representative set of models. We find that while large language models (LLMs) with billions of parameters can achieve state-of-the-art performance on certain language subsets and task categories, the best-performing publicly available model is multilingual-e5-large-instruct with only 560 million parameters. To facilitate accessibility and reduce computational cost, we introduce a novel downsampling method based on inter-task correlation, ensuring a diverse selection while preserving relative model rankings. Furthermore, we optimize tasks such as retrieval by sampling hard negatives, creating smaller but effective splits. These optimizations allow us to introduce benchmarks that drastically reduce computational demands. For instance, our newly introduced zero-shot English benchmark maintains a ranking order similar to the full-scale version but at a fraction of the computational cost."
references:
  - type: article
    title: "MTEB: Massive Text Embedding Benchmark"
    authors:
      - family-names: Muennighoff
        given-names: Niklas
      - family-names: Tazi
        given-names: Nouamane
      - family-names: Magne
        given-names: Loïc
      - family-names: Reimers
        given-names: Nils
    year: 2023
    url: "https://arxiv.org/abs/2210.07316"