So it fits on a phone, for instance. The word2vec model currently takes ~50 MB, which we could shrink by:
- using fewer words, saving say a factor of 2
- using half-precision floats instead of 32-bit floats, saving a factor of 2
- storing each dictionary word's similarities to the Codenames words instead of its 300-dimensional vector, saving a factor of 300/100 = 3 (at the cost of making it impossible to play with custom words)
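A rough sketch of the last two savings, with hypothetical sizes (the vocabulary size of 40,000 and the count of 100 Codenames words are assumptions for illustration, not the actual numbers):

```python
import numpy as np

# Hypothetical sizes for illustration: ~40k dictionary words,
# 300-dimensional word2vec vectors, ~100 Codenames board words.
n_vocab, dim, n_codenames = 40_000, 300, 100

rng = np.random.default_rng(0)
vectors = rng.standard_normal((n_vocab, dim)).astype(np.float32)
board = rng.standard_normal((n_codenames, dim)).astype(np.float32)

# Half-floats instead of floats: a factor of 2.
vectors_f16 = vectors.astype(np.float16)

# Precomputed similarities instead of vectors: a factor of dim / n_codenames = 3.
# We keep only each word's cosine similarity to the fixed Codenames words,
# so custom words are no longer possible afterwards.
def normalize(m):
    return m / np.linalg.norm(m, axis=1, keepdims=True)

sims = normalize(vectors) @ normalize(board).T  # shape (n_vocab, n_codenames)

print(f"float32 vectors: {vectors.nbytes / 2**20:.1f} MiB")      # ~45.8 MiB
print(f"float16 vectors: {vectors_f16.nbytes / 2**20:.1f} MiB")  # ~22.9 MiB
print(f"similarity table: {sims.nbytes / 2**20:.1f} MiB")        # ~15.3 MiB
```

The savings multiply: the similarity table could itself be stored as float16 for a combined factor of 6.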