This repository was archived by the owner on Dec 5, 2025. It is now read-only.
Really like the app, great idea for Mac AI integration! I have around 200GB of MLX models already downloaded via mlx_lm.server that I use with LibreChat; however, there appears to be no way to use these with HuggingChat. Please let me know if it would be possible to add this functionality.
According to this post, MLX is supported and indeed works, but only with the two models available in the HC Settings panel (I've tried the Qwen2.5-3B-Instruct model):
Now supports MLX inference. Press CMD+Shift+\ to switch to local inference
This is probably due to a difference in where HC and MLX store their models: HC keeps them at ~/Documents/huggingface, while MLX uses ~/.cache/huggingface. I had hoped to symlink the two, but that isn't possible because the directory structures are very different. See below for the structure. Thanks!
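For reference, this is roughly the symlink approach I tried, sketched here with throwaway temp directories so nothing real is touched. The model-folder name and the hub/ layout below are assumptions based on the standard HuggingFace cache format; even when the link itself succeeds, HC presumably still can't use the models because the internal layout it expects differs from the MLX cache layout:

```shell
# Stand-ins for the two real locations (~/Documents/huggingface and ~/.cache/huggingface)
HC_DIR="$(mktemp -d)/huggingface"
MLX_DIR="$(mktemp -d)/huggingface"

# Simulate an MLX-downloaded model in the standard HF hub cache layout (assumed)
mkdir -p "$MLX_DIR/hub/models--mlx-community--Qwen2.5-3B-Instruct-4bit"

# Symlink HC's expected path to the MLX cache
ln -s "$MLX_DIR" "$HC_DIR"

# The MLX layout shows through the link, not the layout HC expects
ls "$HC_DIR/hub"
```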