MLX / HuggingFace release request – OpenEuroLLM-Hungarian #13

@Irogory

Description

Objective

The OpenEuroLLM-Hungarian model currently runs almost entirely on the CPU (≈98% CPU utilization) when served via Ollama on Apple Silicon (Mac mini M4, 16 GB RAM), leaving the GPU unused and inference far slower than the hardware allows. After extensive testing of multiple Hungarian language models, this model consistently outperforms all alternatives in fluency and accuracy — it is simply the best locally-runnable Hungarian model available.

Releasing the model weights on HuggingFace (ideally with a 4-bit quantized version) would allow conversion to Apple's MLX format, enabling full GPU utilization and 40–60% faster inference for a large and growing Apple Silicon user base.

Thank you for building something genuinely useful for Hungarian speakers!

— Sándor

Proposal

  1. Release model weights on HuggingFace (openeurollm/OpenEuroLLM-Hungarian)
  2. Provide a 4-bit quantized GGUF or SafeTensors version
  3. Optionally: publish a pre-converted MLX version to mlx-community on HuggingFace

This would require minimal effort but would significantly expand the model's reach to the entire Apple Silicon community.
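For reference, once the weights are on HuggingFace, the MLX conversion in step 3 is a one-liner with the `mlx-lm` package. This is a sketch assuming the repo id proposed above and a standard HuggingFace-format (SafeTensors) checkpoint; the output directory name is just an example:

```shell
# Install Apple's mlx-lm tooling (Apple Silicon only)
pip install mlx-lm

# Convert the HuggingFace checkpoint to MLX with 4-bit quantization.
# --hf-path assumes the repo id proposed in step 1 exists.
python -m mlx_lm.convert \
    --hf-path openeurollm/OpenEuroLLM-Hungarian \
    --mlx-path ./OpenEuroLLM-Hungarian-4bit-mlx \
    -q --q-bits 4
```

The same tool can also push the converted model straight to `mlx-community` via its `--upload-repo` option, which would cover the optional step 3 with no extra work.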
