LLMs & AI Infra · Open LLM · stable

Mistral

Efficient open-weight LLMs for edge and cloud

38.0K stars · 100 contributors · Since 2023

Open-weight LLMs: MoE architecture, sliding-window attention, function calling, multiple model sizes, efficient inference
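Function calling here follows the widely used OpenAI-style chat-completions schema. A minimal sketch of building such a request payload; the tool definition (`get_weather`) and model name are illustrative assumptions, not part of any specific Mistral release:

```python
def build_tool_call_request(user_message, model="mistral-small"):
    """Build an OpenAI-style chat payload declaring one callable tool.

    The get_weather tool and the model name are assumptions for this
    sketch; any self-hosted server exposing the chat-completions schema
    (e.g. vLLM) accepts a payload shaped like this.
    """
    tool = {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "tools": [tool],
        # let the model decide whether to emit a tool call
        "tool_choice": "auto",
    }
```

If the model decides to call the tool, the response carries the function name and JSON arguments instead of plain text; the caller executes the function and sends the result back in a follow-up message.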

License
Apache-2.0
Min RAM
16 GB
Min CPUs
4 cores
Scaling
distributed
Complexity
advanced
Performance
enterprise grade
Capabilities
self-hostable, K8s native, offline-capable
Pricing
fully free
Docs quality
good
Vendor lock-in
none

Use cases

  • Self-hosted AI chatbot with full data privacy
  • Fine-tune for domain-specific tasks
  • Code generation and review assistant
  • Document analysis and summarization
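Document summarization with a context-limited model is usually done map-reduce style: split the document into overlapping chunks, summarize each, then summarize the summaries. A minimal chunking sketch; character-based budgeting stands in for real token counting:

```python
def chunk_document(text, max_chars=2000, overlap=200):
    """Split a long document into overlapping chunks for map-reduce
    summarization.

    max_chars approximates the model's per-request context budget;
    the overlap preserves continuity across chunk boundaries so no
    sentence is cut off without context. Both values are assumptions
    to tune against the actual model's context window.
    """
    if max_chars <= overlap:
        raise ValueError("max_chars must exceed overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks
```

Each chunk is then sent as its own summarization prompt, and the per-chunk summaries are concatenated and summarized once more.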

Anti-patterns / when NOT to use

  • You lack multi-GPU hardware: the larger models require it
  • You can't invest in the significant DevOps that self-hosting demands
  • You need top quality from the smaller variants, which trade quality for speed
  • You need frontier-level capability: closed models still lead on the hardest tasks

Replaces / alternatives to

  • GPT-3.5/4
  • Claude Haiku

Technical specs

Language
Python
API type
SDK
Protocols
HTTP
Deployment
docker, binary
SDKs
python
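Since the protocol is plain HTTP, a self-hosted deployment can be called with nothing but the standard library. A sketch that constructs (but does not send) the request; the base URL, route, and model name assume a local OpenAI-compatible server such as one started from the docker image:

```python
import json
import urllib.request

def make_chat_request(prompt, base_url="http://localhost:8000", api_key=None):
    """Construct an HTTP request for a chat completion.

    base_url and the /v1/chat/completions route are assumptions for a
    locally deployed OpenAI-compatible server; adjust to your setup.
    """
    body = json.dumps({
        "model": "mistral-small",  # assumed model name
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers=headers,
        method="POST",
    )

req = make_chat_request("Hello")
# send with urllib.request.urlopen(req) once a server is running
```

The official Python SDK wraps the same endpoint; the raw request is shown only to make the wire protocol concrete.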

Community

GitHub stars 38.0K
Contributors 100
Commit frequency monthly
Plugin ecosystem none
Backing Mistral AI
Funding VC-backed

Release

Latest version
Last release
Since 2023

Best fit

Team size
small, medium, enterprise
Industries
general, saas, healthcare, fintech, legal, education

Tags

  • llm
  • open-weights
  • self-hosted
  • fine-tunable
  • reasoning
  • coding
  • multilingual