AI / ML · ML Framework · stable

JAX

Composable transformations of Python+NumPy programs for high-performance ML research

32.0K stars · 700 contributors · Since 2018
Website · GitHub

Google's library for high-performance numerical computing, combining automatic differentiation, JIT compilation, and native GPU/TPU support on top of the XLA compiler.
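
A minimal sketch of the two core transformations named above, jax.grad and jax.jit; the loss function and input values are illustrative, not from the source:

    import jax
    import jax.numpy as jnp

    # An ordinary Python+NumPy-style function
    def loss(w):
        return jnp.sum((w * 2.0 - 1.0) ** 2)

    grad_loss = jax.grad(loss)      # autodiff: returns d(loss)/dw
    fast_grad = jax.jit(grad_loss)  # compiles via XLA on first call

    w = jnp.arange(3.0)
    print(fast_grad(w))             # same result as grad_loss(w), compiled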

License
Apache-2.0
Min RAM
2 GB
Min CPUs
2 cores
Scaling
distributed
Complexity
expert
Performance
enterprise grade
Self-hostable
K8s native
Offline
Pricing
fully free
Docs quality
good
Vendor lock-in
none

Use cases

  • Cutting-edge ML research requiring custom gradient computation (see the sketch after this list)
  • Large-scale scientific simulation on TPU pods
  • Bayesian inference with MCMC methods
  • Physics-informed neural networks
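
As a hedged illustration of the first use case, jax.custom_vjp lets research code override a function's gradient; the clipped-log function here is a made-up example, not taken from the source:

    import jax
    import jax.numpy as jnp

    @jax.custom_vjp
    def safe_log(x):
        # Forward pass: clamp to avoid log(0)
        return jnp.log(jnp.clip(x, 1e-6, None))

    def safe_log_fwd(x):
        return safe_log(x), x                   # save x for the backward pass

    def safe_log_bwd(x, g):
        return (g / jnp.clip(x, 1e-6, None),)   # custom, numerically safer gradient

    safe_log.defvjp(safe_log_fwd, safe_log_bwd)

    print(jax.grad(lambda x: safe_log(x).sum())(jnp.array([0.0, 1.0, 2.0])))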

Anti-patterns / when NOT to use

  • Steep learning curve for production engineers
  • Ecosystem smaller than PyTorch/TensorFlow
  • Debugging JIT-compiled code is difficult (see the sketch after this list)
  • Not recommended for beginners
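
Two stock escape hatches for the debugging pain point above, both part of the public JAX API; the traced function itself is illustrative:

    import jax
    import jax.numpy as jnp

    @jax.jit
    def step(x):
        jax.debug.print("x = {}", x)  # prints from inside compiled code
        return x * 2.0

    step(jnp.ones(3))

    # Or disable compilation entirely and debug in plain Python:
    with jax.disable_jit():
        step(jnp.ones(3))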

Integrates with

Replaces / alternatives to

  • NumPy for GPU workloads (drop-in sketch below)
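
jax.numpy mirrors most of the NumPy API, so the usual migration is an import swap; a tiny hedged sketch of the differences that matter:

    import jax.numpy as jnp   # instead of: import numpy as np

    a = jnp.linspace(0.0, 1.0, 4)
    b = jnp.sin(a) @ jnp.cos(a)   # runs on GPU/TPU when one is available
    x = a.at[0].set(9.0)          # JAX arrays are immutable; .at replaces in-place updates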

Technical specs

Language
Python, C++
API type
SDK
Protocols
HTTP
Deployment
pip
SDKs
python
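
For the pip deployment row above, the CPU-only install is one command; the CUDA extra shown follows the current convention but the extra name may change between releases, so check JAX's install guide:

    pip install -U jax              # CPU-only
    pip install -U "jax[cuda12]"    # NVIDIA GPU wheel (extra name may vary)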

Community

GitHub stars 32.0K
Contributors 700
Commit frequency daily
Plugin ecosystem none
Backing Google
Funding corporate

Release

Latest version
Last release
Since 2018

Best fit

Team size
solo, small
Industries
research, scientific-computing

Tags

  • autodiff
  • jit-compilation
  • xla
  • tpu
  • functional-programming
  • scientific-computing