---
title: Get Started with Weights & Biases
description: Choose the right W&B product for your use case and learn how to get started
---
Welcome to Weights & Biases! Before getting started with our products, it's important to identify which ones suit your use case.
| Product | Best For | Key Features |
|---------|----------|--------------|
| W&B Models | Training ML models from scratch | Experiment tracking, hyperparameter optimization, model registry, visualizations |
| W&B Weave | Building LLM applications | Tracing, prompt management, evaluation, cost tracking for production AI apps |
| W&B Inference | Using pre-trained models | Hosted open-source models, API access, model playground for testing |
| W&B Training | Fine-tuning models | Create and deploy LoRAs and custom model adaptations with reinforcement learning |
The "hello world" of W&B, which guides you through logging your first data.
A full-fledged tutorial that walks through the entire Models product using a real ML experiment.
A video-led course that emphasizes experiment tracking and features quizzes to ensure comprehension.
Learn how models are developed, trained, evaluated, and deployed, and how you can use wandb at each step of that lifecycle to build better-performing models faster.
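The experiment-tracking pattern described above can be sketched in a few lines. The project name, config values, and loss curve below are illustrative stand-ins, not from the docs, and the import guard lets the sketch run even without the SDK installed.

```python
# A minimal experiment-tracking loop in the wandb style.
try:
    import wandb
except ImportError:
    wandb = None  # sketch still runs without the SDK installed

config = {"learning_rate": 0.01, "epochs": 5}
# mode="offline" records runs locally without requiring an API key
run = wandb.init(project="getting-started", config=config, mode="offline") if wandb else None

history = []
for epoch in range(config["epochs"]):
    loss = 1.0 / (epoch + 1)  # stand-in for a real training loss
    history.append(loss)
    if run:
        run.log({"epoch": epoch, "loss": loss})  # one point per step in the dashboard

if run:
    run.finish()

print(f"logged {len(history)} steps, final loss {history[-1]:.2f}")
```

In a real run you would replace the synthetic loss with your training step and browse the logged curves in the W&B UI.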
Learn how to decorate your code so that every call into an LLM logs a Weave trace, setting you on the path to an observable LLM workflow.
A full-fledged tutorial that shows Weave performing real-world evaluation of various models hosted on W&B Inference.
A video-led course that teaches you how to log, debug, and evaluate language model workflows, and features quizzes to ensure comprehension.
Learn how to continuously evaluate, monitor, and iterate on your AI applications to improve quality, latency, cost, and safety.
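The decorator pattern mentioned above can be sketched as follows. The fallback shim is an assumption that lets the example run without the SDK installed, and the model call itself is a placeholder, not a real LLM.

```python
# A minimal sketch of Weave's tracing decorator.
try:
    import weave
    op = weave.op
except ImportError:
    def op(fn):  # no-op stand-in when weave is not installed
        return fn

@op
def ask_model(prompt: str) -> str:
    # Placeholder for a real LLM call. With weave.init("my-project") active,
    # every call to this function is recorded as a trace, with inputs,
    # outputs, and latency captured automatically.
    return f"echo: {prompt}"

print(ask_model("hello"))
```

The key idea is that tracing is opt-in per function: decorate the functions you care about, call `weave.init` once, and the rest of your code is unchanged.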
Features a quickstart that shows how to use the standard OpenAI REST API to call any model hosted on W&B Inference.
A full-fledged tutorial that shows Weave performing real-world evaluation of various models hosted on W&B Inference.
W&B Inference is simple to use: click any model we host, try out prompts, and watch the observability layer kick in.
Run through a few quick examples of W&B Inference tracing calls to popular LLMs and evaluating the results.
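The OpenAI-compatible REST pattern described above can be sketched with the standard library alone. The base URL and model ID here are assumptions for illustration; check the W&B Inference docs for current values. The request is built unconditionally but only sent when an API key is present in the environment.

```python
# Sketch: calling a W&B Inference-hosted model via the OpenAI-compatible
# chat-completions endpoint, using only the standard library.
import json
import os
import urllib.request

BASE_URL = "https://api.inference.wandb.ai/v1"  # assumed endpoint

payload = {
    "model": "meta-llama/Llama-3.1-8B-Instruct",  # any hosted model ID
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {os.environ.get('WANDB_API_KEY', '')}",
        "Content-Type": "application/json",
    },
)

# Only send the request when a key is actually configured.
if os.environ.get("WANDB_API_KEY"):
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint speaks the OpenAI chat-completions protocol, the official `openai` client also works by pointing its `base_url` at the same address.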
Use W&B Training with OpenPipe's ART library to train a model to play the game 2048.
After training your model, learn how to use it in your code.