# Guides
Welcome to the GuideLLM guides section! Here you'll find comprehensive documentation covering the key components and concepts of the GuideLLM platform: how the system works internally, how to configure its components, and how to interpret benchmark results to optimize your LLM deployments.
Whether you're interested in understanding the system architecture, exploring supported backends, configuring datasets, analyzing metrics, or setting service level objectives, these guides provide the detailed information you need to make informed decisions about your LLM deployments.
## Key Guides
- **Architecture**: Understand GuideLLM's modular design and how its core components interact to evaluate LLM deployments.
- **Backends**: Learn about supported LLM backends and how to set up OpenAI-compatible servers for benchmarking.
- **Datasets**: Configure and use different data sources for benchmarking, including synthetic data, Hugging Face datasets, and file-based options.
- **Metrics**: Explore the comprehensive metrics GuideLLM provides to evaluate performance, including latency, throughput, and token-level analysis.
- **Outputs**: Learn about supported output formats and how to customize result reporting for your benchmarks.
- **Service Level Objectives**: Define and implement SLOs and SLAs for your LLM deployments to ensure reliability and performance.