Prerequisites
Before deploying Cartesia’s self-hosted solution, you’ll need:
Enterprise Contract
Cartesia’s self-hosted products generally require an enterprise contract. Please reach out to support@cartesia.ai to request a conversation with our Go-to-Market team.
Infrastructure
Hardware Requirements
Cartesia models require NVIDIA GPUs from the Ampere family or newer, with at least 24 GB of GPU memory. We’ll provide more specific guidance depending on how you run your GPU clusters. See Hardware Selection for more details.
Deployment Options
You can deploy a self-hosted Cartesia cluster in one of three ways today:
- Via Helm charts on a managed Kubernetes cluster with the right hardware.
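Ampere roughly corresponds to CUDA compute capability 8.0, so one way to sanity-check a candidate node is to compare the GPU’s compute capability major version against 8. A minimal sketch (the helper name and the ≥ 8.0 mapping are our assumptions; confirm exact requirements with Cartesia):

```shell
# Sketch: "Ampere or newer" roughly means CUDA compute capability >= 8.0.
# The helper only parses a version string; on a real node you would obtain it via:
#   nvidia-smi --query-gpu=compute_cap --format=csv,noheader
meets_gpu_requirement() {
  major="${1%%.*}"     # "8.6" -> "8"
  [ "$major" -ge 8 ]   # Ampere (8.x), Ada (8.9), Hopper (9.x), ...
}

meets_gpu_requirement "8.6" && echo "GPU generation OK"
```

Don’t forget to also check GPU memory (at least 24 GB), e.g. via `nvidia-smi --query-gpu=memory.total --format=csv`.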
- Via Docker Compose / Docker Swarm on bare-metal or VM nodes (beta).
- Via managed endpoints on AWS SageMaker JumpStart.
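For the Kubernetes route, the flow typically looks like adding the private Helm repository you’re granted and installing the chart. The repository URL, release name, and namespace below are placeholders, not Cartesia’s actual identifiers (the `cartesia-kube` artifact name comes from the deployment artifacts described later):

```shell
# Placeholder identifiers throughout -- Cartesia provides the real registry
# credentials, repository URL, and chart details after enterprise approval.
helm repo add cartesia https://example.invalid/cartesia-charts
helm repo update
helm install cartesia-selfhosted cartesia/cartesia-kube \
  --namespace cartesia --create-namespace \
  -f values.yaml   # values.yaml holds your cluster-specific overrides
```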
Setup Stages
We highly recommend trying out our cloud offering first, since you can test your application and integrate it without all the work required for self-hosting.
Create Cartesia Account
Sign up at play.cartesia.ai, then navigate to play.cartesia.ai/keys, select your organization, and create an API key.
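Once you have a key, a common pattern is to keep it in an environment variable and fail fast if it’s missing before making any API calls. The variable name CARTESIA_API_KEY is our convention here, not necessarily the official one:

```shell
# Hypothetical convention: keep the key in the environment, never in code.
export CARTESIA_API_KEY="sk-car-..."   # placeholder value

require_api_key() {
  # Fail fast if the key is missing.
  [ -n "${CARTESIA_API_KEY:-}" ] || { echo "CARTESIA_API_KEY is not set" >&2; return 1; }
}

require_api_key && echo "API key present"
```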
Request Enterprise Access
Contact support@cartesia.ai to request enterprise access. If you’re deploying on AWS SageMaker, you can request access directly on the cloud platform itself.
Choose Deployment Method
Select your preferred deployment approach based on your infrastructure. Depending on how you’re deploying, you’ll also decide on hardware at this stage.
Deploy
Once approved, you’ll receive access to:
- Google Cloud Storage bucket containing cartesia-kube and related artifacts (Docker images, voices, LoRA weights)
- Private Docker registry credentials
- Helm chart repositories
- Terraform configuration examples
- Deployment documentation and support
- An offline license (required if you are doing an air-gapped deployment)
Post Deployment
After deployment, we provide resources to help you validate and benchmark it on your own hardware. See Testing and Benchmarking.
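As a quick smoke test before running full benchmarks, you can poll an endpoint and treat any 2xx response as healthy. The `/health` path and helper below are illustrative assumptions, not a documented Cartesia endpoint:

```shell
# Sketch: classify an HTTP status code; any 2xx counts as healthy.
is_healthy() {
  case "$1" in
    2??) return 0 ;;
    *)   return 1 ;;
  esac
}

# Against a live deployment you might feed it a real status code, e.g.:
#   status=$(curl -s -o /dev/null -w '%{http_code}' "http://<your-endpoint>/health")
#   is_healthy "$status" && echo "deployment is up"
is_healthy 200 && echo "200 -> healthy"
```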
If you’re looking to set up monitoring on the deployment, check out Metrics.