
Launch GPU sessions in minutes.

Self-serve GPU instances for exploration, model prototyping, and burst training runs, with real-time visibility into the queue.

Why teams choose this path

Capabilities built for high-throughput ML operations

  • Self-serve notebooks, SSH sessions, and batch launch workflows
  • Live queue visibility with clear provisioning and runtime states
  • Per-project usage tracking for cost and performance accountability

What you can do

Operational workflows supported on day one

Instance Catalog

Compare GPU type, memory, CPU/RAM profile, and storage options before launching.
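As an illustration of that comparison step, the minimal Python sketch below filters a set of instance profiles by GPU memory before launch. The profile names, GPU types, and sizes are hypothetical placeholders, not actual catalog entries.

    from dataclasses import dataclass

    @dataclass
    class InstanceProfile:
        name: str          # illustrative profile name, not a real catalog entry
        gpu: str
        gpu_mem_gb: int
        vcpus: int
        ram_gb: int
        storage_gb: int

    # Hypothetical catalog entries, purely for illustration.
    catalog = [
        InstanceProfile("small-a10", "A10", 24, 8, 32, 200),
        InstanceProfile("mid-a100", "A100", 40, 16, 128, 500),
        InstanceProfile("large-a100", "A100", 80, 32, 256, 1000),
    ]

    # Example comparison: keep profiles with at least 40 GB of GPU memory.
    candidates = [p for p in catalog if p.gpu_mem_gb >= 40]
    for p in candidates:
        print(f"{p.name}: {p.gpu} {p.gpu_mem_gb} GB GPU, "
              f"{p.vcpus} vCPU / {p.ram_gb} GB RAM, {p.storage_gb} GB storage")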

Session Lifecycle

Track workloads from queued to running to completed with clear state transitions and status context.
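As a rough model of that lifecycle, the Python sketch below encodes which state changes are valid. The state names (including the added failed state) and the transition rules are illustrative assumptions, not the product's actual states or API.

    from enum import Enum

    class SessionState(Enum):
        QUEUED = "queued"
        PROVISIONING = "provisioning"
        RUNNING = "running"
        COMPLETED = "completed"
        FAILED = "failed"          # assumed terminal state, not listed in the copy above

    # Allowed transitions under this illustrative model.
    TRANSITIONS = {
        SessionState.QUEUED: {SessionState.PROVISIONING},
        SessionState.PROVISIONING: {SessionState.RUNNING, SessionState.FAILED},
        SessionState.RUNNING: {SessionState.COMPLETED, SessionState.FAILED},
        SessionState.COMPLETED: set(),
        SessionState.FAILED: set(),
    }

    def can_transition(current: SessionState, target: SessionState) -> bool:
        """Check whether a state change is valid in this sketch's model."""
        return target in TRANSITIONS[current]

    assert can_transition(SessionState.QUEUED, SessionState.PROVISIONING)
    assert not can_transition(SessionState.COMPLETED, SessionState.RUNNING)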

Usage Controls

Set expected session runtimes and monitor spend by project to keep experimentation within budget.
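The Python sketch below shows one way per-project spend could be checked against a budget. The project names, amounts, and field layout are hypothetical and do not reflect the product's actual configuration or billing data.

    # Hypothetical monthly budgets and spend, in USD, keyed by project name.
    project_budgets = {"vision-experiments": 500.00, "llm-prototyping": 1200.00}
    project_spend = {"vision-experiments": 430.25, "llm-prototyping": 980.00}

    def remaining_budget(project: str) -> float:
        """Return how much of the project's illustrative budget is left."""
        return project_budgets[project] - project_spend.get(project, 0.0)

    for project in project_budgets:
        left = remaining_budget(project)
        status = "OK" if left > 0 else "OVER BUDGET"
        print(f"{project}: ${left:.2f} remaining ({status})")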

Talk to our team about your rollout plan