XFLOPS: an open source community for AI applications

XFLOPS helps enterprises build AI applications with open tools, frameworks, and best practices. Flame is the community's core project: a distributed engine for secure, cost-effective, and high-performance elastic AI workloads.

What XFLOPS Focuses On

Elastic

Scale AI workloads dynamically based on demand with runtime-aware scheduling and resource optimization.

Secure

Use session-based access, isolated runtime environments, and secure component communication for elastic workloads.

Cost Effective

Optimize resource utilization and workload distribution so AI systems can run efficiently at scale.

Heterogeneous

Support varied infrastructure, including CPU, GPU, and accelerator-backed environments.

High Performance

Improve throughput and reduce round-trip time for task-heavy AI systems through distributed execution.

Cloud Native

Build portable runtime infrastructure for cloud, on-premise, and hybrid deployments.

Core Project: Flame

Flame is the core XFLOPS project for elastic AI workloads. It provides the distributed runtime mechanisms behind sessions, task scheduling, executor reuse, object caching, and secure service integration for agents, reinforcement learning, generated-code execution, and more.

[Flame architecture diagram]

How Flame Works

  • Session: A group of related tasks with scheduling, resource, and isolation boundaries.
  • Task: A unit of work submitted by a client and executed by a service in an executor.
  • Executor: A runtime environment that hosts application services for a session.
  • Object cache: A shared data layer used by the Runner to serve common data and incremental object updates to executors.
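The relationship between sessions, tasks, and executors above can be sketched as a toy in-process model. This is an illustrative sketch only, not Flame's actual API; all names (Session, Task, Executor, submit) are assumptions chosen to mirror the concepts listed:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List


@dataclass
class Task:
    """A unit of work submitted by a client."""
    payload: Any


@dataclass
class Executor:
    """A runtime environment hosting a session's service; reused across tasks."""
    service: Callable[[Any], Any]
    tasks_run: int = 0

    def run(self, task: Task) -> Any:
        self.tasks_run += 1
        return self.service(task.payload)


@dataclass
class Session:
    """A group of related tasks sharing a scheduling and isolation boundary."""
    service: Callable[[Any], Any]
    executors: List[Executor] = field(default_factory=list)

    def submit(self, payload: Any) -> Any:
        # Reuse an existing executor when one is available,
        # rather than starting a fresh runtime per task.
        if not self.executors:
            self.executors.append(Executor(self.service))
        return self.executors[0].run(Task(payload))


# A session whose service doubles its input; three tasks share one executor.
session = Session(service=lambda x: x * 2)
results = [session.submit(n) for n in (1, 2, 3)]
```

The key property the sketch shows is executor reuse: all three tasks run on the same executor instance, which is what makes session-scoped scheduling cheaper than launching a runtime per task.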

Latest from the XFLOPS Blog

Technical walkthroughs and examples from the XFLOPS community and Flame project.

Join the XFLOPS Community

Discuss Flame, report issues, and contribute examples or runtime improvements.