

KitchenAI

AI Framework => Production, simplified.

The OSS control plane for your AI implementations.

Start Building →
[Architecture diagram: Web Apps, Mobile Apps, and Internal Tools connect to the KitchenAI Control Plane (Version Management: track & roll back; API Gateway: single entry point; Prompt Management: template library; Agent Tools: function registry; Security: auth & access control). The control plane routes to bento boxes (Langchain Box: chain manager, memory store; RAG Box: vector store, embeddings; Agents Box: tool registry, task planner) and deploys to AWS, GCP, or on-prem, with support for OpenAI, Anthropic, Hugging Face, and custom models.]

Control Plane in Action

Experience how KitchenAI simplifies AI framework orchestration


Try out the KitchenAI Playground and see the control plane in action

Framework Orchestration

Manage multiple AI frameworks through a single, standardized API endpoint. From Langchain to RAG, deploy with confidence.

Bento Box Management

Create, test, and deploy self-contained AI implementations. Version control and instant rollbacks included.

Real-Time Monitoring

Watch your AI workflows execute in real-time. Debug and optimize with comprehensive observability.

Empowering Your Teams

Simplify AI Integration. Amplify Expertise.

Application Developers

Simple API integration using familiar OpenAI SDK


  client.chat.completions.create(
      model="@my-bento/query",
      messages=[{"role": "user", "content": query}]
  )

AI Teams

Build sophisticated AI workflows in Bento Boxes


  index = VectorStoreIndex.from_vector_store(vector_store)
  query_engine = index.as_query_engine(
      chat_mode="best",
      filters=filters,
      llm=llm,
      verbose=True
  )

  # Execute query
  response = await query_engine.aquery(data.query)
  ...

KitchenAI bridges the gap between application developers and AI teams, allowing each to focus on their strengths while delivering powerful AI capabilities.

FOR DATA SCIENTISTS & ENGINEERS

From AI Framework to Production API

Focus on building AI implementations. We'll handle the infrastructure.

Bento Boxes

Self-contained AI implementations using Langchain, LlamaIndex, and other AI frameworks. Version, test, and deploy with confidence.

Control Plane

A unified interface that standardizes how your apps interact with AI frameworks. One API endpoint, multiple implementations.

Framework Agnostic

Build with any AI framework. Swap implementations without changing your application code.

from openai import OpenAI

# Initialize client with KitchenAI endpoint
client = OpenAI(
    base_url="https://api.kitchenai.dev/v1",
    api_key="your-api-key"
)

# Use your bento box like any OpenAI model
response = client.chat.completions.create(
    model="@my-bento/query",
    messages=[
        {"role": "user", "content": query}
    ]
)

print(response.choices[0].message.content)

Pro tip: Use the familiar OpenAI SDK to interact with your bento boxes. Point the client to your KitchenAI control plane and access your AI implementations like any other model.
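Because every implementation sits behind the same OpenAI-compatible endpoint, swapping one bento box for another can reduce to changing the model string. A minimal sketch of that idea; the bento names `@rag-bento/query` and `@agent-bento/query` are hypothetical examples, not real registrations:

```python
def build_request(bento: str, question: str) -> dict:
    """Assemble an OpenAI-style chat payload; only the model name selects the bento."""
    return {
        "model": bento,  # e.g. "@rag-bento/query" (hypothetical bento name)
        "messages": [{"role": "user", "content": question}],
    }

def ask(bento: str, question: str) -> str:
    """Send the request through the KitchenAI control plane (requires `openai`)."""
    from openai import OpenAI
    client = OpenAI(base_url="https://api.kitchenai.dev/v1", api_key="your-api-key")
    response = client.chat.completions.create(**build_request(bento, question))
    return response.choices[0].message.content

# Swapping implementations changes only the bento name, never application code:
rag_call = build_request("@rag-bento/query", "What changed in v2?")
agent_call = build_request("@agent-bento/query", "What changed in v2?")
assert rag_call["messages"] == agent_call["messages"]  # identical except "model"
```

Routing on the model string is what lets AI teams version and replace implementations server-side while application code stays untouched.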

Framework Integration

Simple APIs to manage your AI implementations across different frameworks

@kitchen.query.handler("query")
async def query_handler(data: WhiskQuerySchema) -> WhiskQueryBaseResponseSchema:
    """Query handler with RAG"""
    # Create index and query engine
    index = VectorStoreIndex.from_vector_store(vector_store)
    query_engine = index.as_query_engine(
        chat_mode="best",
        filters=filters,
        llm=llm,
        verbose=True
    )

    # Execute query
    response = await query_engine.aquery(data.query)

    return WhiskQueryBaseResponseSchema.from_llama_response(
        data,
        response,
        token_counts=TokenCountSchema(**token_counts),
        metadata={"token_counts": token_counts, **data.metadata}
    )

LlamaIndex

Like AWS Lambda for AI frameworks: define the kitchenai entrypoint, then build with any framework

Entrypoint Signature

@kitchen.query.handler("query")
async def query_handler(data: WhiskQuerySchema) -> WhiskQueryBaseResponseSchema:

Level Up Your LLMOps With KitchenAI Managed Cloud

First 100 teams onboarded get 500 free AI credits

What Makes Us Unique

Redefining how organizations integrate and manage AI workflows

[Architecture diagram: the Application Layer (Web Apps, Mobile Apps) communicates through a NATS Messaging Layer with Bento Boxes containing LLM Logic, RAG, Agents, and Custom Logic.]

Modular Architecture

Package AI workflows as independent "bento boxes" that can be updated, replaced, or scaled without affecting the entire system.

High-Performance Messaging

Built on NATS for lightning-fast, reliable communication between AI modules and dynamic service discovery.
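The request/reply exchange between the application layer and a bento box can be sketched with the `nats-py` client. The subject name (`bento.my-bento.query`) and the JSON payload shape are assumptions for illustration, not KitchenAI's actual wire format, and running the full exchange requires a local NATS server:

```python
import asyncio
import json

def encode_query(query: str) -> bytes:
    """Serialize a query payload for the messaging layer (assumed JSON shape)."""
    return json.dumps({"query": query}).encode()

def decode_answer(data: bytes) -> str:
    """Extract the answer field from a reply payload (assumed JSON shape)."""
    return json.loads(data.decode())["answer"]

async def main():
    import nats  # pip install nats-py

    nc = await nats.connect("nats://localhost:4222")

    # A bento box subscribes on its subject and replies to each request
    async def bento_handler(msg):
        question = json.loads(msg.data.decode())["query"]
        await msg.respond(json.dumps({"answer": f"echo: {question}"}).encode())

    await nc.subscribe("bento.my-bento.query", cb=bento_handler)

    # The application layer sends a request and awaits the bento's reply
    reply = await nc.request("bento.my-bento.query", encode_query("hello"), timeout=2)
    print(decode_answer(reply.data))

    await nc.close()

# asyncio.run(main())  # uncomment with a NATS server running on localhost:4222
```

NATS request/reply gives each bento box a stable subject to listen on, which is what enables the dynamic service discovery and hot-swapping described above.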

Framework Agnostic

Work with any AI framework or model. No vendor lock-in, maximum flexibility for your AI stack.