Deploying Flows

Turn your Langflow workflows into production-ready APIs and integrations.

API Deployment

Generating an API Endpoint

Every flow can become an API:

  1. Open your saved flow
  2. Click API in the toolbar
  3. View your endpoint URL
  4. Copy the endpoint for use

Endpoint URL Format

Your flow’s API endpoint:

https://your-workspace-url/langflow/api/v1/flows/{flow-id}/run

Making API Calls

Basic request:

curl -X POST "your-flow-endpoint" \
  -H "Content-Type: application/json" \
  -d '{
    "input_value": "Hello, how are you?"
  }'

With parameters:

curl -X POST "your-flow-endpoint" \
  -H "Content-Type: application/json" \
  -d '{
    "input_value": "Summarize this document",
    "tweaks": {
      "OpenAI-xxxxx": {
        "model_name": "gpt-4o",
        "temperature": 0.5
      }
    }
  }'

Response Format

{
  "result": {
    "output": "I'm doing well, thank you for asking!"
  },
  "session_id": "abc123",
  "metadata": {
    "duration": 1.23,
    "tokens_used": 45
  }
}
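The request and response shapes above can be wrapped in a small Python helper. This is a sketch using only the standard library; the payload and response keys follow the examples in this section, and `run_flow`/`build_payload` are illustrative names, not part of any official client.

```python
import json
import urllib.request

def build_payload(input_value, tweaks=None, session_id=None):
    """Assemble the JSON body expected by the flow endpoint."""
    payload = {"input_value": input_value}
    if tweaks:
        payload["tweaks"] = tweaks
    if session_id:
        payload["session_id"] = session_id
    return payload

def run_flow(endpoint, api_key=None, **payload_kwargs):
    """POST to a flow endpoint and return the parsed JSON response."""
    body = json.dumps(build_payload(**payload_kwargs)).encode()
    headers = {"Content-Type": "application/json"}
    if api_key:
        # Matches the Bearer-token scheme shown in the authentication section.
        headers["Authorization"] = f"Bearer {api_key}"
    req = urllib.request.Request(endpoint, data=body, headers=headers, method="POST")
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())

# Example (substitute your real endpoint and flow ID):
# data = run_flow("https://your-workspace-url/langflow/api/v1/flows/FLOW_ID/run",
#                 input_value="Hello, how are you?")
# print(data["result"]["output"])
```

The same helper covers the basic, parameterized, and authenticated requests above by passing `tweaks` or `api_key` as needed.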

API Authentication

Using API Keys

Generate an API key:

  1. Go to Settings → API Keys
  2. Click Create New Key
  3. Copy and securely store the key

Using the key:

curl -X POST "your-flow-endpoint" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"input_value": "Hello"}'

Token Permissions

Configure what each key can do:

  • Read: View flows
  • Execute: Run flows
  • Write: Modify flows
  • Admin: Full access

Tweaks and Runtime Configuration

What Are Tweaks?

Tweaks let you modify component settings at runtime without editing the flow.

Using Tweaks

Override component settings:

{
  "input_value": "My question",
  "tweaks": {
    "ComponentName-id": {
      "parameter": "new_value"
    }
  }
}

Common Tweaks

Change model:

"tweaks": {
  "OpenAI-xxxxx": {
    "model_name": "gpt-4o"
  }
}

Adjust temperature:

"tweaks": {
  "OpenAI-xxxxx": {
    "temperature": 0.2
  }
}

Change retrieval count:

"tweaks": {
  "Retriever-xxxxx": {
    "k": 5
  }
}
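Several tweak fragments like those above can be combined into a single `tweaks` object before sending the request. A small sketch (the component IDs are the same placeholders used above, and `merge_tweaks` is an illustrative helper, not a library function):

```python
def merge_tweaks(*tweak_dicts):
    """Combine per-component tweak dicts into one 'tweaks' object.
    Later dicts override earlier ones for the same component/parameter."""
    merged = {}
    for tweaks in tweak_dicts:
        for component, params in tweaks.items():
            merged.setdefault(component, {}).update(params)
    return merged

tweaks = merge_tweaks(
    {"OpenAI-xxxxx": {"model_name": "gpt-4o"}},
    {"OpenAI-xxxxx": {"temperature": 0.2}},
    {"Retriever-xxxxx": {"k": 5}},
)
# tweaks == {"OpenAI-xxxxx": {"model_name": "gpt-4o", "temperature": 0.2},
#            "Retriever-xxxxx": {"k": 5}}
```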

Session Management

Stateless Requests

Each request is independent:

curl -X POST "endpoint" \
  -d '{"input_value": "Hello"}'
# No memory of previous requests

Stateful Sessions

Maintain conversation state:

# First request - get session ID
curl -X POST "endpoint" \
  -d '{"input_value": "My name is Alice"}'

# Later requests - pass the session_id returned in the first response
curl -X POST "endpoint" \
  -d '{
    "input_value": "What is my name?",
    "session_id": "abc123"
  }'
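The pattern above can be wrapped in a small helper that remembers the `session_id` returned by the first response (see the response format earlier in this section). A sketch, where `send` stands in for whatever function actually performs the HTTP POST:

```python
class FlowSession:
    """Carry a session_id across calls so the flow keeps conversation state.

    `send` is any callable that POSTs a payload dict to the flow endpoint
    and returns the parsed JSON response (e.g. a urllib/requests wrapper).
    """

    def __init__(self, send):
        self.send = send
        self.session_id = None

    def ask(self, input_value):
        payload = {"input_value": input_value}
        if self.session_id:
            payload["session_id"] = self.session_id
        response = self.send(payload)
        # The response includes the session_id to reuse on later calls.
        self.session_id = response.get("session_id", self.session_id)
        return response["result"]["output"]

# session = FlowSession(send=my_http_post)
# session.ask("My name is Alice")   # first call creates the session
# session.ask("What is my name?")   # later calls reuse it
```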

Session Lifecycle

  • Sessions expire after inactivity
  • Default: 30 minutes
  • Configurable per flow
  • Can be manually cleared

Webhooks

Setting Up Webhooks

Receive notifications when flows complete:

  1. Go to flow settings
  2. Add webhook URL
  3. Select events to trigger

Webhook Payload

{
  "event": "flow_completed",
  "flow_id": "xxx",
  "session_id": "abc123",
  "result": {
    "output": "..."
  },
  "timestamp": "2024-01-15T10:30:00Z"
}

Webhook Events

Event                  Trigger
flow_started           Flow execution begins
flow_completed         Flow finishes successfully
flow_error             Flow encounters an error
component_completed    Individual component finishes
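A webhook receiver can dispatch on the `event` field using the event names above. A minimal sketch; the handler bodies are placeholders for your own logic:

```python
def handle_webhook(payload):
    """Route a webhook payload (see the example above) by its 'event' field."""
    handlers = {
        "flow_started": lambda p: f"flow {p['flow_id']} started",
        "flow_completed": lambda p: f"flow {p['flow_id']} output: {p['result']['output']}",
        "flow_error": lambda p: f"flow {p['flow_id']} failed",
        "component_completed": lambda p: f"component finished in flow {p['flow_id']}",
    }
    handler = handlers.get(payload.get("event"))
    return handler(payload) if handler else None

# Using the example payload from above:
payload = {
    "event": "flow_completed",
    "flow_id": "xxx",
    "session_id": "abc123",
    "result": {"output": "..."},
    "timestamp": "2024-01-15T10:30:00Z",
}
# handle_webhook(payload) -> "flow xxx output: ..."
```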

Client Libraries

Python

from langflow import Client

client = Client(
    base_url="your-workspace-url",
    api_key="your-api-key"
)

result = client.run_flow(
    flow_id="your-flow-id",
    input_value="Hello!",
    tweaks={
        "OpenAI-xxxxx": {"temperature": 0.5}
    }
)

print(result.output)

JavaScript/TypeScript

import { LangflowClient } from 'langflow-client';

const client = new LangflowClient({
  baseUrl: 'your-workspace-url',
  apiKey: 'your-api-key'
});

const result = await client.runFlow({
  flowId: 'your-flow-id',
  inputValue: 'Hello!'
});

console.log(result.output);

Exporting Flows

Export as JSON

Download your flow for:

  • Version control
  • Backup
  • Sharing

  1. Click Export → JSON
  2. Save the file
  3. Import elsewhere with Import → JSON
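An exported flow can also be inspected programmatically, e.g. to review components under version control. This sketch assumes the export stores its nodes under `data.nodes` with an `id` field; verify against your actual export file, since the schema may differ between versions:

```python
import json

def list_components(flow_json):
    """List component node IDs from an exported flow dict.

    Assumes nodes live under data.nodes with an 'id' field (an assumption
    about the export schema; check a real export before relying on this).
    """
    return [node["id"] for node in flow_json.get("data", {}).get("nodes", [])]

# e.g.:
# with open("my_flow.json") as f:
#     print(list_components(json.load(f)))
```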

Export as Python

Generate standalone Python code:

  1. Click Export → Python
  2. Get equivalent code built on LangChain
  3. Run independently

Example generated code:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o")
prompt = ChatPromptTemplate.from_template(
    "You are a helpful assistant...\n\n{input}"
)
chain = prompt | llm

result = chain.invoke({"input": "Hello"})

Best Practices

API Design

  • Use descriptive flow names
  • Document expected inputs
  • Handle errors gracefully
  • Set appropriate timeouts
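Handling errors gracefully often means retrying transient failures with backoff before giving up. A sketch, where `send` is any callable that performs the request and raises on failure (the helper name and parameters are illustrative):

```python
import time

def call_with_retries(send, payload, attempts=3, base_delay=1.0):
    """Retry a flow call with exponential backoff on transient failures."""
    for attempt in range(attempts):
        try:
            return send(payload)
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))

# result = call_with_retries(my_http_post, {"input_value": "Hello"})
```

In production you would typically narrow the `except` clause to timeouts and 5xx responses rather than retrying every exception.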

Security

  • Rotate API keys regularly
  • Use minimum required permissions
  • Don’t expose keys in client code
  • Use HTTPS always

Performance

  • Cache where possible
  • Set reasonable timeouts
  • Monitor token usage
  • Use streaming for long responses

Monitoring

  • Log all API calls
  • Track error rates
  • Monitor latency
  • Alert on anomalies