Deploying Flows
Turn your Langflow workflows into production-ready APIs and integrations.
API Deployment
Generating an API Endpoint
Every flow can become an API:
- Open your saved flow
- Click API in the toolbar
- View your endpoint URL
- Copy the endpoint for use
Endpoint URL Format
Your flow’s API endpoint:

```
https://your-workspace-url/langflow/api/v1/flows/{flow-id}/run
```

Making API Calls
Basic request:
```bash
curl -X POST "your-flow-endpoint" \
  -H "Content-Type: application/json" \
  -d '{
    "input_value": "Hello, how are you?"
  }'
```

With parameters:
```bash
curl -X POST "your-flow-endpoint" \
  -H "Content-Type: application/json" \
  -d '{
    "input_value": "Summarize this document",
    "tweaks": {
      "OpenAI-xxxxx": {
        "model_name": "gpt-4o",
        "temperature": 0.5
      }
    }
  }'
```

Response Format
```json
{
  "result": {
    "output": "I'm doing well, thank you for asking!"
  },
  "session_id": "abc123",
  "metadata": {
    "duration": 1.23,
    "tokens_used": 45
  }
}
```

API Authentication
Using API Keys
Generate an API key:
- Go to Settings → API Keys
- Click Create New Key
- Copy and securely store the key
Using the key:
```bash
curl -X POST "your-flow-endpoint" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"input_value": "Hello"}'
```

Token Permissions
Configure what each key can do:
- Read: View flows
- Execute: Run flows
- Write: Modify flows
- Admin: Full access
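The authenticated curl call above can be reproduced with Python's standard library. The endpoint URL, key, and the `build_payload` helper below are illustrative placeholders, not part of any Langflow client API:

```python
import json
import urllib.request

def build_payload(input_value, session_id=None, tweaks=None):
    """Assemble the request body used in the curl examples above."""
    payload = {"input_value": input_value}
    if session_id is not None:
        payload["session_id"] = session_id
    if tweaks is not None:
        payload["tweaks"] = tweaks
    return payload

def run_flow(endpoint, api_key, **kwargs):
    """POST to a flow endpoint with a Bearer key and return the parsed JSON."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(build_payload(**kwargs)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())
```

Keep the key in an environment variable or secrets store rather than hard-coding it, in line with the security notes below.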
Tweaks and Runtime Configuration
What Are Tweaks?
Tweaks let you modify component settings at runtime without editing the flow.
Using Tweaks
Override component settings:
```json
{
  "input_value": "My question",
  "tweaks": {
    "ComponentName-id": {
      "parameter": "new_value"
    }
  }
}
```

Common Tweaks
Change model:
```json
"tweaks": {
  "OpenAI-xxxxx": {
    "model_name": "gpt-4o"
  }
}
```

Adjust temperature:

```json
"tweaks": {
  "OpenAI-xxxxx": {
    "temperature": 0.2
  }
}
```

Change retrieval count:

```json
"tweaks": {
  "Retriever-xxxxx": {
    "k": 5
  }
}
```

Session Management
Stateless Requests
Each request is independent:
```bash
curl -X POST "endpoint" \
  -d '{"input_value": "Hello"}'
# No memory of previous requests
```

Stateful Sessions
Maintain conversation state by reusing the `session_id` returned in the first response:

```bash
# First request: the response includes a session_id
curl -X POST "endpoint" \
  -d '{"input_value": "My name is Alice"}'

# Follow-up request: pass that session_id back
curl -X POST "endpoint" \
  -d '{
    "input_value": "What is my name?",
    "session_id": "abc123"
  }'
```

Session Lifecycle
- Sessions expire after inactivity
- Default: 30 minutes
- Configurable per flow
- Can be manually cleared
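A client can keep a conversation stateful by capturing the `session_id` from the first response and resending it on every turn. The `FlowSession` class and its `_payload` helper below are an illustrative sketch, not a shipped client:

```python
import json
import urllib.request

class FlowSession:
    """Reuses one session_id across requests so the flow retains context."""

    def __init__(self, endpoint):
        self.endpoint = endpoint   # your flow's run URL (placeholder)
        self.session_id = None

    def _payload(self, input_value):
        payload = {"input_value": input_value}
        if self.session_id:
            payload["session_id"] = self.session_id
        return payload

    def send(self, input_value):
        req = urllib.request.Request(
            self.endpoint,
            data=json.dumps(self._payload(input_value)).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req, timeout=30) as resp:
            result = json.loads(resp.read())
        # Remember the server-assigned session for follow-up turns.
        self.session_id = result.get("session_id", self.session_id)
        return result
```

Because sessions expire after inactivity (30 minutes by default), long-lived clients should be prepared for the server to hand back a fresh `session_id`.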
Webhooks
Setting Up Webhooks
Receive notifications when flows complete:
- Go to flow settings
- Add webhook URL
- Select events to trigger
Webhook Payload
```json
{
  "event": "flow_completed",
  "flow_id": "xxx",
  "session_id": "abc123",
  "result": {
    "output": "..."
  },
  "timestamp": "2024-01-15T10:30:00Z"
}
```

Webhook Events
| Event | Trigger |
|---|---|
| flow_started | Flow execution begins |
| flow_completed | Flow finishes successfully |
| flow_error | Flow encounters an error |
| component_completed | Individual component finishes |
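On the receiving side, a webhook endpoint only needs to parse the JSON body and branch on the `event` field. This dispatcher is a sketch against the payload shape shown above; the handler names are illustrative:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def dispatch(event):
    """Branch on the 'event' field of a webhook payload."""
    kind = event.get("event")
    if kind == "flow_completed":
        return event["result"]["output"]
    if kind == "flow_error":
        return f"flow {event['flow_id']} failed"
    return None  # ignore events we don't handle

class WebhookHandler(BaseHTTPRequestHandler):
    """Minimal HTTP endpoint that accepts webhook POSTs."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        dispatch(json.loads(self.rfile.read(length)))
        self.send_response(200)
        self.end_headers()

# To serve locally: HTTPServer(("", 8000), WebhookHandler).serve_forever()
```

Return a 2xx status quickly and do heavy processing asynchronously, so the sender does not treat your endpoint as failed.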
Client Libraries
Python
```python
from langflow import Client

client = Client(
    base_url="your-workspace-url",
    api_key="your-api-key"
)

result = client.run_flow(
    flow_id="your-flow-id",
    input_value="Hello!",
    tweaks={
        "OpenAI-xxxxx": {"temperature": 0.5}
    }
)
print(result.output)
```

JavaScript/TypeScript
```typescript
import { LangflowClient } from 'langflow-client';

const client = new LangflowClient({
  baseUrl: 'your-workspace-url',
  apiKey: 'your-api-key'
});

const result = await client.runFlow({
  flowId: 'your-flow-id',
  inputValue: 'Hello!'
});
console.log(result.output);
```

Exporting Flows
Export as JSON
Download your flow for:
- Version control
- Backup
- Sharing

To export:
- Click Export → JSON
- Save the file
- Import elsewhere with Import → JSON
Export as Python
Generate standalone Python code:
- Click Export → Python
- Get code using LangChain
- Run independently
Example generated code:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o")
prompt = ChatPromptTemplate.from_template("You are a helpful assistant...")
chain = prompt | llm
result = chain.invoke({"input": "Hello"})
```

Best Practices
API Design
- Use descriptive flow names
- Document expected inputs
- Handle errors gracefully
- Set appropriate timeouts
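The last two bullets (graceful errors, timeouts) can be combined in a small retry wrapper. This sketch retries only server-side (5xx) failures and is illustrative, not part of any client library:

```python
import json
import urllib.error
import urllib.request

def call_flow_safely(endpoint, payload, timeout=30, retries=2):
    """Call a flow endpoint with a timeout and a simple retry on 5xx errors."""
    body = json.dumps(payload).encode("utf-8")
    for attempt in range(retries + 1):
        try:
            req = urllib.request.Request(
                endpoint,
                data=body,
                headers={"Content-Type": "application/json"},
                method="POST",
            )
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return json.loads(resp.read())
        except urllib.error.HTTPError as err:
            # Client errors (4xx) and exhausted retries surface to the caller.
            if err.code < 500 or attempt == retries:
                raise
        except urllib.error.URLError:
            # Network failures: retry until attempts run out.
            if attempt == retries:
                raise
    return None
```

Pair this with exponential backoff in production so retries do not hammer an already struggling endpoint.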
Security
- Rotate API keys regularly
- Use minimum required permissions
- Don’t expose keys in client code
- Use HTTPS always
Performance
- Cache where possible
- Set reasonable timeouts
- Monitor token usage
- Use streaming for long responses
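Caching (the first bullet) can be as simple as memoizing on the input string. The `cached` helper below is an illustrative sketch, not a Langflow API:

```python
import functools

def cached(run_flow_fn, maxsize=256):
    """Memoize a flow-calling function keyed on its input string."""
    calls = {"count": 0}   # tracks real backend calls, useful for monitoring

    @functools.lru_cache(maxsize=maxsize)
    def wrapper(input_value):
        calls["count"] += 1
        return run_flow_fn(input_value)

    wrapper.backend_calls = calls
    return wrapper
```

Only cache flows whose output is deterministic for a given input; for creative responses, set temperature to 0 or skip caching entirely.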
Monitoring
- Log all API calls
- Track error rates
- Monitor latency
- Alert on anomalies