Components
Langflow components are the building blocks of your AI workflows.
Inputs
Chat Input
Receives user messages in a chat interface.
Outputs:
message: The user’s text input
Use when: Building conversational interfaces
Text Input
Accepts a single text value.
Parameters:
value: The text content
Use when: Providing static text or variables
File Input
Accepts uploaded files.
Parameters:
file_types: Allowed file extensions
max_size: Maximum file size
Outputs:
content: File contents as text
path: File location
Use when: Processing documents or data files
LLMs (Large Language Models)
OpenAI
Connect to OpenAI’s models.
Parameters:
model: gpt-4o, gpt-4, gpt-3.5-turbo
api_key: Your OpenAI API key
temperature: Creativity (0-1)
max_tokens: Response length limit
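Outside the visual editor, a roughly equivalent call with the LangChain OpenAI integration might look like this minimal sketch; the model name and settings are illustrative, and the API key is read from the OPENAI_API_KEY environment variable:

```python
from langchain_openai import ChatOpenAI

# Illustrative settings; ChatOpenAI picks up OPENAI_API_KEY from the environment
llm = ChatOpenAI(model="gpt-4o", temperature=0.7, max_tokens=512)
reply = llm.invoke("Summarize Langflow in one sentence.")
print(reply.content)
```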
Anthropic
Connect to Anthropic’s Claude models.
Parameters:
model: claude-3-opus, claude-3-sonnet, claude-3-haiku
api_key: Your Anthropic API key
temperature: Creativity (0-1)
max_tokens: Response length limit
Azure OpenAI
Connect to Azure-hosted OpenAI models.
Parameters:
deployment_name: Your deployment
api_key: Azure API key
endpoint: Azure endpoint URL
Ollama
Connect to local Ollama models.
Parameters:
model: Model name (llama3, mixtral, etc.)
base_url: Ollama server URL
Use when: Running models locally
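For reference, a minimal stand-alone sketch using the langchain-ollama integration; it assumes an Ollama server is already running locally and the llama3 model has been pulled:

```python
from langchain_ollama import ChatOllama

# Assumes `ollama serve` is running and `ollama pull llama3` has completed
llm = ChatOllama(model="llama3", base_url="http://localhost:11434")
print(llm.invoke("Why run models locally?").content)
```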
AWS Bedrock
Connect to AWS Bedrock models.
Parameters:
model_id: Bedrock model ID
region: AWS region
Credentials are supplied via environment variables or IAM.
Prompts
Prompt Template
Create reusable prompt templates.
Parameters:
template: Text with {variable} placeholders
Example:
You are a {role} assistant.
Answer the following question: {question}
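The same template can be exercised outside Langflow with LangChain's PromptTemplate; this is a minimal sketch, and the variable values are purely illustrative:

```python
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "You are a {role} assistant.\nAnswer the following question: {question}"
)
# Fill the placeholders to get the final prompt string
print(prompt.format(role="helpful", question="What is Langflow?"))
```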
System Message
Set the AI’s behavior and context.
Parameters:
content: System instructions
Use when: Defining AI personality or constraints
Few-Shot Prompt
Provide examples for the AI.
Parameters:
examples: List of input/output pairs
prefix: Instructions before examples
suffix: Instructions after examples
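A stand-alone sketch of the same idea using LangChain's FewShotPromptTemplate; the examples and wording here are made up for illustration:

```python
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate

example_prompt = PromptTemplate.from_template("Input: {input}\nOutput: {output}")
few_shot = FewShotPromptTemplate(
    examples=[{"input": "2 + 2", "output": "4"},
              {"input": "3 + 5", "output": "8"}],
    example_prompt=example_prompt,
    prefix="Answer in the same style as the examples.",
    suffix="Input: {question}\nOutput:",
    input_variables=["question"],
)
print(few_shot.format(question="7 + 6"))
```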
Memory
Conversation Buffer Memory
Stores recent conversation history.
Parameters:
k: Number of exchanges to remember
return_messages: Return as messages vs string
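This corresponds roughly to LangChain's windowed buffer memory; a minimal sketch, assuming the classic langchain.memory module is available:

```python
from langchain.memory import ConversationBufferWindowMemory

# Keep only the last 5 exchanges, returned as message objects
memory = ConversationBufferWindowMemory(k=5, return_messages=True)
memory.save_context({"input": "Hi, I'm Ana."}, {"output": "Hello Ana!"})
print(memory.load_memory_variables({}))
```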
Conversation Summary Memory
Summarizes long conversations.
Parameters:
llm: LLM for summarization
max_token_limit: When to summarize
Entity Memory
Tracks mentioned entities.
Parameters:
llm: LLM for entity extraction
entity_store: Storage backend
Chains
LLM Chain
Basic prompt → LLM → response chain.
Inputs:
llm: The language model
prompt: The prompt template
memory: Optional memory
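Outside the editor, the same prompt → LLM → response pattern can be expressed by piping a prompt into a model with LangChain's expression language; the model name below is only an example:

```python
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

prompt = PromptTemplate.from_template("Summarize in one sentence: {text}")
llm = ChatOpenAI(model="gpt-4o-mini")  # illustrative model choice
chain = prompt | llm
print(chain.invoke({"text": "Langflow lets you compose AI workflows visually."}).content)
```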
Conversation Chain
Pre-built conversational chain.
Inputs:
llm: The language model
memory: Conversation memory
prompt: Optional custom prompt
Sequential Chain
Run multiple chains in sequence.
Inputs:
chains: List of chains to run
input_variables: Starting inputs
output_variables: What to return
Tools
Web Search
Search the web using various APIs.
Parameters:
search_engine: Google, Bing, DuckDuckGo
api_key: Search API key
num_results: Results to return
Calculator
Perform mathematical calculations.
Inputs:
expression: Math expression to evaluate
Python REPL
Execute Python code.
Parameters:
timeout: Execution time limit
Security: Runs in a sandboxed environment
SQL Database
Query databases.
Parameters:
connection_string: Database URL
tables: Tables to include
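For reference, LangChain's SQLDatabase utility covers the same two settings; a minimal sketch in which the SQLite URL and table name are placeholders:

```python
from langchain_community.utilities import SQLDatabase

# Placeholder connection string and table filter
db = SQLDatabase.from_uri("sqlite:///example.db", include_tables=["users"])
print(db.run("SELECT COUNT(*) FROM users"))
```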
Wikipedia
Search and retrieve Wikipedia content.
Parameters:
top_k_results: Number of results
language: Wikipedia language
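A stand-alone sketch with LangChain's Wikipedia wrapper; it assumes the wikipedia Python package is installed, and the parameter names follow that wrapper:

```python
from langchain_community.utilities import WikipediaAPIWrapper

wiki = WikipediaAPIWrapper(top_k_results=2, lang="en")
print(wiki.run("Alan Turing"))
```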
Agents
ReAct Agent
Reasoning and acting agent.
Inputs:
llm: The language model
tools: Available tools
memory: Optional memory
Use when: Tasks requiring tool use and reasoning
Function Calling Agent
OpenAI function calling agent.
Inputs:
llm: OpenAI model (with function calling)
tools: Tools defined as functions
Plan and Execute Agent
Plans before executing.
Inputs:
planner_llm: LLM for planning
executor_llm: LLM for execution
tools: Available tools
Vector Stores
Chroma
Local vector database.
Parameters:
collection_name: Collection identifier
embedding: Embedding model
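As a point of reference, the underlying chromadb client can be driven directly; a minimal in-memory sketch in which the collection name and documents are illustrative:

```python
import chromadb

client = chromadb.Client()  # in-memory; use chromadb.PersistentClient(path=...) to persist
collection = client.get_or_create_collection("my_docs")
collection.add(ids=["doc-1"], documents=["Langflow components are building blocks."])
print(collection.query(query_texts=["What are components?"], n_results=1))
```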
Pinecone
Cloud vector database.
Parameters:
api_key: Pinecone API key
index_name: Index to use
embedding: Embedding model
FAISS
In-memory vector storage.
Parameters:
embedding: Embedding model
Use when: Quick prototyping, no persistence needed
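Because FAISS is in-memory, it is handy for quick experiments; a minimal sketch using LangChain's FAISS wrapper with an illustrative embedding model (assumes the faiss-cpu package and an OpenAI API key):

```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

texts = ["Langflow components are building blocks.",
         "FAISS keeps vectors in memory only."]
store = FAISS.from_texts(texts, OpenAIEmbeddings())
print(store.similarity_search("What are components?", k=1))
```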
Embeddings
OpenAI Embeddings
OpenAI’s embedding models.
Parameters:
model: text-embedding-ada-002, text-embedding-3-small
api_key: OpenAI API key
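A minimal sketch of generating an embedding with the langchain-openai integration; the model name is one of those listed above, and the API key is read from OPENAI_API_KEY:

```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vector = embeddings.embed_query("hello world")  # a list of floats
print(len(vector))
```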
Hugging Face Embeddings
Hugging Face embedding models.
Parameters:
model_name: HuggingFace model ID
Ollama Embeddings
Local embedding models.
Parameters:
model: Model name
base_url: Ollama server
Outputs
Chat Output
Displays responses in chat format.
Inputs:
message: Text to display
Text Output
Returns plain text.
Inputs:
text: Text to output
JSON Output
Returns structured JSON.
Inputs:
data: Data to serialize
Utilities
Text Splitter
Split text into chunks.
Parameters:
chunk_size: Characters per chunk
chunk_overlap: Overlap between chunks
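A minimal sketch with LangChain's recursive character splitter; the chunk sizes here are just example values:

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

long_text = "Langflow components are the building blocks of your AI workflows. " * 200
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_text(long_text)
print(len(chunks))
```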
Document Loader
Load documents from files.
Parameters:
file_path: Path to document
loader_type: PDF, DOCX, TXT
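For comparison, LangChain ships file-type-specific loaders; a minimal sketch in which the file paths are placeholders and PyPDFLoader assumes the pypdf package is installed:

```python
from langchain_community.document_loaders import PyPDFLoader, TextLoader

pdf_docs = PyPDFLoader("report.pdf").load()   # one Document per page
txt_docs = TextLoader("notes.txt").load()
print(len(pdf_docs), len(txt_docs))
```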
Conditional
Branch based on conditions.
Parameters:
condition: Expression to evaluate
true_path: Path if true
false_path: Path if false