Components

Langflow components are the building blocks of your AI workflows.

Inputs

Chat Input

Receives user messages in a chat interface.

Outputs:

  • message: The user’s text input

Use when: Building conversational interfaces

Text Input

Accepts a single text value.

Parameters:

  • value: The text content

Use when: Providing static text or variables

File Input

Accepts uploaded files.

Parameters:

  • file_types: Allowed file extensions
  • max_size: Maximum file size

Outputs:

  • content: File contents as text
  • path: File location

Use when: Processing documents or data files

LLMs (Large Language Models)

OpenAI

Connect to OpenAI’s models.

Parameters:

  • model: gpt-4o, gpt-4, gpt-3.5-turbo
  • api_key: Your OpenAI API key
  • temperature: Sampling randomness (0-2; lower is more deterministic)
  • max_tokens: Response length limit

Anthropic

Connect to Anthropic’s Claude models.

Parameters:

  • model: claude-3-opus, claude-3-sonnet, claude-3-haiku
  • api_key: Your Anthropic API key
  • temperature: Sampling randomness (0-1; lower is more deterministic)
  • max_tokens: Response length limit

Azure OpenAI

Connect to Azure-hosted OpenAI models.

Parameters:

  • deployment_name: Your deployment
  • api_key: Azure API key
  • endpoint: Azure endpoint URL

Ollama

Connect to local Ollama models.

Parameters:

  • model: Model name (llama3, mixtral, etc.)
  • base_url: Ollama server URL

Use when: Running models locally

AWS Bedrock

Connect to AWS Bedrock models.

Parameters:

  • model_id: Bedrock model ID
  • region: AWS region
  • Credentials: Supplied via environment variables or an IAM role

Prompts

Prompt Template

Create reusable prompt templates.

Parameters:

  • template: Text with {variable} placeholders

Example:

You are a {role} assistant.
Answer the following question: {question}
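
The placeholder substitution above can be sketched in plain Python with str.format; this is illustrative, not Langflow's internal implementation:

```python
# Fill {variable} placeholders in a template (plain Python sketch).
template = ("You are a {role} assistant.\n"
            "Answer the following question: {question}")

filled = template.format(role="helpful", question="What is Langflow?")
print(filled)
```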

System Message

Set the AI’s behavior and context.

Parameters:

  • content: System instructions

Use when: Defining AI personality or constraints

Few-Shot Prompt

Provide examples for the AI.

Parameters:

  • examples: List of input/output pairs
  • prefix: Instructions before examples
  • suffix: Instructions after examples
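
How prefix, examples, and suffix combine can be sketched as follows; the example format and variable names here are illustrative, not Langflow's API:

```python
# Assemble a few-shot prompt: prefix, formatted examples, then suffix.
prefix = "Classify the sentiment of each review."
examples = [
    {"input": "Great product!", "output": "positive"},
    {"input": "Broke after a day.", "output": "negative"},
]
suffix = "Review: {review}\nSentiment:"

example_lines = [f"Review: {e['input']}\nSentiment: {e['output']}" for e in examples]
prompt = "\n\n".join([prefix, *example_lines, suffix])
print(prompt)
```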

Memory

Conversation Buffer Memory

Stores recent conversation history.

Parameters:

  • k: Number of exchanges to remember
  • return_messages: Return history as message objects instead of a single string
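
The windowing behavior (keep only the last k exchanges) can be sketched with a bounded deque; this is a minimal illustration, not the component's actual implementation:

```python
from collections import deque

# Minimal sketch of a buffer memory that remembers the last k exchanges.
class BufferMemory:
    def __init__(self, k=3):
        self.exchanges = deque(maxlen=k)  # oldest exchange is dropped beyond k

    def save(self, user, ai):
        self.exchanges.append((user, ai))

    def as_string(self):
        return "\n".join(f"Human: {u}\nAI: {a}" for u, a in self.exchanges)

memory = BufferMemory(k=2)
memory.save("Hi", "Hello!")
memory.save("How are you?", "Fine.")
memory.save("Bye", "Goodbye!")  # evicts the first exchange
print(memory.as_string())
```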

Conversation Summary Memory

Summarizes long conversations.

Parameters:

  • llm: LLM for summarization
  • max_token_limit: Token count that triggers summarization

Entity Memory

Tracks mentioned entities.

Parameters:

  • llm: LLM for entity extraction
  • entity_store: Storage backend

Chains

LLM Chain

Basic prompt → LLM → response chain.

Inputs:

  • llm: The language model
  • prompt: The prompt template
  • memory: Optional memory
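
The prompt → LLM → response flow can be sketched as below; the `llm` callable is a stand-in (a real chain would call a model API):

```python
# Stand-in model: echoes the prompt it received.
def llm(prompt: str) -> str:
    return f"(model answer to: {prompt!r})"

# Format the prompt from its variables, then pass it to the model.
def llm_chain(template: str, **variables) -> str:
    prompt = template.format(**variables)
    return llm(prompt)

result = llm_chain("Translate to French: {text}", text="hello")
print(result)
```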

Conversation Chain

Pre-built conversational chain.

Inputs:

  • llm: The language model
  • memory: Conversation memory
  • prompt: Optional custom prompt

Sequential Chain

Run multiple chains in sequence.

Inputs:

  • chains: List of chains to run
  • input_variables: Starting inputs
  • output_variables: What to return
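
Running chains in sequence means feeding each chain's outputs into the next. A minimal sketch, with stand-in chain functions:

```python
# Each "chain" takes a dict of variables and returns it with new keys added.
def summarize(inputs):
    return {**inputs, "summary": inputs["text"][:20]}

def translate(inputs):
    return {**inputs, "translation": inputs["summary"].upper()}

def run_sequential(chains, inputs, output_variables):
    for chain in chains:          # pass accumulated variables down the line
        inputs = chain(inputs)
    return {k: inputs[k] for k in output_variables}

out = run_sequential([summarize, translate],
                     {"text": "a long article about vector databases"},
                     output_variables=["translation"])
print(out)
```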

Tools

Web Search

Search the web using various APIs.

Parameters:

  • search_engine: Google, Bing, DuckDuckGo
  • api_key: Search API key
  • num_results: Results to return

Calculator

Perform mathematical calculations.

Inputs:

  • expression: Math expression to evaluate
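
Safe evaluation of a math expression (without handing arbitrary strings to eval) can be sketched with the ast module; this is illustrative, not the component's actual code:

```python
import ast
import operator

# Map AST operator nodes to the corresponding arithmetic functions.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def calculate(expression: str) -> float:
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval").body)

print(calculate("2 * (3 + 4)"))
```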

Python REPL

Execute Python code.

Parameters:

  • timeout: Execution time limit

Security: Code runs in a sandboxed environment

SQL Database

Query databases.

Parameters:

  • connection_string: Database URL
  • tables: Tables to include

Wikipedia

Search and retrieve Wikipedia content.

Parameters:

  • top_k_results: Number of results
  • language: Wikipedia language

Agents

ReAct Agent

Agent that interleaves reasoning steps with tool calls (the ReAct pattern).

Inputs:

  • llm: The language model
  • tools: Available tools
  • memory: Optional memory

Use when: Tasks requiring tool use and reasoning
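
The ReAct loop alternates thought, action, and observation until the model emits a final answer. A toy sketch with a scripted stand-in model:

```python
# Drive the thought/action/observation loop until a final answer appears.
def run_react(llm, tools, question, max_steps=5):
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)
        transcript += step + "\n"
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        if step.startswith("Action:"):
            name, arg = step.removeprefix("Action:").strip().split(" ", 1)
            transcript += f"Observation: {tools[name](arg)}\n"
    return None

# Scripted "model" output standing in for real LLM calls.
steps = iter(["Thought: I should look this up.",
              "Action: search Langflow",
              "Final Answer: Langflow builds AI workflows."])
answer = run_react(lambda transcript: next(steps),
                   {"search": lambda q: f"results for {q}"},
                   "What is Langflow?")
print(answer)
```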

Function Calling Agent

OpenAI function calling agent.

Inputs:

  • llm: OpenAI model (with function calling)
  • tools: Tools defined as functions

Plan and Execute Agent

Plans before executing.

Inputs:

  • planner_llm: LLM for planning
  • executor_llm: LLM for execution
  • tools: Available tools

Vector Stores

Chroma

Local vector database.

Parameters:

  • collection_name: Collection identifier
  • embedding: Embedding model

Pinecone

Cloud vector database.

Parameters:

  • api_key: Pinecone API key
  • index_name: Index to use
  • embedding: Embedding model

FAISS

In-memory vector storage.

Parameters:

  • embedding: Embedding model

Use when: Quick prototyping, no persistence needed
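
At its core, an in-memory vector store ranks documents by similarity between embedding vectors. A plain-Python sketch using cosine similarity (illustrative only; FAISS uses optimized index structures):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy store: document id -> embedding vector.
store = {"doc_a": [1.0, 0.0], "doc_b": [0.0, 1.0], "doc_c": [0.9, 0.1]}

def search(query_vec, k=2):
    # Rank documents by similarity to the query vector, return the top k.
    ranked = sorted(store, key=lambda d: cosine(store[d], query_vec), reverse=True)
    return ranked[:k]

print(search([1.0, 0.0]))
```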

Embeddings

OpenAI Embeddings

OpenAI’s embedding models.

Parameters:

  • model: text-embedding-ada-002, text-embedding-3-small
  • api_key: OpenAI API key

Hugging Face Embeddings

Hugging Face embedding models.

Parameters:

  • model_name: HuggingFace model ID

Ollama Embeddings

Local embedding models.

Parameters:

  • model: Model name
  • base_url: Ollama server

Outputs

Chat Output

Displays responses in chat format.

Inputs:

  • message: Text to display

Text Output

Returns plain text.

Inputs:

  • text: Text to output

JSON Output

Returns structured JSON.

Inputs:

  • data: Data to serialize

Utilities

Text Splitter

Split text into chunks.

Parameters:

  • chunk_size: Characters per chunk
  • chunk_overlap: Overlap between chunks
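
The interaction of chunk_size and chunk_overlap can be sketched as a sliding window; real splitters also try to break on separators such as newlines:

```python
# Character-based chunking: each chunk starts chunk_size - chunk_overlap
# characters after the previous one, so adjacent chunks share the overlap.
def split_text(text, chunk_size=10, chunk_overlap=3):
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("abcdefghijklmnopqrst", chunk_size=10, chunk_overlap=3)
print(chunks)
```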

Document Loader

Load documents from files.

Parameters:

  • file_path: Path to document
  • loader_type: PDF, DOCX, TXT

Conditional

Branch based on conditions.

Parameters:

  • condition: Expression to evaluate
  • true_path: Path if true
  • false_path: Path if false
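
Conditional routing amounts to evaluating a predicate on the flow's data and sending it down one of two paths. A minimal sketch (the handler names are illustrative):

```python
# Evaluate the condition and dispatch to the matching path.
def route(data, condition, true_path, false_path):
    return true_path(data) if condition(data) else false_path(data)

result = route({"score": 0.9},
               condition=lambda d: d["score"] > 0.5,
               true_path=lambda d: "approved",
               false_path=lambda d: "rejected")
print(result)
```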