Testing & Debugging

Ensure your flows work correctly before deployment.

The Playground

Opening Playground

  1. Click Run in the toolbar
  2. Playground opens in a panel
  3. Test your flow interactively

Playground Features

  • Chat interface: Test conversational flows
  • Input fields: Provide test values
  • Output display: See results
  • Reset: Clear conversation history

Testing Workflow

  1. Start with simple inputs
  2. Verify expected outputs
  3. Test edge cases
  4. Try unexpected inputs
  5. Check error handling
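The workflow above can be sketched as a small test loop. This is a minimal sketch, assuming a hypothetical `run_flow()` helper that sends one input through your flow and returns its text output — replace the stand-in with your flow's actual entry point.

```python
# Sketch of the testing workflow, assuming a hypothetical run_flow() helper.
def run_flow(text: str) -> str:
    # Stand-in for the real flow call; replace with your flow's API.
    return f"echo: {text}"

test_cases = [
    ("Hello", "echo: Hello"),             # step 1: simple input
    ("", "echo: "),                       # step 3: edge case (empty input)
    ("x" * 1000, "echo: " + "x" * 1000),  # step 4: unexpected (very long) input
]

results = []
for given, expected in test_cases:
    actual = run_flow(given)              # step 2: verify expected output
    results.append((given[:20], actual == expected))

print(all(ok for _, ok in results))       # True when every case passes
```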

Debugging Tools

Step-by-Step Execution

Run components one at a time:

  1. Right-click a component
  2. Select Run to here
  3. Inspect the output at that step
  4. Identify where issues occur

Inspect Connections

View data flowing between components:

  1. Click on a connection line
  2. See current value
  3. Verify correct data passing

Component Logs

View detailed component logs:

  1. Click on a component
  2. Open Logs tab
  3. See inputs, outputs, errors
  4. Check timestamps to gauge performance

Common Issues

Component Not Producing Output

Symptoms:

  • Output is empty
  • The next component errors

Debugging:

  1. Check input connections
  2. Verify input data exists
  3. Check component configuration
  4. Look for validation errors

Common causes:

  • Missing required parameter
  • Incorrect API key
  • Model not available

Connection Type Mismatch

Symptoms:

  • Connection shows red
  • “Incompatible types” error

Debugging:

  1. Check output type of source
  2. Check expected input type
  3. Add conversion component if needed

Solutions:

  • Use Text converter
  • Parse JSON
  • Extract specific field
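The last two fixes can be done in a few lines. A sketch, assuming the upstream component emits a JSON string and the downstream one expects plain text:

```python
import json

# Upstream output is a JSON string; downstream expects a plain-text value.
raw_output = '{"name": "Ada", "score": 42}'

# Parse the JSON, then extract only the field the next component needs.
parsed = json.loads(raw_output)
score = parsed["score"]

# Convert back to text if the receiving input requires a string.
score_text = str(score)
print(score_text)  # "42"
```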

LLM Not Responding

Symptoms:

  • Timeout errors
  • Empty responses
  • Connection failures

Debugging:

  1. Verify API key is valid
  2. Check network connectivity
  3. Verify model exists
  4. Check rate limits

Solutions:

  • Use correct API key
  • Select available model
  • Add retry logic
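Retry logic for a flaky LLM call usually means exponential backoff. A minimal sketch — `call_with_retry` and `flaky` are illustrative names, not part of any flow API:

```python
import time

def call_with_retry(fn, attempts=3, base_delay=1.0):
    """Retry fn() with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                                     # out of retries
            time.sleep(base_delay * (2 ** attempt))       # 1s, 2s, 4s, ...

# Example: a call that fails twice (e.g. timeouts), then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("LLM timed out")
    return "ok"

result = call_with_retry(flaky, base_delay=0.01)
print(result)  # "ok" after two retries
```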

Memory Not Working

Symptoms:

  • Bot forgets conversation
  • Context not passed

Debugging:

  1. Check that the memory component is connected to the LLM
  2. Verify the memory type matches your use case
  3. Check the k (history size) parameter

Solutions:

  • Use correct memory type
  • Connect to memory input
  • Increase history size
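To see why the k parameter matters: windowed memory keeps only the last k turns, so too small a k makes the bot "forget". A sketch of the idea using a plain deque (not the tool's actual memory implementation):

```python
from collections import deque

# Windowed conversation memory: keep only the last k turns,
# mirroring the "k" parameter on a memory component.
k = 3
history = deque(maxlen=k)

for turn in ["hi", "what is the weather?", "thanks", "bye"]:
    history.append(turn)

# Only the most recent k turns survive; earlier context is dropped.
print(list(history))  # ['what is the weather?', 'thanks', 'bye']
```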

Slow Performance

Symptoms:

  • Long response times
  • Timeouts
  • UI freezing

Debugging:

  1. Check which component is slow
  2. Monitor token usage
  3. Look for unnecessary operations

Solutions:

  • Use faster models
  • Reduce context size
  • Cache repeated calls
  • Optimize prompts
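Caching repeated calls is often the cheapest win. A sketch using Python's standard `functools.lru_cache`; the model call is a stand-in, not a real client:

```python
from functools import lru_cache

call_count = {"n": 0}

@lru_cache(maxsize=128)
def cached_llm_call(prompt: str) -> str:
    # Stand-in for an expensive model call; replace with your client.
    call_count["n"] += 1
    return f"answer to: {prompt}"

cached_llm_call("What is RAG?")
cached_llm_call("What is RAG?")  # identical prompt: served from cache
print(call_count["n"])           # 1 — the model was only called once
```

Note this only helps for exact-match prompts; any variation in the input bypasses the cache.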

Testing Strategies

Unit Testing Components

Test each component individually:

  1. Isolate the component
  2. Provide test inputs
  3. Verify outputs
  4. Document expected behavior
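The same four steps apply when a component is modeled as a plain function. A sketch with a hypothetical prompt-template component:

```python
# Unit-test sketch: a plain function standing in for one component,
# tested in isolation with known inputs and documented expectations.
def prompt_template(question: str) -> str:
    """Hypothetical component: wraps a question in a fixed prompt."""
    return f"Answer concisely: {question}"

def test_prompt_template():
    out = prompt_template("What is 2+2?")
    assert out.startswith("Answer concisely:")  # structure is preserved
    assert "What is 2+2?" in out                # input survives the transform

test_prompt_template()
print("passed")
```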

Integration Testing

Test connected components:

  1. Create test flow subset
  2. Test data flow
  3. Verify transformations
  4. Check error propagation

Edge Case Testing

Test unusual inputs:

  • Empty input: ""
  • Very long input: [1000+ characters]
  • Special characters: "Test <script>alert(1)</script>"
  • Unicode: "Test 你好 🎉"
  • Numbers as strings: "12345"
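The cases above can live in a reusable table so every flow change is checked against them. A sketch, with a stand-in `run_flow()` in place of your flow's real entry point:

```python
# The edge cases above as a reusable table, run through a hypothetical
# flow entry point (a robust flow should not crash on any of them).
edge_cases = [
    "",                                 # empty input
    "x" * 1000,                         # very long input
    "Test <script>alert(1)</script>",   # special characters
    "Test 你好 🎉",                      # unicode
    "12345",                            # numbers as strings
]

def run_flow(text: str) -> str:
    # Stand-in: returns a safe placeholder instead of an empty result.
    return text.strip() or "(no input)"

outputs = [run_flow(case) for case in edge_cases]
assert all(isinstance(o, str) and o for o in outputs)  # no crashes, no empties
```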

Error Case Testing

Deliberately cause errors:

  • Invalid inputs
  • Missing parameters
  • Network failures
  • Rate limit conditions

Verify graceful handling.
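"Graceful handling" typically means mapping expected failure modes to a safe fallback instead of letting the flow crash. A sketch — `safe_call` and `broken` are illustrative names:

```python
# Map expected failure modes to a fallback instead of propagating them.
def safe_call(fn, fallback="Sorry, something went wrong."):
    try:
        return fn()
    except (ValueError, TimeoutError, ConnectionError) as exc:
        # Record the failure and degrade gracefully.
        print(f"handled: {type(exc).__name__}")
        return fallback

def broken():
    raise TimeoutError("simulated network failure")

result = safe_call(broken)
print(result)  # the fallback message, not a crash
```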

Monitoring in Production

Logging

Enable comprehensive logging:

  1. Set the log level to DEBUG during development
  2. Use INFO in production
  3. Monitor error logs
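With Python's standard `logging` module, that setup looks like this (switch the level to `INFO` for production; the logger name "flow" is illustrative):

```python
import logging

# DEBUG during development, INFO in production (switch via env/config).
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    force=True,  # override any earlier configuration
)
log = logging.getLogger("flow")

log.debug("component inputs: %s", {"question": "hi"})  # dev-only detail
log.info("flow completed in %.2fs", 1.23)              # production signal
log.error("LLM call failed: %s", "rate limit")         # always monitored
```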

Metrics

Track key metrics:

  • Response times
  • Success rates
  • Token usage
  • Error rates
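A minimal way to capture these is a wrapper that records latency and success counts around each flow call. A sketch with a stand-in `run_flow`; a real setup would export these to a metrics backend:

```python
import time

# Minimal metrics: wrap calls to record latency, call count, and errors.
metrics = {"calls": 0, "errors": 0, "total_seconds": 0.0}

def tracked(fn):
    def wrapper(*args, **kwargs):
        metrics["calls"] += 1
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        except Exception:
            metrics["errors"] += 1
            raise
        finally:
            metrics["total_seconds"] += time.perf_counter() - start
    return wrapper

@tracked
def run_flow(text):
    return f"ok: {text}"   # stand-in for the real flow call

run_flow("hello")
run_flow("world")
success_rate = 1 - metrics["errors"] / metrics["calls"]
print(success_rate)  # 1.0
```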

Alerts

Set up alerts for:

  • Error rate spikes
  • Latency increases
  • API failures
  • Token budget exceeded

Debugging Checklist

Before Reporting Issues

✓ Verify API keys are valid
✓ Check network connectivity
✓ Review component configuration
✓ Check input data format
✓ Look at component logs
✓ Try a simpler test case
✓ Check for similar issues in the docs

Information to Gather

  • Flow export (JSON)
  • Error messages
  • Component logs
  • Input that caused the issue
  • Expected vs actual output

Tips for Effective Debugging

Isolate the Problem

  1. Disconnect components
  2. Test each separately
  3. Find which fails
  4. Focus debugging there

Use Test Data

Create consistent test data:

{
  "simple": "Hello, how are you?",
  "complex": "Analyze this dataset...",
  "edge": "",
  "long": "Very long text..."
}

Document What Works

When something works:

  • Save the configuration
  • Note the working state
  • Use for comparison

Version Your Flows

Save versions as you develop:

  • chatbot-v1.json
  • chatbot-v2-memory.json
  • chatbot-v3-tools.json

Roll back if needed.

Ask for Help

If stuck:

  1. Check documentation
  2. Search for similar issues
  3. Ask in community
  4. Contact support with details