Chapter 2: Visual Agent Development with Langflow

Welcome to Chapter 2! Here we'll explore low-code visual agent development with Langflow, a middle ground between no-code simplicity and full programming control.

Learning Objectives

By the end of this chapter, you'll be able to:

  • Set up and navigate the Langflow visual interface
  • Build basic chatbots using IBM watsonx.ai models
  • Create agentic workflows with tool integration
  • Implement post-processing with advanced prompt engineering
  • Deploy and test your visual agent workflows

What is Langflow?

Langflow is a visual framework for building multi-agent and RAG applications. It provides:

  • Drag-and-drop interface for creating AI workflows
  • Pre-built components for common AI tasks
  • Visual debugging to see data flow between components
  • Easy deployment of your completed workflows
  • Integration with multiple LLM providers and vector databases

Setup Instructions

Getting started with Langflow is straightforward. There are two options.

Option 1: Cloud (DataStax Astra)

  1. Sign up for a Langflow trial via the DataStax website:
    https://astra.datastax.com/signup?type=langflow
    
  2. Access Langflow:
    https://astra.datastax.com/langflow
    
  3. Start building your first flow!

Option 2: Local Installation

If you prefer to run Langflow locally:

pip install langflow
langflow run

Then open your browser to http://localhost:7860
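
Once a flow is built and saved, Langflow exposes it over a REST API, so you can test it outside the browser. The snippet below is a minimal sketch of calling a flow on a local instance with the requests library; the flow ID is a placeholder, and the endpoint path and payload keys follow recent Langflow releases, so check the API panel of your flow if your version differs.

import requests

# Placeholder flow ID: copy the real one from your flow's API panel in Langflow.
FLOW_ID = "your-flow-id"
URL = f"http://localhost:7860/api/v1/run/{FLOW_ID}"

# Payload shape may need adjusting for your Langflow version.
payload = {
    "input_value": "How do LLMs work?",  # the message delivered to the Chat Input node
    "input_type": "chat",
    "output_type": "chat",
}

response = requests.post(URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json())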

Course Exercises

Exercise A: Basic Prompting

First up, we'll create a simple chatbot with no tools; this establishes the groundwork for more complex agents. A pro-code equivalent is sketched after the test prompts.

[Screenshot] Exercise A - Basic prompting workflow: Chat Input → IBM watsonx.ai → Chat Output

System Prompt:

You are a helpful agent designed to assist with user queries.

Nodes Required:

  • Chat Input - Receives user messages
  • IBM watsonx.ai - Language model for processing
  • Chat Output - Displays agent responses

Workflow Setup:

  1. Add a Chat Input component
  2. Connect it to IBM watsonx.ai component
  3. Configure the system prompt in the watsonx.ai component
  4. Connect to Chat Output component

Test Prompts:

  • "How do LLMs work?"
  • "Why is trust in AI important?"
  • "How can we detect hallucination?"
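
If you're curious what this visual flow corresponds to in code, here is a minimal pro-code sketch, assuming the langchain-ibm package and placeholder watsonx.ai credentials; parameter names can vary between versions, so treat it as an illustration rather than a drop-in script.

import os

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_ibm import ChatWatsonx

# Placeholder credentials and model ID; set these for your own watsonx.ai project.
llm = ChatWatsonx(
    model_id="ibm/granite-3-8b-instruct",
    url="https://us-south.ml.cloud.ibm.com",
    project_id=os.environ["WATSONX_PROJECT_ID"],
    apikey=os.environ["WATSONX_APIKEY"],
)

messages = [
    SystemMessage(content="You are a helpful agent designed to assist with user queries."),
    HumanMessage(content="How do LLMs work?"),
]
print(llm.invoke(messages).content)

Running this should behave like typing a test prompt into the Chat Input node of the flow above.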

Exercise B: Creating an Agent with Tool Nodes

Now we enhance the basic chatbot by converting it into an agent with tool access. We'll add the Arxiv tool to search for recent research papers; a rough code equivalent follows the test prompts.

[Screenshot] Exercise B - Agent workflow: Agent node orchestrating IBM watsonx.ai and the Arxiv tool

Additional Nodes Required:

  • Arxiv - Tool for academic paper research
  • Agent - Orchestrates tool usage and responses

Workflow Enhancement:

  1. Replace the direct IBM watsonx.ai connection with an Agent node
  2. Connect the Arxiv tool to the Agent
  3. Configure the Agent to use IBM watsonx.ai as the underlying model
  4. Test the enhanced workflow

Test Prompts:

  • "What are the trending papers in machine learning governance?"
  • "Is there any new research on LLM hallucinations?"
  • "Find recent papers about AI safety and alignment"
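
As a preview of the pro-code approach covered in Chapter 3, the same agent-plus-Arxiv pattern can be sketched with LangGraph's prebuilt ReAct agent and the LangChain community Arxiv tool. This assumes the langgraph, langchain-community, langchain-ibm, and arxiv packages plus the placeholder credentials from the previous sketch; it is an illustrative sketch, not the Langflow implementation itself.

import os

from langchain_community.tools import ArxivQueryRun
from langchain_ibm import ChatWatsonx
from langgraph.prebuilt import create_react_agent

# Placeholder credentials, as in the earlier sketch.
llm = ChatWatsonx(
    model_id="ibm/granite-3-8b-instruct",
    url="https://us-south.ml.cloud.ibm.com",
    project_id=os.environ["WATSONX_PROJECT_ID"],
    apikey=os.environ["WATSONX_APIKEY"],
)

# ArxivQueryRun searches arXiv and returns paper summaries (requires the `arxiv` package).
agent = create_react_agent(llm, tools=[ArxivQueryRun()])

result = agent.invoke(
    {"messages": [("user", "Is there any new research on LLM hallucinations?")]}
)
print(result["messages"][-1].content)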

Exercise C: Post Processing with Prompt Engineering

This exercise adds web scraping via Firecrawl and uses prompt engineering to post-process and evaluate the agent's output.

[Screenshot] Exercise C - Multi-tool workflow: Arxiv, Agent, FirecrawlScrapeAPI, and post-processing

[Screenshot] Exercise C - Output parsing and evaluation workflow: Prompt component for quality assessment

Engineered Prompt:

Review the following input and rate it on a scale of 1-10 for how much clarity it provides.
{chat_input}
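
The Prompt component simply substitutes the upstream output into this template via the {chat_input} variable. The stdlib sketch below mirrors that idea and adds a score parser; the regex assumes the model replies with something like "Score: 8/10", which is an assumption about the reply format, not a Langflow feature.

import re

EVALUATION_TEMPLATE = (
    "Review the following input and rate it on a scale of 1-10 "
    "for how much clarity it provides.\n{chat_input}"
)

def build_evaluation_prompt(chat_input: str) -> str:
    # Mirrors the Prompt component: fill the {chat_input} variable in the template.
    return EVALUATION_TEMPLATE.format(chat_input=chat_input)

def extract_score(model_reply: str) -> int | None:
    # Assumes the reply contains a number like "8" or "8/10".
    match = re.search(r"\b(10|[1-9])\b\s*(?:/\s*10)?", model_reply)
    return int(match.group(1)) if match else None

if __name__ == "__main__":
    print(build_evaluation_prompt("LLMs predict the next token based on context."))
    print(extract_score("Score: 8/10, mostly clear."))  # -> 8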

Additional Nodes Required:

  • FirecrawlScrapeAPI - Configured as a tool node for web scraping
  • Prompt - For advanced prompt engineering and post-processing

Workflow Enhancement:

  1. Add FirecrawlScrapeAPI as a tool node
  2. Connect it to your existing agent
  3. Add a Prompt component for post-processing evaluation
  4. Configure the evaluation prompt template

Test Prompt:

Summarise this page - https://www.ato.gov.au/about-ato/commitments-and-reporting/information-and-privacy/ato-ai-transparency-statement
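
Under the hood, the FirecrawlScrapeAPI node calls Firecrawl's scraping service. For reference, a roughly equivalent direct call with the firecrawl-py SDK might look like the sketch below; the method name and return shape are assumptions that can differ between SDK versions, and FIRECRAWL_API_KEY is a placeholder environment variable.

import os

from firecrawl import FirecrawlApp

# Assumption: firecrawl-py exposes FirecrawlApp.scrape_url; check your SDK version.
app = FirecrawlApp(api_key=os.environ["FIRECRAWL_API_KEY"])

page = app.scrape_url(
    "https://www.ato.gov.au/about-ato/commitments-and-reporting/"
    "information-and-privacy/ato-ai-transparency-statement"
)
print(page)  # scraped content (typically markdown plus metadata) to feed back to the agent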

Key Concepts

Visual Programming Progression

Level 1: Basic Prompting

  • Simple chat flow: Input → LLM → Output
  • System prompts: Define agent behavior
  • Direct responses: No external tool access

Level 2: Agentic Behavior

  • Tool integration: Access to external resources
  • Decision making: Agent chooses when to use tools
  • Research capabilities: Can gather current information

Level 3: Advanced Processing

  • Post-processing: Evaluate and refine outputs
  • Web scraping: Access live web content
  • Quality assessment: Rate and improve responses

Component Deep Dive

Core Components

  • Chat Input/Output: User interface components
  • IBM watsonx.ai: Enterprise-grade language model
  • Agent: Orchestrates multiple tools and decisions
  • Prompt: Advanced prompt engineering and templating

Tool Components

  • Arxiv: Academic research paper search
  • FirecrawlScrapeAPI: Web content extraction
  • Custom Tools: Extensible for specific needs

Processing Components

  • Memory: Conversation context management
  • Evaluators: Quality assessment and scoring
  • Filters: Content processing and refinement

Low-Code Benefits

  • Faster development: Visual interface speeds up creation
  • Better collaboration: Non-technical team members can contribute
  • Easier debugging: Visual flow makes issues obvious
  • Rapid prototyping: Quick iteration and testing
  • Enterprise integration: Native support for IBM watsonx.ai

Advanced Workflows

Multi-Tool Research Agent

Combine multiple tools for comprehensive research (a code sketch follows this list):

  1. Arxiv for academic papers
  2. FirecrawlScrapeAPI for web content
  3. Agent orchestrates tool selection
  4. Post-processing evaluates and synthesizes results
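
Continuing the Chapter 3 preview, one way to sketch this combination in code is to give the ReAct agent both an Arxiv tool and a simple scraping tool. The scrape_page function below is a plain-requests stand-in for FirecrawlScrapeAPI, and the packages and credentials are the same assumptions as in the earlier sketches.

import os

import requests
from langchain_community.tools import ArxivQueryRun
from langchain_core.tools import tool
from langchain_ibm import ChatWatsonx
from langgraph.prebuilt import create_react_agent

@tool
def scrape_page(url: str) -> str:
    """Fetch a web page and return its raw text (stand-in for FirecrawlScrapeAPI)."""
    return requests.get(url, timeout=30).text[:8000]  # truncate to keep the context small

llm = ChatWatsonx(
    model_id="ibm/granite-3-8b-instruct",
    url="https://us-south.ml.cloud.ibm.com",
    project_id=os.environ["WATSONX_PROJECT_ID"],
    apikey=os.environ["WATSONX_APIKEY"],
)

# The agent decides per query whether to search arXiv, scrape a page, or both.
agent = create_react_agent(llm, tools=[ArxivQueryRun(), scrape_page])

result = agent.invoke(
    {"messages": [("user", "Find recent papers about AI safety and summarise one of them.")]}
)
print(result["messages"][-1].content)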

Quality Assessment Pipeline

Implement automated quality control (a minimal loop is sketched after this list):

  1. Initial response generation
  2. Clarity evaluation using prompt engineering
  3. Iterative improvement based on scores
  4. Final output with quality metrics
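
In plain Python the loop has roughly the shape below. The call_llm function is a hypothetical stand-in for whatever model call you use (the Langflow REST API, a watsonx.ai SDK call, and so on), and the scoring reuses the evaluation-prompt idea from Exercise C.

import re

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: wire this up to your model of choice, e.g. the
    # Langflow REST API sketch from the setup section.
    raise NotImplementedError("replace with a real model call")

def clarity_score(text: str) -> int:
    """Ask the model to rate clarity 1-10 and parse the number from its reply."""
    reply = call_llm(
        "Review the following input and rate it on a scale of 1-10 "
        f"for how much clarity it provides.\n{text}"
    )
    match = re.search(r"\b(10|[1-9])\b", reply)
    return int(match.group(1)) if match else 0

def answer_with_quality_control(question: str, threshold: int = 8, max_rounds: int = 3) -> str:
    """Generate, evaluate, and refine until the clarity score meets the threshold."""
    answer = call_llm(question)
    for _ in range(max_rounds):
        score = clarity_score(answer)
        if score >= threshold:
            break
        answer = call_llm(
            f"Rewrite the following answer so it is clearer (current clarity {score}/10):\n{answer}"
        )
    return answer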

Practice Projects

Project 1: Academic Research Assistant

Build an agent that:

  • Searches academic papers via Arxiv
  • Scrapes relevant web content
  • Synthesizes findings with clarity scoring
  • Provides comprehensive research summaries

Project 2: Government Policy Analyzer

Create a workflow that:

  • Scrapes government policy documents
  • Evaluates content clarity and accessibility
  • Provides simplified summaries
  • Rates information quality

Project 3: AI Governance Monitor

Develop an agent for:

  • Tracking AI policy developments
  • Analyzing transparency statements
  • Monitoring research trends
  • Providing governance insights

Debugging and Optimization

Visual Debugging

  1. Component inspection: Check data flow between nodes
  2. Message tracing: Follow conversation paths
  3. Tool monitoring: Verify tool calls and responses
  4. Performance metrics: Monitor response times and quality

Common Issues

  • Tool connectivity: Ensure API keys are properly configured (see the preflight check below)
  • Prompt formatting: Check template syntax and variables
  • Agent routing: Verify tool selection logic
  • Output formatting: Ensure consistent response structure
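
For the first issue, a small preflight script can confirm the keys are present before you start debugging a flow. The variable names below are examples for this chapter's tools; substitute whatever names your components actually read.

import os
import sys

# Example variable names for this chapter's tools; adjust to match your setup.
REQUIRED_KEYS = ["WATSONX_APIKEY", "WATSONX_PROJECT_ID", "FIRECRAWL_API_KEY"]

missing = [key for key in REQUIRED_KEYS if not os.environ.get(key)]
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
    sys.exit(1)
print("All required API keys are set.")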

Next Steps

Ready to take your skills to the next level with full programming control? Move on to:

Chapter 3: LangGraph - Master pro-code agent development with Python


Explore the Langflow resources:

2. Langflow/
├── README.md                      # Setup instructions
└── Screenshot 2025-09-03...png    # Interface example

Additional Resources