Agent Node

The AI-powered brain of your chatbot. Configure LLMs, tools, knowledge bases, and response behavior.

Overview

The Agent Node is where the AI magic happens. It processes user messages using Large Language Models (LLMs) and generates intelligent responses. You can enhance its capabilities with tools, knowledge bases, and custom response models.

LLM Providers

Monology supports custom LLM provider credentials. You bring your own API keys, giving you full control over costs and data.

OpenAI
Connect using your OpenAI API key. Supports GPT-4, GPT-4 Turbo, GPT-3.5 Turbo, and other models.
Azure OpenAI
Enterprise-grade option with Azure's infrastructure. Requires Azure endpoint, API key, and deployment name.
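The three required settings map onto Azure's REST endpoint scheme. A minimal stdlib sketch of how they fit together, assuming the standard Azure OpenAI URL pattern (the resource name, deployment name, and API version here are placeholders, not values Monology prescribes):

```python
# Sketch: how the three Azure OpenAI credentials combine into a request.
# All concrete values below are placeholders.

def azure_chat_url(endpoint: str, deployment: str, api_version: str = "2024-02-01") -> str:
    """Build the Azure OpenAI chat-completions URL for a given deployment."""
    return (
        f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )

url = azure_chat_url("https://my-resource.openai.azure.com", "my-gpt4-deployment")
# The API key travels in the request headers, not the URL.
headers = {"api-key": "<your-azure-api-key>", "Content-Type": "application/json"}
print(url)
```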
Credential Validation: Monology validates your LLM credentials when you save them to ensure they work correctly before deploying your chatbot.

Prompt Configuration

Configure how the Agent processes and responds to messages:

System Prompt

Define the agent's personality, role, and behavior. This is the most important configuration for shaping your chatbot's responses.
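For example, a support bot's system prompt might look like the following (the assistant and company names are purely illustrative):

```text
You are Mono, a friendly support assistant for Acme Software.
Answer only questions about Acme products and services. Keep responses concise.
If you do not know the answer, say so and offer to connect the user with a human agent.
```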

Use Markdown

Enable markdown formatting in responses for rich text, lists, code blocks, and links.

Use Storage

Allow the agent to store and retrieve information across the conversation.

Read Chat History

Enable the agent to access previous messages in the conversation for context.

Add History to Messages

Include conversation history in the LLM prompt for better context awareness.

Structured Outputs

Force the LLM to return responses in a specific JSON structure for programmatic use.
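A downstream service can then parse and check the response before acting on it. A minimal sketch, assuming a hypothetical structure with intent, confidence, and answer fields (these names are illustrative, not part of Monology's API):

```python
import json

# Sketch: consuming a structured output downstream.
# The raw string stands in for what the LLM returned.
raw = '{"intent": "General Query", "confidence": 0.93, "answer": "Our plans start at $10/month."}'

reply = json.loads(raw)

# Verify that every expected field is present with the right type
# before any later step relies on it.
expected = {"intent": str, "confidence": float, "answer": str}
for field, ftype in expected.items():
    assert isinstance(reply.get(field), ftype), f"bad or missing field: {field}"

print(reply["intent"])
```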

Append Previous Node Response

Include the output from the previous node in the agent's context.

Tools

Attach tools to extend the agent's capabilities. Tools allow the agent to perform specific tasks during conversations.

IT Services Intent Classifier
A pre-trained tool for accurate intent classification

This tool is fine-tuned using distilbert-base-uncased to classify user intents with high accuracy. It's more reliable than relying on LLM-based classification, which can sometimes hallucinate.

Supported Intents:

Requirement Submission
General Query
Contact Details
Feedback Submission
Appreciation
Greeting
Job Application Submission
Why use this tool?

Using the Intent Classifier saves LLM tokens and provides more consistent, accurate classification compared to prompting the LLM directly.
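A common pattern is to branch the conversation flow on the predicted label. A sketch using the supported intents listed above, with hypothetical handler names (the routing table is illustrative, not a Monology API):

```python
# Sketch: routing on the classifier's predicted intent label.
# The labels match the supported intents; the handler names are made up.
HANDLERS = {
    "Greeting": "send_welcome",
    "General Query": "answer_from_knowledge_base",
    "Requirement Submission": "collect_requirements",
    "Contact Details": "save_contact",
    "Feedback Submission": "log_feedback",
    "Appreciation": "send_thanks",
    "Job Application Submission": "forward_to_hr",
}

def route(predicted_label: str) -> str:
    # Fall back to the general handler for any unexpected label.
    return HANDLERS.get(predicted_label, "answer_from_knowledge_base")

print(route("Greeting"))
```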

More tools coming soon! We're working on additional tools and the ability to train custom intent classifiers for your specific use cases.

Learn more about Tools

Knowledge Base

Connect your own data sources so the agent can access accurate, up-to-date information when responding to users.

CSV Files

Structured data in tabular format

PDF Files

Documents, manuals, guides

Website Links

Web pages to crawl and index

Data Quality Matters

The quality of AI responses depends entirely on your data. Ensure your knowledge base content is clean, accurate, and well-organized for best results.

Response Model

Define a custom response model to structure the agent's output. This is useful when you need to extract specific fields from the conversation for use in subsequent nodes or external systems.

Defining Fields

Add fields with names and data types. The agent will extract and return these fields in a structured format that can be referenced by other nodes.

Example: Extract user_intent (string), urgency_level (number), and requires_human (boolean) from each conversation.
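A later node or external system can then type-check the extracted record before using it. A minimal sketch built on the example fields above (the validate helper is illustrative, not a Monology API):

```python
# Sketch: validating a record extracted by the response model.
# Field names come from the example above; the helper itself is hypothetical.
FIELDS = {"user_intent": str, "urgency_level": (int, float), "requires_human": bool}

def validate(record: dict) -> dict:
    """Raise if any expected field is missing or has the wrong type."""
    for name, ftype in FIELDS.items():
        if not isinstance(record.get(name), ftype):
            raise ValueError(f"missing or mistyped field: {name}")
    return record

extracted = {"user_intent": "Feedback Submission", "urgency_level": 2, "requires_human": False}
validate(extracted)
```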

Next Steps