Chat Interceptors¶
Chat interceptors provide a mechanism to intercept and modify chat messages both before they are processed by the LLM and after the LLM generates a response. They apply to Chat message types only and enable you to implement custom logic for message processing, filtering, transformation, or blocking.
Overview¶
Chat interceptors implement the IChatInterceptor interface and allow you to:
- Intercept incoming messages before they reach the LLM for processing
- Intercept outgoing messages after the LLM generates a response but before sending to the user
- Modify message content at both incoming and outgoing stages
- Block or filter messages based on custom logic
- Add additional processing such as logging, validation, or enrichment
- Transform message format or content structure
The IChatInterceptor Interface¶
The IChatInterceptor interface defines two key methods for message interception:
public interface IChatInterceptor
{
Task<MessageThread> InterceptIncomingMessageAsync(MessageThread messageThread);
Task<string> InterceptOutgoingMessageAsync(MessageThread messageThread, string? response);
}
InterceptIncomingMessageAsync¶
This method is called before the message thread is processed by the LLM.
Parameters:
- messageThread: The incoming message thread containing the user's message and conversation history
Returns:
- MessageThread: The (potentially modified) message thread to be processed by the LLM
Use Cases:
- Validate or sanitize incoming messages
- Add context or metadata to messages
- Filter out inappropriate content
- Transform message format
- Block messages by returning an empty or modified thread
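Validation and sanitization logic can be factored into a plain helper that an incoming interceptor calls before returning the thread. The sketch below is illustrative and independent of the XiansAi types; MessageSanitizer, its regex rules, and the "forbidden-term" keyword are assumptions, not part of the SDK:

```csharp
using System;
using System.Text.RegularExpressions;

// Illustrative helper an incoming interceptor might call on
// messageThread.LastMessage.Content before returning the thread.
public static class MessageSanitizer
{
    // Strip control characters (except tab/newline/carriage return)
    // and collapse runs of whitespace into single spaces.
    public static string Sanitize(string content)
    {
        var noControl = Regex.Replace(content, @"[\x00-\x08\x0B\x0C\x0E-\x1F]", "");
        return Regex.Replace(noControl, @"\s+", " ").Trim();
    }

    // Naive keyword block list; a production filter would be far more robust.
    public static bool ShouldBlock(string content) =>
        content.Contains("forbidden-term", StringComparison.OrdinalIgnoreCase);
}
```

Keeping the string-level logic in a separate class like this also makes it unit-testable without constructing a MessageThread.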
InterceptOutgoingMessageAsync¶
This method is called after the LLM generates a response but before it's sent to the user.
Parameters:
- messageThread: The complete message thread, including the conversation history
- response: The LLM-generated response (can be null)
Returns:
- string: The (potentially modified) response to be sent to the user
Use Cases:
- Post-process LLM responses
- Add formatting or additional information
- Filter or censor response content
- Log interactions
- Transform response format
- Block responses by returning an empty string
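The filtering and blocking cases can likewise be isolated into a string-level helper that an outgoing interceptor delegates to. This is a hedged sketch: ResponseFilter and the "internal-only" banned phrase are made up for illustration, and it assumes (per the use cases above) that an empty string suppresses delivery:

```csharp
using System;

// Illustrative outgoing filter, not part of the XiansAi SDK.
public static class ResponseFilter
{
    // Returns the response unchanged unless it contains a banned phrase;
    // an empty string blocks the response from reaching the user.
    public static string Filter(string? response)
    {
        if (string.IsNullOrEmpty(response)) return string.Empty;
        return response.Contains("internal-only", StringComparison.OrdinalIgnoreCase)
            ? string.Empty
            : response;
    }
}
```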
Implementation Example¶
Here's a basic implementation of a chat interceptor:
using XiansAi.Flow;
using XiansAi.Messaging;
namespace Agents.LegalContract;
public class ChatInterceptor : IChatInterceptor
{
public Task<MessageThread> InterceptIncomingMessageAsync(MessageThread messageThread)
{
// Example: Log incoming messages
Console.WriteLine($"Incoming message: {messageThread.LastMessage?.Content}");
// Example: Add timestamp to message
if (messageThread.LastMessage != null)
{
messageThread.LastMessage.Content = $"[{DateTime.Now:HH:mm:ss}] {messageThread.LastMessage.Content}";
}
return Task.FromResult(messageThread);
}
public Task<string> InterceptOutgoingMessageAsync(MessageThread messageThread, string? response)
{
response = response ?? string.Empty;
// Example: Add signature to responses
if (!string.IsNullOrEmpty(response))
{
response += "\n\n---\n*Response generated by Legal Contract Agent*";
}
// Example: Log outgoing responses
Console.WriteLine($"Outgoing response: {response}");
return Task.FromResult(response);
}
}
Setting Up Chat Interceptors¶
To register a chat interceptor with your bot, use the SetChatInterceptor method during bot configuration:
using DotNetEnv;
using XiansAi.Flow;
using Agents.LegalContract;
// Load environment variables
Env.Load();
Console.WriteLine("Starting Legal Contract Agent...\n");
var agent = new Agent("Legal Contract Agent");
var bot = agent.AddBot<LegalContractBot>();
bot.AddCapabilities(typeof(GeneralCapabilities));
// Set the chat interceptor
bot.SetChatInterceptor(new ChatInterceptor());
await agent.RunAsync();
Advanced Use Case: LLM-Powered Response Analysis¶
Here's a sophisticated real-world example that demonstrates combining LLM analysis with UI automation in the outgoing interceptor:
using XiansAi.Flow;
using XiansAi.Flow.Router;
using XiansAi.Logging;
using XiansAi.Messaging;
public class ChatInterceptor : IChatInterceptor
{
private readonly Logger<ChatInterceptor> _logger = Logger<ChatInterceptor>.For();
public Task<MessageThread> InterceptIncomingMessageAsync(MessageThread messageThread)
{
// Pre-processing: Pass through without modification in this example
return Task.FromResult(messageThread);
}
public Task<string> InterceptOutgoingMessageAsync(MessageThread messageThread, string? response)
{
// Post-processing: Analyze response and trigger UI actions
// Fire-and-forget for better performance
_ = AnalyzeResponseAndTriggerUI(messageThread, response);
return Task.FromResult(response ?? string.Empty);
}
private async Task AnalyzeResponseAndTriggerUI(MessageThread messageThread, string? response)
{
string analysisPrompt = @"Analyze the following assistant message and determine ONLY if it contains
an explicit, direct request to the user to enter, set, or change exactly one of these contract properties:
- Title
- Effective Date
- Parties
Consider it a direct request only when the message asks a question or gives a clear instruction
that expects immediate user input.
Return exactly the property name if there is a direct request. Otherwise, return null.
Do not return any additional text.
Message: " + response;
try
{
// Use LLM to analyze the assistant's response
var propertyName = await SemanticRouterHub.ChatCompletionAsync(analysisPrompt);
_logger.LogInformation($"Detected property request: {propertyName}");
// Trigger appropriate UI commands based on analysis
switch (propertyName?.Trim())
{
case "Effective Date":
await messageThread.SendData(
new UICommand(
"Calendar",
new Dictionary<string, object> { { "field", propertyName } }
)
);
break;
case "Parties":
await messageThread.SendData(
new UICommand(
"ContractParty",
new Dictionary<string, object> { { "command", "Add" } }
)
);
break;
}
}
catch (Exception ex)
{
_logger.LogError(ex, "Error in response analysis");
}
}
}
Key Features of This Approach¶
Post-Processing Intelligence:
- Uses ChatCompletionAsync to analyze the LLM's response with another LLM call
- Classifies the response to determine if specific UI actions should be triggered
- Operates asynchronously without blocking the main response flow
UI Command Integration:
- Automatically triggers relevant UI components (Calendar, ContractParty forms)
- Enhances user experience by presenting contextual interfaces
- Bridges the gap between conversational AI and structured data entry
Performance Optimization:
- Uses the fire-and-forget pattern (_ = AnalyzeResponseAndTriggerUI(...)) for non-blocking execution
- Keeps the main response flow fast while performing background analysis
- Implements proper error handling to prevent UI command failures from affecting chat
Practical Benefits:
- Provides intelligent UI automation based on conversation context
- Reduces user friction by anticipating needed actions
- Maintains clean separation between chat logic and UI commands
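One caveat with the bare `_ =` discard: if the discarded task faults outside its own try/catch, the exception disappears silently. A common hardening, sketched here as an illustrative extension method (not part of XiansAi), attaches a faulted-only continuation that reports the error:

```csharp
using System;
using System.Threading.Tasks;

// Illustrative extension: observe faults on a discarded task
// instead of letting them vanish unnoticed.
public static class TaskExtensions
{
    public static void Forget(this Task task, Action<Exception> onError)
    {
        // The continuation runs only if the task faults;
        // GetBaseException unwraps the innermost cause.
        task.ContinueWith(
            t => onError(t.Exception!.GetBaseException()),
            TaskContinuationOptions.OnlyOnFaulted);
    }
}
```

In the interceptor above, `AnalyzeResponseAndTriggerUI(messageThread, response).Forget(ex => _logger.LogError(ex, "Analysis failed"))` could then replace the bare discard.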
Important Notes¶
- Chat Message Type Only: Chat interceptors only work with Chat message types and will not be triggered for other message types
- Order of Execution: Incoming interceptor runs before LLM processing, outgoing interceptor runs after LLM processing
- Thread Safety: Ensure your interceptor implementation is thread-safe if handling concurrent requests
- Error Handling: Implement proper error handling to prevent interceptor failures from breaking the chat flow
- Performance: Keep interceptor logic lightweight to avoid introducing latency in message processing
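Because a single interceptor instance may serve concurrent requests, any mutable state it keeps should use atomic or synchronized access. A minimal sketch of the thread-safety note, using an illustrative metrics class (not part of the XiansAi SDK):

```csharp
using System.Threading;

// Illustrative counters a shared interceptor instance could update
// safely from concurrent requests via Interlocked operations.
public class InterceptionMetrics
{
    private long _incoming;
    private long _outgoing;

    public void RecordIncoming() => Interlocked.Increment(ref _incoming);
    public void RecordOutgoing() => Interlocked.Increment(ref _outgoing);

    // Reads use Interlocked.Read so 64-bit values are not torn on 32-bit platforms.
    public (long Incoming, long Outgoing) Snapshot() =>
        (Interlocked.Read(ref _incoming), Interlocked.Read(ref _outgoing));
}
```

Plain `_incoming++` would lose increments under contention; Interlocked keeps each update atomic without the overhead of a lock.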
Chat interceptors provide a flexible way to customize your bot's message handling behavior while maintaining clean separation of concerns in your agent architecture.