Beyond Basic Bots (BBB): Complete Guide to Integrating OpenAI with Intercom [Part 2]
![Beyond Basic Bots (BBB): Complete Guide to Integrating OpenAI with Intercom [Part 2]](https://images.ctfassets.net/t4k6r9ov29zq/4LhYxVXIt1AH6CE0F9hsSG/8738881751885e0772664e9420641599/Untitled_design__8_.webp?w=1200&q=80)
Ever wondered how to build a smart support assistant that can actually help your customers beyond canned responses? In this guide, I’ll walk you through building a sophisticated AI support system that integrates OpenAI’s Assistant API with Intercom to create a truly helpful customer support experience.
What We’re Building
By the end of this tutorial, you’ll have an AI assistant that can:
Answer customer questions using your actual user data
Check account balances, subscription status, and usage history
Perform actions like pausing subscriptions or managing permissions
Seamlessly hand off complex issues to human agents
Remember conversation context across multiple messages
Prerequisites
We're using the following tech, but this can all be adapted for your stack.
Node.js application (we’ll use Remix, but Express works too)
Intercom account with admin access
OpenAI API account
Redis instance (for background jobs and caching, see below for easy initialization)
Prisma (or your preferred ORM, if you use one at all)
Step 1: Set Up Your Environment
First, install the required dependencies:
npm install intercom-client openai bull redis yup
npm install --save-dev @types/bull
If you don't have a Redis server running locally, the quickest way to spin one up is with Docker:
docker run -d --name my-redis-container -p 6379:6379 redis
Create a private Intercom app via the Intercom Developer Hub, then add the following environment variables to your .env file:
# Grab these from your Intercom Developer Hub
INTERCOM_ACCESS_TOKEN=your_intercom_token
INTERCOM_APP_ID=your_app_id
# This should be the ID of the agent (create a new "Support Agent" user via Intercom)
INTERCOM_ADMIN_ID=your_admin_id
# This should be the ID of a human-monitored admin account that can step in when the agent can't help
INTERCOM_PRIMARY_CONTACT_ID=backup_human_agent_id
# Grab these from your OpenAI API dashboard
OPENAI_API_KEY=your_openai_key
OPENAI_SUPPORT_ASSISTANT_ID=assistant_id_from_openai
# This is the standard Redis port, change if needed
REDIS_URL=redis://localhost:6379
Step 2: Create the Intercom Webhook Handler
Create an API endpoint to receive Intercom webhooks. This endpoint will queue background jobs for AI processing:
import { ActionFunction, json } from "@remix-run/node";
import * as Yup from "yup";
// This is a simple helper that returns a bull queue
import { getBullQueue } from "~/utils/queue";
// Validation schema for Intercom webhooks
const webhookSchema = Yup.object({
topic: Yup.string()
.oneOf(["conversation.user.created", "conversation.user.replied"])
.required(),
data: Yup.object({
item: Yup.object({
type: Yup.string().oneOf(["conversation"]),
id: Yup.string().required(),
source: Yup.object({
type: Yup.string().oneOf(["conversation"]),
body: Yup.string().required(),
author: Yup.object({
email: Yup.string(),
}),
}),
admin_assignee_id: Yup.number().nullable(),
}),
}),
});
export const action: ActionFunction = async ({ request }) => {
if (request.method !== "POST") {
return json({ error: "Method not allowed" }, 405);
}
try {
const rawData = await request.json();
// Handle Intercom ping events
if (rawData.topic === "ping") {
return json({ success: true });
}
// Validate webhook payload
const { data } = webhookSchema.validateSync(rawData);
// Queue background job for AI processing
const queue = getBullQueue("assistant-reply");
    await queue.add(
      { conversationId: data.item.id },
      // Bull's backoff option is a number or an options object, not a bare string
      { attempts: 3, backoff: { type: "exponential", delay: 5000 } }
    );
return json({ success: true });
  } catch (error) {
    console.error("Webhook error:", error);
    const message = error instanceof Error ? error.message : "Invalid webhook payload";
    return json({ error: message }, 400);
  }
};
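One piece we glossed over: getBullQueue isn't a library export, it's a small helper of our own. Here's a minimal sketch, assuming we cache one queue per name so every caller shares the same instance:
// ~/utils/queue.ts — a minimal sketch of our getBullQueue helper
import Bull from "bull";

const queues = new Map<string, Bull.Queue>();

export function getBullQueue(name: string): Bull.Queue {
  let queue = queues.get(name);
  if (!queue) {
    // Same name + same Redis URL = same underlying queue as the Step 3 worker
    queue = new Bull(name, process.env.REDIS_URL);
    queues.set(name, queue);
  }
  return queue;
}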
Step 3: Set Up Background Job Processing
Now that our webhook queues each new Intercom message as a background job, let's listen on that queue and hand every job to our processAssistantReply function.
import Bull from "bull";
// The worker function we'll build in Step 5 (adjust the import path to your layout)
import { processAssistantReply } from "./processAssistantReply";

// Connect via REDIS_URL so the queue matches the .env config from Step 1
export const assistantQueue = new Bull("assistant-reply", process.env.REDIS_URL);
// Process jobs
assistantQueue.process(async (job) => {
const { conversationId } = job.data;
await processAssistantReply(conversationId);
});
Step 4: Create OpenAI Assistant Tool Functions
Build the functions your AI assistant can call. Here’s an example for checking user credits:
// Prisma client — adjust this import to wherever your client lives
import { db } from "~/utils/db.server";

export async function getCredits(toolCall: any, conversation: any) {
try {
const userEmail = conversation.source.author.email;
if (!userEmail) {
return {
tool_call_id: toolCall.id,
output: "User is not logged in. Please ask them to log in first.",
};
}
const user = await db.user.findUnique({
where: { email: userEmail },
});
if (!user) {
return {
tool_call_id: toolCall.id,
output: `No user found with email ${userEmail}. Please contact support.`,
};
}
return {
tool_call_id: toolCall.id,
output: `User has ${user.creditsAvailable} credits available.`,
};
} catch (error) {
return {
tool_call_id: toolCall.id,
output: "Error retrieving credit information. Please contact support.",
};
}
}
We can create similar functions for other capabilities. For instance, here's one that lets the agent hand a conversation to a human admin when it can't resolve the issue itself:
// Uses the same Intercom client we create in Step 5 — import it from your setup
export async function assignToHuman(toolCall: any, conversation: any) {
  await intercom.conversations.assign({
    id: conversation.id,
    type: "admin",
    adminId: process.env.INTERCOM_ADMIN_ID, // the admin performing the assignment (our AI agent)
    assigneeId: process.env.INTERCOM_PRIMARY_CONTACT_ID, // the human who receives the conversation
  });
return {
tool_call_id: toolCall.id,
output: "Conversation assigned to human agent. They'll respond soon."
};
}
Step 5: Build the Main AI Processing Logic
Now for the fun part! Here's how to create the core worker that uses OpenAI to handle the conversation:
import { Client } from "intercom-client";
import OpenAI from "openai";
import Redis from "ioredis";
import { getCredits } from "./tools/getCredits";
import { assignToHuman } from "./tools/assignToHuman";
const intercom = new Client({
tokenAuth: { token: process.env.INTERCOM_ACCESS_TOKEN },
});
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
const redis = new Redis(process.env.REDIS_URL);
export async function processAssistantReply(conversationId: string) {
// Get conversation from Intercom
const conversation = await intercom.conversations.find({
id: conversationId,
inPlainText: true,
});
// Only respond if unassigned or assigned to our AI
if (!shouldAIRespond(conversation)) {
return;
}
// Get or create OpenAI thread
const threadId = await getOrCreateThread(conversation);
// Add user message to thread
await openai.beta.threads.messages.create(threadId, {
role: "user",
content: getLatestUserMessage(conversation),
});
// Run the assistant
let run = await openai.beta.threads.runs.create(threadId, {
assistant_id: process.env.OPENAI_SUPPORT_ASSISTANT_ID,
additional_instructions: await getPersonalizedInstructions(conversation),
});
// Poll until completion
while (["queued", "in_progress", "requires_action"].includes(run.status)) {
if (run.status === "requires_action") {
// Handle tool calls
const toolOutputs = await handleToolCalls(
run.required_action.submit_tool_outputs.tool_calls,
conversation
);
await openai.beta.threads.runs.submitToolOutputs(
threadId,
run.id,
{ tool_outputs: toolOutputs }
);
}
await new Promise((resolve) => setTimeout(resolve, 1000));
run = await openai.beta.threads.runs.retrieve(threadId, run.id);
}
// Send AI response back to Intercom
if (run.status === "completed") {
const messages = await openai.beta.threads.messages.list(threadId);
const aiResponse = getLatestAssistantMessage(messages);
await intercom.conversations.replyByIdAsAdmin({
id: conversationId,
adminId: process.env.INTERCOM_ADMIN_ID,
messageType: "comment",
body: formatResponseAsHTML(aiResponse),
});
// Add AI disclaimer for new conversations
if (isNewConversation(conversation)) {
await intercom.conversations.replyByIdAsAdmin({
id: conversationId,
adminId: process.env.INTERCOM_ADMIN_ID,
messageType: "comment",
body: "<p><i>I'm an AI assistant. For complex issues, just ask to speak with a human! 🤖</i></p>",
});
}
}
}
async function handleToolCalls(toolCalls: any[], conversation: any) {
const outputs = await Promise.all(
toolCalls.map(async (toolCall) => {
switch (toolCall.function.name) {
case "get_credits":
return await getCredits(toolCall, conversation);
case "assign_to_human":
return await assignToHuman(toolCall, conversation);
default:
return {
tool_call_id: toolCall.id,
output: "Unknown function requested.",
};
}
})
);
return outputs;
}
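For completeness, here are sketches of two helpers referenced above (these are guide-level helpers, not library APIs — adapt the data access to your shapes). getOrCreateThread is also what gives the assistant its memory: we persist the conversation-to-thread mapping in Redis, so every follow-up message lands in the same OpenAI thread:
// Only reply when the conversation is unassigned or assigned to our AI admin
function shouldAIRespond(conversation: any): boolean {
  const assignee = conversation.admin_assignee_id;
  return !assignee || String(assignee) === process.env.INTERCOM_ADMIN_ID;
}

// Map each Intercom conversation to a persistent OpenAI thread via Redis
async function getOrCreateThread(conversation: any): Promise<string> {
  const key = `intercom_thread_${conversation.id}`;
  const existingThreadId = await redis.get(key);
  if (existingThreadId) return existingThreadId;
  const thread = await openai.beta.threads.create();
  await redis.set(key, thread.id);
  return thread.id;
}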
Step 6: Configure Your OpenAI Assistant
In the OpenAI dashboard, create an assistant with these tools:
{
"tools": [
{
"type": "function",
"function": {
"name": "get_credits",
"description": "Get the user's current credit balance",
"parameters": {
"type": "object",
"properties": {},
"required": []
}
}
},
{
"type": "function",
"function": {
"name": "assign_to_human",
"description": "Transfer the conversation to a human agent",
"parameters": {
"type": "object",
"properties": {},
"required": []
}
}
}
]
}
Give your assistant instructions like:
You are a helpful customer support assistant for [Your Platform Name]
Key guidelines:
- Always be friendly and professional
- Use the user's name when possible
- Check user data before making assumptions
- For complex technical issues, assign to human agents
- Never make promises about features or billing without checking first
When users ask about:
- Credits: Use get_credits function
- Complex issues: Use assign_to_human function
- Account problems: Check their data first, then decide if human help is needed
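If you'd rather not click through the dashboard, you can create the assistant from code instead — the returned id is what goes into OPENAI_SUPPORT_ASSISTANT_ID. A one-off setup script might look like this (the model name and file name here are just examples):
// createAssistant.ts — run once (e.g. with `npx tsx createAssistant.ts`),
// then copy the printed id into OPENAI_SUPPORT_ASSISTANT_ID
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const assistant = await openai.beta.assistants.create({
  name: "Support Assistant",
  model: "gpt-4o", // example model — pick what fits your cost/latency needs
  instructions: "You are a helpful customer support assistant...", // the full text from above
  tools: [
    { type: "function", function: { name: "get_credits", description: "Get the user's current credit balance", parameters: { type: "object", properties: {}, required: [] } } },
    { type: "function", function: { name: "assign_to_human", description: "Transfer the conversation to a human agent", parameters: { type: "object", properties: {}, required: [] } } },
  ],
});

console.log(assistant.id);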
Step 7: Configure Intercom Webhooks
In the Intercom Developer Hub:
Open the app you created in Step 1 and navigate to Webhooks
Add your webhook URL. For example:
https://yourdomain.com/api/webhooks/support-assistant
Subscribe to the following events:
conversation.user.created
conversation.user.replied
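Intercom also signs each webhook request with an X-Hub-Signature header: a SHA-1 HMAC of the raw request body, keyed with your app's client secret. It's worth verifying before trusting the payload. A sketch, assuming you add an INTERCOM_CLIENT_SECRET variable to your .env (note you need the raw body via request.text(), then JSON.parse it yourself):
import crypto from "crypto";

function isValidIntercomSignature(rawBody: string, header: string | null): boolean {
  if (!header) return false;
  const expected =
    "sha1=" +
    crypto
      .createHmac("sha1", process.env.INTERCOM_CLIENT_SECRET!)
      .update(rawBody, "utf8")
      .digest("hex");
  // Length check first — timingSafeEqual throws on buffers of unequal length
  return (
    expected.length === header.length &&
    crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(header))
  );
}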
Step 8: Test Your Integration
Test the webhook: Send a test message through Intercom and verify your endpoint receives it (or smoke-test the endpoint directly — see below)
Test AI responses: Ask questions that trigger your tool functions
Test handoff: Ask for human help and verify assignment works
Test conversation memory: Send multiple messages in one conversation
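For the first check, you don't even need Intercom: the handler from Step 2 short-circuits on the ping topic, so a plain curl makes a safe smoke test:
curl -X POST https://yourdomain.com/api/webhooks/support-assistant \
  -H "Content-Type: application/json" \
  -d '{"topic": "ping"}'
# → {"success":true}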
Advanced Features to Consider
Smart Routing
Route conversations based on content:
function shouldUseAI(conversation: any): boolean {
const message = getLatestUserMessage(conversation).toLowerCase();
// Route technical/billing issues to humans immediately
if (message.includes("bug") || message.includes("billing issue")) {
return false;
}
return true;
}
Rate Limiting
Prevent AI spam:
const rateLimiter = new Map();
function checkRateLimit(conversationId: string): boolean {
const key = `ai_response_${conversationId}`;
const lastResponse = rateLimiter.get(key);
if (lastResponse && Date.now() - lastResponse < 30000) {
return false; // Too soon
}
rateLimiter.set(key, Date.now());
return true;
}
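One caveat: an in-memory Map only works within a single process and is wiped on every deploy. Since Redis is already in the stack, a shared variant is a single atomic SET (a sketch, using the ioredis client from Step 5):
// SET ... EX 30 NX succeeds only if the key doesn't exist yet, so the
// 30-second window holds across multiple workers and restarts
async function checkRateLimit(conversationId: string): Promise<boolean> {
  const result = await redis.set(
    `ai_response_${conversationId}`,
    Date.now().toString(),
    "EX",
    30,
    "NX"
  );
  return result === "OK"; // null → a reply already went out in the last 30s
}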
Analytics
Track AI performance:
await analytics.track("ai_support_interaction", {
conversationId,
toolsUsed: toolCalls.map((tc) => tc.function.name),
responseTime: Date.now() - startTime,
wasHandedOff: toolCalls.some((tc) => tc.function.name === "assign_to_human"),
});
Conclusion
You now have a sophisticated AI support assistant that can handle real customer inquiries using your actual business data. The system is designed to be:
Scalable: Background jobs handle multiple conversations simultaneously
Reliable: Error handling and fallbacks ensure customers always get help
Maintainable: Modular tool functions make it easy to add new capabilities
Smart: Context awareness and personalization create better customer experiences
The key to success is starting simple and gradually adding more sophisticated tools and logic as you learn what your customers need most. Monitor your AI’s performance and continuously refine its instructions and capabilities based on real usage patterns.
Remember: the goal isn’t to replace human support entirely, but to handle routine inquiries efficiently while seamlessly escalating complex issues to your team. This creates a better experience for customers and frees up your support team to focus on high-value interactions.
