LangChain

Connect Unizo MCP to LangChain using the MCP adapters package.

Overview

LangChain provides MCP integration through the langchain-mcp-adapters package, which converts MCP tools into LangChain-compatible tools. Combined with LangGraph's create_react_agent, you can build AI agents with access to Unizo's unified integration platform.

Prerequisites

Before setting up the MCP integration, you'll need:

  • Unizo API Key: Used for authentication (found in the Unizo Console)
  • Python 3.11+: Required for LangChain MCP adapters
  • OpenAI API Key: For the underlying LLM (or other supported LLM provider)

Installation

pip install langchain-mcp-adapters langchain langgraph langchain-openai

Or with uv:

uv add langchain-mcp-adapters langchain langgraph langchain-openai

Authentication

Step 1: Generate a Service Key

Use your Unizo API key to generate a service key for MCP authentication:

curl --location 'https://api.unizo.ai/api/v1/serviceKeys' \
--header "apiKey: {unizo_api_key}" \
--header "Content-Type: application/json" \
--data '{
  "name": "MCP LangChain Integration",
  "subOrganization": {
    "name": "{YOUR_CUSTOMER_NAME}",
    "externalKey": "{YOUR_CUSTOMER_UNIQUE_IDENTIFIER}"
  },
  "integration": {
    "target": {
      "categorySelectors": [
        { "type": "TICKETING" }
      ]
    }
  }
}'

Required Parameters:

  • apiKey: Your Unizo API key (found in the Unizo Console)
  • name: Descriptive name for the integration (e.g., "MCP LangChain Integration")
  • subOrganization.name: Your organization name (e.g., "Acme Inc")
  • subOrganization.externalKey: Unique identifier (UUID format recommended)

Response:

{
  "state": "ACTIVE",
  "displayId": "{UNIZO_SERVICE_KEY}",
  ...
}
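If you prefer to generate the service key from Python rather than curl, a minimal sketch using only the standard library might look like this. The helper names here (build_payload, create_service_key) are illustrative, not part of any Unizo SDK; the request body mirrors the curl example above.

```python
# Sketch: generate a Unizo service key from Python instead of curl.
# Uses only the standard library; helper names are illustrative.
import json
import urllib.request

SERVICE_KEYS_URL = "https://api.unizo.ai/api/v1/serviceKeys"

def build_payload(customer_name: str, external_key: str) -> dict:
    """Mirror the request body from the curl example above."""
    return {
        "name": "MCP LangChain Integration",
        "subOrganization": {
            "name": customer_name,
            "externalKey": external_key,
        },
        "integration": {
            "target": {"categorySelectors": [{"type": "TICKETING"}]}
        },
    }

def create_service_key(api_key: str, customer_name: str, external_key: str) -> str:
    """POST to the serviceKeys endpoint and return the displayId (service key)."""
    request = urllib.request.Request(
        SERVICE_KEYS_URL,
        data=json.dumps(build_payload(customer_name, external_key)).encode(),
        headers={"apiKey": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        return json.load(response)["displayId"]

# Example (makes a live HTTP call; requires a valid API key):
# service_key = create_service_key(my_api_key, "Acme Inc", "your-unique-id")
```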

Important: Save the displayId from the response; this is your service key for authenticating with the MCP server.

Step 2: Configure Environment Variables

Create a .env file in your project root:

UNIZO_SERVICE_KEY=your_service_key_from_displayId
OPENAI_API_KEY=your_openai_api_key

Quick Start

Connect to Unizo MCP and use tools in LangChain:

import os
import asyncio
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv

load_dotenv()

async def main():
    # Connect to Unizo MCP server
    client = MultiServerMCPClient({
        "unizo": {
            "url": "https://api.unizo.ai/mcp",
            "transport": "streamable_http",
            "headers": {
                "servicekey": os.getenv("UNIZO_SERVICE_KEY"),
                "x-mcp-scopes": "ticketing"
            }
        }
    })

    # Get Unizo tools
    tools = await client.get_tools()

    # Create LangChain agent with OpenAI
    llm = ChatOpenAI(model="gpt-4o", temperature=0)
    agent = create_react_agent(llm, tools)

    # Run agent
    result = await agent.ainvoke({
        "messages": [{"role": "user", "content": "List ticketing connectors"}]
    })
    print(result["messages"][-1].content)

# Run the async entry point
if __name__ == "__main__":
    asyncio.run(main())

Headers:

  • servicekey: Your Unizo service key (from the authentication step)
  • x-mcp-scopes: Comma-separated scopes (see Available Scopes below)

Example with multiple scopes:

"x-mcp-scopes": "platform,ticketing,vms"
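Since the header value is just a comma-separated string, it can also be assembled from a list of scope names in code, for example:

```python
# Assemble the x-mcp-scopes header value from a list of scope names.
scopes = ["platform", "ticketing", "vms"]
mcp_scopes_header = ",".join(scopes)  # comma-separated, no spaces
print(mcp_scopes_header)  # → platform,ticketing,vms
```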

Interactive Chat Interface

Build a simple chat interface to interact with Unizo MCP tools:

import os
import asyncio
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage
from dotenv import load_dotenv

load_dotenv()

async def chat():
    # Connect to Unizo MCP
    client = MultiServerMCPClient({
        "unizo": {
            "url": "https://api.unizo.ai/mcp",
            "transport": "streamable_http",
            "headers": {
                "servicekey": os.getenv("UNIZO_SERVICE_KEY"),
                "x-mcp-scopes": "platform,ticketing"
            }
        }
    })

    tools = await client.get_tools()
    print(f"Connected! {len(tools)} tools available.\n")

    # Create agent
    llm = ChatOpenAI(model="gpt-4o", temperature=0)
    agent = create_react_agent(llm, tools)

    # Initialize conversation
    messages = [
        SystemMessage(content="You are a helpful assistant for managing Unizo integrations.")
    ]

    print("=" * 50)
    print("UNIZO CHAT (type 'quit' to exit)")
    print("=" * 50 + "\n")

    while True:
        user_input = input("You: ").strip()
        if user_input.lower() in ['quit', 'exit']:
            print("Goodbye!")
            break
        if not user_input:
            continue

        messages.append(HumanMessage(content=user_input))
        result = await agent.ainvoke({"messages": messages})
        messages = result["messages"]
        print(f"\nAgent: {messages[-1].content}\n")

if __name__ == "__main__":
    asyncio.run(chat())

Using Different LLM Providers

Anthropic Claude

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-sonnet-4-20250514",
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    temperature=0
)
agent = create_react_agent(llm, tools)

Azure OpenAI

from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="gpt-4o",
    api_version="2024-02-15-preview",
    temperature=0
)
agent = create_react_agent(llm, tools)