TengineAI is execution infrastructure for AI tool calls.
It provides a remote MCP server that exposes your HTTP APIs as tools to AI runtimes. You define which endpoints the model can call. TengineAI handles authentication, execution, identity injection, and usage control — and returns structured results to the model.
TengineAI does not provide prebuilt third-party integrations. Instead, you define the HTTP APIs your model can call, and TengineAI handles the rest. Think of it as the execution layer between the model and your services.
TengineAI is to MCP tools what API Gateway is to REST APIs.
TengineAI works with any MCP-compatible model SDK or client. Anthropic is used throughout the documentation as the reference implementation because it is currently the most widely used and most thoroughly tested client.
TengineAI is built for developers who are building AI-powered products and need tool calls to execute safely against real APIs.
TengineAI is the right fit if:
TengineAI is not the right fit if:
┌─────────────┐ ┌──────────────────┐ ┌─────────────────────┐
│ │ │ │ │ │
│ MCP Client │────────▶│ TengineAI │────────▶│ Your APIs │
│ (Runtime) │ │ (Execution Layer)│ │ (CRM, billing, │
│ │◀────────│ │◀────────│ internal services, │
└─────────────┘ └──────────────────┘ │ public APIs) │
└─────────────────────┘
Flow:
TengineAI uses five primitives:
**Projects** are isolated execution environments. Each project has its own API keys, integrations, and enabled tools. A token from one project cannot access another project's tools or configuration.
**Tools** are HTTP endpoints you register with TengineAI that the model can call. You define the name, description, HTTP method, URL template, input schema, and authentication strategy. TengineAI handles execution.
Tools are opt-in per project. The model only sees what you enable.
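To make this concrete, here is a sketch of what a tool definition might contain. The field names, the billing endpoint, and the auth block are illustrative assumptions, not TengineAI's documented schema; the input schema follows standard JSON Schema conventions.

```python
# Hypothetical tool definition. Field names and the billing URL are
# illustrative; consult the actual TengineAI API for the real schema.
tool = {
    "name": "lookup_invoice",
    "description": "Fetch an invoice by ID from the billing API.",
    "method": "GET",
    "url_template": "https://billing.internal.example.com/invoices/{invoice_id}",
    "input_schema": {
        "type": "object",
        "properties": {"invoice_id": {"type": "string"}},
        "required": ["invoice_id"],
    },
    # Auth strategy is resolved server-side; the model never sees the secret.
    "auth": {"strategy": "bearer", "secret_ref": "BILLING_API_TOKEN"},
}
```

The model sees only the name, description, and input schema; the URL template and credentials stay on the execution side.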
**Integrations** are MCP client connections. An integration defines how a model runtime (Claude Desktop, Cursor, a custom SDK) authenticates with TengineAI.
**API keys** are project-scoped credentials (`tengine_...`). They authenticate requests from your application to TengineAI. Use them for server-to-server workflows and automation pipelines where all calls share a single project identity.
**Member session tokens** are short-lived JWTs (`tng_mst_...`, 15-minute TTL) that scope an MCP session to a specific end user. Your backend mints them server-side by presenting a cryptographically signed `member_assertion`. TengineAI verifies the assertion and issues the session token.
Use member session tokens when each end user should have their tool calls attributed to them individually.
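The minting step can be sketched as follows. The assertion format here (base64-encoded JSON payload plus an HMAC-SHA256 signature) is an assumed scheme for illustration; TengineAI's actual `member_assertion` format and signing algorithm may differ.

```python
import base64
import hashlib
import hmac
import json
import time


def mint_member_assertion(signing_secret: bytes, member_id: str) -> str:
    """Build a signed member_assertion for a single end user.

    Illustrative scheme only: base64url(JSON payload) + "." + HMAC-SHA256
    hex signature. The real TengineAI assertion format may differ.
    """
    now = int(time.time())
    payload = {
        "member_id": member_id,
        "iat": now,
        "exp": now + 60,  # keep the assertion validity window short
    }
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(signing_secret, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"
```

Your backend would send this assertion to TengineAI's token endpoint and receive a `tng_mst_...` session token in exchange; the signing secret never leaves your server.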
See Execution Model for details on retries, request IDs, and key revocation behavior.
TengineAI supports two client authentication modes:
| Mode | Token | Use Case |
|---|---|---|
| API Key | `tengine_...` | Workflows, automation, server-to-server |
| User-Scoped | `tng_mst_...` | Multi-tenant apps, per-user tool execution |
Both modes pass the credential as the `authorization_token` in the Anthropic SDK's `mcp_servers` configuration.
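As a sketch, an entry in the SDK's `mcp_servers` list might look like this. The server URL and token value are placeholders, and the exact call shape can vary across Anthropic SDK versions.

```python
# Sketch of wiring TengineAI into the Anthropic SDK's MCP connector.
# The URL and token below are placeholders, not real credentials.
mcp_server = {
    "type": "url",
    "url": "https://mcp.tengine.example.com",  # placeholder server URL
    "name": "tengine",
    # Either an API key (tengine_...) or a member session token (tng_mst_...).
    "authorization_token": "tengine_live_abc123",
}

# Passed alongside the model call, e.g. (shape may vary by SDK version):
#   client.beta.messages.create(..., mcp_servers=[mcp_server])
```

Swapping the `authorization_token` from an API key to a member session token is the only change needed to move from project-level to per-user attribution.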
See API Keys and User-Scoped Sessions for full details.
TengineAI is not:
It is execution infrastructure. You define the tools. The model decides what to call. TengineAI executes it safely.
The model can only call tools you have explicitly registered and enabled. TengineAI does not fetch arbitrary URLs — every endpoint the model can reach was configured by you.