Node.js · TypeScript · Launch · SDK

TokenFence Node.js SDK Is Live on npm


The TokenFence Node.js SDK is now live on npm. One install, zero dependencies, full TypeScript support.

Why Node.js?

When we launched the Python SDK last week, the #1 request was: "When's the Node.js version coming?"

Fair question. A huge chunk of production AI agent code runs on Node.js and TypeScript — Next.js API routes, Express backends, serverless functions, Vercel Edge, Cloudflare Workers. If you're building AI-powered products, chances are your backend is JavaScript.

Today, that's covered.

npm install tokenfence

What You Get

  • Per-workflow budget caps — set a dollar limit per agent run, per user, per request
  • Auto model downgrade — when the budget runs low, automatically switch from GPT-4o to GPT-4o-mini
  • Hard kill switch — exceed the budget? The agent stops. No surprises on your bill.
  • Full TypeScript types — first-class DX with complete type definitions
  • ESM + CommonJS — works everywhere: import or require
  • Zero dependencies — nothing to audit, nothing to break
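To make the first three features concrete, here's a hand-rolled sketch of the budget-cap idea — this is illustrative, not TokenFence's actual internals, and names like `BudgetTracker` are hypothetical:

```typescript
// Illustrative sketch of per-workflow budget capping with a downgrade
// threshold and a hard stop. Not the SDK's real implementation.
class BudgetTracker {
  private spent = 0;

  constructor(
    private budget: number,        // hard cap in dollars
    private downgradeAt: number,   // fraction of budget that triggers downgrade
    private downgradeModel: string // cheaper fallback model
  ) {}

  // Record the cost of a completed call; throw once the hard cap is exceeded.
  record(costUsd: number): void {
    this.spent += costUsd;
    if (this.spent > this.budget) {
      throw new Error(`Budget exceeded: $${this.spent.toFixed(4)} of $${this.budget}`);
    }
  }

  // Return the preferred model, or the fallback past the downgrade threshold.
  currentModel(preferred: string): string {
    return this.spent >= this.budget * this.downgradeAt
      ? this.downgradeModel
      : preferred;
  }
}
```

The real SDK wraps this kind of bookkeeping behind `guard()` so your call sites stay clean.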

Quick Start

import { TokenFence } from 'tokenfence';

const fence = new TokenFence({
  budget: 0.50,           // $0.50 per workflow
  downgradeAt: 0.80,      // switch models at 80% budget
  downgradeModel: 'gpt-4o-mini',
  onKill: (usage) => console.log('Budget exceeded:', usage)
});

// Wrap your OpenAI call
const result = await fence.guard(async () => {
  return await openai.chat.completions.create({
    model: fence.currentModel('gpt-4o'),
    messages: [{ role: 'user', content: 'Analyze this data...' }]
  });
});
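Under the hood, any budget guard has to price each response from its token usage. A back-of-the-envelope version looks like this — the per-million-token rates below are placeholders, so check your provider's current pricing:

```typescript
// Estimate a call's dollar cost from token usage. Rates are illustrative
// placeholders, not authoritative pricing.
interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
}

const RATES_PER_MTOK: Record<string, { input: number; output: number }> = {
  'gpt-4o':      { input: 2.50, output: 10.00 }, // placeholder rates
  'gpt-4o-mini': { input: 0.15, output: 0.60 },  // placeholder rates
};

function estimateCostUsd(model: string, usage: TokenUsage): number {
  const rate = RATES_PER_MTOK[model];
  if (!rate) throw new Error(`No pricing entry for ${model}`);
  return (
    (usage.promptTokens * rate.input + usage.completionTokens * rate.output) /
    1_000_000
  );
}
```

Keeping a running sum of these estimates against the cap is all the kill switch needs.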

Works With Everything

TokenFence is provider-agnostic. It works with:

  • OpenAI — GPT-4o, GPT-4o-mini, GPT-5, o1, o3
  • Anthropic — Claude 4, Claude Sonnet, Claude Haiku
  • Google Gemini — via OpenAI-compatible API
  • Any LLM — if it has a cost, TokenFence can cap it
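Provider-agnostic capping works because the guard only needs token counts, and those just come under different names — OpenAI responses report `prompt_tokens`/`completion_tokens` while Anthropic reports `input_tokens`/`output_tokens`. A small normalizer (an illustrative sketch, not TokenFence's API) bridges the gap:

```typescript
// Normalize token usage across providers. OpenAI and Anthropic use
// different field names for the same counts.
interface NormalizedUsage {
  inputTokens: number;
  outputTokens: number;
}

function normalizeUsage(raw: Record<string, number>): NormalizedUsage {
  return {
    inputTokens: raw.prompt_tokens ?? raw.input_tokens ?? 0,
    outputTokens: raw.completion_tokens ?? raw.output_tokens ?? 0,
  };
}
```

Once usage is normalized, the same pricing and capping logic applies to any provider.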

Python + Node.js = Full Stack Coverage

With both SDKs live, you can protect your entire AI stack:

# Python (FastAPI, Django, LangChain, CrewAI)
pip install tokenfence

# Node.js (Next.js, Express, serverless)
npm install tokenfence

Same concepts, same API design, both languages. Your team uses Python for ML pipelines and Node.js for the web layer? Both are covered.

What's Next

  • Usage dashboard — see your agent spend in real-time
  • Team budgets — aggregate limits across multiple workflows
  • Webhook alerts — get notified before a budget is exhausted
  • Framework plugins — native LangChain.js and Vercel AI SDK integrations

Get started now:

npm install tokenfence

Read the documentation or check out examples on GitHub.

Ready to protect your AI budget?

Two lines of code. Per-workflow budgets. Automatic model downgrade. Hard kill switch.