
How to Track LLM Token Usage in Node.js — Complete Tutorial

AI Cost Guard Team · 2026-03-10 · 8 min read

Why Track Token Usage?

Every LLM API call costs money. Without tracking, you're flying blind — a single misconfigured prompt can burn through your monthly budget in hours. Token tracking gives you:

  • Cost attribution: Know which features and users drive your AI spend
  • Budget control: Set alerts before costs spiral
  • Optimization data: Identify prompts that use too many tokens
Setup: Install the SDK

    npm install @aicostguard/sdk

Basic Usage with OpenAI

    import { CostGuard } from '@aicostguard/sdk';
    import OpenAI from 'openai';

    const guard = new CostGuard({
      apiKey: process.env.COST_GUARD_API_KEY!,
      projectId: 'my-chatbot',
    });

    const openai = guard.wrap(new OpenAI());

    // Use openai as normal — every call is tracked automatically
    const response = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: 'Summarize this article...' }],
    });

That's it. Two lines of setup, and every API call now logs:

  • Model used
  • Input and output token counts
  • Cost in dollars
  • Latency in milliseconds
  • Project and environment tags
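To make the logged fields concrete, here is a sketch of what one tracked record might look like as a TypeScript type. The field names below are illustrative assumptions based on the list above, not the SDK's documented schema:

```typescript
// Illustrative shape of one tracked request record.
// Field names are assumptions for this sketch, not the SDK's documented schema.
interface TrackedRequest {
  model: string;           // e.g. 'gpt-4o'
  inputTokens: number;     // prompt tokens
  outputTokens: number;    // completion tokens
  costUsd: number;         // derived from the model's per-token pricing
  latencyMs: number;       // wall-clock time for the API call
  projectId: string;       // from the CostGuard constructor
  environment?: string;    // optional tag, e.g. 'production'
}

const example: TrackedRequest = {
  model: 'gpt-4o',
  inputTokens: 412,
  outputTokens: 187,
  costUsd: 0.0029,
  latencyMs: 1840,
  projectId: 'my-chatbot',
  environment: 'production',
};
```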

Tracking Anthropic (Claude)

    import Anthropic from '@anthropic-ai/sdk';

    const anthropic = guard.wrap(new Anthropic());

    const message = await anthropic.messages.create({
      model: 'claude-sonnet-4-20250514',
      max_tokens: 1024,
      messages: [{ role: 'user', content: 'Hello, Claude!' }],
    });

Adding Custom Metadata

Tag requests with custom metadata for fine-grained attribution:

    const response = await openai.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: userQuery }],
    }, {
      costGuard: {
        userId: user.id,
        feature: 'customer-support',
        environment: 'production',
      },
    });

Viewing Your Data

Once integrated, visit your AI Cost Guard dashboard to see:

  • Real-time cost feed — every request with model, tokens, and cost
  • Daily/weekly/monthly aggregates — trend charts and projections
  • Per-model breakdown — which models drive your spend
  • Per-feature attribution — which features cost the most
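Under the hood, a per-model breakdown is just a group-and-sum over tracked records. As a minimal local sketch (using an assumed record shape, not the SDK's API):

```typescript
// Minimal sketch: aggregate tracked records into a per-model cost breakdown.
// The record shape here is an assumption for illustration, not the SDK's schema.
type UsageRecord = { model: string; costUsd: number };

function costByModel(records: UsageRecord[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const r of records) {
    totals.set(r.model, (totals.get(r.model) ?? 0) + r.costUsd);
  }
  return totals;
}

const breakdown = costByModel([
  { model: 'gpt-4o', costUsd: 0.012 },
  { model: 'gpt-4o-mini', costUsd: 0.001 },
  { model: 'gpt-4o', costUsd: 0.009 },
]);
// breakdown.get('gpt-4o') ≈ 0.021
```

The same grouping applied to a `feature` field gives the per-feature attribution view.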

Setting Budget Alerts

    // In your AI Cost Guard dashboard, or via API:
    await guard.setBudgetAlert({
      amount: 500, // $500/month
      alertAt: [0.5, 0.8, 1.0], // Alert at 50%, 80%, 100%
      action: 'notify', // or 'auto-stop'
      channels: ['slack', 'email'],
    });

Best Practices

  • Always set projectId — enables per-project cost tracking
  • Tag by environment — separate dev/staging/production costs
  • Set budget alerts on day 1 — don't wait for the first surprise bill
  • Review weekly — 15-minute cost review catches issues early
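One simple way to keep dev/staging/production costs separate is to derive the environment tag from `NODE_ENV` at startup. A sketch, assuming an `environment` constructor option alongside the `apiKey` and `projectId` options shown earlier (that option name is an assumption here):

```typescript
// Sketch: derive per-environment CostGuard constructor options from NODE_ENV.
// The 'environment' option name is an assumption for this sketch.
interface GuardOptions {
  apiKey: string;
  projectId: string;
  environment: 'development' | 'staging' | 'production';
}

function guardOptions(nodeEnv: string | undefined): GuardOptions {
  const environment =
    nodeEnv === 'production' ? 'production'
    : nodeEnv === 'staging' ? 'staging'
    : 'development'; // default to development for local runs and tests
  return {
    apiKey: process.env.COST_GUARD_API_KEY ?? '',
    projectId: 'my-chatbot',
    environment,
  };
}
```

Pass the result to `new CostGuard(guardOptions(process.env.NODE_ENV))` so every deployment tags itself consistently.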

Next Steps

  • Sign up for free — 10,000 requests/month included
  • Read the full SDK docs — all configuration options
  • Try the Cost Calculator — estimate costs before you build

Start Saving on AI Costs Today

Join thousands of developers who save up to 40% on their AI API bills with AI Cost Guard.