Tracking OpenAI Costs

In this guide, we will walk you through using the Telemetry SDK to track OpenAI API costs, including the number of input and output tokens used. By the end, you'll have a setup that logs your OpenAI API usage and lets you analyze both token counts and the associated costs.

Prerequisites

  • A valid API key for Telemetry

  • Access to OpenAI API

  • Basic understanding of JavaScript and Node.js

Step 1: Install Telemetry SDK

First, you need to install the Telemetry SDK in your project. If you haven't done so already, run the following command:

npm install telemetry-sh
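
If you plan to follow the axios example in Step 4, you may also want to install axios (an assumption here; skip this if it's already in your project or you prefer the official openai package):

npm install axios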

Step 2: Initialize Telemetry

After installing the SDK, import and initialize Telemetry in your project. Replace YOUR_API_KEY with your actual Telemetry API key.

import telemetry from "telemetry-sh";

telemetry.init("YOUR_API_KEY");
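
Hard-coding the key is fine for a quick test, but in practice you will likely load it from an environment variable. A minimal sketch, assuming a standard Node.js setup where the key is exposed as TELEMETRY_API_KEY:

// Read the Telemetry API key from the environment instead of hard-coding it
telemetry.init(process.env.TELEMETRY_API_KEY);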

Step 3: Track OpenAI API Costs with Token Details

To track your OpenAI API costs, you need to log each API call along with details such as input tokens, output tokens, and the total cost. We'll log the following information:

  • model: The model used (e.g., gpt-4, gpt-3.5-turbo)

  • input_tokens: The number of input tokens used in the API call

  • output_tokens: The number of output tokens generated by the API

  • total_tokens: The sum of input and output tokens

  • cost: The cost of the API call

  • timestamp: The time when the API call was made

Here's an example of how you can log this data:

const trackOpenAICost = (model, inputTokens, outputTokens, cost) => {
  telemetry.log("openai_api_usage", {
    model: model,
    input_tokens: inputTokens,
    output_tokens: outputTokens,
    total_tokens: inputTokens + outputTokens,
    cost: cost,
    timestamp: new Date().toISOString()
  });
};

// Example usage
trackOpenAICost("gpt-4", 500, 500, 0.06); // Adjust cost as per OpenAI's pricing

Step 4: Automate Cost Tracking with Token Details

You can automate cost tracking by calling the trackOpenAICost function directly in the code that makes your OpenAI API calls. Here's an example using axios against the Chat Completions endpoint:

import axios from "axios";

const openaiApiCall = async (model, prompt) => {
  // gpt-4 and gpt-3.5-turbo are chat models, so use the Chat Completions endpoint
  const response = await axios.post('https://api.openai.com/v1/chat/completions', {
    model: model,
    messages: [{ role: 'user', content: prompt }],
    max_tokens: 100
  }, {
    headers: {
      'Authorization': `Bearer YOUR_OPENAI_API_KEY`,
      'Content-Type': 'application/json'
    }
  });

  const inputTokens = response.data.usage.prompt_tokens;
  const outputTokens = response.data.usage.completion_tokens;
  const totalTokens = response.data.usage.total_tokens; // also available if you need it
  const cost = calculateCost(model, inputTokens, outputTokens);

  trackOpenAICost(model, inputTokens, outputTokens, cost);

  return response.data;
};

// Example cost calculation with separate input and output rates
// (illustrative placeholder rates — adjust to OpenAI's current pricing)
const calculateCost = (model, inputTokens, outputTokens) => {
  const rates = model === 'gpt-4'
    ? { input: 0.00003, output: 0.00006 }      // e.g. $0.03 / $0.06 per 1K tokens
    : { input: 0.0000015, output: 0.000002 };  // e.g. $0.0015 / $0.002 per 1K tokens
  return inputTokens * rates.input + outputTokens * rates.output;
};

// Example API call
openaiApiCall("gpt-4", "Hello, how can I help you today?");
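
If you use the official openai Node.js SDK rather than axios, the same usage fields are available on the completion object. A minimal sketch, assuming the v4-style client and an OPENAI_API_KEY environment variable:

import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const chatWithTracking = async (model, prompt) => {
  const completion = await openai.chat.completions.create({
    model: model,
    messages: [{ role: "user", content: prompt }],
    max_tokens: 100
  });

  // Token usage is reported directly on the completion object
  const { prompt_tokens, completion_tokens } = completion.usage;
  const cost = calculateCost(model, prompt_tokens, completion_tokens);

  trackOpenAICost(model, prompt_tokens, completion_tokens, cost);

  return completion;
};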

Step 5: Query and Analyze Costs with Token Details

Once you have logged sufficient data, you can query and analyze it using Telemetry's query API. For example, to get the average cost and token usage per model:

const results = await telemetry.query(`
  SELECT
    model,
    AVG(cost) AS avg_cost,
    AVG(input_tokens) AS avg_input_tokens,
    AVG(output_tokens) AS avg_output_tokens,
    AVG(total_tokens) AS avg_total_tokens
  FROM
    openai_api_usage
  GROUP BY
    model
`);

console.log(results);
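
You can run other aggregations on the same table, for example total spend per day. A sketch, assuming Telemetry's query engine supports standard SQL date functions on the timestamp field:

const dailyCosts = await telemetry.query(`
  SELECT
    DATE(timestamp) AS day,
    SUM(cost) AS total_cost,
    SUM(total_tokens) AS total_tokens
  FROM
    openai_api_usage
  GROUP BY
    DATE(timestamp)
  ORDER BY
    day
`);

console.log(dailyCosts);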

Step 6: Explore Data with Telemetry's UI

Telemetry's UI allows you to visualize and explore your logged data interactively. Visit the Telemetry Dashboard and log in with your credentials to create dashboards, charts, and more based on your OpenAI usage data.

Conclusion

By following these steps, you can effectively track and analyze your OpenAI API costs, including input and output tokens, using the Telemetry SDK. This setup provides valuable insights into your API usage and helps you manage costs efficiently.
