Learn more about how AI costs are calculated and how to control them

If you use any of the AI APIs, you may be charged for the tokens consumed. The cost is calculated from the number of input and output tokens as well as the model used.

Airtop uses a credit system to track your usage, since we often use multiple models, each with its own pricing tier. We do not charge any markup on LLM costs.

Keeping track of costs

Every AI API call returns a meta.usage object containing the number of credits used by this call. You can use this number to monitor your usage on a per-request basis.
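As a minimal sketch, you could accumulate credits across responses like this. The exact response shape (here, a `meta.usage.credits` field on a parsed JSON body) is an assumption for illustration; check the actual response of the API you call for the precise field names.

```python
def extract_credits(response: dict) -> float:
    """Read the credit count from a parsed AI API response's meta.usage object.

    The nesting meta -> usage -> credits is assumed for illustration.
    """
    return response.get("meta", {}).get("usage", {}).get("credits", 0)

# Example: sum credits across several hypothetical responses
# to monitor usage on a per-request basis.
responses = [
    {"meta": {"usage": {"credits": 3}}},
    {"meta": {"usage": {"credits": 5}}},
]
total = sum(extract_credits(r) for r in responses)
print(total)  # 8
```

You could log this total per request, or alert when a single call comes back unusually expensive.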

You can also see your consumed AI credits for the month on the billing page of the developer portal.

Controlling costs

In addition to monitoring, you can also specify a maximum cost for every AI API call. There are two fields that can be helpful for this:

  1. costThresholdCredits: A credit threshold that, once exceeded, will cause the operation to be cancelled. Note that this is not a hard limit, but a threshold that is checked periodically during the course of fulfilling the request. A request may slightly exceed the threshold if it’s in the middle of an LLM call that is already in progress, but any subsequent calls will not be permitted to continue.

  2. timeThresholdSeconds: A time threshold in seconds that, once exceeded, will cause the operation to be cancelled. Like the credit threshold, this is not a hard limit but a threshold that is checked periodically during the course of fulfilling the request. Use this field if you want to limit how long a single request can run rather than how many credits it consumes.
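The two threshold fields above might be passed together in a request payload along these lines. Only `costThresholdCredits` and `timeThresholdSeconds` come from the documentation; the surrounding payload fields (`prompt`, `configuration`) and their nesting are illustrative assumptions, so consult the API reference for the exact request shape.

```python
import json

# Hypothetical request body combining both thresholds.
# Field nesting is an assumption for illustration only.
request_body = {
    "prompt": "Summarize the page",  # illustrative field
    "configuration": {
        "costThresholdCredits": 50,   # cancel once roughly 50 credits are consumed
        "timeThresholdSeconds": 120,  # cancel if the request runs past 2 minutes
    },
}

# Serialize as you would when sending the request.
payload = json.dumps(request_body)
print(payload)
```

Because both thresholds are checked periodically rather than enforced as hard limits, it is reasonable to set them with some headroom above your expected per-request cost and duration.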
