# Token & Storage Usage Info
Understanding how different agent types consume tokens and storage can help you plan your usage and optimize costs. Below is a breakdown of typical usage patterns for each agent type.
## Usage Estimates by Agent Type
The following table provides average estimates for token consumption, storage requirements, and caching behavior for each BadgerFy agent type.
| Agent Type | Avg Token Usage | Avg Storage Usage | Cache Usage | Notes |
|---|---|---|---|---|
| AI Assistant | High | Medium | None | Storage depends on dataset size |
| Quiz | Medium | Medium | Cached locally for 30 days | Caching reduces regeneration; storage depends on dataset size |
| Nudge | Low–Medium | Medium | Cached for 1 hour | Storage depends on dataset size |
| Recommendation Strip | Very Low | Medium–High | Cached for 8 hours | Order data grows over time; clear older orders periodically |
| Survey | Low–Medium | None | Cached for 30 days | Common website content is often already cached and won't use tokens |
## Understanding Token Usage
### What Consumes Tokens
Tokens are consumed whenever an AI model processes or generates content:
- AI Assistant: Each conversation turn uses tokens for both the user's message and the AI's response. Longer conversations and larger context windows consume more tokens.
- Quiz: Token usage occurs when generating questions and processing answers. Cached quizzes significantly reduce ongoing token costs.
- Nudge: Tokens are used when the AI generates contextual nudge content based on page context.
- Survey: Tokens are consumed when detecting survey-worthy moments on your pages. Cached detections minimize repeated usage.
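To make the cost drivers above concrete, here is a rough back-of-envelope estimator for a single AI Assistant conversation turn. The 4-characters-per-token ratio is a common heuristic, not an exact figure for any particular model, and the function name is illustrative rather than part of BadgerFy:

```python
def estimate_turn_tokens(user_message: str, ai_response: str, context: str = "") -> int:
    """Roughly estimate tokens consumed by one conversation turn.

    Uses the common ~4 characters-per-token heuristic; real tokenizers vary.
    """
    chars = len(user_message) + len(ai_response) + len(context)
    return max(1, chars // 4)


# A larger context window raises the cost of every turn, which is why
# longer conversations and bigger data sources consume more tokens.
short_turn = estimate_turn_tokens("What sizes do you stock?", "We stock S through XXL.")
long_turn = estimate_turn_tokens("What sizes do you stock?", "We stock S through XXL.",
                                 context="product catalog excerpt " * 200)
assert long_turn > short_turn
```

The key takeaway is that context is billed on every turn, so trimming your data sources pays off repeatedly, not just once.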
### Reducing Token Usage
- Keep your data sources concise and well-organized to minimize context size
- Take advantage of caching; agents with longer cache durations will use fewer tokens over time
- For AI Assistant, consider using suggested prompts to guide users toward pre-defined questions
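The caching behavior that drives these savings can be sketched as a simple time-to-live check: reuse a generated quiz while it is younger than the 30-day cache window, and only pay the token cost of regeneration on a miss. The in-memory store and function names here are illustrative, not BadgerFy's actual API:

```python
import time

CACHE_TTL_SECONDS = 30 * 24 * 60 * 60  # Quiz content is cached for 30 days
_cache: dict = {}  # topic -> (created_at, quiz); illustrative in-memory store


def get_quiz(topic: str, generate) -> str:
    """Return a cached quiz if still fresh; otherwise regenerate it.

    `generate` stands in for the token-consuming AI call.
    """
    entry = _cache.get(topic)
    if entry is not None and time.time() - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]  # cache hit: no tokens consumed
    quiz = generate(topic)  # cache miss: tokens are consumed here
    _cache[topic] = (time.time(), quiz)
    return quiz
```

Under this pattern, the expensive generation runs once per cache window no matter how many visitors see the quiz, which is why long cache durations dramatically lower ongoing token consumption.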
## Understanding Storage Usage
### What Counts Toward Storage
Storage is consumed by:
- Data sources: Uploaded files, PDFs, and scraped website content
- Order data: Historical order information used by Recommendation Strips
- Vector embeddings: Processed versions of your data sources for AI retrieval
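Your total footprint is simply the sum of those three categories. A minimal sketch of that accounting, with field names assumed for illustration:

```python
from dataclasses import dataclass


@dataclass
class StorageUsage:
    """Illustrative breakdown of the storage categories listed above."""
    data_source_bytes: int = 0   # uploaded files, PDFs, scraped website content
    order_data_bytes: int = 0    # order history used by Recommendation Strips
    embedding_bytes: int = 0     # vector embeddings built from the data sources

    @property
    def total_bytes(self) -> int:
        return self.data_source_bytes + self.order_data_bytes + self.embedding_bytes
```

Note that embeddings scale with your data sources, so removing an unused data source typically frees both the raw file and its processed vectors.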
### Managing Storage
- Regularly review and remove outdated data sources that are no longer needed
- For Recommendation Strips, periodically clear older order data to prevent unbounded growth
- Consider consolidating multiple small files into larger, organized documents
💡 Tip: Caching helps reduce token usage significantly. Agents like Quiz and Survey that cache for 30 days will see dramatically lower token consumption after initial generation.
⚠️ Order Data Growth: Recommendation Strips rely on order history, which grows over time. Monitor your storage usage and clear orders older than your typical analysis window (e.g., 12 months) to keep storage manageable.
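One way to apply that retention advice is a periodic pruning pass that drops orders older than your analysis window. The order record structure and 12-month default here are assumptions for the sketch:

```python
from datetime import datetime, timedelta, timezone


def prune_orders(orders: list, window_days: int = 365) -> list:
    """Keep only orders placed within the analysis window (default ~12 months)."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=window_days)
    return [order for order in orders if order["placed_at"] >= cutoff]
```

Running a pass like this on a schedule keeps Recommendation Strip storage bounded while preserving the recent history that recommendations actually use.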