Frequently Asked Questions
Find answers to common questions about BadgerFy.ai. Can't find what you're looking for? Contact us.
Data & Privacy
Do you use our data to train AI models?
No, we do not use customer data to train AI models. We also have agreements in place with our external providers that prohibit them from using your personal data to train their models.
What data do you store, and for how long?
We don't store any of your data permanently, other than usage and billing data, which we retain for a maximum of one year. When you delete your data source files or agents, they are permanently removed from our systems.
Do you track my website visitors?
No. We only track agent/assistant engagement on your site. Every agent has its own analytics section where you can see exactly what is being tracked.
Billing & Plans
Do you offer refunds?
We generally do not provide refunds, but you may cancel your plan at any time. Your service will continue until the end of your current billing period.
What if none of the plans fit my needs?
Please email us and we can discuss your use case and pricing options. Note that all plans include overage charges for tokens used beyond the plan's allowance.
What happens when I exceed my plan's limits?
For token usage, overage charges apply beyond your included amount. For data storage, exceeding the plan's limit blocks further data source uploads until files have been deleted. We provide some leeway for website scrapes, since their size is difficult to determine until the scrape completes.
How many tokens will my agents consume?
Token usage varies with the agent type and the content of the data source files. The AI Assistant typically consumes the most tokens, while the Recommendation Strip consumes the least. For certain agent types, we cache data for a period of time, which reduces token usage and improves response time.
Data Sources
Why did my data source upload fail?
There can be various reasons, including third-party service interruptions. We retry a data source upload at most once before marking it as failed. If the file contains binary data or is otherwise unprocessable, make sure it is one of the accepted file types: TXT, JSON, PDF, or CSV.
Can I upload PDF files?
Yes, you can! However, it's not advised unless the PDF is text-heavy. PDFs often include extra formatting information that is not ideal for LLM consumption and can inflate token usage.
How should I format my data source files?
We provide formatting tips on the Data Source landing page and in our documentation. Adding the necessary metadata helps ensure that your agent is successful. Most importantly, remove any information that isn't needed, as it inflates your token usage and can distract the agent.
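For instance, a structured JSON data source might look like the sketch below. The field names here are purely illustrative, not a schema BadgerFy.ai requires; see the documentation for the recommended metadata.

```ts
// Illustrative shape for a JSON data source; all field names are hypothetical.
// Serialize with JSON.stringify before saving it as a .json upload.
const shippingFaq = {
  metadata: {
    title: "Shipping policy",      // helps the agent locate relevant content
    category: "support",
    lastUpdated: "2024-06-01",
  },
  entries: [
    {
      question: "How long does shipping take?",
      answer: "Orders ship within 2 business days; delivery takes 3 to 7 days.",
    },
  ],
};
```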
Can the data source size limit be raised?
We may raise this limit in the future, but to maintain consistent performance across all agents, we keep it at a reasonable level. Additionally, the LLMs themselves have limited context windows, so keeping data sources focused benefits response quality.
What file types do you support?
We support Plain Text (.txt), JSON (.json), CSV (.csv), and PDF (.pdf) files. The maximum file size for a single upload is 10MB.
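If you validate files client-side before uploading, a minimal browser-side check against these documented limits might look like this sketch (the helper name is ours, not part of any BadgerFy.ai SDK):

```ts
// Minimal pre-upload check against the documented limits:
// accepted extensions and a 10MB cap per file.
const ACCEPTED = [".txt", ".json", ".csv", ".pdf"];
const MAX_BYTES = 10 * 1024 * 1024; // 10MB

function validateDataSource(file: File): string | null {
  const dot = file.name.lastIndexOf(".");
  const ext = dot === -1 ? "" : file.name.slice(dot).toLowerCase();
  if (!ACCEPTED.includes(ext)) return `Unsupported file type: ${ext || "none"}`;
  if (file.size > MAX_BYTES) return `File exceeds 10MB (${file.size} bytes)`;
  return null; // OK to upload
}
```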
Agents & Configuration
Can I move an agent to a different project?
At this time, this is a manual process on your part. We don't provide automated migrations between projects.
Why is my agent giving incorrect or unexpected responses?
This can usually be solved by refining the AI instructions, fixing the data source file content, or ensuring that the script tag has been implemented correctly. Check our documentation for troubleshooting tips.
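To verify the script tag step, compare your page against a typical embed. The sketch below is illustrative only; the real snippet, including the script URL and attribute names, comes from your agent's dashboard:

```ts
// Illustrative embed loader; the URL and data attribute are placeholders,
// not BadgerFy.ai's actual snippet. Copy the real one from your dashboard.
function loadAgent(agentId: string): void {
  const script = document.createElement("script");
  script.src = "https://cdn.badgerfy.ai/agent.js"; // placeholder URL
  script.async = true;
  script.dataset.agentId = agentId;                // placeholder attribute
  document.body.appendChild(script);
}
```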
Can I test my agent before deploying it?
All agents have a built-in test environment accessible from the dashboard. The only thing you can't test there is tool calls. For critical use cases, test your agent in a development or staging environment first; this is the main purpose of having separate projects, so you can verify changes before they go live.
How often will the agent auto-open for visitors?
We limit auto-loading to once every 48 hours per user session, so visitors aren't repeatedly annoyed by the agent opening on every page visit.
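Conceptually, the cooldown behaves like the sketch below, which stores a timestamp in the browser and skips auto-opening within the window. This is a simplified illustration of the behavior, not our actual implementation:

```ts
// Sketch of a 48-hour auto-open cooldown using localStorage.
// Simplified illustration only; not BadgerFy.ai's code.
const COOLDOWN_MS = 48 * 60 * 60 * 1000;

function shouldAutoOpen(): boolean {
  const last = Number(localStorage.getItem("agentLastAutoOpen") ?? 0);
  if (Date.now() - last < COOLDOWN_MS) return false; // still cooling down
  localStorage.setItem("agentLastAutoOpen", String(Date.now()));
  return true;
}
```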
Technical & API
Can I track conversions from my agents?
Yes! Conversion tracking is available on the Pro and Business plans. Please see our documentation for details on how to track conversions.
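As a rough illustration only: conversion tracking typically means calling a function exposed by the embed script when the conversion happens. The function and event names below are hypothetical; use the actual call documented in our guides:

```ts
// Hypothetical conversion call; the real global, function name, and
// arguments are defined in the BadgerFy.ai documentation.
declare global {
  interface Window {
    badgerfy?: { trackConversion: (event: string) => void };
  }
}

document.querySelector("#checkout-button")?.addEventListener("click", () => {
  window.badgerfy?.trackConversion("checkout_completed"); // placeholder event
});

export {};
```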
Can I keep my data sources updated automatically?
The Consumer API has endpoints with which a developer could build a job that keeps your data updated on a regular cadence. Please see our documentation for details on the available API endpoints.
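As a sketch of the idea, a scheduled job could re-push fresh content to a data source endpoint. The endpoint path, auth scheme, and payload below are placeholders; consult the API documentation for the real contract:

```ts
// Hypothetical scheduled refresh via the Consumer API. The URL, auth
// header, and payload shape are placeholders, not the documented contract.
async function refreshDataSource(
  apiKey: string,
  dataSourceId: string,
  content: string,
): Promise<void> {
  const res = await fetch(
    `https://api.badgerfy.ai/v1/data-sources/${dataSourceId}`, // placeholder
    {
      method: "PUT",
      headers: {
        Authorization: `Bearer ${apiKey}`, // placeholder auth scheme
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ content }),
    },
  );
  if (!res.ok) throw new Error(`Refresh failed: ${res.status}`);
}

// Run from a cron job (e.g. nightly) to keep the agent's data current.
```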
Can I trigger website scrapes through the API?
We do not currently offer a way to programmatically initiate website scraping jobs via the Consumer API. Website scrapes must be started from the dashboard.
Support & Service
What support is included with my plan?
Guaranteed support is not included with any plan, but feel free to email us and we'll make our best effort to answer your questions promptly.
What uptime can I expect?
We're a new platform, so we haven't established an average uptime yet, but we aim for consistent availability. If the app goes down for routine maintenance, we temporarily disable all deployed agents to prevent disruptions for your customers.
Still have questions?
Check out our documentation for detailed guides, or reach out to our team directly.