BadgerFy.ai Docs

Consumer API

The BadgerFy.ai Consumer API provides programmatic access to key platform functionality, primarily automating data source uploads. This lets developers build custom integrations and keep the knowledge bases behind their AI agents up to date.

📋 Plan Requirement

The Consumer API is available on Pro and Business plans only. Basic plan subscribers can upgrade to access automated data source updates via API.

🔐 API Key Security is Paramount!

Never expose your API keys in client-side (frontend) code or commit them directly to your codebase. Treat them as highly sensitive credentials: store them as secure environment variables and rotate them regularly.
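One way to follow this practice, sketched in Python. The environment variable name BADGERFY_API_KEY and the bearer-token header are assumptions for illustration, not documented conventions; confirm the actual authentication scheme in the Swagger documentation.

```python
import os

def load_api_key() -> str:
    # Read the key from the environment rather than hard-coding it.
    # BADGERFY_API_KEY is an assumed variable name; use whatever your
    # deployment convention dictates.
    key = os.environ.get("BADGERFY_API_KEY")
    if not key:
        raise RuntimeError("BADGERFY_API_KEY is not set")
    return key

def auth_headers() -> dict:
    # Attach the key as a bearer token; verify the real auth scheme
    # against the Swagger documentation.
    return {"Authorization": f"Bearer {load_api_key()}"}
```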

Available Endpoints

The Consumer API provides the following endpoints for managing data source files:

File Endpoints

  • GET /datasources/{datasourceId}/files - List all files in a datasource
  • GET /datasources/{datasourceId}/files/by-name/{name} - Get a file by its reference name
  • POST /datasources/{datasourceId}/files - Upload a new file
  • PUT /datasources/{datasourceId}/files/{fileId} - Update an existing file
  • DELETE /datasources/{datasourceId}/files/{fileId} - Delete a file

Order Data Endpoints

For datasources configured with format order-data (used with Recommendation Strips):

  • POST /datasources/{datasourceId}/orders - Create order data
  • PUT /datasources/{datasourceId}/orders/{fileId} - Update order data

Product-Promo Endpoints

For datasources configured with format product-promo (used with Nudge and Exit-Offer agents):

  • POST /datasources/{datasourceId}/product-promo - Create product-promo data
  • PUT /datasources/{datasourceId}/product-promo/{fileId} - Update product-promo data

Job Endpoints

  • GET /jobs/{jobId} - Check processing job status
  • GET /datasources/{datasourceId}/jobs - List all jobs for a datasource

Creating a New Data Source File

To upload a new file to your data source, use the POST endpoint:

  1. Upload the File: Send a POST request to /datasources/{datasourceId}/files with your file and a unique reference name.
  2. Name Tokenization: The name you provide will be automatically tokenized to be URL-safe (e.g., "My Product Catalog 2024" becomes "my-product-catalog-2024").
  3. Track Processing: The response includes a jobId. Use GET /jobs/{jobId} to poll for status.
  4. Completion: When the job status changes to completed, the response includes the dataSourceFileId and name of the created file. The file is immediately available to your AI agents.
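If you want to predict the stored name before uploading, the tokenization in step 2 can be approximated locally. This is a sketch of the assumed rule (lowercase, runs of non-alphanumeric characters collapsed to single hyphens); the platform's exact algorithm may differ:

```python
import re

def tokenize_name(name: str) -> str:
    # Lowercase, replace runs of non-alphanumeric characters with a
    # single hyphen, and trim leading/trailing hyphens.
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
```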
💡 Tip:

Store your file's reference name in your configuration. You can later use GET /datasources/{datasourceId}/files/by-name/{name} to retrieve the file ID without having to list all files.

Updating an Existing Data Source File

To replace an existing file with a new version, use the PUT endpoint. This is the recommended approach for keeping your data sources synchronized:

  1. Get the File ID: Use GET /datasources/{datasourceId}/files/by-name/{name} to retrieve the current file's ID using its reference name.
  2. Upload the Replacement: Send a PUT request to /datasources/{datasourceId}/files/{fileId} with your new file.
  3. Automatic Handling: The system will:
    • Process the new file
    • Keep the original file active during processing (zero downtime)
    • Only delete the old file after the new one is successfully processed
    • Preserve the original reference name
  4. Track Processing: Use the returned jobId to monitor the update progress.
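The lookup-then-update flow above might look like the following sketch. BASE_URL, the bearer-token header, and the `id` and `jobId` response field names are assumptions; the payload format (multipart vs. raw body) also depends on the endpoint's content type, so confirm all of these against the Swagger documentation.

```python
import json
import urllib.request

BASE_URL = "https://api.badgerfy.ai"  # assumed base URL

def file_by_name_url(datasource_id: str, name: str) -> str:
    return f"{BASE_URL}/datasources/{datasource_id}/files/by-name/{name}"

def update_file_url(datasource_id: str, file_id: str) -> str:
    return f"{BASE_URL}/datasources/{datasource_id}/files/{file_id}"

def get_json(url: str, api_key: str) -> dict:
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {api_key}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def update_file(datasource_id: str, name: str,
                payload: bytes, api_key: str) -> str:
    # 1. Resolve the file ID from its reference name
    #    ("id" is an assumed response field).
    file_info = get_json(file_by_name_url(datasource_id, name), api_key)
    # 2. PUT the replacement; the old file stays active until the
    #    new one is processed.
    req = urllib.request.Request(
        update_file_url(datasource_id, file_info["id"]),
        data=payload,
        method="PUT",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        # 3. Poll this job ID via GET /jobs/{jobId}.
        return json.load(resp)["jobId"]
```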
✅ Benefits of Using Update:

The update endpoint ensures zero downtime during file replacements. Your AI agents continue using the existing data until the new version is fully processed and ready.

Order Data for Recommendation Strips

If you're using Recommendation Strip agents, you can upload order/purchase history data directly via the API. Order data is processed differently from regular files—it bypasses embedding generation and is stored as structured JSON for fast product lookups.

📋 Prerequisite:

Order data can only be uploaded to datasources created with useOrderData: true. Regular file uploads are blocked on order data datasources, and vice versa.

Order Data Format

Send a JSON body with name (reference name) and orders array:

{
  "name": "order-history-2024",
  "orders": [
    {
      "orderId": "ORD-12345",
      "orderDate": "2024-01-15",
      "metadata": { "source": "api" },
      "products": [
        {
          "displayName": "Wireless Mouse",
          "productPath": "/products/wireless-mouse",
          "category": "Electronics",
          "productCode": "WM-001",
          "sku": "SKU-12345",
          "thumbUrl": "https://example.com/images/mouse.jpg",
          "purchasePrice": "29.99"
        }
      ]
    }
  ]
}

Creating Order Data

Send a POST request to /datasources/{datasourceId}/orders:

  1. Include both name and orders in the JSON body
  2. Each order must have orderId, orderDate, and at least one product
  3. Each product requires: displayName, productPath, category, productCode, sku, thumbUrl, and purchasePrice
  4. The response includes a jobId for tracking and validation stats
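A lightweight pre-flight check against the required fields listed above can catch malformed payloads before you spend an API call. A sketch (client-side only; the server performs its own validation):

```python
REQUIRED_PRODUCT_FIELDS = {
    "displayName", "productPath", "category",
    "productCode", "sku", "thumbUrl", "purchasePrice",
}

def validate_order_payload(payload: dict) -> list[str]:
    # Return a list of problems; an empty list means the payload looks OK.
    errors = []
    if not payload.get("name"):
        errors.append("missing top-level 'name'")
    for i, order in enumerate(payload.get("orders", [])):
        for field in ("orderId", "orderDate"):
            if not order.get(field):
                errors.append(f"orders[{i}]: missing '{field}'")
        products = order.get("products", [])
        if not products:
            errors.append(f"orders[{i}]: needs at least one product")
        for j, product in enumerate(products):
            for field in sorted(REQUIRED_PRODUCT_FIELDS - product.keys()):
                errors.append(f"orders[{i}].products[{j}]: missing '{field}'")
    return errors
```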

Updating Order Data

To replace existing order data, use PUT /datasources/{datasourceId}/orders/{fileId}:

  1. Get the file ID using GET /datasources/{datasourceId}/files/by-name/{name}
  2. Send the updated orders array in the request body
  3. The original data remains active until the new data is processed (zero downtime)
⚡ Fast Processing:

Order data processing is significantly faster than regular file uploads since it skips embedding generation. Your recommendation strips can access the new data within seconds of upload completion.

Product-Promo Data for Nudge & Exit-Offer Agents

If you're using Nudge or Exit-Offer agents, you can upload product and promotion data directly via the API. This data powers personalized promotional messages and exit intent offers.

📋 Prerequisite:

Product-promo data can only be uploaded to datasources created with format product-promo. Regular file uploads are blocked on product-promo datasources, and vice versa.

⚠️ API Limitations:

The API accepts a single pre-formatted JSON file at a time. For uploading multiple files with AI-powered field mapping (CSV, JSON from different sources like Shopify exports), use the BadgerFy.ai dashboard to upload and process files.

Product-Promo Data Format

Send a JSON body with name (reference name) and records array. Each record represents one product with a variants array (at least one variant per product). Do not flatten variants into separate records—all SKUs for a product go in that product's variants array.

{
  "name": "summer-sale-promos",
  "records": [
    {
      "productPath": "/products/wireless-headphones",
      "productName": "Premium Wireless Headphones",
      "category": "Electronics",
      "productCode": "WH-1000",
      "highPrice": "149.99",
      "lowPrice": "149.99",
      "productThumbUrl": "https://example.com/images/headphones.jpg",
      "metadata": { "brand": "TechAudio" },
      "variants": [
        {
          "variantName": "Premium Wireless Headphones - Black",
          "variantPath": "/products/wireless-headphones?variant=blk",
          "variantPrice": "149.99",
          "sku": "WH-1000-BLK"
        }
      ],
      "promotions": [
        {
          "promotionType": "percent",
          "promotionValue": 20,
          "promotionText": "Summer Sale - 20% Off!",
          "discountPrice": "119.99",
          "discountCode": "SUMMER20",
          "ctaText": "Shop Now",
          "ctaLink": "/products/wireless-headphones?promo=summer20",
          "validFrom": "2024-06-01T00:00:00Z",
          "validUntil": "2024-08-31T23:59:59Z"
        }
      ]
    },
    {
      "productPath": "/products/running-shoes",
      "productName": "Ultra Comfort Running Shoes",
      "category": "Footwear",
      "productCode": "RS-500",
      "highPrice": "89.99",
      "lowPrice": "89.99",
      "variants": [
        {
          "variantName": "Ultra Comfort Running Shoes - Size 10",
          "variantPath": "/products/running-shoes",
          "variantPrice": "89.99",
          "sku": "RS-500-10"
        }
      ],
      "promotions": [
        {
          "promotionType": "dollar",
          "promotionValue": 15,
          "promotionText": "$15 Off All Running Shoes",
          "discountPrice": "74.99"
        },
        {
          "promotionType": "shipping",
          "promotionText": "Free Shipping on Orders $50+",
          "ctaLink": "/shipping-policy",
          "ctaText": "Learn More"
        }
      ]
    },
    {
      "productPath": "/products/fitness-bundle",
      "productName": "Complete Fitness Bundle",
      "category": "Fitness",
      "productCode": "FB-PRO-001",
      "highPrice": "299.99",
      "lowPrice": "299.99",
      "productThumbUrl": "https://example.com/images/fitness-bundle.jpg",
      "variants": [
        {
          "variantName": "Complete Fitness Bundle",
          "variantPath": "/products/fitness-bundle",
          "variantPrice": "299.99",
          "sku": "FB-PRO-001"
        }
      ],
      "promotions": [
        {
          "promotionType": "bundle",
          "promotionValue": 50,
          "promotionText": "Save $50 when you buy the complete set!",
          "discountPrice": "249.99",
          "ctaText": "Get Bundle Deal",
          "ctaLink": "/products/fitness-bundle#bundle-offer"
        }
      ]
    }
  ]
}

Product Record Fields

Required fields (product level):

  • productPath - URL path to product page (e.g., "/products/wireless-headphones", "/product/my-item")
  • productName - Base product name (no variant attributes)
  • category - Product category for grouping and filtering
  • productCode - Parent/main SKU identifier tying variants together
  • highPrice - Highest variant price as a string (e.g., "149.99")
  • lowPrice - Lowest variant price as a string (e.g., "99.99")
  • variants - Array of at least one variant object (see below)

Required fields (each variant in variants):

  • variantName - Product name plus variant attributes (e.g., "Headphones - Black / Large")
  • variantPath - Full path to this variant (e.g., product path + query params)
  • variantPrice - Price for this variant as a string (e.g., "99.99")
  • sku - Variant SKU identifier

Optional fields (product level):

  • productThumbUrl - URL to product thumbnail image
  • description - Short product description (plain text, up to 300 characters)
  • promotions - Array of promotion objects
  • metadata - Optional object for custom fields (brand, tags, etc.)

Optional fields (variant level):

  • variantThumbUrl - Thumbnail URL for this variant
  • metadata - Variant-specific options (e.g., color, size)

Promotion Object Fields

The promotions array is optional at the product level, but a product without promotions gives the agents nothing to display, so include at least one where possible. Within a promotion object, all fields are optional; include enough information for a meaningful display:

  • promotionType - Type of promotion. Valid values: percent, dollar, shipping, bundle, bogo, quantity, gift, rebate, coupon, sale, clearance, flash, other
  • promotionValue - Numeric discount value (e.g., 20 for 20% off or $20 off depending on type)
  • promotionText - Human-readable promotion message shown to users
  • discountPrice - Final price after discount as a string (e.g., "79.99")
  • discountCode - Coupon code the customer can use at checkout
  • ctaLink - Call-to-action link URL (e.g., product page, promo landing page)
  • ctaText - Call-to-action button text (e.g., "Shop Now", "Learn More")
  • validFrom - Promotion start date in ISO 8601 format
  • validUntil - Promotion end date in ISO 8601 format
💡 Multiple Promotions Per Product:

A single product can have multiple promotions. For example, a product might have both a percentage discount and free shipping. The Nudge/Exit-Offer agent will intelligently select the most relevant promotion to display based on context.

Creating Product-Promo Data

Send a POST request to /datasources/{datasourceId}/product-promo:

  1. Include both name and records in the JSON body
  2. Each record must have productPath, productName, category, productCode, highPrice, lowPrice, and a variants array with at least one variant (each variant needs variantName, variantPath, variantPrice, sku)
  3. The response includes the dataSourceFileId and recordCount
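As with order data, a quick client-side check of the required fields in step 2 can save a round trip. A sketch (client-side only; the server performs its own validation):

```python
def validate_record(record: dict) -> list[str]:
    # Return a list of problems; an empty list means the record looks OK.
    errors = []
    for field in ("productPath", "productName", "category",
                  "productCode", "highPrice", "lowPrice"):
        if not record.get(field):
            errors.append(f"missing '{field}'")
    variants = record.get("variants", [])
    if not variants:
        errors.append("needs at least one variant")
    for i, variant in enumerate(variants):
        for field in ("variantName", "variantPath", "variantPrice", "sku"):
            if not variant.get(field):
                errors.append(f"variants[{i}]: missing '{field}'")
    return errors
```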

Updating Product-Promo Data

To replace existing product-promo data, use PUT /datasources/{datasourceId}/product-promo/{fileId}:

  1. Get the file ID using GET /datasources/{datasourceId}/files/by-name/{name}
  2. Send the updated records array in the request body
  3. All existing records are replaced with the new records
⚡ Instant Updates:

Product-promo data is stored directly without queue processing, making updates nearly instantaneous. Your Nudge and Exit-Offer agents immediately see the new promotions.

Example Automation Script Flow

Here's a typical automation flow for keeping your data synchronized:

  1. Configuration: Store your datasource ID and file reference name in environment variables or a config file.
  2. Check for Existing File: Call GET /datasources/{datasourceId}/files/by-name/{name}.
  3. Create or Update:
    • If file exists → Use PUT to update it
    • If file doesn't exist → Use POST to create it
  4. Poll for Completion: Check GET /jobs/{jobId} until the job completes or fails.
  5. Schedule: Run this on a cron schedule (e.g., daily) to keep your AI agents up-to-date.
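Putting the steps together, a minimal synchronization script might look like this sketch. BASE_URL, the bearer-token header, and the response field names (id, jobId, status) are assumptions drawn from this page; confirm them against the Swagger documentation before relying on them.

```python
import json
import time
import urllib.error
import urllib.request

BASE_URL = "https://api.badgerfy.ai"  # assumed base URL

def request_json(method: str, url: str, api_key: str, data=None) -> dict:
    req = urllib.request.Request(
        url, data=data, method=method,
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def choose_method(file_exists: bool) -> str:
    # Step 3: update in place when the file exists, create otherwise.
    return "PUT" if file_exists else "POST"

def sync_file(datasource_id: str, name: str,
              payload: bytes, api_key: str) -> dict:
    base = f"{BASE_URL}/datasources/{datasource_id}/files"
    try:
        # Step 2: look up the file by its reference name
        # (assumes a 404 when the file does not exist).
        existing = request_json("GET", f"{base}/by-name/{name}", api_key)
    except urllib.error.HTTPError as err:
        if err.code != 404:
            raise
        existing = None
    # Step 3: create or update.
    url = f"{base}/{existing['id']}" if existing else base
    result = request_json(choose_method(existing is not None),
                          url, api_key, payload)
    # Step 4: poll until the job reaches a terminal state.
    while True:
        job = request_json("GET", f"{BASE_URL}/jobs/{result['jobId']}",
                           api_key)
        if job["status"] in ("completed", "failed"):
            return job
        time.sleep(5)
```

Run a script like this from a cron job (step 5) to keep the data source in sync on a schedule.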

API Usage Guidelines

  • Upload Limits: The same upload limits (e.g., 10MB per file) apply to API uploads as they do to manual uploads through the dashboard.
  • One Job at a Time: Only one file can be processed per datasource at a time. Check GET /datasources/{datasourceId}/jobs before uploading to ensure no jobs are in progress.
  • Scraped Files: Files created via website scraping cannot be updated via the API. Delete and re-scrape using the dashboard instead.
  • Management: You can manage and view all uploaded files and data sources on the BadgerFy.ai data source page in your dashboard.
  • No Website Scraping API: Currently, we do not offer an API endpoint for programmatically initiating website scraping jobs.
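The one-job-at-a-time rule above can be enforced client-side: fetch the job list via GET /datasources/{datasourceId}/jobs and check it before uploading. A sketch (the terminal status values completed and failed are taken from this page; any other values are assumed to mean the job is still running):

```python
def has_active_job(jobs: list[dict]) -> bool:
    # Treat anything not in a terminal state as still in progress.
    terminal = {"completed", "failed"}
    return any(job.get("status") not in terminal for job in jobs)
```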

API Reference

For detailed information on all available endpoints, request/response formats, and authentication methods, please refer to our Swagger documentation:

View API Documentation