CSV import

Every list-shaped entity in BottleCRM (leads, contacts, accounts) supports CSV import through a dedicated upload endpoint. The same pattern handles small ad-hoc batches and bulk migrations of tens of thousands of rows.

Endpoints

POST /api/leads/upload/
POST /api/contacts/upload/
POST /api/accounts/upload/

All three accept multipart/form-data with a file field.
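A minimal upload sketch using only the Python standard library. The base URL and the absence of auth headers are assumptions for illustration; the multipart body is built by hand so nothing beyond the stdlib is needed.

```python
# Sketch: POST a CSV as multipart/form-data with a "file" field.
# The URL is a placeholder; real requests will also need your auth header.
import uuid
import urllib.request

def build_multipart(field_name: str, filename: str, data: bytes):
    """Build a multipart/form-data body containing a single file field."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        "Content-Type: text/csv\r\n\r\n"
    ).encode() + data + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

csv_bytes = b"email,first_name,last_name\njane@acme.com,Jane,Doe\n"
body, content_type = build_multipart("file", "leads.csv", csv_bytes)

req = urllib.request.Request(
    "https://crm.example.com/api/leads/upload/",  # hypothetical base URL
    data=body,
    headers={"Content-Type": content_type},
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted here.
```

Any HTTP client works the same way, as long as the file lands in the `file` form field.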

Required headers

Each upload endpoint accepts a flexible set of CSV columns. The minimum:

Entity     Required headers
Leads      email (or first_name + last_name if no email)
Contacts   email
Accounts   name

Any other column whose header matches a serializer field is imported. Custom-field values can be imported by prefixing the header with cf_ (e.g. cf_industry).
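A quick client-side sanity check of the headers before uploading saves a round trip. This is a sketch based on the minimums in the table above; the `REQUIRED` mapping is assumed from that table, not an official list.

```python
# Check that a CSV's header row satisfies the minimum columns for an entity.
import csv
import io

# Each entity maps to alternative sets of headers; any one set suffices.
REQUIRED = {
    "leads": [{"email"}, {"first_name", "last_name"}],
    "contacts": [{"email"}],
    "accounts": [{"name"}],
}

def headers_ok(entity: str, csv_text: str) -> bool:
    headers = set(next(csv.reader(io.StringIO(csv_text))))
    return any(required <= headers for required in REQUIRED[entity])

headers_ok("leads", "first_name,last_name,cf_industry\nJane,Doe,saas\n")  # True
headers_ok("accounts", "email\nx@y.com\n")  # False: accounts need name
```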

How matching works

  • Leads dedupe on email within the org — duplicates are updated, not re-created.
  • Contacts match on email. The account_name column is fuzzy-matched against existing accounts; near-misses go into a review queue.
  • Accounts match on name first, then fall back to email domain.
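The leads rule can be pictured as a simple upsert keyed on email. This is a local simulation of that behavior, not the server's implementation:

```python
# Simulate the leads dedupe rule: a row whose email already exists updates
# the existing record; otherwise a new record is created.
def import_leads(existing: dict, rows: list) -> tuple:
    created, updated = [], []
    for row in rows:
        key = row.get("email")
        if key in existing:
            existing[key].update(row)  # duplicate: update, don't re-create
            updated.append(key)
        else:
            existing[key] = dict(row)
            created.append(key)
    return created, updated
```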

Response shape

{
  "created": [{ "row": 1, "id": "ab12-…" }, …],
  "updated": [{ "row": 3, "id": "cd34-…" }, …],
  "skipped": [{ "row": 5, "reason": "duplicate" }],
  "errors": [
    { "row": 7, "field": "email", "message": "Enter a valid email address." }
  ],
  "total": 1000
}

The endpoint never fails the whole batch because of a few bad rows — every row is reported individually so you can fix and re-upload only the broken ones.
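Because rows are reported individually, a re-upload only needs the failing rows. A small helper can pull the relevant row numbers out of the response; treating non-duplicate skips as fixable is an assumption here:

```python
# Collect row numbers worth fixing and re-uploading from an import response.
def rows_to_fix(resp: dict) -> list:
    bad = [e["row"] for e in resp.get("errors", [])]
    # Duplicate skips need no action; other skip reasons may be fixable.
    bad += [s["row"] for s in resp.get("skipped", []) if s["reason"] != "duplicate"]
    return sorted(set(bad))

resp = {
    "created": [{"row": 1, "id": "ab12"}],
    "skipped": [{"row": 5, "reason": "duplicate"}],
    "errors": [{"row": 7, "field": "email", "message": "Enter a valid email address."}],
    "total": 3,
}
rows_to_fix(resp)  # [7]
```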

Performance

Uploads use a generator-style parser and batched ORM bulk_create / bulk_update calls. A 10,000-row leads import typically takes ~30 seconds on a single-core API container.

For files larger than 5 MB, the endpoint queues the import as a Celery job and returns:

{ "task_id": "imp_2c…", "status": "queued" }

Poll GET /api/imports/<task_id>/ for progress until status is done or failed.
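A polling loop for queued imports might look like this. `fetch_status` stands in for a GET to `/api/imports/<task_id>/` and is injected as a callable so the loop logic is self-contained; the interval and timeout values are arbitrary:

```python
# Poll a queued import until it reaches a terminal status.
import time

def wait_for_import(fetch_status, task_id, interval=2.0, timeout=600.0):
    """fetch_status(task_id) -> dict with a "status" key, e.g. an HTTP GET."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(task_id)["status"]
        if status in ("done", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"import {task_id} still running after {timeout}s")
```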

Custom fields

If your CSV has columns for custom fields, prefix them with cf_:

email,first_name,last_name,cf_industry,cf_bant_score
jane@acme.com,Jane,Doe,saas,72

Values are validated against the org's active definitions exactly like an API write — invalid dropdown values or out-of-range numbers land in errors, not in the record.

Tips for big migrations

  1. Test with 50 rows first — confirms your column mapping is right before you upload the full file.
  2. Split files over 100 MB — easier to retry, easier to debug.
  3. Run after hours — the parser is single-threaded; concurrent imports will queue.
  4. Keep the original CSV — the response references rows by index; you'll want it to investigate errors.
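Tip 2 is easy to script: split the CSV into fixed-size chunks, repeating the header row in each so every file is independently importable. A stdlib sketch:

```python
# Split a CSV into chunks of N data rows, each with its own header row.
import csv
import io

def split_csv(csv_text: str, rows_per_chunk: int) -> list:
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    rows = list(reader)
    chunks = []
    for i in range(0, len(rows), rows_per_chunk):
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(header)  # repeat header so the chunk stands alone
        writer.writerows(rows[i:i + rows_per_chunk])
        chunks.append(buf.getvalue())
    return chunks
```

Note that splitting by row count rather than byte size keeps rows intact; pick a count that lands each chunk comfortably under the 5 MB synchronous threshold if you want immediate responses.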

See also

  • Custom fields — defining the cf_* columns you can import.
  • Errors — full error response shape.