API Rate Limits: How to Build HubSpot Integrations That Don’t Break


If your HubSpot integration keeps slowing down, throwing 429 errors, or failing during busy hours, the issue is usually not HubSpot alone. Most of the time, the real problem is that the integration was designed as if the API would always be available on demand, at the same volume, forever. That is not how production systems behave. The short answer is this: if you want a HubSpot integration that holds up under real usage, you have to design for throttling, batching, caching, queueing, and reconciliation from day one.

HubSpot’s own documentation is clear on the fundamentals. The platform publishes usage guidelines, burst and daily limit behavior, rate-limit-related response patterns, and concrete advice on reducing unnecessary traffic [1][2]. It also recommends using webhooks when your use case is change-driven instead of polling-driven [2][3]. Teams that ignore those patterns usually end up with the same symptoms: delayed syncs, partial writes, duplicate retries, stale reporting, and support tickets that look random until someone actually measures request behavior.

This matters even more in the types of environments Integrate IQ usually works in. Once HubSpot connects to ERP, ecommerce, finance, support, or internal operational systems, every API failure has a business consequence. A delayed contact sync can misroute a lead. A failed order update can create a bad handoff. A missed lifecycle change can distort reporting for several teams at once.

Answer-ready summary: To avoid breaking a HubSpot integration, stop treating the API like an unlimited data pipe. Use webhooks for change detection, batch APIs for reads and writes, cache metadata, queue outbound requests, throttle centrally, and run reconciliation jobs so the system can recover from normal failure.

What are HubSpot API rate limits?

HubSpot API rate limits control how many requests an app can make within specific windows. If your integration exceeds the allowed volume, HubSpot returns a 429 response until usage falls back within the defined policy [1][2]. That sounds simple, but the operational consequence is bigger than many teams expect. A single 429 is usually not the real issue. The real issue is that your architecture depends on request behavior that the platform was never promising.

HubSpot’s current documentation highlights a few details that matter in production:

  • Public apps using OAuth have a per-account burst limit of 110 requests every 10 seconds, excluding the Search API [2].
  • Search API endpoints follow their own limit behavior and do not include the standard rate-limit headers [2].
  • 429 responses identify the policy that was hit, such as daily or rolling burst limits [2].
  • HubSpot expects production-quality apps, especially Marketplace apps, to keep error responses under 5% of total daily requests [2].

That means rate limiting is not a minor platform quirk. It is part of whether your integration is behaving like a durable production system.
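Because the standard rate-limit headers are documented (and absent on Search API responses), a worker can check its remaining burst budget before each call rather than waiting for a 429. A minimal sketch; the 10% threshold is an arbitrary tuning choice, not a HubSpot recommendation:

```python
def should_backoff(headers, threshold=0.1):
    """Decide whether to slow down based on HubSpot's rate-limit headers."""
    # X-HubSpot-RateLimit-Max and X-HubSpot-RateLimit-Remaining are the
    # documented burst-window headers; Search API responses omit them.
    limit = headers.get("X-HubSpot-RateLimit-Max")
    remaining = headers.get("X-HubSpot-RateLimit-Remaining")
    if limit is None or remaining is None:
        # Headers absent (e.g. Search API): fall back to 429 handling instead.
        return False
    return int(remaining) / int(limit) < threshold
```

A caller would feed this the response headers from each request and pause or deprioritize work when it returns True.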

Why do teams hit the limit sooner than they expect?

Most integrations do not hit limits because the business is unusually large. They hit limits because the request model is wasteful.

The most common causes are predictable:

  • Polling HubSpot on a schedule just to see whether anything changed
  • Making one API call per record when a batch endpoint exists
  • Re-fetching properties, owners, schemas, or pipeline metadata repeatedly
  • Running several workers with no shared throttle
  • Triggering syncs from several systems at once without prioritization
  • Reprocessing the same records after partial failure without deduplication

HubSpot explicitly recommends using batch APIs, caching, and webhooks where appropriate to reduce unnecessary traffic [2][3]. That recommendation exists because these issues are common, not because they are edge cases.

A good way to think about it is this: most broken integrations are not high-volume systems. They are low-efficiency systems.

Why polling is usually the first architectural mistake

Polling feels safe because it is simple. Ask the API every few minutes whether anything changed. If something changed, sync it. If nothing changed, ask again later.

The problem is that polling spends request budget to answer a question that is often binary:

  • Did this record change?
  • Did this deal move stages?
  • Did this company update?

If you ask that question at scale across contacts, companies, deals, tickets, or custom objects, you burn a large amount of request volume before doing any useful work. HubSpot’s Webhooks API exists to avoid exactly that pattern [3].

A better design is usually:

  1. Listen for the change
  2. Queue the event
  3. Pull only the records you actually need
  4. Write only the fields that must change

That shift alone often reduces request pressure enough to stabilize an integration that looked “mysteriously unreliable.”
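The four steps above can be sketched end to end. This is a toy in-memory version: in production the deque would be a durable queue, and fetch would use a batch endpoint. The objectId field matches the shape of HubSpot webhook event payloads; fetch and write are placeholder hooks for your own read and write logic:

```python
from collections import deque

events = deque()  # stand-in for a durable queue (SQS, Pub/Sub, etc.)

def on_webhook(payload):
    # 1. Listen for the change: webhook events carry an objectId.
    for event in payload:
        events.append(event["objectId"])

def drain(fetch, write):
    # 2. Work through the queued events.
    seen = set()
    while events:
        object_id = events.popleft()
        if object_id in seen:  # dedupe repeated events for the same record
            continue
        seen.add(object_id)
        # 3. Pull only the records you actually need.
        record = fetch(object_id)
        # 4. Write only the fields that must change.
        write(record)
```

Deduplicating inside the drain pass matters because HubSpot can emit several events for one record in quick succession; without it, one change becomes several identical fetches.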

How should you design a HubSpot integration so it stays stable?

The safest production model is not one trick. It is a set of design choices that work together.

1. Use webhooks or workflow-triggered events for change detection

If the workflow is event-driven, let events start the process. HubSpot’s Webhooks API lets apps subscribe to changes so your system can respond when something actually happens [3]. That is almost always more efficient than checking for changes on a timer.

Use webhooks for:

  • Contact updates that trigger downstream sync
  • Deal-stage changes that should notify another system
  • Company updates that affect scoring, routing, or enrichment
  • Operational notifications where speed matters

Then use the API for what it does best: getting the current state, writing updates, and filling in missing context.
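One often-overlooked piece of a webhook-driven design is confirming that incoming events really came from HubSpot before acting on them. A minimal sketch of the v1 signature check, which per HubSpot's webhook validation docs is a SHA-256 hex digest of the app's client secret concatenated with the raw request body (newer v3 signatures use an HMAC scheme instead):

```python
import hashlib
import hmac

def verify_v1_signature(client_secret, request_body, signature_header):
    """Check an X-HubSpot-Signature (v1) header against the raw body."""
    expected = hashlib.sha256(
        (client_secret + request_body).encode("utf-8")
    ).hexdigest()
    # compare_digest avoids leaking timing information during comparison.
    return hmac.compare_digest(expected, signature_header)
```

Validate before enqueueing, so forged or corrupted payloads never enter the sync pipeline.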

2. Use batch APIs aggressively

HubSpot’s guidance is direct on this point: use batch APIs where available [2]. Teams still ignore this because one-by-one API calls are easier to prototype. But prototypes that survive into production create long-term request waste.

Batching matters for:

  • Record reads by ID
  • Bulk updates
  • Association changes
  • Cleanup jobs
  • Backfills and repair passes

If your sync loop touches 500 records and you still process them as 500 separate transactions when a batch option exists, you are burning throughput for no good reason.
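As a sketch of the chunking involved: the helper below assumes the CRM batch-read shape (POST /crm/v3/objects/{objectType}/batch/read, which accepts up to 100 inputs per call) and only builds the request bodies, leaving the HTTP layer to your own client:

```python
def batch_read_payloads(ids, properties, limit=100):
    """Split record IDs into request bodies for a CRM batch-read endpoint."""
    # 500 records become 5 requests instead of 500 single-record GETs.
    return [
        {
            "properties": properties,
            "inputs": [{"id": str(i)} for i in group],
        }
        for group in (ids[k:k + limit] for k in range(0, len(ids), limit))
    ]
```

The same chunking pattern applies to batch create, update, and archive calls.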

3. Cache metadata that rarely changes

Many integrations are far noisier than they need to be because they keep asking HubSpot for information that barely changes, such as:

  • Property definitions
  • Object schemas
  • Pipeline metadata
  • Owner lists
  • Static account settings

HubSpot explicitly calls out metadata and settings as data that should be cached where possible [2]. In production, that means you should usually treat metadata as configuration, not transaction data. Refresh it on a schedule or after relevant configuration changes, not on every record action.
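One way to apply that is a small TTL cache wrapped around each metadata loader. The one-hour TTL and the loader shape here are assumptions to adapt; the point is that record-level work reads from memory, not from the API:

```python
import time

class MetadataCache:
    """Treat metadata as configuration: serve from memory, refresh on a TTL."""

    def __init__(self, loader, ttl_seconds=3600):
        self.loader = loader      # e.g. a function that fetches property definitions
        self.ttl = ttl_seconds
        self._value = None
        self._fetched_at = 0.0

    def get(self):
        stale = time.monotonic() - self._fetched_at > self.ttl
        if self._value is None or stale:
            # Only hit the API when the cached copy is missing or expired.
            self._value = self.loader()
            self._fetched_at = time.monotonic()
        return self._value
```

You can also invalidate explicitly when you know configuration changed, rather than waiting out the TTL.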

4. Introduce centralized throttling, not worker-level guessing

One of the fastest ways to hit limits is to let several jobs call HubSpot independently without a shared request budget.

You may have:

  • A real-time sync worker
  • A nightly cleanup job
  • A reporting export
  • A backfill script
  • A user-triggered action from your product

If all of them can call HubSpot directly, the account can hit burst limits even when each workflow seems reasonable in isolation.

A better pattern is a centralized queue plus rate-aware workers. At minimum:

  • Read rate-limit behavior from responses where available [2]
  • Serialize or smooth request bursts
  • Pause lower-priority jobs under pressure
  • Back off before the account starts throwing repeated 429 responses
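A minimal shared throttle can be a sliding-window gate that every worker passes through. This sketch budgets below the documented 110-requests-per-10-seconds OAuth burst limit on purpose; the exact headroom is a tuning choice:

```python
import threading
import time
from collections import deque

class SharedThrottle:
    """One gate for all workers, sized under the account's burst limit."""

    def __init__(self, max_requests=100, window_seconds=10.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps = deque()     # send times within the current window
        self.lock = threading.Lock()  # shared across worker threads

    def acquire(self):
        while True:
            with self.lock:
                now = time.monotonic()
                # Drop send times that have aged out of the window.
                while self.timestamps and now - self.timestamps[0] > self.window:
                    self.timestamps.popleft()
                if len(self.timestamps) < self.max_requests:
                    self.timestamps.append(now)
                    return
                wait = self.window - (now - self.timestamps[0])
            time.sleep(wait)  # sleep outside the lock, then re-check
```

Every job that calls HubSpot invokes acquire() first; low-priority jobs can additionally check queue depth and skip their turn under pressure.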

5. Build reconciliation instead of pretending delivery is perfect

This is the part many teams skip because it does not feel exciting. But production integrations become reliable because they recover well, not because they never fail.

Reconciliation jobs help you:

  • Detect missed writes
  • Repair out-of-sync records
  • Recheck records that failed during downstream outages
  • Confirm state after webhook or queue issues

If your integration design has no reconciliation step, then every failure becomes either invisible or manual.
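A reconciliation pass can be as simple as diffing the fields you claim to keep in sync. A sketch with hypothetical id and field names; the output feeds a repair job that re-pushes only what actually drifted:

```python
def find_drift(source_records, hubspot_records, key="id", fields=("email", "stage")):
    """Return (drifted_ids, missing_ids) between a source system and HubSpot."""
    hubspot_by_id = {r[key]: r for r in hubspot_records}
    drifted, missing = [], []
    for rec in source_records:
        remote = hubspot_by_id.get(rec[key])
        if remote is None:
            missing.append(rec[key])  # record never made it across
        elif any(rec.get(f) != remote.get(f) for f in fields):
            drifted.append(rec[key])  # synced fields diverged
    return drifted, missing
```

Run it on a schedule over recently modified records rather than the whole dataset, so the check itself stays cheap.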

What should you do when you start seeing 429 errors?

Treat them as a diagnostic signal, not as the main bug.

When 429 responses appear:

  1. Identify which policy you are hitting: burst, rolling, or daily [2].
  2. Group request traffic by use case: real-time, scheduled, repair, reporting, and user-triggered.
  3. Rank endpoints by total volume.
  4. Replace one-record loops with batch calls where possible.
  5. Move change detection away from polling if the use case supports webhooks.
  6. Cache repeated metadata lookups.
  7. Slow or pause low-priority jobs during high-demand periods.
  8. Add exponential backoff and queueing.

Retries matter, but retries alone rarely solve the problem. If you retry an inefficient integration, you often just create a more expensive version of the same failure.
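The backoff itself is small to implement; the harder discipline is pairing it with a queue so retries do not stampede. A sketch that honors a Retry-After value when the 429 response carries one (availability varies by policy) and otherwise uses capped exponential backoff with jitter:

```python
import random

def backoff_delay(attempt, retry_after=None, base=1.0, cap=60.0):
    """Seconds to wait before retry number `attempt` (0-indexed)."""
    if retry_after is not None:
        # The server told us when to come back; believe it.
        return float(retry_after)
    delay = min(cap, base * (2 ** attempt))
    # Jitter (50-100% of the delay) keeps parallel workers from
    # retrying in lockstep and re-triggering the same burst limit.
    return delay * (0.5 + random.random() / 2)
```

Cap the number of attempts and route exhausted work to a dead-letter queue, so a persistent failure surfaces instead of retrying forever.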

What architecture works best for high-volume HubSpot stacks?

For most mid-market and enterprise teams, the most reliable pattern looks like this:

  • Event layer (detect meaningful change): use webhooks or workflow-triggered webhooks [3]
  • Queue layer (smooth spikes): put outbound work into a controlled queue
  • API layer (read and write data): use batch APIs, throttle centrally, respect limit behavior [2]
  • Cache layer (reduce wasted requests): cache properties, owners, settings, and pipelines [2]
  • Reconciliation layer (repair drift): run scheduled integrity checks
  • Monitoring layer (catch issues early): review API usage and app health regularly [1][4]

This is the architecture we usually trust for HubSpot plus ERP, HubSpot plus ecommerce, HubSpot plus finance, or HubSpot plus internal operational tools. It is far cheaper to design this once than to keep patching a sync that was built for convenience instead of durability.

A real example: HubSpot plus ERP under growing load

Imagine a company syncing HubSpot deals and companies into an ERP.

At first, the team builds a straightforward loop:

  • Every few minutes, poll HubSpot for recently updated deals
  • Pull each record one by one
  • Push updates to the ERP immediately

At low volume, this feels fine. Then the company grows:

  • More reps update more deals
  • More workflows touch the same records
  • More fields matter for finance and fulfillment
  • More records need to sync during business hours

Now the problems show up:

  • Polling volume rises
  • One-by-one record pulls multiply
  • Rate-limit pressure hits during peak sales activity
  • ERP lag creates retries
  • Partial failures leave records out of sync

A stronger design would look like this:

  1. Use webhooks when deals or companies change in HubSpot.
  2. Queue those events.
  3. Batch-retrieve only the records that actually changed.
  4. Cache static metadata needed for mapping.
  5. Write to the ERP in controlled workers.
  6. Reconcile changed records on a scheduled basis.

That is not theory. That is usually the difference between an integration that degrades under load and one that becomes more predictable as load increases.

What teams usually get wrong

They optimize for launch speed, not durability

A quick script, a no-code flow, or a thin sync layer may work at launch. The problem is not that these tools are always wrong. The problem is that teams mistake initial success for production readiness.

They treat every call as equally important

Real-time lead routing should not compete with low-priority reporting syncs. If every process has identical priority, the system has no way to protect the workflows that matter most.

They let every system call HubSpot directly

Without a queue, an account can be rate-limited by the combined behavior of several reasonable systems.

They never review usage after go-live

HubSpot gives teams usage details and app-monitoring visibility for a reason [1][4]. If nobody checks request patterns after launch, it takes too long to spot inefficient behavior.

They think integration reliability is only an engineering concern

It is also a RevOps concern, a reporting concern, and sometimes a customer experience concern. Once HubSpot becomes part of revenue operations, API design choices stop being purely technical.

A practical checklist before you ship

Before pushing a HubSpot integration into production, ask:

  • Can this flow use webhooks instead of polling?
  • Are we using batch endpoints anywhere we reasonably can?
  • Which metadata should be cached?
  • What are our highest-volume endpoints?
  • Do we have centralized throttling or just good intentions?
  • How do we handle partial failure?
  • What job repairs drift after outages or missed writes?
  • Who reviews API usage after launch?

If several of those answers are unclear, the integration is probably not ready to scale.

Who should care most about this?

This matters most for:

  • CTOs and engineering leads responsible for reliability
  • RevOps teams managing several connected systems
  • HubSpot admins dealing with sync delays and bad reporting
  • Operations teams depending on live CRM data across departments
  • Companies connecting HubSpot to ERP, ecommerce, service, finance, or internal systems

If your HubSpot account is small and mostly manual, you may not feel the cost of bad rate-limit strategy yet. But once HubSpot becomes part of your operational backbone, request discipline stops being optional.

Frequently asked questions

What happens when HubSpot API limits are exceeded?

HubSpot returns a 429 error response and indicates which policy was hit, such as a daily or rolling limit [2].

Does HubSpot recommend batch APIs?

Yes. HubSpot explicitly recommends using batch APIs where possible to reduce request volume [2].

Should I poll HubSpot to detect every change?

Usually no. HubSpot recommends webhooks or workflow-triggered webhooks in many update-driven use cases because they reduce unnecessary API calls [2][3].

Do all HubSpot endpoints return rate-limit headers?

No. HubSpot notes that Search API endpoints do not include the standard rate-limit headers [2].

How do I know which app is causing issues?

Review your API usage in HubSpot’s developer monitoring and your Connected Apps views to isolate request-heavy apps and failing connections [1][4].

Final answer

If you want a HubSpot integration that does not break under load, do not start with retries. Start with architecture.

Use webhooks for change detection, batch APIs for reads and writes, caching for stable metadata, centralized throttling for request control, and reconciliation for cleanup. That combination reduces failures before they happen.

This is where custom integrations usually outperform quick fixes. A well-built HubSpot integration does not just work on launch day. It keeps working when record volume, workflow volume, and business stakes all increase.
