Solana RPC Rate Limits Explained
Understand Carbium RPC rate limits, 429 responses, burst traffic, and safe retry patterns for production Solana apps.
If your Solana app starts returning 429 Too Many Requests, the fix is usually not "retry faster". It is to understand how your client creates bursts, how Carbium applies throughput limits, and how to reduce pressure without making the problem worse.
This page is about rate-limit behavior in production: what a 429 means, what usually causes it, and how to recover safely. For the full tier comparison and plan-selection guidance, use RPC Pricing and Usage Tiers.
Part of Carbium's full-stack Solana infrastructure.
What Carbium is limiting
Developers often mix these up, but they solve different problems:
| Limit type | What it controls | What happens when you hit it |
|---|---|---|
| Requests / second | Short-term throughput | You can receive 429 Too Many Requests |
| Credits / month | Total usage over time | You need to reduce usage or move to a larger plan |
In practice:
- A wallet or dashboard can stay well within monthly credits and still trigger 429 responses during a short burst.
- A bot can stay under the RPS cap but still exhaust its monthly credits if it runs constantly.
- Higher tiers increase both the long-term budget and the short-term throughput ceiling.
The operational problem on this page is the short-term throughput limit. That is the thing that produces 429 responses during spikes.
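Short-term throughput limits are commonly modeled as a token bucket: a burst drains the bucket faster than it refills, so requests are rejected even though long-term usage is low. The sketch below is illustrative only; the capacity and refill rate are made-up numbers, not Carbium's actual limits.

```typescript
// Minimal token-bucket sketch showing why a burst can trigger 429s even
// when monthly usage is low. Capacity and refill rate are illustrative.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,     // max requests allowed in one burst
    private refillPerSec: number, // sustained requests per second
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the request is allowed, false if it would be a 429.
  tryRequest(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// A burst of 15 requests in the same instant against a 10-request bucket:
const bucket = new TokenBucket(10, 5);
const t0 = Date.now();
const results = Array.from({ length: 15 }, () => bucket.tryRequest(t0));
// The first 10 pass; the remaining 5 would receive 429s.
```

Under this model, a client that averages 2 requests per second for a month never touches its credit budget, yet a 15-request burst in one instant still gets throttled.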
Which endpoints use your RPC key
Carbium's RPC key covers both the standard JSON-RPC endpoint and the streaming endpoint:
| Product | Endpoint | Auth |
|---|---|---|
| JSON-RPC | https://rpc.carbium.io/?apiKey=YOUR_RPC_KEY | Query parameter or X-API-KEY header |
| gRPC / streaming | wss://grpc.carbium.io/?apiKey=YOUR_RPC_KEY | Query parameter or x-token header |
If you need the exact tier matrix, or details on which plans include gRPC, use the dedicated pricing page.
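The table above lists two auth formats for JSON-RPC. A quick sketch of both, assuming CARBIUM_RPC_KEY is set in your environment (the getSlot method is just a stand-in for any call):

```typescript
// Two equivalent ways to authenticate a JSON-RPC call, per the table above.
const key = process.env.CARBIUM_RPC_KEY ?? "YOUR_RPC_KEY";

// Option 1: query parameter
const urlWithKey = `https://rpc.carbium.io/?apiKey=${key}`;

// Option 2: X-API-KEY header (keeps the key out of URLs and access logs)
const url = "https://rpc.carbium.io/";
const requestInit = {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-API-KEY": key,
  } as Record<string, string>,
  body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "getSlot", params: [] }),
};
// await fetch(url, requestInit);
```

The header form is generally preferable on servers, since query strings tend to end up in logs and analytics.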
What a 429 means
A 429 Too Many Requests response means your client sent traffic faster than your current plan allows. It does not automatically mean:
- your API key is invalid
- Carbium is down
- the Solana method itself is broken
For Carbium, the operational guidance is straightforward:
| HTTP code | Meaning | What to do |
|---|---|---|
| 401 | Invalid or missing API key | Check your key and auth format |
| 403 | Plan restriction | Upgrade if you need gated features such as gRPC |
| 429 | Rate limit exceeded | Back off immediately and retry later |
| 500 | Server error | Retry after a short delay |
| 503 | Temporary unavailability | Retry with exponential backoff |
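The table above maps directly to a small dispatcher. The action names here are illustrative, not a Carbium SDK API:

```typescript
// Map the HTTP status codes from the table above to a retry decision.
// Action names are illustrative placeholders, not a real SDK.
type RpcAction = "ok" | "fix_auth" | "upgrade_plan" | "backoff_retry" | "retry_soon" | "fail";

function classifyStatus(status: number): RpcAction {
  if (status >= 200 && status < 300) return "ok";
  switch (status) {
    case 401: return "fix_auth";      // invalid or missing API key
    case 403: return "upgrade_plan";  // plan-gated feature such as gRPC
    case 429: return "backoff_retry"; // rate limit: back off before retrying
    case 500: return "retry_soon";    // server error: retry after a short delay
    case 503: return "backoff_retry"; // temporary unavailability: exponential backoff
    default: return "fail";           // anything else: surface the error
  }
}
```

Centralizing this decision in one function keeps workers from inventing their own (usually more aggressive) retry rules.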
What usually causes rate-limit spikes
Most rate-limit incidents come from traffic shape, not from one bad request.
Common causes:
- multiple workers retrying the same failed read at once
- browser clients sharing one key and polling aggressively
- bursty cron or queue jobs starting at the same second
- repeated health checks or balance checks with no cache
- mixing development, staging, and production traffic on one key
- transaction send loops that retry before checking status
Safe retry pattern for Solana clients
When you hit rate limits, the goal is to reduce pressure instead of amplifying it.
Good retry behavior
- Use exponential backoff.
- Add jitter so many workers do not retry at the same time.
- Separate read traffic from write traffic if possible.
- Cache hot reads such as balances, recent slots, and token metadata.
- Queue bursty background jobs instead of firing them all at once.
Dangerous retry behavior
- Blindly retrying every failed request in parallel.
- Retrying sendTransaction without first checking whether the transaction already landed.
- Polling aggressively from browser clients with shared API keys.
For transactions specifically, Carbium's internal guidance is:
Do not blindly retry sendTransaction until you have checked getSignatureStatus.
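A sketch of that check-before-retry rule, using the JSON-RPC method getSignatureStatuses (the batch form of the status check). The rpcCall signature here is an assumption; wire it to your own JSON-RPC helper:

```typescript
// Only resend when the status check shows the transaction has not landed.
// rpcCall is injected so this works with any JSON-RPC helper (assumption).
type RpcCall = (method: string, params: unknown[]) => Promise<any>;

async function sendWithStatusCheck(
  rpcCall: RpcCall,
  signedTxBase64: string,
  signature: string,
  maxResends = 3,
  delayMs = 2000,
): Promise<"confirmed" | "gave_up"> {
  for (let attempt = 0; attempt < maxResends; attempt++) {
    // 1. Check status first: the transaction may already have landed.
    const status = await rpcCall("getSignatureStatuses", [[signature]]);
    const info = status?.result?.value?.[0];
    if (info?.confirmationStatus === "confirmed" || info?.confirmationStatus === "finalized") {
      return "confirmed";
    }
    // 2. Only resend when it has not been observed on-chain.
    await rpcCall("sendTransaction", [signedTxBase64, { encoding: "base64" }]);
    // 3. Give the cluster time before checking again.
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return "gave_up";
}
```

The key property is that a transaction that already landed is never resent, so a retry storm cannot amplify write traffic.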
Example: JSON-RPC with a basic backoff
```typescript
const RPC_URL = `https://rpc.carbium.io/?apiKey=${process.env.CARBIUM_RPC_KEY}`;

async function rpcCall(method: string, params: unknown[] = [], retries = 5) {
  let delayMs = 500;

  for (let attempt = 0; attempt <= retries; attempt++) {
    const res = await fetch(RPC_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        jsonrpc: "2.0",
        id: 1,
        method,
        params,
      }),
    });

    if (res.status !== 429) {
      return res.json();
    }

    if (attempt === retries) {
      throw new Error("Rate limit exceeded after retries");
    }

    await new Promise((resolve) => setTimeout(resolve, delayMs));
    delayMs = Math.min(delayMs * 2, 10_000);
  }
}
```

This pattern is intentionally simple. For production workloads, add jitter and centralize request budgeting so multiple workers do not compete blindly.
How to reduce rate-limit pressure
Before upgrading, check whether your client is wasting requests:
- Reuse a single RPC connection pool instead of creating new clients for every job.
- Cache responses that do not need sub-second freshness.
- Replace constant polling with streaming where that fits your architecture.
- Monitor which methods consume the most traffic in the Usage dashboard.
- Split environments so development traffic does not compete with production traffic.
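Caching hot reads is often the cheapest win on that list. A minimal TTL cache sketch with an injected fetcher, so it works with any RPC helper (names are illustrative):

```typescript
// Minimal TTL cache for hot reads such as balances or token metadata.
// The fetcher is injected, so any JSON-RPC helper can sit behind it.
class TtlCache<T> {
  private store = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  async get(key: string, fetcher: () => Promise<T>, now = Date.now()): Promise<T> {
    const hit = this.store.get(key);
    if (hit && hit.expiresAt > now) return hit.value; // serve cached value
    const value = await fetcher();                    // one upstream RPC call
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
    return value;
  }
}

// Usage: every balance read within the TTL window hits the cache,
// so N dashboard widgets cost one upstream request, not N.
const balances = new TtlCache<number>(5_000);
```

Even a short TTL (a few seconds) collapses the repeated health and balance checks listed earlier into a single upstream request per window.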
When rate limits are telling you something structural
A recurring 429 is usually a signal that one of these is true:
- the application burst pattern is now part of normal production behavior
- several services share the same budget and step on each other
- you are using polling where streaming would reduce pressure
- your current client-side retry behavior multiplies load during incidents
Fix the traffic shape first where you can. If the workload is genuinely bigger now, then move back to RPC Pricing and Usage Tiers and pick the right plan from there.
This page owns throttling behavior and mitigation. For plan selection, credits, and tier comparison, use RPC Pricing and Usage Tiers.
Building a wallet, trading bot, or Solana backend? Compare plans and start with Carbium at carbium.io.
