# Rate Limits
Understanding Browser7's concurrent render limits and how to work within them.
## How Rate Limits Work
Browser7 uses a concurrent render limit rather than traditional requests-per-second limits. Since the API is asynchronous, all endpoint responses are near-instantaneous, and the actual constraint is how many renders you can have processing simultaneously.
### Concurrent Render Limit
When you create a new render with `POST /renders`, Browser7 checks how many renders you currently have in progress:

- If you're under your limit → the new render is accepted and queued
- If you're at your limit → the request is rejected with a `429 Too Many Requests` error

Other endpoints (`GET /renders/:id`, `GET /account/balance`) are not rate limited, since they only query data and don't create new render jobs.
## Rate Limit Tiers
| Plan | Concurrent Render Limit |
|---|---|
| Starter, Pro, Business | 10 concurrent renders |
| Enterprise | 50 concurrent renders |
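When sizing a client-side queue (like the ones in the strategies below), the tiers above can be mirrored in code. This is a minimal sketch; the lowercase plan names are an assumption of this example, not an API contract:

```javascript
// Concurrent render limit per plan, mirroring the table above.
const PLAN_LIMITS = {
  starter: 10,
  pro: 10,
  business: 10,
  enterprise: 50,
};

// Look up the concurrent render limit for a plan name.
function concurrentLimit(plan) {
  const limit = PLAN_LIMITS[plan.toLowerCase()];
  if (limit === undefined) throw new Error(`Unknown plan: ${plan}`);
  return limit;
}

console.log(concurrentLimit('Pro'));        // → 10
console.log(concurrentLimit('Enterprise')); // → 50
```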
## What Counts as "In Progress"
A render is considered "in progress" from the moment it's created until it reaches a terminal state:
- ✅ Counts toward limit: status is `processing` or `queued`
- ❌ Doesn't count: status is `completed` or `failed`
**Example timeline:**

1. `POST /renders` → render created → counts toward limit (1/10)
2. Status: `processing` → still counts (1/10)
3. Status: `completed` → no longer counts (0/10)
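The counting rule above can be sketched as a small helper, assuming render objects expose a `status` field with the values shown in the timeline:

```javascript
// Statuses that occupy a concurrent slot, per the rule above.
const IN_PROGRESS_STATUSES = new Set(['queued', 'processing']);

// Count how many renders still occupy a slot toward the limit.
function inProgressCount(renders) {
  return renders.filter(r => IN_PROGRESS_STATUSES.has(r.status)).length;
}

// Example: only the first two renders count toward the limit.
const renders = [
  { id: 'a', status: 'queued' },
  { id: 'b', status: 'processing' },
  { id: 'c', status: 'completed' },
  { id: 'd', status: 'failed' },
];
console.log(inProgressCount(renders)); // → 2
```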
## Error Response
When you exceed your concurrent limit:
HTTP `429 Too Many Requests`:

```json
{
  "error": "Rate Limit Exceeded",
  "message": "Concurrent render limit reached (10/10). Please wait for existing renders to complete."
}
```
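Detecting this response can be reduced to a pure check. This is a minimal sketch, assuming you already have the HTTP status code and a parsed JSON body shaped like the example above:

```javascript
// Return the server's message if the response is the concurrent-limit
// error shown above, or null for any other response.
function rateLimitMessage(status, body) {
  if (status !== 429) return null;
  return body && body.message
    ? body.message
    : 'Concurrent render limit reached';
}

const body = {
  error: 'Rate Limit Exceeded',
  message: 'Concurrent render limit reached (10/10). Please wait for existing renders to complete.',
};
console.log(rateLimitMessage(429, body)); // → the message string above
console.log(rateLimitMessage(200, null)); // → null
```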
## Managing Concurrent Renders
### Strategy 1: Poll and Queue
Track your in-progress renders and only create new ones when slots are available:
```javascript
const Browser7 = require('browser7');
const client = new Browser7({ apiKey: process.env.BROWSER7_API_KEY });

class RenderQueue {
  constructor(maxConcurrent = 10) {
    this.maxConcurrent = maxConcurrent;
    this.inProgress = new Set();
    this.queue = [];
  }

  processQueue() {
    while (this.queue.length > 0 && this.inProgress.size < this.maxConcurrent) {
      const { url, options, resolve, reject } = this.queue.shift();
      // Reserve the slot synchronously, so this loop can't launch more
      // than maxConcurrent renders while createRender() is still in flight.
      const slot = Symbol('slot');
      this.inProgress.add(slot);
      this.processRender(url, options)
        .then(resolve)
        .catch(reject)
        .finally(() => {
          this.inProgress.delete(slot);
          // A slot freed up - process the next item in the queue
          this.processQueue();
        });
    }
  }

  async processRender(url, options) {
    const { renderId } = await client.createRender(url, options);
    // Poll for completion
    while (true) {
      const result = await client.getRender(renderId);
      if (result.status === 'completed' || result.status === 'failed') {
        return result;
      }
      // Respect the server's polling hint; fall back to 2s if absent
      await new Promise(resolve => setTimeout(resolve, (result.retryAfter ?? 2) * 1000));
    }
  }

  render(url, options = {}) {
    return new Promise((resolve, reject) => {
      this.queue.push({ url, options, resolve, reject });
      this.processQueue();
    });
  }
}

// Usage
const queue = new RenderQueue(10);

// Add 100 URLs - they will be processed 10 at a time
const urls = [...Array(100)].map((_, i) => `https://example.com/page${i}`);
const results = await Promise.all(
  urls.map(url => queue.render(url))
);
```
### Strategy 2: Batch with Delays
Process renders in controlled batches:
```javascript
async function processBatch(urls, batchSize = 10) {
  const results = [];
  for (let i = 0; i < urls.length; i += batchSize) {
    const batch = urls.slice(i, i + batchSize);
    console.log(`Processing batch ${i / batchSize + 1}: ${batch.length} URLs`);

    // Process the batch concurrently
    const batchResults = await Promise.all(
      batch.map(url => client.render(url))
    );

    results.push(...batchResults);
    console.log(`Batch complete. ${results.length}/${urls.length} processed`);
  }
  return results;
}

// Process 100 URLs in batches of 10
const urls = [...Array(100)].map((_, i) => `https://example.com/page${i}`);
const results = await processBatch(urls, 10);
```
### Strategy 3: Retry on Rate Limit
Automatically retry when hitting the limit:
```javascript
async function createRenderWithRetry(url, options = {}, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await client.createRender(url, options);
    } catch (error) {
      if (error.message.includes('429')) {
        // Rate limited - wait and retry
        const delay = (attempt + 1) * 2000; // 2s, 4s, 6s, 8s, 10s
        console.log(`Rate limited. Retrying in ${delay}ms... (attempt ${attempt + 1}/${maxRetries})`);
        await new Promise(resolve => setTimeout(resolve, delay));
      } else {
        throw error;
      }
    }
  }
  throw new Error('Max retries exceeded');
}
```
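The linear delays above work well for a single worker. When many workers hit the limit at once, exponential backoff with random jitter spreads the retries out instead of having every worker retry in lockstep. A sketch of the delay calculation only (the base and cap values here are illustrative choices, not Browser7 recommendations):

```javascript
// Delay before retry attempt N (0-based): exponential growth, capped,
// with random jitter in the upper half of the window.
function backoffDelayMs(attempt, baseMs = 1000, capMs = 30000) {
  const exp = Math.min(capMs, baseMs * 2 ** attempt); // 1s, 2s, 4s, 8s, ...
  return exp / 2 + Math.random() * (exp / 2);          // jitter: [exp/2, exp)
}

// Usage: replace the fixed delay in the retry loop above with
//   await new Promise(resolve => setTimeout(resolve, backoffDelayMs(attempt)));
```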
## Best Practices
### 1. Use the SDK's `render()` Method
The SDK automatically handles polling, so renders complete and free up slots:
```javascript
// ✅ Good - render() polls until completion, freeing up the slot
const result = await client.render('https://example.com');

// ⚠️ Less efficient - createRender() doesn't poll; the slot stays
// occupied until you poll the render yourself and it completes
const { renderId } = await client.createRender('https://example.com');
```
### 2. Process Batches Sequentially
For large jobs, process in batches equal to your concurrent limit:
```javascript
const CONCURRENT_LIMIT = 10;
const urls = [...]; // Your URL list

for (let i = 0; i < urls.length; i += CONCURRENT_LIMIT) {
  const batch = urls.slice(i, i + CONCURRENT_LIMIT);
  await Promise.all(batch.map(url => client.render(url)));
}
```
### 3. Monitor Your Concurrent Usage
Track how many renders you have in progress:
```javascript
class ConcurrentTracker {
  constructor() {
    this.active = 0;
  }

  async track(fn) {
    this.active++;
    console.log(`Active renders: ${this.active}`);
    try {
      return await fn();
    } finally {
      this.active--;
      console.log(`Active renders: ${this.active}`);
    }
  }
}

const tracker = new ConcurrentTracker();

// Use it
const result = await tracker.track(() =>
  client.render('https://example.com')
);
```
### 4. Handle 429 Errors Gracefully
Always implement retry logic for rate limit errors:
```javascript
try {
  const { renderId } = await client.createRender(url);
} catch (error) {
  if (error.message.includes('429')) {
    console.log('Concurrent limit reached, waiting before retry...');
    await new Promise(resolve => setTimeout(resolve, 5000));
    // Retry logic
  }
}
```
### 5. Optimize Render Times
Faster renders = more throughput within your concurrent limit:
- Set `blockImages: true` to reduce load times
- Use appropriate wait action timeouts (don't wait longer than necessary)
- Choose proxy locations close to your API endpoint
- Avoid unnecessary `fetchUrls`
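A small defaults helper can keep these speed-oriented settings in one place. Only `blockImages` comes from the list above; treating it as a render option and the `overrides` merge pattern are assumptions of this sketch:

```javascript
// Default options biased toward fast renders. blockImages is documented
// above; anything else is merged in per call via overrides.
function fastRenderOptions(overrides = {}) {
  return {
    blockImages: true, // skip image downloads to cut load time
    ...overrides,
  };
}

// Per-call overrides win over the defaults.
const opts = fastRenderOptions({ blockImages: false });
console.log(opts.blockImages); // → false
```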
## Python Example
```python
import asyncio
import os

from browser7 import Browser7

client = Browser7(api_key=os.environ['BROWSER7_API_KEY'])

async def process_batch(urls, batch_size=10):
    results = []
    for i in range(0, len(urls), batch_size):
        batch = urls[i:i + batch_size]
        print(f"Processing batch: {len(batch)} URLs")

        # Process the batch concurrently
        batch_results = await asyncio.gather(
            *[client.render_async(url) for url in batch]
        )

        results.extend(batch_results)
        print(f"Batch complete. {len(results)}/{len(urls)} processed")
    return results

# Process 100 URLs in batches of 10
urls = [f"https://example.com/page{i}" for i in range(100)]
results = asyncio.run(process_batch(urls, 10))
```
## PHP Example
```php
use Browser7\Browser7Client;

$client = new Browser7Client($_ENV['BROWSER7_API_KEY']);

function processBatch($client, $urls, $batchSize = 10) {
    $results = [];
    for ($i = 0; $i < count($urls); $i += $batchSize) {
        $batch = array_slice($urls, $i, $batchSize);
        echo "Processing batch: " . count($batch) . " URLs\n";

        // Process the batch (PHP has no native async like Node.js, so run sequentially)
        $batchResults = [];
        foreach ($batch as $url) {
            $batchResults[] = $client->render($url);
        }

        $results = array_merge($results, $batchResults);
        echo "Batch complete. " . count($results) . "/" . count($urls) . " processed\n";
    }
    return $results;
}

// Process 100 URLs in batches of 10
$urls = array_map(fn($i) => "https://example.com/page{$i}", range(0, 99));
$results = processBatch($client, $urls, 10);
```
## Upgrading Your Plan
If you consistently hit your concurrent render limit, consider upgrading:
- Enterprise Plan: 50 concurrent renders (5x capacity)
- Higher throughput for large-scale scraping operations
- Contact sales@browser7.com for Enterprise pricing
## Frequently Asked Questions
**Q: Why did I get a 429 error if I'm only making one request?**

Your previous renders may still be processing. Check how many renders you have with status `processing` - if you're at your limit (10 or 50), you'll need to wait for some to complete.
**Q: How long do renders stay "in progress"?**
Most renders complete in 5-15 seconds. Renders with wait actions or CAPTCHA solving may take longer. The maximum render time is 60 seconds before timeout.
**Q: Can I check my current concurrent usage via API?**
Not directly, but you can track render IDs you've created and poll their status to know when slots free up.
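The tracking approach in this answer can be sketched as a small bookkeeping class, assuming renders report the terminal statuses (`completed`, `failed`) described earlier:

```javascript
// Client-side bookkeeping of which render IDs may still occupy a slot.
class SlotTracker {
  constructor() {
    this.active = new Set();
  }

  // Call when POST /renders succeeds.
  add(renderId) {
    this.active.add(renderId);
  }

  // Call with each polled status; terminal statuses free the slot.
  update(renderId, status) {
    if (status === 'completed' || status === 'failed') {
      this.active.delete(renderId);
    }
  }

  get inUse() {
    return this.active.size;
  }
}

const tracker = new SlotTracker();
tracker.add('r1');
tracker.add('r2');
tracker.update('r1', 'completed');
console.log(tracker.inUse); // → 1
```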
**Q: Do failed renders count toward my limit?**

Only while they're processing. Once a render reaches `failed` status, it no longer counts toward your concurrent limit.
**Q: What if I need more than 50 concurrent renders?**
Contact our sales team at sales@browser7.com to discuss custom Enterprise plans with higher limits.
## Related Documentation
- Create Render - Creating render jobs
- Get Render - Polling for completion
- Error Handling - Handling rate limit errors
- Pricing - View plans and limits