Rate Limits

The LinkForty API can enforce rate limiting to ensure fair usage and protect your infrastructure from abuse.

Overview

Configuration Required

Rate limiting is disabled by default in self-hosted deployments. To enable it, set RATE_LIMIT_ENABLED=true in your environment variables.

LinkForty Cloud (hosted service) has rate limiting enabled by default.

Default Rate Limits (when enabled):

  • 100 requests/minute per API key or IP address
  • Limits reset every 60 seconds
  • Rate limit tracking uses Redis for distributed environments

No request limits on:

  • Short link redirects (/:shortCode)
  • Public endpoints (health check, QR codes)

Rate Limit Headers

When rate limiting is enabled, every API response includes rate limit information:

HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1711373520

Header                  Description                                     Example
X-RateLimit-Limit       Maximum requests per window                     100
X-RateLimit-Remaining   Requests remaining in the current window        95
X-RateLimit-Reset       Unix timestamp when the limit resets (seconds)  1711373520

Note: These headers are only present when RATE_LIMIT_ENABLED=true. If rate limiting is disabled, these headers will not appear in responses.
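
Because these headers may be absent, client code should check for them before parsing. A minimal sketch in JavaScript (the helper name is illustrative):

function readRateLimitInfo(response) {
  // When RATE_LIMIT_ENABLED=false (or a proxy strips the headers), .get() returns null.
  const limit = response.headers.get('X-RateLimit-Limit');
  if (limit === null) {
    return null; // No rate limit information available
  }

  return {
    limit: parseInt(limit, 10),
    remaining: parseInt(response.headers.get('X-RateLimit-Remaining'), 10),
    resetsAt: new Date(parseInt(response.headers.get('X-RateLimit-Reset'), 10) * 1000)
  };
}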

Exceeding Rate Limits

When you exceed the rate limit:

Status Code: 429 Too Many Requests

{
  "error": "Too Many Requests",
  "message": "Rate limit exceeded. Try again in 42 seconds.",
  "statusCode": 429,
  "retryAfter": 42
}

Response Headers:

HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1711373520
Retry-After: 42

Handling Rate Limits

Basic Retry Logic

async function makeRequest(url, options) {
  const response = await fetch(url, options);

  if (response.status === 429) {
    const retryAfter = parseInt(response.headers.get('Retry-After') || '60');
    console.log(`Rate limited. Retrying in ${retryAfter} seconds...`);

    await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
    return makeRequest(url, options);
  }

  return response;
}

Exponential Backoff

async function makeRequestWithBackoff(url, options, attempt = 1) {
  const response = await fetch(url, options);

  if (response.status === 429 && attempt <= 3) {
    const delay = Math.min(1000 * Math.pow(2, attempt), 32000);
    console.log(`Rate limited. Waiting ${delay}ms before retry ${attempt}/3`);

    await new Promise(resolve => setTimeout(resolve, delay));
    return makeRequestWithBackoff(url, options, attempt + 1);
  }

  return response;
}

Check Before Sending

class RateLimitedClient {
  constructor(apiKey, limit = 100) {
    this.apiKey = apiKey;
    this.limit = limit;
    this.remaining = limit;
    this.resetTime = Date.now() + 60000;
  }

  updateLimits(headers) {
    // Headers are only present when rate limiting is enabled.
    const remaining = headers.get('X-RateLimit-Remaining');
    const reset = headers.get('X-RateLimit-Reset');
    if (remaining !== null) this.remaining = parseInt(remaining, 10);
    if (reset !== null) this.resetTime = parseInt(reset, 10) * 1000;
  }

  async waitIfNeeded() {
    if (this.remaining <= 0) {
      const waitTime = this.resetTime - Date.now();
      if (waitTime > 0) {
        console.log(`Rate limit reached. Waiting ${waitTime}ms...`);
        await new Promise(resolve => setTimeout(resolve, waitTime));
      }
    }
  }

  async request(url, options = {}) {
    await this.waitIfNeeded();

    const response = await fetch(url, {
      ...options,
      headers: {
        ...options.headers,
        'Authorization': `Bearer ${this.apiKey}`
      }
    });

    this.updateLimits(response.headers);
    return response;
  }
}

// Usage
const client = new RateLimitedClient(API_KEY);
const response = await client.request('https://api.linkforty.com/api/links');

Rate Limit by Endpoint

Endpoint Category   Rate Limit   Notes
Link Creation       100/min      Applies to POST /api/links
Link Updates        100/min      Applies to PUT /api/links/:id
Link Reads          100/min      Applies to GET /api/links
Bulk Operations     20/min       Lower limit for bulk endpoints
Analytics           100/min      GET /api/analytics/*
Templates           100/min      All template operations
Webhooks            50/min       Webhook management
QR Generation       50/min       GET /api/links/:id/qr

Bulk Operations

Bulk endpoints have stricter limits:

Bulk Create Links:

  • 20 requests/minute
  • 1,000 links per request maximum

# This counts as 1 request
curl -X POST https://api.linkforty.com/api/links/bulk \
  -H "Authorization: Bearer $LINKFORTY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "links": [
      { "templateId": "template_123", "originalUrl": "https://example.com/1" },
      { "templateId": "template_123", "originalUrl": "https://example.com/2" },
      ... // Up to 1,000 links
    ]
  }'

Benefit: Create 1,000 links with 1 request vs 1,000 separate requests.

Increasing Rate Limits

Cloud Plans

Free Tier:

  • 100 requests/minute
  • 1,000 links total

Pro Tier ($29/month):

  • 500 requests/minute
  • Unlimited links

Enterprise:

  • Higher or custom limits available (contact enterprise@linkforty.com)

Self-Hosted

Self-hosted deployments have rate limiting disabled by default.

To enable rate limiting, configure these environment variables in your .env file:

# .env
RATE_LIMIT_ENABLED=true # Enable rate limiting
RATE_LIMIT_MAX=100 # Maximum requests per window (default: 100)
RATE_LIMIT_WINDOW=60000 # Time window in milliseconds (default: 60000 = 1 minute)

Examples:

# Standard rate limit: 100 requests per minute
RATE_LIMIT_ENABLED=true
RATE_LIMIT_MAX=100
RATE_LIMIT_WINDOW=60000

# Higher limit: 500 requests per minute
RATE_LIMIT_ENABLED=true
RATE_LIMIT_MAX=500
RATE_LIMIT_WINDOW=60000

# Stricter limit: 50 requests per 30 seconds
RATE_LIMIT_ENABLED=true
RATE_LIMIT_MAX=50
RATE_LIMIT_WINDOW=30000

Requirements:

  • Redis must be running and configured (REDIS_URL env variable); see the example below
  • Rate limiting uses Redis for distributed tracking across multiple server instances
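
A minimal .env for a multi-instance deployment might look like this (the Redis URL is a placeholder for your own instance):

# .env (illustrative values)
RATE_LIMIT_ENABLED=true
RATE_LIMIT_MAX=100
RATE_LIMIT_WINDOW=60000
REDIS_URL=redis://localhost:6379   # All instances must share the same Redis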

Best Practices

1. Use Bulk Endpoints

Instead of creating links one by one:

❌ Bad:

// 100 requests
for (const url of urls) {
  await fetch('https://api.linkforty.com/api/links', {
    method: 'POST',
    body: JSON.stringify({ templateId: 'template_123', originalUrl: url })
  });
}

✅ Good:

// 1 request
await fetch('https://api.linkforty.com/api/links/bulk', {
  method: 'POST',
  body: JSON.stringify({
    links: urls.map(url => ({ templateId: 'template_123', originalUrl: url }))
  })
});

2. Cache Responses

Don't fetch the same data repeatedly:

const cache = new Map();

async function getLink(id) {
  if (cache.has(id)) {
    return cache.get(id);
  }

  const response = await fetch(`https://api.linkforty.com/api/links/${id}`);
  const link = await response.json();

  cache.set(id, link);
  return link;
}

3. Batch Requests

Group multiple operations:

// Instead of 50 separate requests, wait and batch
const pendingCreates = [];

function queueLinkCreation(data) {
  pendingCreates.push(data);

  if (pendingCreates.length >= 50) {
    flushBatch();
  }
}

async function flushBatch() {
  const batch = pendingCreates.splice(0, pendingCreates.length);

  await fetch('https://api.linkforty.com/api/links/bulk', {
    method: 'POST',
    body: JSON.stringify({ links: batch })
  });
}

// Flush every 5 seconds if there are pending requests
setInterval(() => {
  if (pendingCreates.length > 0) {
    flushBatch();
  }
}, 5000);

4. Monitor Rate Limit Headers

Track your usage:

async function monitoredRequest(url, options) {
  const response = await fetch(url, options);

  const remaining = response.headers.get('X-RateLimit-Remaining');
  const limit = response.headers.get('X-RateLimit-Limit');

  console.log(`Rate limit: ${remaining}/${limit} remaining`);

  if (parseInt(remaining) < 10) {
    console.warn('⚠️ Approaching rate limit!');
  }

  return response;
}

5. Implement Circuit Breakers

Stop sending requests when rate limited:

class CircuitBreaker {
  constructor() {
    this.failures = 0;
    this.state = 'CLOSED'; // CLOSED, OPEN, HALF_OPEN
    this.resetTime = null;
  }

  async execute(fn) {
    if (this.state === 'OPEN') {
      if (Date.now() < this.resetTime) {
        throw new Error('Circuit breaker is OPEN');
      }
      this.state = 'HALF_OPEN';
    }

    try {
      const result = await fn();
      this.onSuccess();
      return result;
    } catch (error) {
      this.onFailure();
      throw error;
    }
  }

  onSuccess() {
    this.failures = 0;
    this.state = 'CLOSED';
  }

  onFailure() {
    this.failures++;

    if (this.failures >= 3) {
      this.state = 'OPEN';
      this.resetTime = Date.now() + 60000; // 1 minute
    }
  }
}

const breaker = new CircuitBreaker();

// fetch() does not reject on 429, so treat rate-limited responses as failures
// to give the breaker a chance to open.
await breaker.execute(async () => {
  const response = await fetch('https://api.linkforty.com/api/links');
  if (response.status === 429) {
    throw new Error('Rate limited');
  }
  return response;
});

Monitoring Rate Limits

Check Response Headers

Monitor your rate limit usage by inspecting response headers:

const response = await fetch('https://api.linkforty.com/api/links', {
  headers: { 'Authorization': `Bearer ${API_KEY}` }
});

const limit = response.headers.get('X-RateLimit-Limit');
const remaining = response.headers.get('X-RateLimit-Remaining');
const reset = response.headers.get('X-RateLimit-Reset');

console.log(`Rate Limit: ${remaining}/${limit} remaining`);
console.log(`Resets at: ${new Date(parseInt(reset) * 1000).toISOString()}`);

if (parseInt(remaining) < 10) {
  console.warn('⚠️ Approaching rate limit!');
}

Webhook Rate Limits

Webhook deliveries have separate limits:

Outbound Webhooks:

  • 10 deliveries/second per webhook
  • 3 retry attempts with exponential backoff
  • 30-second timeout per delivery

If your webhook endpoint is slow or down, deliveries may be delayed or dropped.
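
One way to stay well inside the 30-second delivery timeout is to acknowledge each webhook immediately and do any heavy processing afterwards. A minimal sketch using Express (the endpoint path and handler are illustrative, not part of the LinkForty API):

const express = require('express');

const app = express();
app.use(express.json());

app.post('/webhooks/linkforty', (req, res) => {
  // Acknowledge right away so the delivery completes quickly.
  res.sendStatus(200);

  // Defer the actual work (queue it, write it to a database, etc.).
  setImmediate(() => handleEvent(req.body));
});

// Hypothetical handler for the webhook payload
function handleEvent(payload) {
  console.log('Received webhook event:', payload);
}

app.listen(3000);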

Common Errors

Error: "Rate limit exceeded"

{
  "error": "Too Many Requests",
  "message": "Rate limit exceeded. Try again in 42 seconds.",
  "statusCode": 429,
  "retryAfter": 42
}

Solution:

  • Wait for retryAfter seconds
  • Implement exponential backoff
  • Use bulk endpoints
  • Cache responses

Error: "Bulk operation limit exceeded"

{
"error": "Bad Request",
"message": "Maximum 1000 links per bulk request",
"statusCode": 400
}

Solution:

  • Split into multiple batches of 1,000 links each
  • Pace the batches: the 20/min bulk limit works out to roughly 1 request every 3 seconds (see the sketch below)
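
A minimal batching-and-pacing sketch in JavaScript (the 3-second delay follows from the 20/min bulk limit; the function name is illustrative):

// Split a large list of links into chunks of 1,000 and send them 3 seconds apart.
async function bulkCreateInBatches(links, apiKey) {
  const BATCH_SIZE = 1000; // Maximum links per bulk request
  const DELAY_MS = 3000;   // Roughly 20 requests/minute

  for (let i = 0; i < links.length; i += BATCH_SIZE) {
    const batch = links.slice(i, i + BATCH_SIZE);

    await fetch('https://api.linkforty.com/api/links/bulk', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ links: batch })
    });

    // Wait before the next batch, unless this was the last one.
    if (i + BATCH_SIZE < links.length) {
      await new Promise(resolve => setTimeout(resolve, DELAY_MS));
    }
  }
}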

Comparison with Other Platforms

Provider    Free Tier Limit       Paid Tier Limit   Paid Plan
LinkForty   100/min               500/min           $29/mo
AppsFlyer   N/A (no free tier)    1000/min          Enterprise
Branch      60/min                600/min           Paid plans
Adjust      N/A (no free tier)    Custom            Enterprise

LinkForty Advantage:

  • Free tier included
  • Reasonable limits for most use cases
  • Self-hosted option has no limits by default

Support

Need higher rate limits?

  • Pro Plan: Upgrade at linkforty.com/pricing
  • Enterprise: Contact enterprise@linkforty.com
  • Self-Hosted: No limits by default; deploy on your own infrastructure