Advanced Techniques

Request Batching

The provider processes individual requests by default, but batching can significantly improve efficiency: grouping multiple requests into a single operation reduces network overhead and improves overall performance.

Parallel Requests

Running multiple independent requests concurrently with Promise.all() offers several advantages:

  • Executes all requests concurrently rather than one after another.
  • Reduces total wait time and round-trip latency compared to sequential calls.
  • Improves application responsiveness and overall throughput.

```typescript
// Batch multiple independent requests
const [block, balance, utxos] = await Promise.all([
    provider.getBlockNumber(),
    provider.getBalance('bc1q...'),
    provider.utxoManager.getUTXOs({ address: 'bc1q...' }),
]);
```

Sequential Requests with Dependencies

When requests depend on the result of previous operations, sequential execution is required. In the following example, the block number must be retrieved before fetching the block, and the block data is needed before fetching its transactions. The final step uses Promise.all() to fetch all transactions concurrently, optimizing performance where possible while respecting the necessary execution order.

```typescript
// When requests depend on each other
const blockNumber = await provider.getBlockNumber();
const block = await provider.getBlock(blockNumber);
const transactions = await Promise.all(
    block.transactions?.map((txId) => provider.getTransaction(txId)) || []
);
```

Batch Pattern for Multiple Items

For scenarios requiring data retrieval across multiple items, batch methods provide an efficient solution. Instead of making individual requests for each item, this approach retrieves all data in a single network call, significantly reducing latency and improving performance when working with large datasets.

```typescript
// Efficient batch balance check
async function getBalancesBatch(addresses: string[]): Promise<Map<string, bigint>> {
    const balanceMap = await provider.getBalances(addresses, true);
    return new Map(Object.entries(balanceMap));
}

const addresses = ['bc1q...', 'bc1p...', 'bc1r...'];
const balances = await getBalancesBatch(addresses);
```

Timeout Configuration

Provider-Level Timeout

Timeout settings are configured at the provider level, applying a consistent timeout duration to all outgoing requests. This ensures predictable behavior across all operations and prevents requests from hanging indefinitely. The timeout value can be specified during provider initialization.

```typescript
// Provider-wide timeout
const provider = new JSONRpcProvider(
    url,
    network,
    60000  // 60 second timeout
);
```

Operation-Specific Timeout

For scenarios requiring granular control over individual operations, a custom timeout can be applied on a per-request basis using a wrapper function. This allows specific calls to fail faster than the provider's default setting, which is useful for time-sensitive operations. Note that a wrapper timeout longer than the provider-level timeout has no effect: the provider will abort the request first.

```typescript
// Wrap with custom timeout
async function withTimeout<T>(
    promise: Promise<T>,
    timeoutMs: number
): Promise<T> {
    let timeoutId: NodeJS.Timeout;

    const timeoutPromise = new Promise<never>((_, reject) => {
        timeoutId = setTimeout(() => {
            reject(new Error(`Operation timed out after ${timeoutMs}ms`));
        }, timeoutMs);
    });

    try {
        return await Promise.race([promise, timeoutPromise]);
    } finally {
        clearTimeout(timeoutId!);
    }
}

// Usage
const block = await withTimeout(
    provider.getBlock(height),
    5000  // 5 second timeout for this specific call
);
```

Request Rate Limiting

Implementing rate limiting prevents overwhelming nodes with excessive requests and ensures stable, reliable communication with the network. Two common approaches are available:

  • Concurrency limiting.
  • Token bucket rate limiting.

Concurrency Limiting

Concurrency limiting restricts the number of requests that can be executed simultaneously. This approach is useful when processing large batches of operations, ensuring that only a specified number of requests are in flight at any given time. Once a request completes, the next one in the queue is automatically initiated.

```typescript
import pLimit from 'p-limit';

// Limit concurrent requests
const limit = pLimit(10);  // Max 10 concurrent

async function rateLimitedFetch(addresses: string[]) {
    return await Promise.all(
        addresses.map((addr) =>
            limit(() => provider.getBalance(addr))
        )
    );
}
```

Token Bucket Rate Limiter

The token bucket algorithm provides more granular control over request rates by maintaining a pool of tokens that replenish at a fixed rate. Each request consumes one token, and if no tokens are available, the request waits until a token becomes available. This approach smooths out request bursts and maintains a consistent request rate over time, making it ideal for adhering to node rate limits or API quotas.

```typescript
class RateLimiter {
    private tokens: number;
    private lastRefill: number;

    constructor(
        private readonly maxTokens: number = 10,
        private readonly refillRate: number = 1  // tokens per second
    ) {
        this.tokens = maxTokens;
        this.lastRefill = Date.now();
    }

    async acquire(): Promise<void> {
        this.refill();

        if (this.tokens > 0) {
            this.tokens--;
            return;
        }

        // Wait for token
        const waitTime = (1 / this.refillRate) * 1000;
        await new Promise((r) => setTimeout(r, waitTime));
        return this.acquire();
    }

    private refill(): void {
        const now = Date.now();
        const elapsed = (now - this.lastRefill) / 1000;
        const newTokens = elapsed * this.refillRate;

        this.tokens = Math.min(this.maxTokens, this.tokens + newTokens);
        this.lastRefill = now;
    }
}

// Usage
const limiter = new RateLimiter(10, 5);  // 10 tokens, 5/second refill

async function rateLimitedCall<T>(operation: () => Promise<T>): Promise<T> {
    await limiter.acquire();
    return operation();
}
```

Error Handling

Always wrap provider calls in try/catch/finally blocks. This guarantees that exceptions are caught and handled appropriately rather than crashing the application, while the finally block ensures that cleanup operations (releasing resources, resetting state) run whether or not the operation succeeds, preventing resource leaks.

Checking OP_NET Specific Error Types

```typescript
import { OPNetError } from 'opnet';

try {
    await provider.getBlock(999999999);
} catch (error) {
    if (error instanceof OPNetError) {
        console.error('OPNet Error');
        console.error('  Message:', error.message);
        console.error('  Code:', error.code);
    } else if (error instanceof Error) {
        // Network or other error
        console.error('Error:', error.message);
    }
}
```

Common Error Scenarios

| Error Type | Cause | Solution |
| --- | --- | --- |
| Timeout | Request took too long | Increase timeout or retry |
| Connection Refused | Server unavailable | Check URL, retry later |
| Invalid Response | Malformed RPC response | Check request parameters |
| Block Not Found | Non-existent block | Verify block number exists |
| Revert | Contract execution failed | Check simulation first |
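These scenarios can be folded into a small classification helper. This is a sketch: the matched substrings below are illustrative assumptions and should be aligned with the actual error text your node returns.

```typescript
// Map an error to a suggested action, following the scenarios above.
// The matched substrings are assumptions, not the node's exact wording.
type ErrorAction = 'retry' | 'check-params' | 'verify-block' | 'simulate-first' | 'rethrow';

function classifyError(error: Error): ErrorAction {
    const msg = error.message.toLowerCase();
    if (msg.includes('timeout') || msg.includes('timed out')) return 'retry'; // Timeout
    if (msg.includes('econnrefused') || msg.includes('connection refused')) return 'retry'; // Server unavailable
    if (msg.includes('invalid response')) return 'check-params'; // Malformed RPC response
    if (msg.includes('not found')) return 'verify-block'; // Non-existent block
    if (msg.includes('revert')) return 'simulate-first'; // Contract execution failed
    return 'rethrow'; // Unknown: propagate to the caller
}
```

A helper like this keeps retry decisions in one place instead of scattering string checks across every call site.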

Safe Wrapper Methods

Creating safe wrapper methods around provider operations enables comprehensive error handling and a consistent approach to managing exceptions. These wrappers intercept errors and handle expected cases gracefully (for example, returning null for a missing resource) while re-throwing unexpected errors for handling upstream. This pattern improves maintainability and ensures predictable behavior across the application.

```typescript
// Comprehensive error handling
async function safeGetBlock(height: bigint) {
    try {
        return await provider.getBlock(height);
    } catch (error) {
        if (error instanceof Error) {
            if (error.message.includes('not found')) {
                console.warn('Block not found');
                return null;
            }
        }
        throw error;  // Re-throw unknown errors
    }
}
```

Retry Logic

Implementing retry logic helps handle transient failures such as network timeouts or temporary node unavailability. Several strategies are available depending on the application's requirements.

Simple Retry

A simple retry mechanism attempts an operation multiple times with a linear delay between attempts. If the operation fails, the function waits before retrying, with the delay increasing proportionally with each attempt. This approach is suitable for handling temporary failures without overwhelming the node.

```typescript
async function withRetry<T>(
    operation: () => Promise<T>,
    maxRetries: number = 3,
    delayMs: number = 1000
): Promise<T> {
    let lastError: Error | undefined;

    for (let attempt = 1; attempt <= maxRetries; attempt++) {
        try {
            return await operation();
        } catch (error) {
            lastError = error as Error;
            console.warn(`Attempt ${attempt} failed: ${lastError.message}`);

            if (attempt < maxRetries) {
                await new Promise((r) => setTimeout(r, delayMs * attempt));
            }
        }
    }

    throw lastError;
}

// Usage
const block = await withRetry(() => provider.getBlock(height));
```

Exponential Backoff

Exponential backoff provides a more sophisticated retry strategy by doubling the delay between each attempt. This approach reduces load on the node during periods of instability and increases the likelihood of success on subsequent retries. It is particularly effective for handling rate limiting or network congestion.

```typescript
async function withExponentialBackoff<T>(
    operation: () => Promise<T>,
    maxRetries: number = 5,
    baseDelayMs: number = 1000
): Promise<T> {
    let lastError: Error | undefined;

    for (let attempt = 0; attempt < maxRetries; attempt++) {
        try {
            return await operation();
        } catch (error) {
            lastError = error as Error;

            // Skip the delay after the final attempt
            if (attempt < maxRetries - 1) {
                // Exponential backoff: 1s, 2s, 4s, 8s
                const delay = baseDelayMs * Math.pow(2, attempt);
                console.warn(`Attempt ${attempt + 1} failed, waiting ${delay}ms`);

                await new Promise((r) => setTimeout(r, delay));
            }
        }
    }

    throw lastError;
}
```

Circuit Breaker

The circuit breaker pattern prevents repeated requests to a failing node by tracking consecutive failures. When failures exceed a defined threshold, the circuit "opens" and immediately rejects subsequent requests for a specified period. After the reset timeout elapses, the circuit allows requests again, enabling the system to recover gracefully. This pattern protects both the application and the node from cascading failures.

```typescript
class CircuitBreaker {
    private failures = 0;
    private lastFailure = 0;
    private readonly threshold = 5;
    private readonly resetTimeMs = 30000;

    async execute<T>(operation: () => Promise<T>): Promise<T> {
        // Check if circuit is open
        if (this.failures >= this.threshold) {
            if (Date.now() - this.lastFailure < this.resetTimeMs) {
                throw new Error('Circuit breaker is open');
            }
            // Reset after timeout
            this.failures = 0;
        }

        try {
            const result = await operation();
            this.failures = 0;  // Reset on success
            return result;
        } catch (error) {
            this.failures++;
            this.lastFailure = Date.now();
            throw error;
        }
    }
}

// Usage
const breaker = new CircuitBreaker();
const block = await breaker.execute(() => provider.getBlock(height));
```

Complete Example

```typescript
import { JSONRpcProvider, OPNetError } from 'opnet';
import { networks } from '@btc-vision/bitcoin';
import pLimit from 'p-limit';

// Production-ready provider wrapper
class OPNetClient {
    private provider: JSONRpcProvider;
    private limiter = pLimit(20);
    private retryConfig = { maxRetries: 3, baseDelay: 1000 };

    constructor(url: string, network: typeof networks.bitcoin) {
        this.provider = new JSONRpcProvider(
            url,
            network,
            60000,
            {
                keepAliveTimeout: 60_000,
                connections: 256,
                pipelining: 4,
            }
        );
    }

    async getBlock(height: bigint) {
        return this.execute(() => this.provider.getBlock(height));
    }

    async getBalance(address: string) {
        return this.execute(() => this.provider.getBalance(address));
    }

    async getBalances(addresses: string[]) {
        return this.execute(() =>
            this.provider.getBalances(addresses, true)
        );
    }

    async close() {
        await this.provider.close();
    }

    private async execute<T>(operation: () => Promise<T>): Promise<T> {
        return this.limiter(async () => {
            let lastError: Error | undefined;

            for (let i = 0; i < this.retryConfig.maxRetries; i++) {
                try {
                    return await operation();
                } catch (error) {
                    lastError = error as Error;

                    if (error instanceof OPNetError) {
                        // Don't retry client errors
                        if (error.code >= 400 && error.code < 500) {
                            throw error;
                        }
                    }

                    // Exponential backoff; skip the delay after the final attempt
                    if (i < this.retryConfig.maxRetries - 1) {
                        const delay = this.retryConfig.baseDelay * Math.pow(2, i);
                        await new Promise((r) => setTimeout(r, delay));
                    }
                }
            }

            throw lastError;
        });
    }
}

// Usage
const client = new OPNetClient('https://mainnet.opnet.org', networks.bitcoin);

try {
    const block = await client.getBlock(800000n);
    console.log('Block:', block.hash);
} finally {
    await client.close();
}
```