SharedArrayBuffer: High-Performance Parallel Computing in TypeScript
SharedArrayBuffer (SAB) allows multiple JavaScript threads (web workers) to share and manipulate the same memory space. This enables true parallel computation in browsers and Node.js.
```
┌─────────────────┐     ┌─────────────────┐
│   Main Thread   │     │    Worker 1     │
│                 │     │                 │
│  ┌───────────┐  │     │  ┌───────────┐  │
│  │  Memory   │◄─┼─────┼──│  Memory   │  │
│  │  Access   │  │     │  │  Access   │  │
│  └───────────┘  │     │  └───────────┘  │
└─────────────────┘     └─────────────────┘
         │                       │
         └────────┐     ┌────────┘
                  ▼     ▼
           ┌───────────────────┐
           │ SharedArrayBuffer │
           │  (Shared Memory)  │
           └───────────────────┘
```
Why Use SharedArrayBuffer?
- True parallelism: Multiple threads work simultaneously
- Zero-copy data sharing: No serialization/deserialization overhead
- High-performance computing: Ideal for graphics, simulations, AI
Security Considerations
In browsers, SAB requires a secure context and the following cross-origin isolation response headers:
```typescript
// Server headers required for cross-origin isolation
// Cross-Origin-Embedder-Policy: require-corp
// Cross-Origin-Opener-Policy: same-origin
```
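You can verify at runtime that the headers took effect by checking `self.crossOriginIsolated` in the page or worker. As a minimal sketch (not production code; the port, routing, and file handling are placeholder assumptions), a plain Node.js static server that sends both headers might look like this:

```typescript
// Minimal sketch of a server that enables cross-origin isolation.
// Port, routing, and content types are simplified placeholders.
import { createServer } from 'node:http';
import { readFile } from 'node:fs/promises';

createServer(async (req, res) => {
  res.setHeader('Cross-Origin-Opener-Policy', 'same-origin');
  res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp');
  try {
    const path = req.url && req.url !== '/' ? req.url : '/index.html';
    res.end(await readFile('.' + path));
  } catch {
    res.statusCode = 404;
    res.end('Not found');
  }
}).listen(8080);
```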
Basic Setup: Creating a SharedArrayBuffer
```typescript
// Create a SharedArrayBuffer with 1024 bytes
const sharedBuffer = new SharedArrayBuffer(1024);

// Create a typed array view for easier manipulation
const int32View = new Int32Array(sharedBuffer);
```
Atomic Operations: Thread-Safe Data Access
Atomic operations prevent race conditions when multiple threads access the same memory.
```typescript
// Main thread
const worker = new Worker('worker.ts');

// Two Int32 slots (8 bytes), so indices 0 and 1 are both valid
const sharedBuffer = new SharedArrayBuffer(8);
const array = new Int32Array(sharedBuffer);

// Hand the buffer to the worker - it is shared, not copied
worker.postMessage(sharedBuffer);

// Worker thread
self.onmessage = function (e) {
  const sharedArray = new Int32Array(e.data);

  // Atomic add operation - thread safe!
  Atomics.add(sharedArray, 0, 1);

  // Atomic store (write)
  Atomics.store(sharedArray, 1, 42);

  // Atomic load (read)
  const value = Atomics.load(sharedArray, 0);
};
```
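Another primitive worth knowing is `Atomics.compareExchange`, which only writes if the slot still holds the value you expect. That is enough to build a simple one-shot claim flag; a minimal sketch (the slot index is an arbitrary choice for illustration):

```typescript
// Whichever thread flips slot 0 from 0 to 1 first "wins" the claim.
const FLAG_INDEX = 0; // arbitrary slot used as the flag

function tryClaim(flags: Int32Array): boolean {
  // compareExchange returns the value that was in the slot *before* the call
  const previous = Atomics.compareExchange(flags, FLAG_INDEX, 0, 1);
  return previous === 0; // true only for the single thread that performed the swap
}
```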
Complete Example: Parallel Counter
Let's create a practical example with multiple workers incrementing a shared counter.
main.ts:
```typescript
// Create shared memory
const sharedBuffer = new SharedArrayBuffer(4);
const counter = new Int32Array(sharedBuffer);

// Create workers (in the browser, point this at the compiled worker
// script that your bundler or build step emits)
const worker1 = new Worker('worker.ts');
const worker2 = new Worker('worker.ts');

// Send the shared buffer to the workers (shared, not copied)
worker1.postMessage(sharedBuffer);
worker2.postMessage(sharedBuffer);

// Wait for both workers to complete
Promise.all([
  new Promise(resolve => worker1.onmessage = resolve),
  new Promise(resolve => worker2.onmessage = resolve)
]).then(() => {
  console.log('Final counter value:', Atomics.load(counter, 0));
});
```
worker.ts:
```typescript
self.onmessage = function (e: MessageEvent<SharedArrayBuffer>) {
  const counter = new Int32Array(e.data);

  // Each worker increments 1000 times
  for (let i = 0; i < 1000; i++) {
    Atomics.add(counter, 0, 1);
  }

  self.postMessage('done');
};
```
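The same parallel counter translates almost directly to Node.js, where `worker_threads` replaces the Web Worker API and the shared buffer can be handed over through `workerData`. A rough sketch, assuming the worker is compiled to `counter-worker.js` (the filename is just an example):

main (Node.js):

```typescript
import { Worker } from 'node:worker_threads';

const sharedBuffer = new SharedArrayBuffer(4);
const counter = new Int32Array(sharedBuffer);

// Spawn two workers; workerData shares the buffer, it does not copy it
const workers = [1, 2].map(
  () => new Worker('./counter-worker.js', { workerData: sharedBuffer })
);

Promise.all(
  workers.map(w => new Promise(resolve => w.on('exit', resolve)))
).then(() => {
  console.log('Final counter value:', Atomics.load(counter, 0));
});
```

counter-worker.ts (compiled to counter-worker.js):

```typescript
import { workerData } from 'node:worker_threads';

const counter = new Int32Array(workerData as SharedArrayBuffer);
for (let i = 0; i < 1000; i++) {
  Atomics.add(counter, 0, 1);
}
```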
Performance Optimization Techniques
1. Memory Alignment
```typescript
// Round the buffer size up to a whole number of cache lines so that
// per-thread regions can be laid out on cache-line boundaries
const ALIGNMENT = 64; // Cache line size (typically 64 bytes)

function createAlignedBuffer(size: number): SharedArrayBuffer {
  const alignedSize = Math.ceil(size / ALIGNMENT) * ALIGNMENT;
  return new SharedArrayBuffer(alignedSize);
}

// Usage
const alignedBuffer = createAlignedBuffer(1024);
```
2. False Sharing Prevention
```
BEFORE (Problematic):

┌─────────────────────────────────────────────────┐
│ Thread 1: Counter A    │ Thread 2: Counter B    │
│ (Same cache line)      │ (Same cache line)      │
└─────────────────────────────────────────────────┘

AFTER (Optimized):

┌─────────────────────────┐   ┌─────────────────────────┐
│ Thread 1: Counter A     │   │ Thread 2: Counter B     │
│ (Separate cache lines)  │   │ (Separate cache lines)  │
└─────────────────────────┘   └─────────────────────────┘
```
```typescript
// Prevent false sharing by padding data
const CACHE_LINE_SIZE = 64;

class PaddedCounter {
  private buffer: SharedArrayBuffer;
  private view: Int32Array;

  constructor() {
    // Allocate extra space for padding
    this.buffer = new SharedArrayBuffer(CACHE_LINE_SIZE);
    this.view = new Int32Array(this.buffer, 0, 1);
  }

  increment(): void {
    Atomics.add(this.view, 0, 1);
  }

  get value(): number {
    return Atomics.load(this.view, 0);
  }
}
```
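Note that `PaddedCounter` allocates a separate buffer per counter, so it avoids false sharing trivially. The padding really pays off when several per-thread counters must live in one SharedArrayBuffer (so a single buffer can be posted to all workers). A sketch of that layout, where the counter count is an arbitrary example value:

```typescript
const CACHE_LINE_BYTES = 64;
const STRIDE = CACHE_LINE_BYTES / Int32Array.BYTES_PER_ELEMENT; // 16 Int32 slots per line

const NUM_COUNTERS = 4; // e.g. one counter per worker
const countersBuffer = new SharedArrayBuffer(NUM_COUNTERS * CACHE_LINE_BYTES);
const counters = new Int32Array(countersBuffer);

// Each counter starts on its own cache line, so workers never contend on a line
function incrementCounter(id: number): void {
  Atomics.add(counters, id * STRIDE, 1);
}

function totalCount(): number {
  let total = 0;
  for (let id = 0; id < NUM_COUNTERS; id++) {
    total += Atomics.load(counters, id * STRIDE);
  }
  return total;
}
```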
3. Bulk Operations Pattern
```typescript
// Instead of many small atomic operations, use batching
class BatchProcessor {
  private sharedBuffer: SharedArrayBuffer;
  private dataView: Int32Array;
  private localBuffer: number[] = [];
  private readonly BATCH_SIZE = 100;

  constructor(bufferSize: number) {
    this.sharedBuffer = new SharedArrayBuffer(bufferSize);
    this.dataView = new Int32Array(this.sharedBuffer);
  }

  addToBatch(value: number): void {
    this.localBuffer.push(value);
    if (this.localBuffer.length >= this.BATCH_SIZE) {
      this.flushBatch();
    }
  }

  private flushBatch(): void {
    // Process batch locally first, then update shared memory once
    const sum = this.localBuffer.reduce((a, b) => a + b, 0);

    // Single atomic operation instead of 100
    Atomics.add(this.dataView, 0, sum);
    this.localBuffer = [];
  }
}
```
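A quick usage sketch of the batching idea (the values and sizes are arbitrary; a real implementation would also expose a public flush for partially filled batches):

```typescript
const processor = new BatchProcessor(64); // 64-byte shared buffer, slot 0 holds the total

// 1,000 additions, but only 10 atomic operations ever touch shared memory
for (let i = 0; i < 1000; i++) {
  processor.addToBatch(1);
}
```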
Advanced Pattern: Producer-Consumer
```typescript
// Producer-Consumer pattern with circular buffer
class CircularBuffer {
  private buffer: SharedArrayBuffer;
  private data: Int32Array;
  private meta: Int32Array;

  constructor(size: number) {
    // Buffer layout: [readIndex, writeIndex, ...data]
    this.buffer = new SharedArrayBuffer(8 + size * 4);
    this.meta = new Int32Array(this.buffer, 0, 2);
    this.data = new Int32Array(this.buffer, 8, size);
  }

  produce(value: number): boolean {
    const writeIdx = Atomics.load(this.meta, 1);
    const readIdx = Atomics.load(this.meta, 0);

    if ((writeIdx + 1) % this.data.length === readIdx) {
      return false; // Buffer full
    }

    Atomics.store(this.data, writeIdx, value);
    Atomics.store(this.meta, 1, (writeIdx + 1) % this.data.length);
    return true;
  }

  consume(): number | null {
    const readIdx = Atomics.load(this.meta, 0);
    const writeIdx = Atomics.load(this.meta, 1);

    if (readIdx === writeIdx) {
      return null; // Buffer empty
    }

    const value = Atomics.load(this.data, readIdx);
    Atomics.store(this.meta, 0, (readIdx + 1) % this.data.length);
    return value;
  }
}
```
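A quick single-thread demonstration of the full/empty semantics (in a real setup the producer and consumer would be separate workers operating on views over the same underlying SharedArrayBuffer):

```typescript
const ring = new CircularBuffer(4); // holds at most 3 values: one slot always stays empty

ring.produce(10); // true
ring.produce(20); // true
ring.produce(30); // true
ring.produce(40); // false - buffer full

ring.consume(); // 10
ring.consume(); // 20
ring.consume(); // 30
ring.consume(); // null - buffer empty
```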
Performance Measurement
```typescript
function measurePerformance(): void {
  const iterations = 1000000;
  const buffer = new SharedArrayBuffer(4);
  const array = new Int32Array(buffer);

  // Measure atomic operations
  console.time('Atomic operations');
  for (let i = 0; i < iterations; i++) {
    Atomics.add(array, 0, 1);
  }
  console.timeEnd('Atomic operations');

  // Compare with regular operations (in a single thread)
  let regularCounter = 0;
  console.time('Regular operations');
  for (let i = 0; i < iterations; i++) {
    regularCounter++;
  }
  console.timeEnd('Regular operations');
}
```
Common Pitfalls and Solutions
- Race Conditions: Always use atomic operations for shared memory access
- False Sharing: Pad data to cache line boundaries
- Memory Overhead: Reuse buffers instead of creating new ones
- Deadlocks: Use timeouts in atomic wait operations (see the sketch below)
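For that last point: `Atomics.wait` takes a timeout in milliseconds and reports how it returned, so a blocked worker can always give up and recover. A minimal sketch (the slot index and timeout are arbitrary; `Atomics.wait` only works on an `Int32Array` over a SharedArrayBuffer, and browsers forbid it on the main thread):

```typescript
// Waiting side (inside a worker): sleep until slot 0 is no longer 0, for at most 500 ms
declare const signal: Int32Array; // view over a SharedArrayBuffer shared with another thread

const result = Atomics.wait(signal, 0, 0, 500); // 'ok' | 'not-equal' | 'timed-out'
if (result === 'timed-out') {
  // Recover instead of dead-locking: re-check state, retry, or report an error
}

// Signalling side: publish the new value, then wake up to one waiter
Atomics.store(signal, 0, 1);
Atomics.notify(signal, 0, 1);
```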
Resources and Further Reading
- MDN Web Docs: SharedArrayBuffer
- ECMAScript Specification: Atomics Object
- Web Workers API: Worker
Browser Compatibility Table
| Browser | Support | Notes |
|---------|---------|-------|
| Chrome  | 68+     | Requires cross-origin isolation |
| Firefox | 79+     | Requires cross-origin isolation |
| Safari  | 15.4+   | Limited support |
| Node.js | 8.10+   | Full support |
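Given these differences, it is worth feature-detecting at runtime before committing to a shared-memory code path. A small check along these lines (the fallback strategy is up to your application):

```typescript
function sharedMemoryAvailable(): boolean {
  // SharedArrayBuffer must exist; in browsers the page must also be cross-origin isolated.
  // Node.js has no crossOriginIsolated global, so "undefined" counts as "not required".
  const hasSAB = typeof SharedArrayBuffer !== 'undefined';
  const isolated = typeof crossOriginIsolated === 'undefined' || crossOriginIsolated === true;
  return hasSAB && isolated;
}

if (!sharedMemoryAvailable()) {
  // Fall back to postMessage-based transfer (data is copied instead of shared)
}
```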
Conclusion
SharedArrayBuffer enables true parallel processing in TypeScript applications. Remember:
- Always use atomic operations for thread safety
- Optimize memory layout to prevent false sharing
- Batch operations to minimize atomic call overhead
- Follow security requirements for cross-origin isolation
With these techniques, you can build high-performance applications that leverage modern multi-core processors effectively.
```typescript
// Final performance tip: reuse typed array views
const reusableView = new Int32Array(sharedBuffer);
// Reuse this view instead of creating new ones
```
Happy coding! 🚀
I hope this post was helpful to you.
Leave a reaction if you liked this post!