
Performance

SheetKit delivers native Rust performance to both Rust and TypeScript applications. This page presents benchmark results and explains the optimizations behind them.

How Fast is SheetKit?

Compared with ExcelJS and SheetJS (Node.js)

In the existing Node.js benchmark suite (benchmarks/node/RESULTS.md), SheetKit is consistently faster than both ExcelJS and SheetJS in representative read/write workloads:

| Scenario | SheetKit | ExcelJS | SheetJS |
| --- | --- | --- | --- |
| Read Large Data (50k rows × 20 cols) | 680ms | 3.88s | 2.06s |
| Write 50k rows × 20 cols | 657ms | 3.49s | 1.59s |
| Buffer round-trip (10k rows) | 167ms | 674ms | 211ms |
| Random-access read (1k cells from 50k-row file) | 550ms | 3.97s | 1.74s |

Rust vs Node.js Overhead

SheetKit's Node.js bindings stay close to native Rust performance, and in several write-heavy paths they are faster:

| Operation | Overhead |
| --- | --- |
| Read operations (sync) | ~1.10x (~10% slower, typical) |
| Read operations (async) | ~1.10x (~10% slower, typical) |
| Write operations (batch) | ~0.90x (~10% faster, typical) |
| Streaming write | 1.21x (21% slower) |
| Buffer round-trip | 1.01x (near parity) |

For most real-world workloads, Node.js performance remains close to native Rust.

Read Performance Comparison

| Scenario | Rust | Node.js | Overhead |
| --- | --- | --- | --- |
| Large Data (50k rows × 20 cols) | 616ms | 680ms | +10% |
| Heavy Styles (5k rows, formatted) | 33ms | 37ms | +12% |
| Multi-Sheet (10 sheets × 5k rows) | 360ms | 781ms | +117% |
| Formulas (10k rows) | 40ms | 52ms | +30% |
| Strings (20k rows, text-heavy) | 140ms | 126ms | -10% (faster) |

Write Performance Comparison

| Scenario | Rust | Node.js | Overhead |
| --- | --- | --- | --- |
| 50k rows × 20 cols | 1.03s | 657ms | -36% (faster) |
| 5k styled rows | 39ms | 48ms | +23% |
| 10k rows with formulas | 35ms | 39ms | +11% |
| 20k text-heavy rows | 145ms | 123ms | -15% (faster) |

Note: In some write scenarios, Node.js outperforms Rust due to V8's efficient string handling during test-data construction.

Scaling Performance

Read times scale roughly linearly with row count, and the Node.js overhead shrinks as files grow:

| Rows | Rust | Node.js | Overhead |
| --- | --- | --- | --- |
| 1k | 6ms | 7ms | +17% |
| 10k | 62ms | 68ms | +10% |
| 100k | 659ms | 714ms | +8% |

Write performance scales roughly linearly with row count:

| Rows | Rust | Node.js | Overhead |
| --- | --- | --- | --- |
| 1k | 7ms | 7ms | 0% |
| 10k | 68ms | 66ms | -3% (faster) |
| 50k | 456ms | 332ms | -27% (faster) |
| 100k | 735ms | 665ms | -10% (faster) |

Raw Buffer Transfer and Memory Behavior

SheetKit reduces Node.js-Rust boundary cost by transferring sheet data as raw buffers instead of per-cell JavaScript objects. This transfer model keeps the FFI boundary coarse-grained, reduces object marshalling overhead, and lowers GC pressure in read-heavy paths.

Key Optimizations

1. Buffer-Based FFI Transfer

Instead of creating individual JavaScript objects for each cell, SheetKit serializes entire sheets into compact binary buffers that cross the FFI boundary in a single operation.

Before: per-cell object transfer across the FFI boundary
After: a single raw-buffer transfer for the whole sheet payload

This optimization:

  • Reduces read-side FFI overhead
  • Reduces allocation and GC pressure from per-cell object creation
  • Maintains full type safety
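
As a rough illustration (the layout, type names, and numeric-only cells below are assumptions made for this sketch, not SheetKit's actual buffer format), decoding one dense buffer per sheet rather than one object per cell could look like this:

```typescript
// Illustrative sketch only: a hypothetical dense layout in which a sheet
// arrives as one Float64Array plus its dimensions, instead of as one
// JavaScript object per cell. This is not SheetKit's real buffer format.
interface DenseSheetBuffer {
  rows: number;
  cols: number;
  values: Float64Array; // row-major: values[r * cols + c]
}

// A single coarse-grained decode pass replaces per-cell object marshalling.
function decodeDenseSheet(buf: DenseSheetBuffer): number[][] {
  const out: number[][] = new Array(buf.rows);
  for (let r = 0; r < buf.rows; r++) {
    const row = new Array<number>(buf.cols);
    for (let c = 0; c < buf.cols; c++) {
      row[c] = buf.values[r * buf.cols + c];
    }
    out[r] = row;
  }
  return out;
}

// Example: a 2 × 3 sheet crosses the FFI boundary as one typed array.
const sheet: DenseSheetBuffer = {
  rows: 2,
  cols: 3,
  values: Float64Array.from([1, 2, 3, 4, 5, 6]),
};
console.log(decodeDenseSheet(sheet)); // [ [ 1, 2, 3 ], [ 4, 5, 6 ] ]
```

Because the typed array is a single allocation, the garbage collector tracks one buffer per sheet instead of one object per cell, which is where the reduction in GC pressure comes from.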

2. Internal Data Structure Optimizations

SheetKit's internal representation minimizes allocations:

  • CompactCellRef: Cell references stored as inline [u8;10] arrays instead of heap String
  • CellTypeTag: Cell types stored as 1-byte enums instead of Option<String>
  • Sparse-to-dense conversion: Optimized row iteration avoids intermediate allocations

These optimizations benefit both Rust and Node.js performance.

3. Density-Based Encoding

The buffer encoder automatically selects between dense and sparse layouts based on cell density:

  • Dense encoding for files with ≥30% cell occupancy
  • Sparse encoding for files with <30% cell occupancy

This keeps buffer sizes proportional to the data actually present, whether a sheet is densely or sparsely populated.
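
A minimal sketch of that selection rule, assuming only the 30% threshold stated above (the function and its inputs are illustrative, not SheetKit's internal API):

```typescript
// Illustrative sketch of the density heuristic described above; the actual
// encoder and its inputs are internal to SheetKit.
type Layout = "dense" | "sparse";

const DENSE_THRESHOLD = 0.3; // ≥30% occupancy → dense layout

function chooseLayout(nonEmptyCells: number, rows: number, cols: number): Layout {
  const totalCells = rows * cols;
  if (totalCells === 0) return "sparse";
  return nonEmptyCells / totalCells >= DENSE_THRESHOLD ? "dense" : "sparse";
}

// A 1,000-row × 20-column sheet with 2,000 filled cells is 10% occupied,
// so a sparse layout avoids encoding the 18,000 empty slots.
console.log(chooseLayout(2_000, 1_000, 20)); // "sparse"
```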

Benchmark Environment

All benchmarks were performed on:

| Component | Details |
| --- | --- |
| CPU | Apple M4 Pro |
| RAM | 24 GB |
| OS | macOS arm64 (Apple Silicon) |
| Node.js | v25.3.0 |
| Rust | rustc 1.93.0 |

Results are median values from 5 runs with 1 warmup run per scenario.
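
That measurement scheme can be expressed as a small harness along the following lines; this is an illustrative sketch rather than the repository's actual benchmark code:

```typescript
// Illustrative harness: 1 warmup run, then the median of 5 timed runs,
// matching the methodology above (not the repository's benchmark code).
async function medianTimeMs(
  scenario: () => Promise<void>,
  runs = 5,
  warmup = 1
): Promise<number> {
  for (let i = 0; i < warmup; i++) await scenario(); // warm caches and JIT

  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await scenario();
    samples.push(performance.now() - start);
  }

  samples.sort((a, b) => a - b);
  return samples[Math.floor(samples.length / 2)]; // median, in milliseconds
}
```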

Benchmark Scope and Data

The numbers on this page are from SheetKit's own Rust and Node.js benchmark suites in this repository. Results vary based on data shape, feature usage, and runtime environment.

For benchmark methodology and raw data, see benchmarks/COMPARISON.md in the repository.

Performance Tips

For Read-Heavy Workloads

Use OpenOptions to load only what you need:

```typescript
const wb = await Workbook.open("huge.xlsx", {
  sheetRows: 1000,      // Only read first 1000 rows per sheet
  sheets: ["Sheet1"],   // Only parse Sheet1
  maxUnzipSize: 100_000_000  // Limit uncompressed size
});
```

For Write-Heavy Workloads

Use StreamWriter for sequential row writes:

```typescript
const wb = new Workbook();
const sw = wb.newStreamWriter("LargeSheet");

for (let i = 1; i <= 100_000; i++) {
  sw.writeRow(i, [`Item_${i}`, i * 1.5]);
}

wb.applyStreamWriter(sw);
await wb.save("output.xlsx");
```

For Large Files

Combine OpenOptions with StreamWriter:

```typescript
// Read only metadata
const wb = await Workbook.open("input.xlsx", {
  sheetRows: 0  // Don't parse any rows
});

// Process with streaming
const sw = wb.newStreamWriter("ProcessedData");
// ... process data ...
wb.applyStreamWriter(sw);
```
