All figures are derived from architecture analysis and hardware baselines; no target TPS is assumed. They are execution-only upper bounds on a 32-core AMD EPYC with 128 GB RAM and NVMe SSD.
This analysis surveys hardware configurations used by peer L1 chains for their published peak TPS benchmarks, models ACE Chain on equivalent hardware (32-core AMD EPYC, 128 GB RAM, NVMe SSD — matching Aptos Block-STM benchmark hardware), and computes per-scenario throughput ceilings stage by stage through the transaction pipeline.
Scope note: Unless stated otherwise, figures are execution-path-only upper-bound estimates. They exclude consensus, network propagation, and durable persistence. Real mainnet sustained TPS will be materially lower.
Directional sustained range: 10,000–30,000 TPS (consistent with industry patterns where mainnet sustained throughput is 3–10% of headline peak). Multi-shard projections: 4 shards ~17K sustained, 8 shards ~31K sustained.
Derived from architecture analysis of the ACE runtime codebase:
| Stage | Per-tx Cost | Parallel? | Source |
|---|---|---|---|
| Attestation check (rayon batched) | ~2–5 μs | Yes | pipeline/attest.rs |
| Write-set extraction + scheduling | ~1–3 μs | No (sequential) | scheduler.rs |
| Execution — Native transfer | ~60–90 μs | Yes (per batch) | dispatcher.rs |
| Execution — EVM simple call | ~200–300 μs | No (WriteSet::Global) | evm/engine.rs |
| Execution — SVM transfer | ~50–100 μs | Yes | svm/engine.rs |
| Execution — BVM transfer | ~50 μs | Yes | bvm/engine.rs |
| State write (BTreeMap in-memory) | ~5–10 μs | Included in exec | state_tree.rs |
| State write (RocksDB persistent) | ~10–50 μs | Included in exec | rocks_state_db.rs |
| Merkle root computation | ~10–20 ms/block | No | state_tree.rs |
| ZK proof generation (GPU) | ~30 ms/tx, 1024 GPU threads | Async/pipelined | crypto/proof.rs |
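The per-stage costs above combine into a simple execution-stage ceiling. The sketch below assumes the document's stated parameters (375 ms execution budget within a 400 ms slot, 32 cores, 85% effective parallelism) and an effective ~75 μs per native transfer; the function names are illustrative, not from the codebase.

```python
# Back-of-envelope ceiling for the parallel execution stage.
# Assumptions (from the stated parameters, not measured): 375 ms execution
# budget per 400 ms slot, 32 cores at 85% effective parallelism, ~75 us
# effective cost per native transfer.

def tx_per_slot(per_tx_us, budget_ms=375.0, cores=32, efficiency=0.85):
    """Transactions executable within one slot's budget across all cores."""
    core_seconds = budget_ms / 1000.0 * cores * efficiency
    return round(core_seconds * 1e6 / per_tx_us)

def tps(per_tx_us, slot_ms=400.0, **kw):
    """Slot ceiling converted to TPS over the full 400 ms slot."""
    return tx_per_slot(per_tx_us, **kw) * 1000.0 / slot_ms

print(tx_per_slot(75))  # 136000 tx/slot (the pure-native upper bound)
print(tps(75))          # 340000.0 TPS
```

This reproduces the top of the pure-native range; the lower end of each range reflects additional per-transaction overheads (state clone, scheduling) not modelled here.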
Per-scenario throughput ceilings (400 ms slot):

| Scenario | Tx/Slot | TPS (MVP ceiling) | TPS (with optimisations) | Bottleneck |
|---|---|---|---|---|
| Pure native (in-memory) | 68K–136K | 170,000–340,000 | ~320,000 | State clone / CPU |
| Mixed 60/20/20 (in-memory) | 50K–118K | 125,000–295,000 | ~300,000 | EVM serialisation |
| Persistent (RocksDB) | 30K–37K | 75,000–93,000 | ~100,000 | Storage I/O |
| EVM-heavy (100%) | 1K–1.5K | 2,500–3,750 | ~5,000 | WriteSet::Global |
With varying EVM transaction share (375 ms execution budget, 32 cores, 85% effective parallelism):
| EVM Share | EVM Tx Count | Parallel Tx | Total Tx/Slot | TPS |
|---|---|---|---|---|
| 0% | 0 | 136,000 | 136,000 | 340,000 |
| 10% | 100 | 131,500 | 131,600 | 329,000 |
| 20% | 200 | 117,800 | 118,000 | 295,000 |
| 50% | 500 | 90,700 | 91,200 | 228,000 |
| 100% | 1,500 | 0 | 1,500 | 3,750 |
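The table's endpoints can be reproduced with a serial/parallel budget split: EVM transactions (WriteSet::Global) consume the execution budget serially at an assumed ~250 μs each (midpoint of the 200–300 μs range), and the remaining budget runs parallel transactions at ~75 μs each. This is a sketch under those assumptions; it matches the 0% and 100% rows exactly and the 20% and 50% rows to within ~1%.

```python
# Serial/parallel budget split, assuming ~250 us per serial EVM tx and
# ~75 us per parallel tx on 32 cores at 85% effective parallelism within
# the 375 ms execution budget.

def blended_tx_per_slot(evm_tx, evm_us=250.0, par_us=75.0,
                        budget_ms=375.0, cores=32, eff=0.85):
    serial_ms = evm_tx * evm_us / 1000.0        # EVM runs under WriteSet::Global
    par_ms = max(budget_ms - serial_ms, 0.0)    # remainder runs in parallel
    parallel_tx = round(par_ms / 1000.0 * cores * eff * 1e6 / par_us)
    return evm_tx + parallel_tx

print(blended_tx_per_slot(0))     # 136000 -> 340,000 TPS at 0% EVM
print(blended_tx_per_slot(1500))  # 1500   -> 3,750 TPS at 100% EVM
```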
A single STARK/FRI proof replaces per-tx signature verification. This eliminates the #1 industry bottleneck entirely. On Solana/Firedancer, each SigVerify tile handles only 20–40K TPS; ACE needs zero SigVerify capacity.
Solana: each SigVerify tile handles 20–40K TPS. Ed25519 verification takes ~76 μs/sig. With PQC (ML-DSA-44): ~330 μs/sig — a 4.3× slowdown that directly reduces throughput.
One recursive STARK/FRI proof covers the entire block. Verification cost is constant regardless of block size or signature algorithm. No trusted setup required. Adding PQC has zero impact on verification throughput.
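The difference is easy to quantify: per-transaction signature verification is O(n) in block size, while a single block proof check is O(1). A sketch using the document's ~76 μs/sig and ~0.5 ms proof-check figures (function names are illustrative):

```python
# Block verification cost: per-tx signature checks grow linearly with
# block size; a single recursive proof check stays constant.

def per_tx_verify_ms(n_tx, sig_us=76.0):
    return n_tx * sig_us / 1000.0   # O(n): one Ed25519 check per tx

def proof_verify_ms(n_tx):
    return 0.5                      # O(1): one proof check per block

n = 136_000  # a full native-transfer slot
print(per_tx_verify_ms(n))  # 10336.0 ms of signature work for the block
print(proof_verify_ms(n))   # 0.5 ms, independent of n
```

The per-transaction path must be parallelised across many cores or tiles to keep up; the proof check needs none of that capacity.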
| Dimension | ACE Chain | Solana |
|---|---|---|
| Consensus model | BFT + PoH + ZK proof | Tower BFT + PoH |
| Slot duration | 400 ms | 400 ms |
| Soft finality | ~400 ms (⅔ stake-weighted votes) | ~400 ms (optimistic confirmation) |
| Hard finality | ~600 ms (ZK proof, target) | ~12 s (31 confirmations) |
| Block verification | O(1) (single ZK proof) | O(n) (per-tx sig verification) |
Hard finality is ~20× faster than Solana's by design. This follows from the O(1) verification property: however many transactions the block contains, verification cost is fixed at a single constant-size proof check (~0.5 ms).
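As a sanity check on the ~20× figure, assuming Solana's 31 confirmations at 400 ms slots against the ~600 ms ZK-proof target:

```python
# Hard-finality comparison from the stated parameters.
confirmations, slot_s = 31, 0.4
solana_hard_s = confirmations * slot_s   # ~12.4 s
ace_hard_s = 0.6                         # ACE target via ZK proof
print(round(solana_hard_s / ace_hard_s)) # ~21, i.e. roughly 20x
```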
| Metric | Solana | ACE (single shard) | ACE (4 shards) |
|---|---|---|---|
| Network annual cost | ~$90 M | ~$1.5 M | ~$3 M |
| Annual transactions | ~31.5 B | ~157.7 B | ~536 B |
| Cost per 1,000 tx | ~$2.86 | ~$0.0095 | ~$0.0056 |
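The dollar figures follow directly from the stated annual cost and transaction volume; note that they work out to cost per thousand transactions:

```python
# Reproducing the cost column from annual network cost and annual tx volume.

def cost_per_1000_tx(annual_cost_usd, annual_tx):
    return annual_cost_usd / annual_tx * 1000

print(round(cost_per_1000_tx(90e6, 31.5e9), 2))    # 2.86   (Solana)
print(round(cost_per_1000_tx(1.5e6, 157.7e9), 4))  # 0.0095 (ACE, 1 shard)
print(round(cost_per_1000_tx(3e6, 536e9), 4))      # 0.0056 (ACE, 4 shards)
```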
Solana's largest hidden expense: $67.5 M/year in vote transaction fees (a structural cost borne by the entire network, fluctuating with SOL price). ACE's BFT votes produce no on-chain transactions — vote cost is zero.
All figures modelled on the same hardware class (32-core AMD EPYC, 128 GB RAM):
| Chain | Claimed Peak | Mainnet Sustained | ACE Modelled Peak | ACE Advantage |
|---|---|---|---|---|
| Solana | 65K (exec-only) | ~4K | 170K–340K | O(1) auth eliminates SigVerify bottleneck |
| Aptos | 170K (exec-only) | ~30K target | 170K–340K | Comparable execution; no Block-STM overhead for non-conflicting tx |
| Sui | 297K (PTB=100) | ~11K (PTB=1) | 170K–340K | Apples-to-apples PTB=1: Sui ~11K vs ACE 170K+ |
| Monad | 10K (claimed sustained) | — | 170K–340K | Consumer vs server hardware; architecture advantage on auth |
Critical caveat: ACE Chain's numbers are execution-only upper-bound estimates on equivalent hardware, extrapolated from MVP architecture — not measured from the current implementation. Headline peak TPS numbers across the industry are execution-only, single-machine tests; mainnet sustained throughput is typically 3–10% of the headline peak.
A single STARK/FRI proof replaces per-tx signature verification. Eliminates the #1 industry bottleneck entirely. Zero SigVerify capacity needed. No trusted setup.
Solana requires a serial SHA-256 chain consuming one full core. ACE has no such constraint — all cores are available for execution.
GPU-accelerated proving runs asynchronously while the next block is built. Proving never sits on the critical path of block production.
~600 ms via ZK proof (target) vs Solana's ~12 s (31 confirmations). A finality-quality difference — cryptographic proof vs probabilistic time window.
HKDF context isolation enables parallel shards with linear TPS scaling. 4 shards ~17K sustained, 8 shards ~31K sustained. No cross-shard state sync.
PQC verification decoupled from execution. ML-DSA-44's 2420-byte signatures don't affect VM throughput. Traditional chains face 85–90% TPS drop with PQC.
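The per-signature figures quantify the penalty for chains that keep verification on the critical path: verification cost alone implies a ~4.3× slowdown (throughput falls to ~23% of baseline); the 85–90% drop cited above presumably also reflects the bandwidth cost of larger signatures. A quick check:

```python
# Per-signature verification cost (document's stated figures):
# Ed25519 ~76 us vs ML-DSA-44 ~330 us. For chains verifying signatures
# on the critical path, throughput scales inversely with this cost.
ed25519_us, ml_dsa_44_us = 76.0, 330.0
slowdown = ml_dsa_44_us / ed25519_us
print(round(slowdown, 1))      # 4.3x slower per signature
print(round(1 / slowdown, 2))  # throughput falls to ~0.23x of baseline
```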
| Optimisation | Expected Impact | Difficulty |
|---|---|---|
| Copy-on-write state snapshots | 50–80% reduction in parallel batch overhead | Medium |
| Incremental Merkle trees | 50–70% faster state root computation | Medium |
| EVM write-set static analysis | Remove WriteSet::Global for simple EVM transfers | High |
| Recursive proof aggregation | Remove MAX_PROOF_BUNDLE_ENTRIES ceiling | High |
| Pipelined block execution | Overlap execution with previous block's proving | Medium |
| Custom state DB (replacing RocksDB) | 2–5× I/O throughput | Very High |