Native, EVM, SVM, BVM, and TVM engines share a unified identity and state tree — no wrapping, no bridging, no fragmentation.
The n-VM dispatcher routes transactions to the correct virtual machine engine based on opcode ranges. Each VM operates on a shared state tree, so cross-VM interactions are native operations — not bridge calls.
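The routing step can be pictured as a lookup from opcode to engine. The sketch below is purely illustrative: the actual dispatcher, opcode range boundaries, and registration mechanism are not specified in this document, so the values here are assumptions.

```python
# Hypothetical sketch of n-VM dispatch by opcode range.
# The range boundaries below are made up for illustration; the real
# dispatcher's ranges and registration API are not documented here.
OPCODE_RANGES = {
    "EVM": range(0x00, 0x40),
    "SVM": range(0x40, 0x80),
    "BVM": range(0x80, 0xC0),
    "TVM": range(0xC0, 0x100),
}

def dispatch(opcode: int) -> str:
    """Route a transaction to the engine that owns its opcode."""
    for vm, rng in OPCODE_RANGES.items():
        if opcode in rng:
            return vm
    raise ValueError(f"no VM registered for opcode {opcode:#x}")
```

Because every engine reads and writes the same state tree, the dispatcher only has to pick an execution engine; it never has to marshal state across a bridge boundary.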
All virtual machines share a single state tree. This is the key architectural decision that eliminates the need for wrapped tokens, bridge protocols, and cross-chain messaging. When you transfer assets between VMs, it is a balance change on a unified token ledger — not a lock-mint-burn cycle.
ERC-20 tokens on ACE's EVM are the same tokens accessible from the SVM. No WETH, no wrapped USDC. One token, one ledger, multiple VMs.
Cross-VM transfers are native balance operations. No bridge contracts, no relayers, no multi-sig custodians, no bridge exploits.
One idcom (identity commitment) maps to all VM address formats simultaneously. Your EVM address and your SVM pubkey represent the same account.
Transactions that span multiple VMs execute atomically within a single block. Either all succeed or all revert.
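The two properties above (native balance moves plus all-or-nothing execution) can be sketched as a single ledger with snapshot-and-revert semantics. This is a minimal illustration, not ACE Chain's implementation; the class and method names are hypothetical.

```python
# Minimal sketch of a unified token ledger shared by all VMs.
# A cross-VM transfer is a plain balance move on one ledger, and a
# multi-VM transaction either fully applies or fully reverts.
class UnifiedLedger:
    def __init__(self):
        self.balances = {}  # (account, token) -> amount

    def transfer(self, token, src, dst, amount):
        key_src, key_dst = (src, token), (dst, token)
        if self.balances.get(key_src, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[key_src] -= amount
        self.balances[key_dst] = self.balances.get(key_dst, 0) + amount

    def apply_atomic(self, ops):
        """Apply a list of transfers atomically: all succeed or all revert."""
        snapshot = dict(self.balances)
        try:
            for op in ops:
                self.transfer(*op)
        except Exception:
            self.balances = snapshot  # any failure rolls the batch back
            raise
```

Note there is no lock-mint-burn step anywhere: a "cross-VM" transfer and a same-VM transfer are the same operation on the same ledger.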
Each VM has its own native address format. ACE Chain's idcom (identity commitment) deterministically maps to every VM's address format. The same identity, the same account, across all execution environments.
| VM | Address Format | Derivation |
|---|---|---|
| EVM | 20-byte address (0x...) | keccak256(idcom)[12:] |
| SVM | 32-byte public key (Base58) | HKDF(idcom, "solana") |
| BVM | 33-byte compressed key | HKDF(idcom, "bitcoin") |
| TVM | 20-byte address (T...) | keccak256(idcom)[12:] |
All mappings are deterministic and stateless. Given an idcom, any node can compute the corresponding address for any VM without additional lookups. This means wallet software, block explorers, and indexers can resolve cross-VM addresses locally.
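The mappings in the table above can be sketched in a few lines. Two caveats: `hashlib.sha3_256` stands in for keccak-256 (the real EVM hash uses different padding, so use a keccak library in practice), and the HKDF here is the textbook RFC 5869 construction, which is an assumption about how ACE derives key material, not a confirmed detail.

```python
import hashlib
import hmac

def hkdf(ikm: bytes, info: bytes, length: int, salt: bytes = b"") -> bytes:
    """Textbook HKDF (RFC 5869): extract, then expand to `length` bytes."""
    prk = hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def evm_address(idcom: bytes) -> bytes:
    # keccak256(idcom)[12:] -> last 20 bytes of the 32-byte hash
    # (sha3_256 used as a stand-in for keccak-256 here)
    return hashlib.sha3_256(idcom).digest()[12:]

def svm_pubkey(idcom: bytes) -> bytes:
    return hkdf(idcom, b"solana", 32)   # 32 bytes of key material

def bvm_key(idcom: bytes) -> bytes:
    return hkdf(idcom, b"bitcoin", 33)  # 33 bytes (compressed-key sized)
```

Because every function above is a pure function of the idcom, any node, wallet, or indexer can resolve cross-VM addresses locally, exactly as the text describes. (A real SVM/BVM key would be a curve point derived from this material, not the raw HKDF output.)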
ACE Chain provides full EVM execution via REVM (Rust EVM). Solidity contracts deploy as-is with no modifications. The msg.sender is derived from the user's idcom, providing a stable address regardless of which signature algorithm was used.
Deploy existing Solidity contracts without modification. All opcodes, precompiles, and gas mechanics are compatible.
Works with Hardhat, Foundry, Remix, and all standard EVM development tools. Point your RPC at ACE Chain and deploy.
Full support for ERC-20, ERC-721, ERC-1155, and other token standards. Existing token contracts work without changes.
Every EVM transaction is verified by the authorization layer before dispatch. PQC signatures protect contract interactions automatically.
Solana programs execute on ACE Chain with access to the unified account model. Programs interact with ACE state through the standard Solana program interface, with accounts resolved from idcom-derived addresses.
SPL token operations, program-derived addresses (PDAs), and cross-program invocations (CPIs) work as expected. The SVM engine shares the same token ledger as all other VMs — so an SPL token transfer and an ERC-20 transfer can affect the same underlying balance.
Adding a new VM to ACE Chain requires implementing exactly one interface: the idcom-to-native-address mapping. Once that mapping exists, the new VM automatically inherits the full infrastructure stack.
Authorization layer verifies signatures before VM dispatch. New VMs get post-quantum security for free.
Credential batching and STARK proofs apply to all VMs equally. O(1) verification regardless of VM type.
The unified token ledger, account model, and state tree are available to every VM without additional integration.
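The single integration point described above can be sketched as one abstract method. The interface name and the example engine below are hypothetical; they only illustrate that a new VM supplies a deterministic idcom-to-address mapping and nothing else.

```python
import hashlib
from abc import ABC, abstractmethod

class VMAddressMapper(ABC):
    """Hypothetical single-method integration interface for a new VM."""

    @abstractmethod
    def to_native(self, idcom: bytes) -> bytes:
        """Deterministically map an idcom to this VM's address format."""

class ExampleVMMapper(VMAddressMapper):
    """A made-up new engine: 32-byte addresses from a domain-tagged hash."""

    def to_native(self, idcom: bytes) -> bytes:
        return hashlib.sha256(b"example-vm" + idcom).digest()
```

Everything else listed above (PQC authorization, credential batching, STARK proofs, the unified ledger) sits below the VM layer, so a new engine inherits it without further integration work.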
Every transaction flows through a 5-stage pipeline. Stages are independent and can execute in parallel for non-conflicting transactions.
The attestation stage (signature verification) is decoupled from execution. This means PQC signature verification — which involves larger signatures — never blocks the execution pipeline. Transactions with verified credentials proceed directly to scheduling and execution.
Non-conflicting transactions (those touching different accounts) execute in parallel within the same block. The scheduler performs conflict detection based on read/write sets, enabling high throughput without sacrificing determinism.
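Read/write-set conflict detection can be sketched as follows: two transactions conflict when one writes an account the other reads or writes, and non-conflicting transactions are grouped into the same parallel batch. This greedy grouping is an illustration of the idea, not ACE Chain's scheduler.

```python
def conflicts(a: dict, b: dict) -> bool:
    """True if a and b cannot safely run in parallel (write-write,
    write-read, or read-write overlap on any account)."""
    return bool(
        a["writes"] & (b["reads"] | b["writes"])
        or b["writes"] & a["reads"]
    )

def schedule(txs: list) -> list:
    """Greedily group transactions into batches of mutually
    non-conflicting transactions; batches run sequentially,
    transactions within a batch run in parallel."""
    batches = []
    for tx in txs:
        for batch in batches:
            if all(not conflicts(tx, other) for other in batch):
                batch.append(tx)
                break
        else:
            batches.append([tx])
    return batches
```

Because batching depends only on declared read/write sets, the schedule is deterministic for a given transaction order, preserving the determinism the text calls out.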