Summary
Our goal for Arc is to provide 99.99% uptime and for upgrades to ship fast, seamlessly, and frequently, minimizing interruptions to user flows. In support of this goal, we put the network through an extensive testing regimen employing chaos testing and massively parallel workload generation, using tools we’ll strive to open-source as the network and code mature. We also built a feature that makes Arc’s block production regular, keeping block times as close as possible to 500ms.
As the Economic OS for the internet, Arc prioritizes reliability and predictability. We aspire for Arc Public Testnet to be capable of achieving four nines of availability (i.e., 99.99% uptime) and for users to experience predictable behavior in how the network settles flows, with sub-second determinism. Arc is designed to function like an atomic clock: ticking with precision no matter the load, latency, or broader network conditions.
In this short update, we share some of the measures we took in support of this goal. It is still early days, however. We anticipate many upgrades to come, planned and unplanned, and numerous lessons to be learned as we iterate toward the four-nines uptime goal and march toward mainnet readiness in 2026.
Testing the network
Testing Arc prior to making the network publicly available has been an intense exercise. Here’s what we did.
Motivation and background
As a reminder, Arc comprises two high-level components: (1) the execution layer, based on Reth, and (2) the consensus layer, based on the Malachite consensus engine. While Reth has been in production across many networks and is currently at version v1.9.3 (as of early December 2025), Malachite is a more recent codebase at a pre-v1 stage. Additionally, we’ve built a custom middleware that sits between the consensus engine and the execution layer. This middleware handles the translation between the two and can be seen as an “application” on top of Malachite; it is the newest component in the Arc codebase. There are ongoing audits for various parts of the stack, as well as an extensive testing regimen for the Malachite engine itself. To complement these audits and existing testing approaches, we went further and pushed the network to its limits.
Testing overview
Two important questions drove our testing:
- Is the network able to tolerate various faults or unforeseen situations?
- Is the network well-parametrized so that performance remains predictable regardless of user load?
The two questions are complementary: the first is about internal sources of unpredictability (e.g., latency, bugs, software crashes, operator error or misconfiguration), whereas the second is about exogenous unpredictability (e.g., spam, thundering-herd or coordinated access storms, pushing the block byte and gas limits or p2p buffer limits). To model, reproduce, and test against such potential issues, we made use of three tools, which we’ll strive to open-source as the codebase matures:
- Quake: a flexible testnet orchestrator and end-to-end testing tool for evaluating Arc
- Guzzler: a gas-intensive contract and testing harness that we can use to exhaust the gas in a block
- Spammer: a massively parallel workload generator that can exhaust block capacity in bytes and p2p buffer limits
Quake
Quake is an end-to-end testnet orchestration, fuzzing, and chaos-testing tool for the Arc blockchain. We built it to evaluate reliability under realistic and adversarial conditions. It provisions Docker-based multi-node networks for Arc’s consensus layer (Malachite) and execution layer (Reth) from TOML manifests, and offers a CLI to build, start, inspect, and control the testnet. Quake can emulate inter-region network latency, apply targeted or randomized perturbations (e.g., disconnect, kill, pause, restart, upgrade) to nodes or containers, and generate sustained transaction load via the Spammer RPC load generator. Together, these capabilities let us reproduce faults, validate upgrades, and measure recovery, throughput, and stability before production. For those familiar with the CometBFT QA process and e2e testing framework, Quake is a next-generation tool purpose-built for Arc.
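To make this concrete, here is a sketch of what such a manifest could look like. This is purely illustrative: every key and value below is hypothetical, not Quake’s actual schema.

```toml
# Hypothetical manifest sketch: illustrative only, not Quake's actual schema.
[testnet]
validators = 4                                # validator nodes to provision
regions = ["us-east", "eu-west", "ap-south"]  # emulated inter-region latency

[perturbations]
kinds = ["disconnect", "kill", "pause", "restart", "upgrade"]
randomized = true              # pick perturbation targets at random

[load]
spammer_tps = 1000             # sustained transaction load via Spammer
duration_secs = 3600
```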
Guzzler
Guzzler is our purpose-built gas saturation tool for EVM load testing. It pairs a tiny Solidity contract with a k6-driven transaction generator to reliably consume nearly all the gas provided to each transaction while leaving a small safety buffer to avoid out‑of‑gas reverts. This lets us pack blocks tightly and stress the full transaction lifecycle — mempool admission, block building, execution, and fee market behavior — under realistic, repeatable pressure. Guzzler provides knobs for gas limits and target gas buffer, and makes it easy to benchmark execution client performance, compare configurations, and detect regressions in throughput and latency.
Under the hood, Guzzler offers two gas‑burning modes. The first busy‑loops while gasleft() exceeds a caller‑supplied threshold, steadily “burning down” gas to the desired buffer. The second adds a metered staticcall to a state‑writing function (which intentionally reverts in static context) before topping off with the loop, exercising revert paths and gas budgeting more realistically — without changing state. In practice, we use Guzzler to send many high‑gas transactions per second with a small buffer (e.g., ~600 gas) and observe how the node fills blocks, handles contention, and maintains responsiveness. The result is a simple, deterministic way to saturate blocks and evaluate EVM stack performance end‑to‑end.
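To illustrate the arithmetic of the first mode, here is a small model of the burn-down loop. The actual implementation is a Solidity contract spinning on gasleft(); this Rust simulation of the gas meter is purely hypothetical and only shows why the buffer avoids out-of-gas reverts.

```rust
/// Hypothetical model of Guzzler's busy-loop mode. The real contract spins on
/// `while (gasleft() > threshold)`; here we simulate the gas meter directly.
fn burn_down(gas_limit: u64, threshold: u64, cost_per_iteration: u64) -> u64 {
    let mut gas_left = gas_limit;
    // Keep looping only while there is enough gas for one more iteration
    // plus the safety buffer; overshooting the buffer risks an out-of-gas revert.
    while gas_left >= threshold + cost_per_iteration {
        gas_left -= cost_per_iteration; // one loop iteration's worth of gas
    }
    gas_left // remaining gas, at or slightly above the requested buffer
}

fn main() {
    // e.g., a 30M-gas transaction burned down to a ~600-gas buffer
    let remaining = burn_down(30_000_000, 600, 50);
    assert!(remaining >= 600 && remaining < 650);
    println!("gas remaining after burn: {remaining}");
}
```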
Spammer
Quake includes a tool custom-built for Arc that we informally call “Spammer.” This is a high-throughput transaction generator designed to push the Arc network to its limits under controlled, reproducible conditions. Spammer pairs a Transaction Generator with a Sender: the generator continuously creates signed transactions from a partitioned set of prefunded accounts, buffering them in a channel, while the Sender fans them out round-robin to one or more Reth RPC endpoints. A shared Rate Limiter enforces global throughput targets (i.e., TPS ceilings and total caps), and a Result Tracker reports real-time success/failure statistics. This lets us scale load horizontally (by adding as many spammers as we want), shape it precisely, and observe the network’s behavior at steady state versus at saturation.
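To make the pipeline concrete, here is a minimal sketch of the generator/sender split, assuming a bounded channel between the two and a crude fixed-window rate limiter. All names, endpoints, and numbers are hypothetical, not Spammer’s actual code.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

// Hypothetical stand-in for a signed, RLP-encoded transaction.
struct SignedTx(Vec<u8>);

fn main() {
    // Bounded channel: the generator buffers signed transactions for the sender.
    let (tx, rx) = mpsc::sync_channel::<SignedTx>(10_000);

    // Generator: continuously produces signed transactions from prefunded
    // accounts (signing elided here) and pushes them into the channel.
    thread::spawn(move || loop {
        let signed = SignedTx(vec![0u8; 128]); // placeholder payload
        if tx.send(signed).is_err() {
            break; // sender hung up
        }
    });

    // Sender: fans transactions out round-robin across RPC endpoints, under a
    // global TPS ceiling (a crude fixed-window limiter for illustration).
    let endpoints = ["http://rpc-0:8545", "http://rpc-1:8545"];
    let tps_ceiling: u32 = 1_000;
    let mut next = 0usize;
    let mut window = Instant::now();
    let mut sent: u32 = 0;

    for _signed in rx.iter().take(5_000) {
        if sent == tps_ceiling {
            // Wait out the remainder of the current one-second window.
            if let Some(rest) = Duration::from_secs(1).checked_sub(window.elapsed()) {
                thread::sleep(rest);
            }
            window = Instant::now();
            sent = 0;
        }
        let endpoint = endpoints[next % endpoints.len()]; // round-robin target
        next += 1;
        sent += 1;
        // A real implementation would POST eth_sendRawTransaction to `endpoint`
        // and record the outcome in the result tracker.
        let _ = endpoint;
    }
}
```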
For Arc to be well-parametrized and remain predictable under user loads, we run targeted load profiles and tune settings iteratively. We can adjust the workload generation rate, test duration, and degree of parallelism in the generators; vary account set size and partitioning; and toggle nonce strategies to emulate real usage and adversarial patterns. Because the events and transaction load are reproducible, Spammer is very useful for catching regressions and keeping performance consistent as the network evolves.
Today, engineers use Quake daily in a range of situations: rehearsing upgrades, spinning up testing scenarios for edge cases, demonstrating new features, and, most importantly, powering nightly tests and general QA before major upgrades and milestones. In the future, we would like to refine these tools toward end-to-end deterministic simulation testing for Arc.
Stable block times
If you look at Arc’s testnet block explorer, you will notice something interesting: every block finalizes quite predictably with a latency of 500ms. This predictability is by design, but it is not a common feature in blockchains, and implementing it reliably can be tricky. We document here our approach to solving this for Arc testnet.
The Tendermint protocol that underlies Arc is optimistically responsive: the consensus protocol finalizes each block as fast as network conditions allow. In practice, this means that numerous conditions can make the block finalization time highly variable. Among others, those conditions are:
- Spontaneous network conditions (i.e., the actual network delays between different pairs of validators, and in particular among a supermajority of more than two-thirds of validators)
- Geographic distribution of validators (ours were located across North America, Europe, and Asia)
- Block proposer selection algorithm (currently set to round-robin)
- Block utilization rate and network congestion, which determine the overhead from the execution layer
Put differently, with default settings the Arc network would finalize blocks with a latency varying between roughly 100ms and 300ms. Rather than let block latencies fluctuate like this, the Arc team decided to experiment with throttling the block production rate so that it becomes regular and predictable.
Current solution for block regularity
In the current solution, we implemented a simple feature that runs at every Arc node as part of the block production lifecycle. First, all nodes are parametrized with the same target block time of 500ms. Whenever a node initiates the production of a new block, it records the current local time, called start_block_time. As soon as the node finishes producing that block (i.e., it observes sufficiently many pre-commits for the same block, as part of the Tendermint algorithm), it compares the current local time with start_block_time to obtain the elapsed time for that block, which is the actual block finalization latency at the consensus level. Finally, the node computes the difference between the target block time and the elapsed time. If the difference is positive, the node sleeps for that duration, called the wait-time; otherwise, the node does not sleep at all. The node then proceeds to participate in the production of the next block.
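In code, the regulation step amounts to only a few lines per block. Here is a minimal sketch with hypothetical names, not the actual implementation:

```rust
use std::time::{Duration, Instant};

// The common target block time all nodes are parametrized with.
const TARGET_BLOCK_TIME: Duration = Duration::from_millis(500);

// Hypothetical sketch of the regulation step run after each block.
fn regulate_block_time(start_block_time: Instant) {
    // Elapsed time = actual block finalization latency at the consensus level.
    let elapsed = start_block_time.elapsed();
    // wait_time = target - elapsed, if positive; otherwise don't sleep at all.
    if let Some(wait_time) = TARGET_BLOCK_TIME.checked_sub(elapsed) {
        std::thread::sleep(wait_time);
    }
    // The node now proceeds to the production of the next block.
}
```

When the elapsed time already exceeds the target, checked_sub returns None and the node proceeds immediately, matching the “no sleep” branch described above.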
While functional, this solution is imperfect because nodes need to accurately detect whether they are lagging (nodes that are up-to-date with the network should activate the regulation algorithm; lagging nodes should not). The solution also requires synchronized clocks across nodes, which is an extra assumption. Currently, the need for synchronized clocks seems unavoidable in any solution, though we’re continuing to evaluate this requirement, as well as the broader pros and cons of this feature, as we march toward mainnet in 2026.
Visualizing waiting times for regulating block latencies
For a visual representation, we binned the distribution of wait-time values from the Arc public testnet into 20 bins, using 10,000 observations taken on November 7, 2025. Most nodes waited between 200ms and 400ms in these observations, indicating that the actual block latency in the network was often between 100ms and 300ms. The network is widely geo-distributed and comprises 10 validators, which explains the relatively low block latency. We have been slowly growing the number of validators since the network’s genesis, covering more and more geographic regions, and will continue doing so.

With this feature, despite natural network variations, Arc testnet finalizes blocks with high predictability at a latency of 500ms. As we add more validators and the network utilization grows, we expect that this feature may not be necessary, or we might parametrize it with a different target block time. If you’re a team deploying on Arc and have opinions about this feature, we’d love to hear from you.
In closing
We went to great lengths to offer a smooth experience with Arc testnet and to build confidence that Arc exhibits predictable behavior with sub-second determinism. We aspire for Arc to be capable of providing four nines of availability, and for our upgrades to ship fast and seamlessly, minimizing interruptions to user flows. We’ve documented here some of the technical approaches we took toward this goal. As we progress toward mainnet, we will continue our efforts to keep Arc predictable and reliable.
If you’re interested in building on Arc, visit the Arc docs to get started. To get in touch with Arc core developers and community, join our Discord.
References
- Arc Docs: https://docs.arc.network/
- Arc Testnet explorer: https://testnet.arcscan.app/
- “The Latest View on Tendermint’s Responsiveness”, Nenad Milosevic, Informal Systems blog: https://informal.systems/blog/tendermint-responsiveness
- Malachite GitHub discussion #1119: https://github.com/circlefin/malachite/discussions/1119
- “The latest gossip on BFT consensus”, Ethan Buchman, Jae Kwon, Zarko Milosevic: https://arxiv.org/abs/1807.04938
- “Deterministic Simulation Testing”, Antithesis: https://antithesis.com/resources/deterministic_simulation_testing/
Arc testnet is offered by Circle Technology Services, LLC ("CTS"). CTS is a software provider and does not provide regulated financial or advisory services. You are solely responsible for services you provide to users, including obtaining any necessary licenses or approvals and otherwise complying with applicable laws.
Arc has not been reviewed or approved by the New York State Department of Financial Services.
The product features described in these materials are for informational purposes only. All product features may be modified, delayed, or cancelled without prior notice, at any time and at the sole discretion of Circle Technology Services, LLC. Nothing herein constitutes a commitment, warranty, guarantee or investment advice.

