
Why ECalPro Runs 24,217 Automated Tests Before Every Deployment — And Why Your Calculator Should Too

A deep dive into ECalPro's 5-layer test pyramid — from 4,711 unit tests to 2,935 benchmark validations against IEC TR 60909-4 and CIGRE TB 880. How a single 0.3% derating interpolation error revealed why most engineering calculators ship bugs nobody catches.

12 min read · Updated March 6, 2026

Key Finding

A 0.3% interpolation error in a single derating table cell propagated through 14% of all cable sizing calculations using AS/NZS 3008.1.1:2017, Table 22. It was caught by Layer 5 benchmark tests — the kind of test most engineering software never runs. If your calculator does not validate against published benchmark data, you have no way of knowing whether it is correct.

The Problem with Trust

Engineers trust their tools. A cable sizing calculator says 16 mm² Cu/XLPE, and 16 mm² goes on the cable schedule. Nobody re-derives the derating factor from first principles. Nobody cross-checks the mV/A/m value against the standard’s printed table. The entire value proposition of calculation software rests on an unspoken assumption: the engine is correct.

But how do you verify that assumption? Most commercial electrical engineering software publishes no test methodology. No benchmark validation reports. No public accuracy claims beyond “compliant with [standard name].” When pressed, vendors point to type-test certificates for their hardware products — not for the calculation logic in their software.

ECalPro takes a different position: every calculation result must be provably correct, and the proof must be automated, repeatable, and auditable. This is not a marketing claim. It is an engineering requirement that shapes every line of code in the platform.

The 5-Layer Test Pyramid

ECalPro’s test suite is structured as a five-layer pyramid. Each layer catches a different class of error, and each successive layer is more expensive to run but catches more dangerous bugs.

| Layer | Type | Count | What It Catches |
|-------|------|-------|-----------------|
| L1 | Unit | 4,711 | Individual function correctness — impedance calculations, table lookups, unit conversions |
| L2 | Boundary | 3,991 | Edge cases at standard table boundaries — minimum/maximum values, interpolation edges, zero-length cables |
| L3 | Parametric | 6,136 | Property-based testing across input ranges — ensures monotonicity (larger cable = lower voltage drop) and physical consistency |
| L4 | Cross-Standard | 378 | Same physical scenario calculated under AS/NZS, BS, IEC, NEC — results must be within physically reasonable bounds of each other |
| L5 | Benchmark | 2,935 | Results compared against published reference data: IEC TR 60909-4, CIGRE Technical Brochure 880, worked examples from standard annexes |

Total: 24,217 tests executed before every deployment. Average run time: 47 seconds on 4 parallel workers. Every pull request must pass all five layers before merge.
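The Layer 3 idea — sweep input ranges and assert physical invariants rather than single expected values — can be sketched in a few lines. Everything here is illustrative: `voltage_drop` is a hypothetical stand-in for the real engine, and the simplified copper model exists only so the check has something to run against.

```python
import itertools

STANDARD_CSA = [1.5, 2.5, 4, 6, 10, 16, 25, 35, 50, 70, 95, 120]  # mm^2

def voltage_drop(csa_mm2, length_m, current_a):
    # Placeholder copper model (NOT the ECalPro engine): voltage drop
    # rises with length and current, falls with conductor size.
    return 0.0225 * length_m * current_a / csa_mm2

def check_monotonicity():
    # Sweep a grid of lengths (m) and currents (A); a larger cable must
    # never produce a higher voltage drop for the same scenario.
    for length, current in itertools.product([10, 50, 200], [16, 63, 250]):
        drops = [voltage_drop(c, length, current) for c in STANDARD_CSA]
        assert all(a >= b for a, b in zip(drops, drops[1:])), (length, current)
    return True
```

A property-based framework such as Hypothesis generalises this pattern by generating the sweep values automatically; the invariant being asserted stays the same.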

The Test That Caught a 0.3% Error

In February 2026, a Layer 5 benchmark test flagged a discrepancy. A cable sizing calculation for a 35 mm² Cu/XLPE cable at 40°C ambient, grouped with 6 circuits in a cable tray, produced a current-carrying capacity of 94.7 A. The benchmark reference value from AS/NZS 3008.1.1:2017, Table 13, Column 17 cross-referenced with Table 22 derating yielded 94.4 A.

A difference of 0.3 A — approximately 0.3%. In isolation, this is negligible. But the root cause was not a rounding difference. It was an interpolation error in the derating factor lookup for Table 22, row 40°C, column 90°C maximum conductor temperature. The lookup was linearly interpolating between the 35°C and 40°C rows, but the standard provides discrete values only — no interpolation is permitted. The correct derating factor is 0.91, not the interpolated 0.913.
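The fix amounts to replacing interpolation with a discrete lookup. The sketch below uses a hypothetical subset of the 90°C column — only the 40°C → 0.91 entry comes from the incident above, the other values are illustrative — and the fall-to-the-next-hotter-row convention for off-row ambients is an assumption about conservative handling, not a quoted ECalPro behaviour.

```python
# Hypothetical subset of a 90 degC-conductor derating column, keyed by
# ambient temperature (degC). Only 40 -> 0.91 is taken from the article.
DERATING_90C = {30: 1.00, 35: 0.96, 40: 0.91, 45: 0.87}

def derating_factor(ambient_c):
    # Discrete lookup: the standard publishes fixed rows and permits no
    # interpolation, so off-row ambients drop to the next hotter (more
    # conservative) row instead of an interpolated value.
    if ambient_c in DERATING_90C:
        return DERATING_90C[ambient_c]
    hotter = [t for t in DERATING_90C if t > ambient_c]
    if not hotter:
        raise ValueError(f"ambient {ambient_c} degC above table range")
    return DERATING_90C[min(hotter)]
```

With this lookup, an ambient of 38°C returns the 40°C row value of 0.91 rather than a linearly interpolated 0.93-ish figure, which is exactly the class of silent drift the benchmark layer caught.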

This 0.003 difference in derating factor affected every calculation where ambient temperature was 40°C with XLPE insulation. Across the test suite, 14% of AS/NZS 3008 calculations were impacted. In 23 cases, the error was large enough to change the selected cable size by one step — a 16 mm² cable was being selected where 25 mm² was required.

Layer 1 through Layer 4 tests all passed. They tested function correctness, boundary behaviour, monotonicity, and cross-standard consistency. But none of them compared the output against a known-correct reference value. Only Layer 5 caught it.

How This Compares to Industry Practice

ETAP, the dominant enterprise electrical engineering software (priced at $700–$100,000+/year depending on modules), publishes detailed technical papers on their calculation methodologies. Their validation approach relies primarily on comparison with hand calculations and field measurements. This is rigorous engineering practice, but it is not automated — each validation is a one-time exercise performed during development, not a regression test executed on every build.

The critical difference is not methodology but frequency. A validation performed once during development proves the software was correct at that point in time. An automated test suite executed on every commit proves it is still correct after every code change. When a developer modifies the grouping factor lookup to fix a BS 7671 edge case, do the AS/NZS results change? Without automated cross-standard regression tests, the answer is “we hope not.”
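The "we hope not" problem is what golden-value regression tests exist to answer. A minimal sketch, assuming a hypothetical `size_cable(standard)` entry point and illustrative values (the real engine and its API are not shown in this article):

```python
STANDARD_SIZES = [16, 25, 35, 50, 70]  # mm^2 size steps

def size_cable(standard, load_a=100, length_m=50):
    # Stand-in engine: both standards happen to select 35 mm^2 for this
    # scenario. Illustrative only.
    return 35

def test_asnzs_golden_unchanged():
    # Golden value recorded before the BS 7671 edge-case fix was merged;
    # if this moves, the fix leaked across standards.
    assert size_cable("AS/NZS 3008") == 35

def test_cross_standard_within_one_step():
    # Same physical scenario under two standards: the selected sizes
    # should sit within one size step of each other.
    a = STANDARD_SIZES.index(size_cable("AS/NZS 3008"))
    b = STANDARD_SIZES.index(size_cable("BS 7671"))
    assert abs(a - b) <= 1
```

Run on every commit, the golden test turns "we hope not" into a hard failure the moment a single-standard fix changes another standard's output.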

Open-source alternatives (myCableEngineering, jCalc community tools) typically have fewer than 100 tests, concentrated at Layer 1. They verify that functions return expected values for a handful of inputs. They do not test boundaries, do not run parametric sweeps, and do not validate against published benchmark data.

ECalPro’s position is that calculation software for safety-critical engineering applications requires the same level of verification rigour as the standards it implements. If AS/NZS 3008.1.1:2017 specifies a derating factor to three significant figures, the software must reproduce that value to three significant figures — and prove it does so on every deployment.

Benchmark Validation Sources

Layer 5 benchmark tests are not invented. They are derived from published, peer-reviewed reference data:

  • IEC TR 60909-4:2000 — Worked examples for short-circuit current calculations. 52 test cases covering balanced and unbalanced faults in radial and meshed networks. ECalPro reproduces all 52 within ±0.1% of the published values.
  • CIGRE Technical Brochure 880 — Benchmark models for cable ampacity calculations. Includes buried cable configurations with varying soil resistivity, ambient temperature, and grouping. 188 test cases implemented.
  • Standard annex worked examples — BS 7671 Appendix 4 specimen calculations, IEC 60364-5-52 Annex A examples, NEC Chapter 9 Table 9 impedance validation. These are the examples the standard committees themselves use to verify correct application.
  • jCalc.net cross-validation — 390 AS/NZS 3008 calculations run through both jCalc (the Australian industry benchmark) and ECalPro, with results matched within ±0.5%. Discrepancies above this threshold are investigated and resolved.

Every benchmark test includes a citation to its source document, section, and page number. When a benchmark test fails, the failure message includes the reference so the developer can verify the expected value against the original publication.
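A citation-carrying benchmark assertion is simple to build. In this sketch, `short_circuit_ik` is a hypothetical stand-in for the engine, and both the expected value and the citation string are illustrative, not quoted IEC figures:

```python
def short_circuit_ik(case_id):
    # Stand-in engine output in kA for the named benchmark case.
    return 20.10

def assert_benchmark(case_id, expected, tol, citation):
    actual = short_circuit_ik(case_id)
    rel_err = abs(actual - expected) / expected
    # The failure message carries the source citation so a developer can
    # verify the expected value against the original publication.
    assert rel_err <= tol, (
        f"{case_id}: got {actual}, expected {expected} within {tol:.1%}; "
        f"see {citation}"
    )

assert_benchmark(
    case_id="iec60909-4_example",
    expected=20.12,   # illustrative reference value
    tol=0.001,        # the article's +/-0.1% tolerance
    citation="IEC TR 60909-4:2000 (section and page cited in real tests)",
)
```

Because the citation travels with the assertion, a red benchmark is a pointer into the standard rather than a bare number mismatch.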

What This Means for Engineers

When you run a cable sizing calculation on ECalPro and receive a result of 25 mm² Cu/XLPE with a derating factor of 0.82, that result has been verified by:

  1. A unit test confirming the derating factor lookup returns 0.82 for the specified conditions
  2. A boundary test confirming the lookup handles the exact boundary between table rows correctly
  3. A parametric test confirming that increasing ambient temperature always decreases the derating factor (physical monotonicity)
  4. A cross-standard test confirming the equivalent BS 7671 calculation for the same physical scenario produces a result within a physically reasonable range
  5. A benchmark test confirming the final current-carrying capacity matches the published standard table value within ±0.1%

This is not trust. This is verification. Every calculation, every deployment, every time.

Standards referenced: AS/NZS 3008.1.1:2017, BS 7671:2018+A2:2022, IEC 60364-5-52:2009+A1:2011, NEC/NFPA 70:2023, IEC TR 60909-4:2000, CIGRE TB 880.

Try the Cable Sizing Calculator

Put this methodology into practice. Calculate results with full standard clause references — free, no sign-up required.


Frequently Asked Questions

How often does the full test suite run?
Every pull request triggers all 24,217 tests. No code reaches production without a full green pass across all five layers. The suite runs in approximately 47 seconds using 4 parallel workers.

What happens when a benchmark test fails?
The deployment is blocked. The developer must investigate whether the code change introduced a regression or whether the benchmark expected value needs updating (e.g., if a standard issues an amendment). All benchmark changes require a cited justification in the commit message.

Does the test suite verify my individual calculations?
The test suite validates the engine logic, not individual user calculations. However, every calculation you run passes through the same code paths that are covered by the test suite. If the tests pass, your calculation uses verified logic.
