Vetting Lab

Scoring Methodology

How we evaluate AEC technology tools — transparently, consistently, and without vendor influence.

Every tool reviewed in the Vetting Lab is assessed against a standardised 100-point framework spanning five dimensions. Scores are derived from publicly available evidence — vendor documentation, independent user reviews on G2 and Capterra, regulatory filings, procurement case studies, and industry reporting. We do not accept payment for reviews, and no vendor relationship influences a published score. All editorial judgements are made by a practising structural engineer with direct experience in AEC digital delivery.

5 dimensions · 20 points each · 100 points total · 5 score bands · 0 paid reviews
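
To make the arithmetic explicit, here is a minimal sketch of how the framework composes a total score. The dimension names and the 20-point cap come from this page; the function and variable names are illustrative, not a published implementation.

    DIMENSIONS = (
        "AEC Workflow Fit",
        "User Evidence",
        "Vendor Stability",
        "Tech Integration",
        "Value Transparency",
    )
    MAX_PER_DIMENSION = 20  # five dimensions x 20 points = 100-point total

    def total_score(dimension_scores: dict[str, int]) -> int:
        """Sum the five equally weighted dimension scores into the 100-point total."""
        for name, points in dimension_scores.items():
            if name not in DIMENSIONS:
                raise ValueError(f"unknown dimension: {name}")
            if not 0 <= points <= MAX_PER_DIMENSION:
                raise ValueError(f"{name} score out of range: {points}")
        return sum(dimension_scores.values())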

The Five Dimensions

D1

AEC Workflow Fit

20 points — weighted 20%

This dimension asks the most fundamental question: does the tool actually map to how architecture, engineering, and construction projects are delivered? AEC workflows are distinct from generic business processes — they involve multi-disciplinary coordination, regulatory compliance gates, long project timelines, and handover requirements that most software categories do not account for. A tool with impressive general capabilities but poor AEC specificity scores poorly here, regardless of its market position. We assess where the tool sits in the project lifecycle (design, procurement, construction, or operations), which disciplines it serves, and whether its core function addresses a documented pain point in AEC delivery rather than a pain point adapted from another industry.

Evidence criteria

  • Named AEC project lifecycle stage coverage
  • Discipline-specific feature depth (structural, MEP, civil, etc.)
  • Alignment with established AEC delivery workflows
  • Regulatory or compliance-aware feature design
  • Multi-disciplinary coordination capability
  • Use case grounding in documented AEC pain points

D2

User Evidence

20 points — weighted 20%

Vendor claims are a starting point, not evidence. This dimension evaluates the quality, volume, and verifiability of independent user testimony — the kind that cannot be purchased or curated by the vendor. We look primarily at third-party review platforms (G2, Capterra, Trustpilot where relevant), published case studies with named organisations and measurable outcomes, and procurement-level endorsements from identifiable AEC firms. A high score here requires not just quantity of reviews but specificity — vague praise scores far lower than a review that names the tool’s impact on a concrete project metric. We also weight recency, as user sentiment for rapidly evolving software can shift significantly within 12 to 18 months.
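
The Lab does not publish a recency formula; purely to illustrate the idea, a review's contribution could decay with age. Everything in this sketch, including the exponential form and the 18-month half-life (echoing the 12-to-18-month window above), is an assumption.

    from datetime import date

    # Hypothetical recency weighting -- the Vetting Lab does not publish one.
    # Assumes exponential decay with an 18-month half-life.
    HALF_LIFE_MONTHS = 18.0

    def recency_weight(review_date: date, today: date) -> float:
        """Down-weight older reviews; a review published today weighs 1.0."""
        age_months = (today - review_date).days / 30.44  # mean Gregorian month
        return 0.5 ** (age_months / HALF_LIFE_MONTHS)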

Evidence criteria

  • Volume and recency of third-party platform reviews
  • Specificity of user testimony (named outcomes, metrics)
  • Verified AEC firm case studies with measurable results
  • Named client endorsements from identifiable organisations
  • Consistency of sentiment across independent sources
  • Evidence of repeat adoption or enterprise expansion

D3

Vendor Stability

20 points — weighted 20%

AEC projects run for years. Software that is acquired, pivoted, or discontinued mid-project creates serious delivery risk — a consideration that most generic software evaluation frameworks underweight for our industry. This dimension assesses the commercial and organisational durability of the vendor behind the tool. We examine funding history and runway signals, ownership structure, headcount trajectory, product release cadence, and any known acquisition or strategic partnership activity. Established vendors with decades of AEC focus score differently from early-stage startups — we do not penalise innovation or emerging entrants, but we do score the risk profile honestly, which is information practitioners need to make responsible procurement decisions.

Evidence criteria

  • Funding history and capital runway signals
  • Ownership structure (independent, PE-backed, acquired)
  • Team size and headcount growth trajectory
  • Product release frequency and update cadence
  • Known acquisition or strategic partnership signals
  • Years operating in the AEC market

D4

Tech Integration

20 points — weighted 20%

No AEC technology tool operates in isolation. The digital infrastructure of a modern AEC project typically includes a Common Data Environment, BIM authoring tools, project controls platforms, and an expanding set of field and operations software. A tool’s real-world value is significantly affected by how well it connects to this ecosystem. This dimension evaluates documented integration capability — not promises on a marketing roadmap, but demonstrated connections to the tools and data formats that AEC firms actually use. Particular weight is given to open API availability, support for industry-standard file formats and exchange protocols (IFC, BCF, COBie), and native integration with the market-dominant platforms in BIM and CDE categories.

Evidence criteria

  • Open API availability and developer documentation quality
  • Native BIM platform integrations (Revit, Archicad, Tekla, etc.)
  • CDE integrations (ACC, Procore, Aconex, etc.)
  • Support for IFC, BCF, COBie and open standards
  • Data export flexibility and format coverage
  • Marketplace or third-party integration ecosystem

D5

Value Transparency

20 points — weighted 20%

AEC firms operate under cost pressure and procurement governance. A tool that cannot clearly communicate its pricing structure, total cost of ownership, or the measurable value it delivers creates procurement friction — and, more importantly, makes it harder for practitioners to make responsible buying decisions. This dimension assesses pricing clarity, the availability of ROI evidence, and the transparency of the vendor’s commercial model. Hidden fees, mandatory multi-year contracts with opaque terms, and the absence of any published pricing all reduce the score. We also look for documented productivity or cost outcomes in published materials — not projected benefits, but recorded results from identified deployments.

Evidence criteria

  • Publicly available or clearly communicated pricing
  • Transparent total cost of ownership (licensing + implementation)
  • Published ROI data or productivity outcome evidence
  • Trial or freemium access availability
  • Contract flexibility and exit provisions
  • Absence of deceptive or opaque commercial practices

Score Bands

70–84
Conditionally Recommended

Credible tool with demonstrated AEC value, but with identified gaps in one or two dimensions. Suitable for shortlisting with targeted additional due diligence in the weaker areas.

55–69
Watch List

Promising capability but with notable evidence gaps, vendor risk signals, or limited AEC-specific validation. Monitor for development; not recommended for immediate adoption without further evaluation.

40–54
Caution

Significant concerns across multiple dimensions. Adoption carries material risk. Only suitable for low-stakes pilot evaluation with clearly defined exit criteria and limited data exposure.
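
Expressed as code, the band mapping is a plain range check. A minimal sketch, assuming integer totals; only the three bands published above are included, and the function name is illustrative.

    def score_band(total: int) -> str | None:
        """Map a 100-point total to its published band, where one is listed above."""
        if 70 <= total <= 84:
            return "Conditionally Recommended"
        if 55 <= total <= 69:
            return "Watch List"
        if 40 <= total <= 54:
            return "Caution"
        return None  # the bands covering 85-100 and 0-39 are not listed in this section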

Editorial independence & methodology disclosure

All Vetting Lab reviews are conducted independently by a practising structural engineer. Scores are derived from publicly available evidence only — no hands-on product testing, no vendor-provided data rooms, and no paid or sponsored review arrangements. Vendor partnerships on other AECO.digital platforms (if any) provide visibility, not editorial influence, and are disclosed separately.

Scores reflect the evidence available at the time of publication. AEC technology markets move quickly; we note review dates and update scores when significant new evidence emerges. A score should be read as a snapshot, not a permanent verdict.
