Structural Audit Methodology

This page explains the technical framework behind the structural audits described on the main service page. It is written for designers and publishers who want to understand how the analysis works, and it is published so clients can see exactly how findings are produced and evaluated.

All audits use the same underlying framework. What varies is scope: how much of the framework is applied and which parts of a product are examined.

At a Glance

Structural audits are designed to answer one question:

Does this product, as written, structurally deliver what it claims without relying on unstated assumptions or hidden GM labor?

Key characteristics of the framework:

  • Findings are grounded in the text itself, with specific page citations
  • Structural gaps are treated as risk factors, not stylistic flaws
  • Verification (Tier 1) is separated from inference (Tier 2) and context (Tier 3)
  • Products are evaluated only against what they claim to be and do

This framework does not assess enjoyment, originality, balance as felt in play, or aesthetic quality.

How Audits Are Scoped

Each audit begins by fixing three constraints:

  1. Product type (what the product claims to be)
  2. Declared scope (what it claims to support)
  3. Audit scope (which parts of the framework are applied)

These constraints remain fixed for the duration of the audit. Findings describe what is present in the text, not what the product could or should have been.
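
To make the fixity of these constraints concrete, here is a minimal sketch in Python (all names and values hypothetical, illustrative only) of the three constraints as an immutable record:

```python
from dataclasses import dataclass

# Hypothetical record of the three audit constraints. frozen=True makes
# the record immutable, mirroring the rule that constraints remain
# fixed for the duration of the audit.
@dataclass(frozen=True)
class AuditScope:
    product_type: str    # what the product claims to be
    declared_scope: str  # what it claims to support
    audit_scope: str     # which parts of the framework are applied

scope = AuditScope(
    product_type="standalone complete game",
    declared_scope="long-form sandbox campaigns",
    audit_scope="full structural audit",
)
# scope.product_type = "supplement"  # would raise FrozenInstanceError
```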

Small omissions early in a system—missing procedures, undefined terms, or unstated assumptions—tend to cascade into repeated GM patchwork, inconsistent rulings, or stalled sessions. This framework treats those omissions as structural risks rather than isolated errors.

Audit Types

All audit types draw from the same framework. They differ only in scope and synthesis.

Diagnostic Audit

Applies one or two Tier 1 (mechanical verification) checks to a narrowly defined question or subsystem.

  • Answers whether a specific section is structurally complete, coherent, or usable as written
  • Produces functional status findings only (present and complete, present but incomplete, missing, assumption-dependent)
  • Does not include synthesis, inference, or product-level verdicts

Targeted Structural Audit

Evaluates a defined section (e.g., combat system, character creation, adventure, supplement).

  • Executes all Tier 1 audits within scope
  • Applies relevant Tier 2 analyses where the material meaningfully affects long-term play, incentives, or GM workload
  • Produces local findings only; no global product verdict is inferred

Full Structural Audit

Executes the complete framework across the entire product.

  • Includes Step 0 (product type identification), all Tier 1 audits, applicable Tier 2 analyses, and synthesis
  • Produces a product-level assessment of scope delivery and functional viability
  • Conclusions are derived solely from documented findings

Expanded comparative positioning and contextual pattern analysis are available as a separate Tier 3 service.

Reading Targeted Audit Verdicts

Targeted Structural Audits use functional verdict categories rather than tier placement because they evaluate individual subsystems, not complete products.

Functional verdict categories:

  • Structurally delivers — The subsystem functions as intended under normal use without external support
  • Conditionally delivers — Functions reliably under specific conditions or play styles
  • Delivers with structural debt — Functions but requires undocumented GM labor
  • Structurally fragile — Functions but destabilizes under normal use patterns
  • Structurally absent — Declared intent lacks structural implementation
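
Because these five categories form a closed set, a targeted audit verdict never falls outside them. A minimal sketch (enum name hypothetical) of that closed set:

```python
from enum import Enum

# Hypothetical enum of the five verdict categories; a targeted audit
# report never uses a value outside this set.
class Verdict(Enum):
    STRUCTURALLY_DELIVERS = "structurally delivers"
    CONDITIONALLY_DELIVERS = "conditionally delivers"
    DELIVERS_WITH_STRUCTURAL_DEBT = "delivers with structural debt"
    STRUCTURALLY_FRAGILE = "structurally fragile"
    STRUCTURALLY_ABSENT = "structurally absent"
```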

Orientation to Full Audit Tiers

Important: The following orientations describe functional similarity, not direct equivalence.

Subsystem-level verdicts do not map cleanly onto product-level tiers because:

  • Subsystems may rely on external scaffolding
  • Structural debt affects subsystems and products differently
  • A subsystem can perform reliably while still degrading whole-product cohesion

These mappings are provided as approximate orientation, not as rankings:

  • Structurally delivers → Comparable to high-performing subsystems within their category
  • Conditionally delivers → Comparable to reliable subsystems under constrained conditions
  • Delivers with structural debt → Comparable to capable subsystems requiring compensatory GM labor
  • Structurally fragile → Comparable to innovative but unstable subsystems
  • Structurally absent → Comparable to subsystems unsuitable for the declared intent

How to Interpret Audit Outputs

Audit findings are marked using four status symbols. These indicate structural risk, not quality or preference.

✓ Present and complete — Low risk
The required structure exists and functions as written within the product’s declared scope. No hidden dependencies or compensatory GM work are required. This area is unlikely to generate usability, expectation, or delivery problems.

⚠ Present but incomplete — Moderate risk
The structure exists but is underspecified, fragmented, or dependent on interpretation. The product will function, but this area is likely to create friction, inconsistent rulings, or additional GM workload—especially over sustained play.

✗ Missing — High risk
A required structure for the product’s declared role is absent. This gap forces GM invention, external references, or scope reduction before play can proceed. Missing elements at this level frequently lead to breakdowns, negative reviews, or misaligned expectations.

? Assumption-dependent — Variable risk
The product relies on unstated knowledge, prior editions, genre conventions, or discretionary judgment to function. Risk varies by audience: experienced tables may compensate smoothly; newer or broader audiences are likely to encounter friction or failure.

These markers describe what is structurally present in the text, not how a particular table might adapt in practice. Business risk increases as findings move from ✓ toward ✗, especially when multiple ⚠ or ✗ findings compound within the same subsystem.
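
The compounding rule can be made concrete. A minimal sketch, with hypothetical findings and field names, of flagging subsystems where multiple ⚠ or ✗ findings co-occur:

```python
from collections import Counter
from typing import NamedTuple

class Finding(NamedTuple):
    subsystem: str  # e.g., "combat"
    status: str     # one of "✓", "⚠", "✗", "?"
    location: str   # page citation

def compounding_subsystems(findings: list[Finding]) -> list[str]:
    """Return subsystems where two or more ⚠ or ✗ findings co-occur."""
    counts = Counter(f.subsystem for f in findings if f.status in ("⚠", "✗"))
    return [sub for sub, n in counts.items() if n >= 2]

findings = [
    Finding("combat", "⚠", "p. 42"),
    Finding("combat", "✗", "p. 47"),
    Finding("advancement", "✓", "p. 90"),
]
print(compounding_subsystems(findings))  # ['combat']
```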

Step 0: Product Type Identification

Every audit begins by identifying what the product claims to be and what role it is intended to serve.

This step fixes the evaluation frame before any structural analysis occurs. All subsequent findings are interpreted relative to the product’s declared role, not against an abstract ideal or a different category of game.

Product type is determined using the product’s own framing: introduction, back cover text, product description, or equivalent publisher-facing statements. Where multiple claims exist, the claim that imposes the strictest functional requirements is used. Where claims are incompatible, that incompatibility is documented as a finding.

Typical product categories include:

  • Standalone complete game
  • Core player book within a multi-book system
  • Core GM book
  • Setting or campaign book
  • Adventure or scenario
  • Expansion or supplement
  • Reference compendium
  • Toolkit or universal engine

This classification determines which audits apply and how results are interpreted. A product is evaluated only on whether it delivers what its category reasonably requires.

Once product type and declared scope are fixed, they remain constant for the duration of the audit.

Tier 1: Mechanical Verification

Tier 1 audits evaluate whether a product functions as written.

These checks focus on completeness, rule coverage, internal coherence, and basic usability. Findings are grounded in direct inspection of the text and document concrete omissions, contradictions, or unresolved dependencies.

Tier 1 answers a foundational question:

Can this product be used as it claims, without relying on external material or undocumented GM invention?

Tier 1 does not assess intent, balance, or player experience. Where judgment is required, the assumption itself is documented rather than resolved.

Structural Completeness

The Structural Completeness audit evaluates whether the product contains all subsystems required to deliver its declared scope.

This audit verifies presence and closure, not quality: whether required pieces exist, and whether they form a complete loop appropriate to the product’s role.

Structural Completeness is evaluated relative to product type. Requirements differ for a standalone game, a supplement, an adventure, or a reference book. The audit documents missing or underspecified subsystems and any dependencies on external material or unstated assumptions.

  • Core activity loop: Procedures required to play the game as described (e.g., action resolution, conflict handling, consequence application).
  • Character lifecycle (if applicable): Clear procedures for character creation, progression, and either advancement endpoints or campaign closure.
  • GM-facing procedures (if applicable): Explicit guidance for preparation, adjudication, challenge creation, and consequence escalation, rather than reliance on implied experience.
  • Content support: Tools, generators, or prebuilt material sufficient to support the intended duration and style of play.
  • Sustainment or closure mechanics: Either mechanisms that regenerate play (e.g., procedural content, faction systems) or explicit endpoints that define completion.

Each required subsystem is assessed and documented with:

  • Functional status (present and complete / present but incomplete / missing / assumption-dependent)
  • Specific elements that are missing or underspecified, where applicable
  • Dependencies on external material or unstated assumptions

Structural Completeness treats omissions as structural risk factors. Missing or underspecified procedures, especially early in a system, tend to force downstream patchwork during play. The audit documents these gaps so their impact can be evaluated explicitly rather than absorbed implicitly by the GM.

A product can pass Structural Completeness without being elegant or innovative. It fails only when required subsystems for its declared role are absent, non-functional, or dependent on material outside its declared scope.

Rule Coverage and Dependency Mapping

The Rule Coverage and Dependency Mapping audit evaluates whether the rules provided are sufficiently defined, connected, and self-contained to function as written.

Where Structural Completeness asks whether required subsystems exist, this audit asks whether those subsystems are usable without interpretive patchwork.

The focus is functional closure: whether a reader can follow rules from declaration to execution without encountering undefined terms, unresolved references, or phantom dependencies.

The audit examines the following areas, where they are required by the product’s declared role:

  • Definition coverage:  All game-specific terms, resources, states, and procedures used by the rules are clearly defined before or at first use.
  • Rule dependency resolution:  Cross-references resolve correctly, and rules do not rely on undefined or missing procedures elsewhere in the text.
  • Forward and circular references:  Rules that point ahead (“see Chapter X”) or rely on later material are checked to ensure the referenced content exists, is sufficient, and does not create circular dependencies (see the sketch after this list).
  • Phantom dependencies:  Rules that only function if the reader supplies external knowledge—such as genre conventions, prior editions, or general TTRPG experience—are identified and documented.
  • Assumption-based procedures:  Cases where the rules defer resolution to “the GM decides” or similar language are flagged when they replace what would otherwise be a required procedure.
  • Orphaned mechanics:  Mechanics that are introduced but never integrated into core play loops, advancement, or resolution procedures are identified.
  • Resolution completeness:  Core procedures (e.g., action resolution, combat, advancement, consequences) are checked for ambiguity, missing edge cases, or multiple incompatible interpretations.
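
The reference checks above behave like reachability tests on a directed graph of rules. A minimal sketch (rule names and dependencies hypothetical) of detecting unresolved references and circular dependencies:

```python
# Hypothetical dependency map: each rule or term lists what it requires.
# Undefined targets and cycles are exactly the conditions this audit documents.
rules = {
    "attack": ["action_resolution", "damage"],
    "damage": ["wounds"],
    "wounds": ["healing"],
    "healing": ["wounds"],      # circular: wounds <-> healing
    "action_resolution": [],
    "grapple": ["restrained"],  # "restrained" is never defined
}

# Unresolved references: dependencies with no definition anywhere.
undefined = {d for deps in rules.values() for d in deps if d not in rules}

def find_cycles(rules):
    """Depth-first search with path tracking; returns every rule that
    participates in a circular dependency."""
    color = {r: "white" for r in rules}
    cyclic = set()

    def visit(node, path):
        color[node] = "grey"
        path.append(node)
        for dep in rules[node]:
            if dep not in rules:
                continue                  # reported via `undefined`
            if color[dep] == "grey":      # back edge: cycle found
                cyclic.update(path[path.index(dep):])
            elif color[dep] == "white":
                visit(dep, path)
        path.pop()
        color[node] = "black"

    for r in rules:
        if color[r] == "white":
            visit(r, [])
    return cyclic

print(undefined)           # {'restrained'}
print(find_cycles(rules))  # {'wounds', 'healing'}
```

Here “restrained” is a phantom dependency and the wounds/healing pair is circular; in an audit, each would be documented with its location and whether it breaks functionality or merely creates friction.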

Each issue identified is documented with:

  • Location in the text
  • What is missing, undefined, or unresolved
  • Whether the issue breaks functionality or creates friction
  • Whether the resolution requires external material or discretionary judgment

Rule Coverage and Dependency Mapping treats unstated assumptions as structural risks. A rule that requires repeated judgment calls or genre knowledge to function as intended is not considered complete at this tier.

A product can pass this audit while still expecting active GM judgment. It fails only when required rule connections are absent, unclear, or dependent on information outside the product’s declared scope.

Internal Consistency

The Internal Consistency audit evaluates whether the product’s rules and subsystems agree with one another and produce compatible outcomes.

This audit focuses on contradictions, misalignments, and mechanical drift across chapters, examples, and advancement stages. It verifies that rules say the same thing everywhere they appear and interact coherently in play.

The audit does not assess design intent or balance; it documents incompatibilities that interfere with learning, execution, or sustained use.

The audit examines the following areas, where they are required by the product’s declared role:

  • Direct contradictions:  Conflicting instructions for the same situation, including rules that differ across chapters or examples that contradict the stated procedures.
  • Terminology consistency:  Terms, resources, and states are used consistently throughout the text and do not change meaning between sections without explicit redefinition.
  • Subsystem interaction:  How major subsystems interact in play (e.g., combat, social resolution, advancement, resource management), including whether their assumptions and inputs align.
  • Reward and advancement alignment:  Advancement triggers, rewards, and progression systems do not undermine or negate other core mechanics.
  • Mechanical drift over time: Early-game and late-game rules remain compatible, and subsystems do not become irrelevant, dominant, or contradictory at different character power levels or advancement stages.
  • Example fidelity:  Worked examples follow the written rules exactly and do not introduce alternate procedures or hidden assumptions.

Each inconsistency is documented with:

  • Locations of the conflicting elements
  • The nature of the inconsistency
  • Severity (game-breaking, significant friction, minor inconsistency)
  • Likely impact on play or learning

Internal Consistency treats small contradictions as compounding risks. A single unclear interaction may be manageable, but repeated inconsistencies increase cognitive load, undermine player trust, and force ad hoc reconciliation by the GM.

A product can pass this audit while still requiring judgment calls. It fails only when contradictions or misalignments materially interfere with learning, execution, or sustained play within its declared scope.

Onboarding Efficiency

The Onboarding Efficiency audit evaluates how effectively the product supports a new reader in reaching functional play.

This audit examines the path from opening the book to running or playing a first session. It verifies whether required information can be located, understood, and applied without excessive page-flipping, prior system knowledge, or guesswork.

The focus is practical accessibility, not teaching style or prose quality.

The audit examines the following areas, where they are required by the product’s declared role:

  • Pages-to-playability:  The minimum material a new GM or player must read to create characters and run or participate in a functional first session (a small sketch of this computation follows the list).
  • Information sequencing:  Whether core resolution mechanics are explained before they are required, and whether later rules depend on concepts that have not yet been introduced.
  • Forward references and page jumps:  Instances where the reader is directed to other sections to complete a procedure, including whether those references are necessary, resolvable, and proportional.
  • Procedure containment:  Whether core procedures (e.g., action resolution, combat, advancement) are presented in a single location or fragmented across multiple chapters.
  • Assumed knowledge:  Explicit or implicit assumptions about genre familiarity, prior TTRPG experience, or familiarity with specific rule engines.
  • Example placement and density:  Whether worked examples exist for major procedures and whether they appear at first use rather than being deferred or isolated.
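
A minimal sketch (section names, prerequisites, and page counts hypothetical) of the pages-to-playability computation as a transitive closure over prerequisite sections:

```python
# Hypothetical section map: each section lists its prerequisites and
# page count. Pages-to-playability is the page total of the transitive
# closure of what a first session actually needs.
sections = {
    "core resolution": {"requires": [], "pages": 6},
    "character creation": {"requires": ["core resolution"], "pages": 12},
    "combat": {"requires": ["core resolution"], "pages": 18},
    "running the game": {"requires": ["core resolution", "combat"], "pages": 10},
}

def pages_to_playability(targets, sections):
    """Follow prerequisites transitively and sum the pages required."""
    needed, stack = set(), list(targets)
    while stack:
        s = stack.pop()
        if s not in needed:
            needed.add(s)
            stack.extend(sections[s]["requires"])
    return sum(sections[s]["pages"] for s in needed)

print(pages_to_playability(["character creation", "running the game"], sections))
# 46 pages before a functional first session
```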

Each friction point is documented with:

  • Location in the text
  • What the reader must already know to proceed
  • Whether the issue blocks play or merely slows learning
  • Whether resolution requires external knowledge or repeated cross-referencing

Onboarding Efficiency treats unnecessary friction as a structural risk. A system that technically functions but requires readers to infer missing steps, reorder chapters mentally, or rely on prior experience imposes hidden costs on new tables.

A product can pass this audit while still being dense or demanding. It fails only when the required information for functional play is absent, assumes external knowledge, or creates circular dependencies that cannot be resolved within the text.

Usability Architecture

The Usability Architecture audit evaluates how the product is organized for active use once play has begun.

Where Onboarding Efficiency focuses on first use, this audit examines preparation, reference, and live play. It verifies whether rules can be found and applied at the table without excessive delay, reinterpretation, or memory load.

The focus is structural organization and navigation, not layout aesthetics or writing quality.

The audit examines the following areas, where they are required by the product’s declared role:

  • Reference structure:  The presence and adequacy of tables of contents, indexes, internal cross-references, and (for digital products) bookmarks or hyperlinks.
  • Procedure discoverability:  Whether commonly referenced procedures can be located quickly and reliably, including whether headings, callouts, and section breaks reflect actual usage patterns.
  • Information clustering:  Whether rules that are used together are presented together, or fragmented across chapters in ways that require repeated cross-referencing.
  • Table and chart placement:  Whether frequently used tables appear at the point of need, are duplicated where appropriate, or are isolated in appendices without sufficient guidance.
  • Redundancy and consolidation:  Whether critical reference material is repeated where useful or unnecessarily scattered, forcing the reader to reconstruct procedures from multiple locations.
  • Use-case alignment:  Whether the book’s structure supports distinct modes of use—during play, during prep, and between sessions—without requiring different mental maps of the text.

Each usability issue is documented with:

  • Location in the text
  • The task being attempted (e.g., resolving combat, adjudicating a condition)
  • The navigation steps required to complete it
  • Whether the issue creates delay, ambiguity, or reliance on memory

Usability Architecture treats slow or unreliable rule retrieval as a structural risk. A rule that exists but cannot be found when needed effectively functions as a missing rule during play.

A product can pass this audit while still being dense or reference-heavy. It fails only when its organization consistently prevents timely access to required procedures within its declared scope.

How Tier 1 Findings Are Recorded

Tier 1 findings are recorded in a structured, traceable form: each finding corresponds to an identifiable condition in the text and cites the specific locations where it occurs.

Tier 1 documents what is present, what is missing, and what is assumed. It does not resolve ambiguities or propose fixes. Where multiple interpretations are possible, the ambiguity itself is recorded as a finding.

Structural Completeness findings are recorded as status classifications. Each required subsystem is assessed and documented with:

  • Functional status (present and complete / present but incomplete / missing / assumption-dependent)
  • Specific elements that are missing or underspecified, where applicable
  • Dependencies on external material or unstated assumptions

The other four Tier 1 audits (Rule Coverage, Internal Consistency, Onboarding Efficiency, and Usability Architecture) record findings as documented issues rather than status categories. Each issue is documented with:

  • Location in the text
  • Nature of the issue (e.g., undefined term, unresolved dependency, contradictory instruction)
  • Whether the issue breaks functionality or creates friction
  • Whether resolution requires external material or discretionary judgment
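
A minimal sketch (field names hypothetical, not the published schema) of what one issue-style record carries:

```python
from dataclasses import dataclass

# Hypothetical shape of one issue-style Tier 1 record.
@dataclass
class Tier1Issue:
    location: str          # e.g., "p. 87, grappling sidebar"
    nature: str            # e.g., "undefined term", "contradictory instruction"
    breaks_function: bool  # True = breaks functionality; False = friction only
    needs_external: bool   # resolution requires external material or judgment

issue = Tier1Issue(
    location="p. 87, grappling sidebar",
    nature="undefined term: 'restrained'",
    breaks_function=False,
    needs_external=True,
)
```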

Tier 1 findings describe what is present in the text, not how the system should work in play. No inference is made about intent, balance, or player experience at this tier.

Tier 2: Reasoned Inference

Tier 2 audits examine how the mechanics and systems verified in Tier 1 are likely to function over time.

Where Tier 1 establishes whether a product works as written, Tier 2 evaluates what those mechanics encourage, what burdens they place on the GM, and what patterns of play they structurally support.

The question for Tier 2 is:

What do these mechanics encourage over time, and what workload do they impose?

Tier 2 findings are grounded in documented rules and reward structures. They draw logical conclusions from how the system operates, without relying on playtesting or subjective preference.

Tier 2 audits are applied only where relevant to the product’s declared role and scoped material. Inapplicable criteria are explicitly marked and do not count against the product.

Incentive Alignment Analysis

The Incentive Alignment Analysis evaluates whether the product’s mechanical incentives reinforce the behaviors and priorities it claims to support.

This audit analyzes what actions the rules reward, discourage, or make costly, not how players “should” behave or what a group might prefer.

Incentive Alignment draws on Tier 1 findings to examine how advancement, rewards, penalties, and risk structures interact with stated goals. Where incentives consistently push play in a different direction than the text promises, that misalignment is documented.

The audit examines the following areas, where applicable to the product’s declared role:

  • Stated goals vs. reward mechanisms:  What the text claims the game is about, compared to what actions grant advancement, resources, or other benefits.
  • Primary reward triggers:  Which behaviors are directly rewarded (e.g., combat success, exploration, social interaction, risk-taking, survival) and which are weakly supported or unrewarded.
  • Risk–reward coherence:  Whether the level of mechanical risk associated with certain actions aligns with the game’s intended tone and themes.
  • Pillar parity:  The relative mechanical depth and reward support for different modes of play (e.g., combat, social, exploration), where such distinctions are relevant.
  • Character option incentives:  Whether character abilities, builds, or advancement paths reinforce the core gameplay loop or introduce incentives that bypass or undermine it.
  • Long-term incentive stability:  Whether early-game incentives remain consistent at later advancement stages, or whether reward structures drift as characters progress.

Each alignment issue is documented with:

  • The stated goal or theme involved
  • The mechanic or reward structure that creates the tension or misalignment
  • The nature of the misalignment (if any)
  • Likely pressure points created by the incentive structure

Incentive Alignment treats reward structures as directional forces. Mechanics that consistently reward certain actions create systemic pressure toward those behaviors, even when alternative playstyles are thematically encouraged.

A product can pass this audit while still allowing a wide range of playstyles. It fails only when its core reward structures systematically undermine its stated goals or require ongoing GM intervention to counteract mechanical incentives.

Campaign Arc Viability

The Campaign Arc Viability analysis evaluates whether the product’s structures can support the duration and style of play it claims to enable.

This audit examines whether the rules, content density, and progression systems provide sufficient support for sustained play, repeated sessions, or defined campaign arcs.

Campaign Arc Viability does not predict whether a campaign will succeed. It documents what forms of long-term play the system structurally supports, based on its verified mechanics and procedures.

The audit examines the following areas, where applicable to the product’s declared role:

  • Content density and regeneration:  The amount of playable material provided and whether the system includes procedures that generate new content (e.g., random tables, faction systems, downtime mechanics) rather than relying solely on finite authored material.
  • Progression framework:  Whether character advancement, resource growth, or system complexity continues meaningfully over the stated play range, or plateaus early.
  • Structural endpoints or sustainment loops:  Whether the game provides clear campaign endpoints (retirement, victory conditions, arc completion) or renewable loops that support indefinite play.
  • Power curve compatibility:  Whether challenge structures, resolution mechanics, and subsystems remain compatible across advancement stages, without introducing contradictions or invalidating earlier mechanics.
  • Supported campaign structures:  What styles of long-term play the system structurally supports (e.g., episodic, arc-driven, sandbox), based on its procedures and content scaffolding.

Each viability concern is documented with:

  • The structural element involved
  • How it supports or limits sustained play
  • The point at which support degrades, if applicable
  • Whether the limitation is explicit, implicit, or unaddressed

Campaign Arc Viability treats longevity as a structural property. The analysis examines what mechanisms for sustained play exist in the text, not whether the game should support longer campaigns.

A product can pass this audit while being designed for short arcs or finite campaigns. A system that lacks content regeneration, advancement continuation, or clear endpoints is not penalized when those limitations match its stated scope. It fails only when it claims to support long-term or open-ended play but lacks the structures required to do so within its declared scope.

GM Support Density

The GM Support Density analysis evaluates whether the product provides sufficient actionable support relative to the demands it places on the GM.

This audit examines how much of the system’s operation is explicitly structured through procedures, tools, and decision frameworks versus how much is deferred to GM discretion.

GM Support Density does not assess writing style or advice tone. It documents where the system relies on implied expertise or sustained GM invention to function as intended.

The audit examines the following areas, where applicable to the product’s declared role:

  • Procedure-to-flavor ratio:  The balance between actionable procedures (e.g., prep frameworks, resolution guidance, generators) and descriptive or atmospheric content.
  • Decision support:  Whether the text provides concrete frameworks for adjudicating common situations, setting difficulty, pacing play, and handling consequences, rather than deferring repeatedly to GM discretion.
  • Prep burden:  The amount of work required to prepare sessions using the material as written, including whether tools support low-prep or improvisational play.
  • Teaching vs. assuming:  Whether the product explains how to run the game it presents, or assumes prior GM experience and system literacy.
  • Operational guidance:  The presence of step-by-step instructions, examples of GM decision-making, and reusable tools that can be applied across sessions.

Each support gap is documented with:

  • The GM task involved
  • What guidance is provided (if any)
  • What work is implicitly required of the GM
  • Whether the gap is intentional, incidental, or unclear

GM Support Density treats GM workload as a structural factor. A system that consistently replaces procedures with implied expertise increases cognitive load and prep requirements, even when the rules are otherwise complete.

A product can pass this audit while targeting experienced GMs or minimalist styles. It fails only when the level of GM support provided is insufficient for the complexity and scope the product claims to support, forcing sustained GM invention beyond what the text provides to function as intended.

Completeness vs. Stated Scope

The Completeness vs. Stated Scope analysis evaluates whether the product delivers what it explicitly claims to provide.

This audit compares stated promises—regarding play style, supported activities, or campaign structure—to the procedures, rules, and content actually present in the text.

Where claims exceed structural support, the gap is documented as a scope delivery risk rather than a design flaw.

The audit examines the following areas, where applicable to the product’s declared role:

  • Extracted scope claims:  What the product explicitly promises regarding play style, genre focus, campaign type, level or power range, and supported activities, based on introductions, back-cover text, and product descriptions.
  • Claim-to-support mapping:  Whether each stated claim is backed by concrete procedures, rules, or content rather than descriptive intent alone (see the sketch after this list).
  • Page and system allocation:  Whether the amount of mechanical support devoted to a claimed feature is proportionate to its prominence in the pitch, based on page allocation and mechanical depth relative to other features.
  • Ornamental vs. functional systems:  Whether advertised features are mechanically integrated into core play loops or exist as optional, shallow, or disconnected elements.
  • Hidden dependencies:  Whether fulfilling a stated claim requires external books, genre knowledge, or GM invention beyond what the product acknowledges.
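
A minimal sketch (claims and support lists hypothetical) of the claim-to-support mapping, where an empty support list surfaces as a scope delivery risk:

```python
# Hypothetical claim-to-support map extracted from a product's pitch.
# Each claim lists the concrete procedures or content backing it; an
# empty list is a scope delivery risk, not a style note.
claims = {
    "supports naval exploration": ["ship combat rules", "sea travel procedure"],
    "supports political intrigue": [],  # descriptive intent only
    "supports play from level 1 to 20": ["advancement table", "high-tier content"],
}

scope_gaps = [claim for claim, support in claims.items() if not support]
print(scope_gaps)  # ['supports political intrigue']
```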

Each scope gap is documented with:

  • The specific claim involved
  • The supporting material provided (or missing)
  • Whether delivery is complete, partial, ornamental, or absent
  • The impact of the gap on the intended experience

Completeness vs. Stated Scope treats overpromising as a structural risk. Claims that are broader than their mechanical support distort expectations and place additional burden on the GM to compensate.

A product can pass this audit while having a narrow or specialized scope. It fails only when it claims to support experiences that it does not structurally enable within its declared boundaries.

Modularity Assessment

The Modularity Assessment evaluates how tightly the product’s subsystems are coupled and how changes propagate through the system.

This audit examines whether components can be removed, replaced, or reused without breaking core functionality, based on how rules and procedures depend on one another.

Modularity is treated as a structural property, not a value judgment. Highly integrated systems can be robust but resist alteration; loosely coupled systems trade integration for flexibility.

The audit examines the following areas, where applicable to the product’s declared role:

  • Subsystem coupling:  The degree to which major subsystems depend on one another to function (e.g., whether combat resolution assumes a specific advancement model or resource economy).
  • Excision viability:  Whether individual subsystems (e.g., magic, social mechanics, domain play) can be removed or ignored without causing cascading rule failures (see the sketch after this list).
  • Replacement interfaces:  Whether subsystems expose clean inputs and outputs that allow alternatives to be substituted without rewriting unrelated rules.
  • Core engine vs. embedded flavor:  The extent to which procedures are system-agnostic versus inseparable from setting, genre assumptions, or narrative framing.
  • Portability and reuse:  Whether specific mechanics, frameworks, or tools can be repurposed in other contexts without carrying large portions of the system with them.
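
A minimal sketch (subsystem names and dependencies hypothetical) of the excision-viability check: remove one subsystem and follow the cascade of broken dependencies:

```python
# Hypothetical coupling map: each subsystem lists what it depends on.
subsystems = {
    "combat": ["action resolution", "conditions"],
    "magic": ["action resolution", "spell slots"],
    "advancement": ["combat"],  # advancement is triggered by combat here
    "action resolution": [],
    "conditions": [],
    "spell slots": [],
}

def broken_by_removal(removed: str, subsystems: dict) -> set:
    """Excise one subsystem and return everything that transitively
    loses a dependency as a result."""
    gone = {removed}
    changed = True
    while changed:
        changed = False
        for name, deps in subsystems.items():
            if name not in gone and any(d in gone for d in deps):
                gone.add(name)
                changed = True
    gone.discard(removed)
    return gone

print(broken_by_removal("combat", subsystems))  # {'advancement'}
print(broken_by_removal("magic", subsystems))   # set(): cleanly excisable
```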

Each modularity constraint is documented with:

  • The subsystem or rule involved
  • The nature of the coupling
  • The scope of impact when modified or removed
  • Whether the dependency is explicit, implicit, or incidental

Modularity Assessment treats tight coupling as a design trade-off, not a flaw.

A product can pass this audit while being monolithic by design. It fails only when it implicitly invites modification or reuse but lacks the architectural boundaries required to support it without extensive reengineering.

Synthesis and Findings

Synthesis integrates completed Tier 1 and Tier 2 findings into a coherent assessment of the product in its declared role.

This stage introduces no new observations. All conclusions are derived from documented findings and make explicit how multiple structural issues interact or compound.

Synthesis addresses questions that individual audits do not resolve in isolation, such as whether gaps reinforce one another, whether trade-offs appear intentional, and whether the product’s structures collectively deliver on its stated scope.

Synthesis focuses on four areas:

  • Functional viability:  Whether the product can be used as intended without structural breakdowns, missing dependencies, or unresolved contradictions.
  • Scope delivery:  Whether the product delivers what it claims, given its verified mechanics, incentives, and support structures as documented in Tier 1 and Tier 2 audits.
  • User profile and assumptions:  What level of experience, system literacy, or GM workload the product implicitly assumes, based on documented findings rather than stated intent.
  • Structural trade-offs:  What the product prioritizes and deprioritizes, and whether those choices are coherent with its goals.

Where Tier 1 and Tier 2 findings point in different directions, synthesis does not resolve the tension by preference. It documents the interaction—for example, a system that is mechanically complete but incentive-misaligned, or thematically focused but structurally narrow.

If an audit or criterion was not applicable due to product type or scope, this is explicitly noted and does not count against the product.

Synthesis produces the following structured outputs:

  • A scope delivery verdict (fully delivers / partially delivers / fails to deliver)
  • A functional integrity assessment
  • A clear description of who the product is structurally suited for
  • Major strengths and limitations, grounded in audit findings
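
A minimal sketch (field names hypothetical) of the shape these outputs take:

```python
from dataclasses import dataclass, field

# Hypothetical shape of the synthesis output.
@dataclass
class SynthesisReport:
    scope_delivery: str        # "fully delivers" / "partially delivers" / "fails to deliver"
    functional_integrity: str  # assessment grounded in Tier 1 findings
    suited_for: str            # who the product is structurally suited for
    strengths: list[str] = field(default_factory=list)    # each cites a finding
    limitations: list[str] = field(default_factory=list)  # each cites a finding
```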

Synthesis does not rank products, assess originality, or recommend design changes.

Tier 3: Contextual Analysis Methods

Tier 3 analysis operates on completed structural findings from Tier 1 and Tier 2. It provides comparative and pattern-based context and does not introduce new verification steps, reinterpret earlier results, or change Tier 1–2 findings.

Tier 3 situates documented structural patterns within a broader design and market context. All comparative and pattern-based claims are grounded in identifiable mechanical similarities rather than thematic or aesthetic resemblance.

Tier 3 analysis is optional and offered as a separate service. It is most relevant for publishers evaluating positioning, differentiation, or portfolio fit.

When performed, Tier 3 consists of the following analytical methods:

Comparative Positioning

Comparative Positioning situates the audited product within a broader corpus of published TTRPG systems.

This analysis examines structural similarities and divergences based on mechanics, progression models, and content scaffolding—not genre labels or thematic tone.

Comparative claims are grounded in documented mechanics, and outputs describe structural proximity rather than qualitative ranking or market performance.

Comparative positioning examines structural similarity across published systems, using factors such as:

  • Core resolution and progression architecture
  • Content scaffolding and sustainment mechanisms
  • Degree of mechanical specialization versus generality

Outputs may include classification statements such as “most structurally similar to [X, Y, Z],” grounded in documented mechanics rather than thematic similarity.
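
As a toy illustration (system names and feature sets hypothetical), structural proximity can be expressed as overlap between documented mechanical features:

```python
# Hypothetical feature sets of documented mechanics; Jaccard overlap
# gives a crude measure of structural proximity.
features = {
    "audited product": {"d20 resolution", "milestone advancement", "hexcrawl tools"},
    "System X": {"d20 resolution", "milestone advancement"},
    "System Y": {"dice pool", "hexcrawl tools"},
}

def jaccard(a: set, b: set) -> float:
    """Size of the intersection over size of the union."""
    return len(a & b) / len(a | b)

target = features["audited product"]
ranked = sorted(
    (name for name in features if name != "audited product"),
    key=lambda name: jaccard(target, features[name]),
    reverse=True,
)
print(ranked)  # ['System X', 'System Y']
```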

Failure-Mode Hypothesis

Failure-Mode Hypothesis identifies plausible stress points based on structural patterns observed across similar designs.

This analysis examines where comparable systems commonly encounter pressure once in sustained use.

All findings are explicitly labeled as inferential and pattern-based. They do not predict table outcomes and do not substitute for playtesting.

It examines patterns such as:

  • Structural patterns common to similar systems that tend to create pressure points
  • Subsystems that are present but weakly integrated into core play loops
  • Areas where structural gaps or complexity commonly require compensatory GM work

These observations are used to generate labeled hypotheses, not predictions.

Methodological Commitments and Boundaries

This framework is designed to produce consistent, traceable findings grounded in the text of the product itself.

All audits apply the same published criteria, evaluate products only against their declared role, and document findings with specific textual references. Verified conditions, reasoned inference, and contextual interpretation are explicitly distinguished.

The framework prioritizes structural clarity over prescriptive design guidance.

I commit to:

  • Evaluating products only against what they claim to be and do, not against external ideals or preferred playstyles
  • Applying the same criteria consistently across products of the same type and scope
  • Documenting all findings with specific references to the text rather than generalized impressions
  • Distinguishing clearly between verified conditions (Tier 1), reasoned inference (Tier 2), and contextual interpretation (Tier 3)

This framework does not evaluate:

  • Enjoyment or “fun”: Whether a group will enjoy a game, find it exciting, or prefer its style
  • Artistic or aesthetic quality: Writing voice, artwork, layout aesthetics, or thematic appeal beyond their functional role
  • Originality or innovation: Whether mechanics are novel, derivative, or familiar within the broader TTRPG landscape
  • Balance as felt in play: Whether encounters feel fair, characters feel equally powerful, or outcomes feel satisfying in practice
  • Table dynamics or social factors: Group chemistry, GM style, player expectations, or interpersonal dynamics
  • House-ruled or modified play: Outcomes that depend on altering the rules beyond what the product provides
  • Market performance or audience reception: Sales, popularity, reviews, awards, or community sentiment

The goal is structural clarity, not prescriptive design guidance: to make visible what the product does, what it assumes, and where risks or constraints arise from the text as written.