Instrument  /  Precision Strategy  /  Module 01

Glasshouse

Rehearse decisions against a simulated public before you make them. Every claim footnoted. Every run diagnosed. Every stakeholder askable. Every line of the engine public.

01  /  What Glasshouse does

Upload the documents behind a decision. Ask how the world will respond. Glasshouse runs a simulated public of psychologically profiled personas through timed rounds of posting, arguing, and opinion-shifting. Then it writes a structured report in which every claim is footnoted to source material, every run exposes its own diagnostic properties, and every simulated stakeholder is available for direct conversation after the run.

The pipeline has five stages. You watch each stage execute. When the Report stage lands, the four-level trust ladder becomes visible and interactive on the page.

Seeds
Graph
Personas
Simulation
Report
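The five stages above can be read as an ordered pipeline. As a sketch only: the stage names come from this page, but the runner below is illustrative, not Glasshouse code.

```python
# Stage names are taken from the pipeline above; the runner itself is an
# illustrative sketch, not the Glasshouse implementation.
STAGES = ["Seeds", "Graph", "Personas", "Simulation", "Report"]

def watch_pipeline(stages=STAGES):
    """Yield one progress line per stage, in execution order."""
    for i, stage in enumerate(stages, start=1):
        yield f"[{i}/{len(stages)}] {stage}"

for line in watch_pipeline():
    print(line)  # final line: "[5/5] Report"
```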

02  /  The trust ladder

Four levels of transparency, live on this page.

Every citation chip is real. Every diagnostic is hoverable. Every persona is askable. Every commit in the engine is clickable. Inspect everything Glasshouse produces before you ever talk to us.

Feature 1  /  Report with footnotes

Report excerpt  /  KPAnG rehearsal  /  Section 04

Dominant narratives

Dominant narrative: the toothless-law framing. Bundeskartellamt President Andreas Mundt's public self-limiting quote [S:rnd14/mundt-q4] is the structural inflection point driving the narrative to dominance by round 14. The minority view — that the law is procedurally clean and instrumentally sufficient — is held by 3 of 14 personas [S:rnd12/minority] and decays monotonically across rounds.

Every claim footnoted. No sentence in any Glasshouse report exists without a source you can inspect.
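The citation chips in the excerpt above follow a visible pattern: `[S:<round>/<source-id>]`. A minimal sketch of how such chips could be pulled out of a report sentence for source lookup — the marker format is from the excerpt, the resolver is hypothetical:

```python
import re

# Chip format taken from the report excerpt, e.g. [S:rnd14/mundt-q4].
# The parser below is an illustrative sketch, not the Glasshouse resolver.
CHIP = re.compile(r"\[S:(?P<round>rnd\d+)/(?P<source>[\w-]+)\]")

def extract_citations(sentence: str) -> list[tuple[str, str]]:
    """Return every (round, source-id) pair cited in a report sentence."""
    return [(m["round"], m["source"]) for m in CHIP.finditer(sentence)]

claim = ("The public self-limiting quote [S:rnd14/mundt-q4] drives the "
         "narrative; the minority view [S:rnd12/minority] decays.")
print(extract_citations(claim))  # [('rnd14', 'mundt-q4'), ('rnd12', 'minority')]
```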

Feature 2  /  Diagnostics

Run diagnostics

Sim ref. kpang-rehearsal-001

Narrative stability

0.82

Healthy

Stakeholder balance

7/7

Healthy

Persona fidelity

5.0/5

Healthy

Claim evidence depth

4.8

distinct personas / claim

Healthy

Minority-view capture

3/4 observed

Partial

Simulation convergence

R14 / R14

Complete

Every run diagnosed. The same quality signals we use internally to tune the product — not a subset, the full framework. A run whose diagnostics are weak is a run whose conclusions should be trusted less. You decide, with the numbers in front of you.
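The diagnostic names and values above can be held in a simple structure. The thresholds and status logic below are assumptions for illustration — they are not the framework Glasshouse actually applies:

```python
from dataclasses import dataclass

# Diagnostic names and values come from the panel above; the thresholds
# and the Healthy/Partial rule are illustrative assumptions.
@dataclass
class Diagnostic:
    name: str
    value: float
    threshold: float  # hypothetical minimum considered healthy

    @property
    def status(self) -> str:
        return "Healthy" if self.value >= self.threshold else "Partial"

run = [
    Diagnostic("Narrative stability", 0.82, 0.70),
    Diagnostic("Stakeholder balance", 7 / 7, 1.0),
    Diagnostic("Persona fidelity", 5.0 / 5, 0.9),
    Diagnostic("Claim evidence depth", 4.8, 3.0),
    Diagnostic("Minority-view capture", 3 / 4, 1.0),
]
weak = [d.name for d in run if d.status != "Healthy"]
print(weak)  # ['Minority-view capture']
```

Weak diagnostics do not hide: they surface in the same structure as the healthy ones, which is the point of the panel.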

Feature 3  /  Ask a stakeholder

Mira Okafor

Stakeholder type: Institutional sceptic  /  Round 14

You

Why did you react so strongly to the noon-rule?

Mira Okafor · Round 14

The rule is procedurally clean but instrumentally toothless. The Bundeskartellamt has no price lever, and the ADAC is already signaling that the retail reaction will be dominated by supply-side friction rather than consumer-side discipline. I was responding to what I could already see from my position in the simulation — three other personas had posted cost-pass-through signals in the previous round, and the BKartA statement about enforcement tooling had just landed. That sequence forms, in my reading, a structural failure rather than a policy one.

Every stakeholder askable. After the run, you can talk to any simulated persona about why they reacted the way they did. The persona answers from its memory of the run — its profile, the posts it saw, the personas it was influenced by. No black box, not even at the individual stakeholder level.
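The memory a persona answers from — its profile, the posts it saw, the personas that influenced it — can be sketched as a small record. The structure below is an assumption built from the description above, not the actual Glasshouse data model:

```python
from dataclasses import dataclass

# Illustrative sketch of post-run persona memory; field names are
# assumptions derived from the page, not the real schema.
@dataclass
class PersonaMemory:
    profile: str
    posts_seen: list[str]
    influenced_by: list[str]

    def answer_context(self, question: str) -> str:
        """Assemble the context a post-run Q&A turn could be grounded in."""
        return (f"Profile: {self.profile}\n"
                f"Saw: {'; '.join(self.posts_seen)}\n"
                f"Influenced by: {', '.join(self.influenced_by)}\n"
                f"Question: {question}")

mira = PersonaMemory(
    profile="Institutional sceptic",
    posts_seen=["cost-pass-through signals (round 13)",
                "BKartA enforcement-tooling statement"],
    influenced_by=["three cost-signal personas"],
)
print(mira.answer_context("Why did you react so strongly to the noon-rule?"))
```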

Feature 4  /  Engine openness

brew-chai / main

commit a4c8f21

refactor: separate persona-fidelity scoring from narrative-stability computation so the two dimensions can be tuned independently

See how this was built

Every line of the engine public. Brew Chai is the open-source orchestration substrate we use to build Glasshouse. Apache 2.0. Read the code that built this report.

03  /  Export surface

Three formats for three downstream consumers.

Every Glasshouse report is available for download. One run, three formats: the buyer walks away with everything their organization needs, for every internal audience that will touch the rehearsal.

Format / MD

Markdown

For the analyst, the research team, the prompt-engineering workflow. Raw report bytes that can be fed directly into your own LLM or documentation pipeline. Give me the raw material.

Format / PDF

PDF

For the archive, the legal file, the board-materials pack appendix, the senior stakeholder who reads PDFs for a living.

Format / PPTX

PowerPoint slide deck

For the CEO's Tuesday committee meeting. The CCO's crisis-response briefing. The minister's press preview. The fund MD's IC-meeting slide pack.

This is not a report-to-slides conversion. Our Marp renderer takes a typed slide-deck schema — stat cards for hero financial anchors, bar rows for narrative-ranking comparisons, two-column layouts for simulation-versus-reality side-by-sides — and produces a .pptx that is ready to project, not a document that still has to be translated into slides by a junior analyst.
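A typed slide-deck schema of the kind described — stat cards, bar rows, two-column layouts — can be sketched as a small set of slide types. Field names and values below are illustrative assumptions; the real schema is not published here:

```python
from dataclasses import dataclass

# Slide types named on the page; fields and sample values are
# illustrative assumptions, not the actual Glasshouse/Marp schema.
@dataclass
class StatCardSlide:
    title: str
    stats: list[tuple[str, str]]  # (label, value) hero anchors

@dataclass
class BarRowSlide:
    title: str
    rows: list[tuple[str, float]]  # (narrative, relative weight)

@dataclass
class TwoColumnSlide:
    title: str
    left: str   # e.g. simulated outcome
    right: str  # e.g. real-world outcome

deck = [
    StatCardSlide("Financial anchors", [("Headline figure", "<from report>")]),
    BarRowSlide("Narrative ranking", [("Dominant framing", 1.0),
                                      ("Minority view", 0.2)]),
    TwoColumnSlide("Simulation vs reality", "<sim summary>", "<real summary>"),
]
print([type(slide).__name__ for slide in deck])
```

Typing the deck is what makes it renderable directly: each slide type maps to one layout, so nothing has to be re-laid-out by hand downstream.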

Every content slide preserves the footnote ladder. Visible footnote markers on every slide. A source appendix at the deck's end. A cover-slide QR code that opens the live scenario URL, so any board member can scan from their seat and be inside the live rehearsal within ten seconds. The four-level trust ladder survives the export.

05  /  Methodology

Four rules. The credibility anchor for every published case.

Every published case study follows four rules. The rules are the credibility anchor for the entire gallery.

Rule 01

T=0 is explicitly stated

Every rehearsal has a declared T=0 moment — the point at which a real-world decision-maker would have commissioned the analysis. The seed documents we feed into Glasshouse contain only information available before T=0. The post-T=0 ground truth — what actually happened — is held out and never enters the simulation. We publish T=0 for every case.
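The holdout rule above reduces to a date filter: only material dated before the declared T=0 seeds the run. A minimal sketch — T=0 and the document dates are hypothetical, the rule is the one stated:

```python
from datetime import date

# Illustrative holdout check. The T=0 value and document dates are
# hypothetical; the rule is the one stated above: only pre-T=0
# material enters the seed set, ground truth is held out.
T0 = date(2024, 1, 1)  # hypothetical declared T=0 for a case

corpus = [
    ("seed: draft regulation text", date(2023, 12, 12)),
    ("ground truth: press reaction", date(2024, 1, 9)),  # held out
]
seeds = [name for name, dated in corpus if dated < T0]
held_out = [name for name, dated in corpus if dated >= T0]
print(seeds)     # ['seed: draft regulation text']
print(held_out)  # ['ground truth: press reaction']
```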

Rule 02

T=0 is post-cutoff

The language model powering Glasshouse cannot have memorized the real-world reaction to any event in our case-study pool. Every case is anchored to a real event that occurred after the training cutoff of the model we were running at the time. We publish the model and the cutoff date for every case, with citations. If the model cannot have retrieved the reaction from training data, then a correct simulation output is evidence of actual simulation work.

Rule 03

Financial anchors are real public-record numbers

Every financial impact number on this page comes from primary reporting sources — Bloomberg, CNBC, Reuters, Fortune, the Financial Times, the Globe and Mail, Handelsblatt, ZDF, ADAC, Bundeskartellamt. No estimated, projected, or modeled numbers appear in our case studies. Every number is linkable and verifiable.

Rule 04

We never tune Glasshouse to pass a pre-committed case

Glasshouse is tuned on aggregate eval results across many scenarios. A case is promoted from internal eval material to public case study only when it already passes the eval rubric cleanly as a byproduct of that aggregate tuning. If no case in our current pool passes cleanly on a marketing deadline, we delay the publication or change the case. We never reverse-engineer Glasshouse around a marketing case.

06  /  Deployment

How Glasshouse deploys.

Hosted on Creative Human infrastructure or on your own GCP environment — Vertex AI, Cloud Run, or GKE, depending on scale and security posture. You log in, create scenarios, monitor simulations, chat with personas after the run, and share scenarios with colleagues. The scenario itself, at a permanent URL inside your workspace, is the primary deliverable. The three download formats — Markdown, PDF, and PowerPoint — are how the rehearsal lands inside your organization's downstream workflows.

Data handling is a contractual conversation. The deployment model is consultative.

07  /  Request a demo

Three fields. Short.