reframe-v3-poietic

Reframe v3: Poietic PBC as reliable infrastructure for hybrid human-AI genomics

Metadata

Status: abandoned ‖ paused
Agent identity: 8da3b6fc81685ca44a4e15eb35307ab90ce3f0500e9c08b8b0caae848f7d4ce9
Created: 2026-05-01T20:02:28.989393123+00:00
Started: 2026-05-01T20:02:59.248758894+00:00
Tags: grant, urgent, reframe, v3, eval-scheduled

Description

Erik is reframing the Google.org application hours before submission. The PHR-into-rare-disease spine in v2 felt stitched and overclaimed. The new spine, in Erik's own framing: 'a way to do reliable, careful work in the space of clinical genomics, genomics in general, bioinformatics.'

This task produces a NEW file: workgraph_google_application_FINAL_v3.md. It does NOT overwrite v2. v2 stays as a fallback if v3 misfires under deadline. Erik picks which to submit at paste-time.

The spine is INFRASTRUCTURE-FOR-RELIABLE-GENOMICS, not a specific scientific deliverable.

The new frame, in plain terms

  • What Poietic PBC exists to do: make human-machine collaboration legible and responsive to its participants (the verbatim PBC public benefit statement, filed Delaware 2026-04-13).
  • What WorkGraph is: the open-source (MIT, Rust) infrastructure for that mission. A persistent, inspectable task graph where humans and AI agents are peers. Auditable across sessions, operators, and machines.
  • What the grant funds: maturing the infrastructure (multi-user, hardening, integrations with Gemini / AlphaGenome / AlphaFold via Chorus) and demonstrating it on real ongoing genomics work in the founders' labs — Erik's pangenomics program (vg, PGGB, comparative pangenomics across vertebrates) and Luca's clinical genomics program (CRISPResso, CRISPRme off-target analysis, Chorus orchestrating genomic AI). Not a constructed PHR-into-rare-disease arc.
  • What we measure: not 'did we discover X?' but 'did the infrastructure make the work more reliable, more auditable, more collaborative across human and AI participants?' Measured via case studies (Vaughn's ethnographic work), the published computation graphs themselves, BioBench (open benchmark for agentic biology), and adoption by independent labs.
  • Why this matters for clinical/translational impact: clinical genomics is high-stakes (variant calls inform patient care). 'Reliable, careful, auditable' is exactly what clinical genomics needs and what current agentic CLIs cannot demonstrate. The proposal is honest about being upstream of clinical use, not claiming to deliver clinical results in 36 months.
  • Why this team: Erik = vg/PGGB authority in pangenomics. Luca = CRISPResso/CRISPRme authority in clinical genome editing analysis, plus Chorus orchestrating genomic AI. Vaughn = organizational design under uncertainty, ethnographic methods. Each founder's track record is real and verifiable.

What to read before drafting

  1. workgraph_google_application_FINAL_v2.md — the v2 application. Reuse what works (team bios, budget, COI section, Liverpool acknowledgment, organizational structure). Replace what doesn't (PHR-rare-disease spine, the demonstration framing).
  2. workgraph_extended_outline_v2.md — has more detail on team, infrastructure, demonstration design. Useful raw material.
  3. STATE.md (especially §5, the v2 pivot decision log; but note that Erik is now pivoting AGAIN, to v3, away from v2's PHR spine)
  4. CLAUDE.md — full project context, especially 'Key Narrative Decisions', 'Word Limits', and 'Style Preferences'
  5. ~/poietic.life/notes/liverpool-hive-mind-research-20260501.md — IF it exists by the time you read inputs, use it for §17/§28 Liverpool acknowledgment. If not yet written, leave a clearly-marked TODO for Erik in those sections and proceed with the rest.
  6. ~/poietic.life/index.html — the deployed landing page, for the verbatim PBC benefit statement and the public-facing positioning.

What to produce

A new file at the project root: workgraph_google_application_FINAL_v3.md. Same structure as v2 (section numbers match the Google.org form). Every section has an answer. Every word cap respected.

Section-by-section guidance

  • Sections that probably stay close to v2: team bios, budget breakdown, COI disclosure, organizational structure, contact info, certifications. Copy from v2, sanity-check, leave alone.
  • Sections that need full reframe: any section that previously led with PHRs, rare disease, the methodology comparison demonstration framed as PHR discovery. Specifically check §16 (problem statement), §17 (approach), §18 (deliverables), §19 (impact), §20-23 (technical), §26 (track record), §28 (related work / Liverpool), §29 (theory of change), §30 (risks).
  • The demonstration: instead of 'three-way comparison on PHR discovery', frame it as 'longitudinal deployment of WorkGraph across the founders' active research programs, with case studies, computation graphs, and methodology comparisons drawn from real workflows.' Concrete examples can include: Erik's pangenome construction pipelines (vg/PGGB), Luca's CRISPR analysis workflows (CRISPResso/CRISPRme), the open BioBench benchmark, Vaughn's ethnographic case studies of the deployments. The deliverable is the infrastructure plus the corpus of auditable computation graphs, not a specific scientific result.
  • Risk section: now MUST include 'risk that reviewers want a specific scientific deliverable rather than infrastructure.' Mitigation: every case study produces a real research output (preprint, software release, dataset). The infrastructure claim is testable — adoption by labs outside the founders' is the concrete metric.

Constraints

  • HARD: every word cap in CLAUDE.md 'Word Limits' is respected. Count words after each section. If a section is over, cut.
  • HARD: no em-dashes (CLAUDE.md style rule).
  • HARD: no 'PI' / 'lead PI' language. Co-founders only. Erik / Luca / Vaughn order.
  • HARD: no specific effort percentages.
  • HARD: no v1/KRAS terminology. No MRTX1133, Boltz, RFdiffusion, DiffDock, pancreatic cancer.
  • HARD: no 'wrote this proposal with WorkGraph' recursion claim.
  • HARD: CRISPRme/Casgevy framing per CLAUDE.md (Canver 2015 enhancer work + independent CRISPRme 2023 finding, NOT direct collaboration with Casgevy developer).
  • HARD: do NOT overwrite workgraph_google_application_FINAL_v2.md. Write a new file workgraph_google_application_FINAL_v3.md.
  • SOFT: the new spine is honest and falsifiable. Don't replace one form of overclaiming with another. If a section invites overclaiming, write less, not more.
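The HARD constraints above are mechanically checkable before handing the draft to Erik. A minimal lint sketch follows; the section-heading pattern ("## §N ...") and the word caps are assumptions, since the real caps live in CLAUDE.md 'Word Limits'.

```python
import re

# Banned strings, per the HARD constraints above.
# "\u2014" is the em-dash character.
BANNED = ["MRTX1133", "Boltz", "RFdiffusion", "DiffDock",
          "pancreatic cancer", "KRAS", "\u2014"]

# Hypothetical caps for illustration; replace with the real
# numbers from CLAUDE.md 'Word Limits'.
WORD_CAPS = {"16": 250, "17": 500}

def lint(text):
    """Return a list of human-readable constraint violations in a draft."""
    problems = []
    for term in BANNED:
        if term in text:
            problems.append(f"banned term: {term!r}")
    if re.search(r"\bPI\b", text):  # catches both 'PI' and 'lead PI'
        problems.append("PI language present")
    return problems

def section_word_counts(text):
    """Split on assumed '## §N ...' headings and count words per section."""
    counts = {}
    parts = re.split(r"^## \u00a7(\d+)[^\n]*$", text, flags=re.M)
    # parts = [preamble, num1, body1, num2, body2, ...]
    for num, body in zip(parts[1::2], parts[2::2]):
        counts[num] = len(body.split())
    return counts

def over_cap(counts, caps=WORD_CAPS):
    """Sections whose word count exceeds their cap."""
    return {n: c for n, c in counts.items() if n in caps and c > caps[n]}
```

Run it after each section is drafted; any non-empty result from lint() or over_cap() means cut before moving on.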

Output

  1. workgraph_google_application_FINAL_v3.md at project root.
  2. ~/poietic.life/notes/v3-reframe-summary-20260501.md: short companion doc (under 500 words) summarizing what changed from v2 to v3, what was kept, what was cut, and any sections where you had to leave a TODO for Erik.

Use wg log to record a one-paragraph summary listing word counts for every section and any TODOs left for Erik.

Validation

  • New file workgraph_google_application_FINAL_v3.md created (v2 untouched)
  • Every section of the form has an answer
  • Every word cap from CLAUDE.md respected (recount each section)
  • Spine is infrastructure for reliable hybrid human-AI genomics, not PHR-rare-disease
  • Founder track record concretely cited (vg/PGGB, CRISPResso/CRISPRme/Chorus, organizational design)
  • No em-dashes, no PI language, no v1 terminology, no recursion claim
  • CRISPRme/Casgevy framing factually precise
  • Liverpool acknowledgment present (or clearly marked TODO if liverpool research not yet landed)
  • Risk section explicitly addresses 'reviewers may want a scientific deliverable'
  • Companion summary written to ~/poietic.life/notes/v3-reframe-summary-20260501.md
  • Changes committed on the agent's worktree branch (not pushed)

Depends on

Required by

Messages (1)

  1. #1 · user · 2026-05-01T20:04:42.494869568+00:00 · sent
    IMPORTANT ADDITION TO BRIEF — Erik flagged that a core part of his ongoing work is developing foundational REFERENCE RESOURCES for genomics research (vg as the toolkit, HPRC's human pangenome reference, PGGB-built references, comparative pangenome references across species). These reference resources are very high stakes and high value: when you build a reference, downstream work in thousands of labs depends on it being correct. Errors propagate. Biases propagate.
    
    This is the concrete operationalization of 'reliable, careful, auditable':
    - 'Get the reference correct' is not an aesthetic claim, it's exactly what reference-building requires
    - Bias and error in references contaminate downstream research at scale
    - Reference resources are inherently public goods (open data, open methods, open tooling)
    
    Please weave this into v3 as a concrete anchor for the demonstration, especially in the sections covering approach, deliverables, and impact. Suggested framing direction (you decide exact wording):
    
    - Erik's authoritative position is partly in BUILDING REFERENCES (vg / PGGB / HPRC) not just analyzing data with them
    - The demonstration can naturally include reference-construction workflows as case studies — they are public, high-stakes, error-sensitive, multi-actor (human curators + algorithmic pipelines + AI agents could all participate), exactly the place where auditable hybrid coordination is most useful
    - The bar for reference work is 'correctness, traceability, and freedom from bias' — which is the same bar legible coordination delivers
    - Public benefit angle is unusually strong: reference resources are infrastructure for the whole research community, fitting Poietic PBC's public benefit framing
    
    Do NOT make this the entire spine — the spine remains 'reliable infrastructure for hybrid human-AI work in clinical and comparative genomics.' Reference resources are the most concrete and high-stakes example of where this matters. Use them as evidence and demonstration anchor, not as the whole story.
    
    Also: do not overclaim. Don't say WorkGraph will produce the next human reference. Say it will be deployed in reference-construction workflows, with the auditable graph itself as a deliverable showing how decisions were made.

Log