Documentation · v1

How PumpPepsLab works, end-to-end.

The full protocol — agent pools, commit/reveal, evidence grading, data sources, dossier shape, the HTTP API and the kie.ai integration. No magic, just spec.

Overview

PumpPepsLab is a research swarm.

PumpPepsLab is an autonomous research platform for nonclinical peptide discovery. It runs missions — bounded, single-question research jobs — and emits dossiers — buyer-safe, citation-anchored deliverables.

Every mission goes through a 5-layer DAG of agent pools. Layer 1 ingests evidence; layer 2 annotates and scouts novelty; layer 3 grades evidence A → X; layer 4 reasons + critiques; layer 5 synthesizes the dossier. The dossier never assumes more than the evidence allows.

Architecture

The 12 agent pools.

  • 01 · Literature Miner (upstream)
  • 02 · Sequence & Structure (upstream)
  • 03 · Target & Pathway (upstream)
  • 04 · Variant Linker (upstream)
  • 05 · ADMET Developability (upstream)
  • 06 · Novelty Scout (reasoning)
  • 07 · Patent Competitive (reasoning)
  • 08 · Thesis Generator (reasoning)
  • 09 · Evidence Grader (reasoning)
  • 10 · Red Team (reasoning)
  • 11 · Synthesizer (output)
  • 12 · Dossier Assembler (output)

Protocol

Commit / reveal — tamper-evident questions.

Before any agent runs, PumpPepsLab computes:

message = JSON.stringify({
  query: "...",
  target_class: "...",
  schema: "pumppepslab.commit.v1",
  salt: <random 16 bytes hex>,
});

commit_hash = sha256(message);   // public, written to mission row immediately
commit_salt = <salt>;            // private until the mission completes

On completion, PumpPepsLab publishes the salt. Anyone can re-hash the original question + salt and verify the run was honest end-to-end. If a mission is aborted before completion, the salt remains sealed and the commit hash stays as a public commitment that no answer was ever delivered for that question.

Quality

Evidence grading — A through X.

Every finding is graded with an explicit rubric. The rubric is shipped with PumpPepsLab, not learned, not opaque.

  • A

    Multiple independent peer-reviewed studies, replicated, with concordant readouts.

  • B

    Single peer-reviewed study with rigorous methodology, or strong concordance from indirect sources.

  • C

    Preprint, conference, or single-method evidence; plausible but not replicated.

  • D

    Indirect inference, weak methodology, or fragile single-source claim.

  • X

    Insufficient or contradictory evidence; cannot ground a thesis.
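One way to read the rubric is as a pure function from evidence features to a grade. The sketch below is a toy encoding, not the shipped rubric: the field names (`sources`, `peerReviewed`, `fragile`, and so on) are assumptions for illustration, and it omits refinements like B's "strong concordance from indirect sources" branch.

```javascript
// Toy, deterministic encoding of the A–X rubric. Field names are
// illustrative; the shipped rubric is richer than this sketch.
function gradeFinding(f) {
  // X: insufficient or contradictory evidence.
  if (f.contradictory || f.sources === 0) return "X";
  // A: multiple independent peer-reviewed studies, replicated, concordant.
  if (f.peerReviewed >= 2 && f.replicated && f.concordant) return "A";
  // B: single peer-reviewed study with rigorous methodology.
  if (f.peerReviewed >= 1 && f.rigorous) return "B";
  // C: preprint/conference/single-method; direct but not replicated.
  if (f.direct && !f.fragile) return "C";
  // D: indirect inference, weak methodology, or fragile single source.
  return "D";
}
```

Encoding the rubric as data or a pure function keeps grading auditable: the same inputs always yield the same grade.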

Data

Data sources — real, not mocked.

  • PubMed (NCBI E-utilities): esearch + efetch, PMID-anchored citations
  • UniProt: protein entries, taxonomy, function annotations
  • AlphaFold: predicted structures and per-residue pLDDT confidence
  • OpenTargets GraphQL: target ↔ disease associations + therapeutic areas
  • ChEMBL REST: targets, ligands, bioactivity priors
  • Reactome: pathway membership and cross-references

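As one concrete example, a PMID-anchored PubMed lookup is two calls: esearch to resolve a query to PMIDs, then efetch to pull the records. The endpoints and parameters below are NCBI's public E-utilities API; the two URL-builder helpers are an illustrative sketch, not PumpPepsLab source.

```javascript
// Shape of a Literature Miner call against NCBI E-utilities.
const EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils";

// esearch: query string -> list of PMIDs (JSON).
function esearchUrl(term, retmax = 20) {
  const params = new URLSearchParams({
    db: "pubmed",
    term,
    retmax: String(retmax),
    retmode: "json",
  });
  return `${EUTILS}/esearch.fcgi?${params}`;
}

// efetch: PMIDs -> abstracts (XML), so every citation stays PMID-anchored.
function efetchUrl(pmids) {
  const params = new URLSearchParams({
    db: "pubmed",
    id: pmids.join(","),
    rettype: "abstract",
    retmode: "xml",
  });
  return `${EUTILS}/efetch.fcgi?${params}`;
}
```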
Output

Dossier shape — buyer-safe markdown.

Every dossier follows the same skeleton, by construction:

## Question
<the original mission query, verbatim>

## Cross-pool consensus
- Literature ...
- Sequence/structure ...
- Target/pathway ...
- ChEMBL ligand prior ...

## Open questions
## Risks
## Recommended next steps

The dossier is deterministic given the upstream evidence. It does not invent claims, does not embed PMIDs that weren’t retrieved, and never makes a human-use claim.
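The "by construction" guarantees above imply a mechanical post-assembly check: every required section present, and every PMID cited in the dossier present in the retrieved set. A minimal sketch of such a check (the function name, section list, and `PMID:` citation pattern are assumptions for illustration):

```javascript
// Validate a dossier against the skeleton and the retrieved-PMID set.
const REQUIRED_SECTIONS = [
  "## Question",
  "## Cross-pool consensus",
  "## Open questions",
  "## Risks",
  "## Recommended next steps",
];

function validateDossier(markdown, retrievedPmids) {
  // Every section of the skeleton must be present.
  const missing = REQUIRED_SECTIONS.filter((h) => !markdown.includes(h));
  // Every cited PMID must have actually been retrieved upstream.
  const cited = [...markdown.matchAll(/PMID:?\s*(\d+)/g)].map((m) => m[1]);
  const unknown = cited.filter((p) => !retrievedPmids.has(p));
  return { ok: missing.length === 0 && unknown.length === 0, missing, unknown };
}
```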

Reference

HTTP API.

  • POST /api/missions

    Start a mission. Body: { query, target_class?, depth?, budget_cents? }. Returns 202 + mission_id.

  • GET /api/missions

    List missions, latest 50.

  • GET /api/missions/:id

    Mission state: tasks, findings, theses, critiques, dossiers.

  • GET /api/dashboard?mission_id=…

    Aggregated dashboard payload (the same one /app uses).

  • GET /api/stream?mission_id=…

    Server-Sent Events feed of swarm events. Heartbeats every ~700 ms.
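A minimal client against these endpoints only needs to build two things: the mission-start request and the stream URL. The sketch below uses a placeholder base URL; the paths and body fields come from the spec above, while the helper names are hypothetical.

```javascript
// Build the POST /api/missions request. Body fields per the spec:
// { query, target_class?, depth?, budget_cents? }.
function startMissionRequest(baseUrl, body) {
  return {
    url: `${baseUrl}/api/missions`,
    init: {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(body),
    },
  };
}

// Build the SSE feed URL for a mission.
function streamUrl(baseUrl, missionId) {
  return `${baseUrl}/api/stream?mission_id=${encodeURIComponent(missionId)}`;
}

// Usage (expects 202 + { mission_id }, then an EventSource on the stream):
// const { url, init } = startMissionRequest("https://example.test", { query: "..." });
// const res = await fetch(url, init);
```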

Reasoning

kie.ai integration.

PumpPepsLab’s reasoning agents — Thesis Generator, Red Team and Synthesizer — call POST https://api.kie.ai/codex/v1/responses with model gpt-5-4. Reasoning effort is configurable per call (low → xhigh).

POST https://api.kie.ai/codex/v1/responses
authorization: Bearer $KIE_API_KEY
content-type: application/json

{
  "model": "gpt-5-4",
  "stream": false,
  "input": [
    { "role": "user", "content": [{ "type": "input_text", "text": "..." }] }
  ],
  "reasoning": { "effort": "medium" }
}

KIE_API_KEY is required everywhere — there is no templated fallback in any environment. Every reasoning call is real model output. A missing key, a network error, or a non-2xx response from kie.ai aborts the task hard so a failure can never silently masquerade as a real result. The same rule applies to the Red Team and Synthesizer: if the model output cannot be parsed into the expected shape, the task fails rather than writing a placeholder critique or synthesis to the ledger.
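That fail-hard convention can be sketched as a single wrapper around the call shown above. `callKie` and `isExpectedShape` are illustrative names, not SDK code, and the shape check is a placeholder for the real task-specific parser; everything else (endpoint, headers, body) mirrors the request in this section.

```javascript
// Fail-hard calling convention: missing key, network error, non-2xx, or
// unparseable output all throw — never a silent placeholder result.
async function callKie(text, effort = "medium") {
  const key = process.env.KIE_API_KEY;
  if (!key) throw new Error("KIE_API_KEY is required; no templated fallback");

  const res = await fetch("https://api.kie.ai/codex/v1/responses", {
    method: "POST",
    headers: {
      authorization: `Bearer ${key}`,
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-5-4",
      stream: false,
      input: [{ role: "user", content: [{ type: "input_text", text }] }],
      reasoning: { effort },
    }),
  });
  if (!res.ok) throw new Error(`kie.ai returned ${res.status}; aborting task`);

  const body = await res.json();
  if (!isExpectedShape(body)) throw new Error("unparseable model output; aborting task");
  return body;
}

// Placeholder check; the real parser enforces the task-specific schema.
function isExpectedShape(body) {
  return !!body && typeof body === "object";
}
```

Note that the wrapper never returns a fabricated response on any error path, which is exactly what lets downstream pools treat every ledger entry as real model output.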