
The peer review workflow simulates a thorough academic peer review of a paper, draft, or research artifact. It produces severity-graded feedback covering methodology, claims, writing quality, and reproducibility — using the same reviewer subagent that verifies outputs in other Feynman workflows.

Invocation

feynman review "<artifact>"
Examples
feynman review "arxiv:2401.12345"
feynman review my-draft.md
/review arxiv:2310.06825
/review ~/papers/my-draft.pdf
You can pass an arXiv ID, a URL, or a local file path.

Workflow stages

1. Plan

Before starting, the lead agent outlines what will be reviewed, the review criteria (novelty, empirical rigor, baselines, reproducibility, and so on), and any verification-specific checks needed for claims, figures, and reported metrics. The plan is presented to you for confirmation before proceeding.
2. Gather evidence

For papers or artifacts with associated code and cited work, the researcher subagent gathers evidence: it inspects the paper, the codebase, cited sources, and any linked experimental artifacts, saving findings to <slug>-research.md.
For small or simple artifacts where evidence gathering is overkill, the reviewer subagent is run directly without a prior research pass.
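As an illustration, the evidence file might collect findings like the sketch below. The section names and entries are hypothetical examples, not a format the workflow guarantees:

```markdown
# Research evidence: <slug>

## Paper
- Key claims, with the section/table each claim cites. (hypothetical entry)

## Codebase
- Notes on whether released code matches the described method. (hypothetical entry)

## Cited work
- Whether baseline numbers match the originally published results. (hypothetical entry)
```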
3. Review

The reviewer subagent reads the document end-to-end against standard academic criteria:
  • Are the claims supported by the methodology?
  • Does the experimental design have potential confounds?
  • Are baselines appropriate and fairly compared?
  • Is the paper reproducible from the description given?
  • Are reported metrics consistent with the experimental setup?
The reviewer uses <slug>-research.md as source material for inline annotations.
4. Severity grading

Each piece of feedback is assigned one of three severity levels:
| Severity | Meaning |
| --- | --- |
| FATAL | Fundamental issues that undermine the paper’s core validity |
| MAJOR | Significant problems that should be addressed before publication |
| MINOR | Suggestions for improvement that do not block acceptance |
If the first review finds FATAL issues and they are fixed, one additional verification-style review pass is run before delivery.
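The rerun logic can be pictured with a small sketch. This is not the tool's implementation: it assumes a hypothetical review format in which each feedback item starts with its severity marker (`FATAL:`, `MAJOR:`, or `MINOR:`).

```python
import re

SEVERITIES = ("FATAL", "MAJOR", "MINOR")


def tally_severities(review_text: str) -> dict[str, int]:
    """Count feedback items by severity marker (hypothetical format:
    each item's line begins with 'FATAL:', 'MAJOR:', or 'MINOR:')."""
    counts = {s: 0 for s in SEVERITIES}
    for line in review_text.splitlines():
        m = re.match(r"\s*(FATAL|MAJOR|MINOR):", line)
        if m:
            counts[m.group(1)] += 1
    return counts


def needs_second_pass(counts: dict[str, int]) -> bool:
    # Per the workflow: a follow-up verification-style review runs only
    # when the first pass surfaced FATAL issues (and they were fixed).
    return counts["FATAL"] > 0
```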
5. Deliver

Exactly one review artifact is saved to outputs/<slug>-review.md. The report ends with a Sources section containing direct URLs for every inspected external source.

Outputs

| Artifact | Path |
| --- | --- |
| Research evidence (when gathered) | <slug>-research.md |
| Peer review report | outputs/<slug>-review.md |

Review report structure

  • Summary assessment — overall evaluation and recommendation
  • Strengths — what the paper does well
  • FATAL issues — fundamental problems that must be addressed
  • MAJOR issues — significant concerns with suggested fixes
  • MINOR issues — smaller improvements and suggestions
  • Inline annotations — specific comments tied to sections or claims in the document
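Put together, the structure above might render as a skeleton like this (the headings mirror the list; the exact wording of a generated report may differ):

```markdown
# Peer review: <slug>

## Summary assessment
Overall evaluation and recommendation.

## Strengths
- ...

## FATAL issues
- ...

## MAJOR issues
- ...

## MINOR issues
- ...

## Inline annotations
- Section 4.2: ...

## Sources
- Direct URLs for every inspected external source.
```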

Subagents used

| Subagent | Role |
| --- | --- |
| researcher | Gathers evidence from the paper, code, and cited work |
| reviewer | Produces the peer review with severity-graded annotations |

Customization

You can focus the review by being specific in your prompt:
/review arxiv:2401.12345 focus on the statistical methodology in Section 4
/review my-draft.md check the claims in the experiments section against the reported numbers
The reviewer adapts its analysis to your priorities while still performing a baseline check of the full document.