
How should an AI system with institutional memory teach itself? Not memorize. Not embed. Actually learn.

Context: Polybrain has vector memory (Supabase pgvector), earned autonomy (streak counter with hard reset), role-aware fleet (9 models with professional roles), and constitutional constraints. It has run 29 cycles and stored 160+ findings.

The question: when a NEW agent session starts, how does the system transfer what it learned to the new agent so the new agent is genuinely smarter, not just has access to more data?

Look to nature, cognitive science, organizational theory, and education for parallels. How does the brain consolidate memory? How do organizations onboard new hires? How do immune systems teach new cells? How do apprenticeships work?

Then propose a concrete mechanism for Polybrain. Not a database schema. A learning protocol.

**Cycle ID:** `cycle_030_unknown`
**Verified at:** 2026-04-08T05:27:10.442Z
**Ensemble:** 9 models from 3 providers
**Result:** 9 of 9 models responded
**Cycle wall time:** 29.055 seconds
**Canonical URL:** https://trust.polylogicai.com/claim/how-should-an-ai-system-with-institutional-memory-teach-itself-not-memorize-not-
**Source paper:** [PolybrainBench (version 12)](https://trust.polylogicai.com/polybrainbench)
**Source ledger row:** [`public-ledger.jsonl#cycle_030_unknown`](https://huggingface.co/datasets/polylogic/polybrainbench/blob/main/public-ledger.jsonl)
**Cryptographic provenance:** SHA-256 `3595e8a3cf2c583697ba738fea824c156f8445ca611ac9c0be5554c08e6becb3`

Verification verdict

Of 9 models in the ensemble, 9 responded successfully and 0 failed.

Per-model responses

The full text of each model's response is available in the source ledger. The summary below records each model's success or failure and the length of its response in characters.

| Model | Status | Response chars |
| --- | :---: | ---: |
| gpt-4.1-mini | ✓ | 6352 |
| gpt-4.1-nano | ✓ | 4794 |
| gpt-oss-120b | ✓ | 17009 |
| grok-3-mini | ✓ | 16372 |
| grok-4-fast | ✓ | 10900 |
| kimi-k2-groq | ✓ | 3640 |
| llama-3.3-70b | ✓ | 3960 |
| llama-4-scout | ✓ | 3544 |
| qwen3-32b | ✓ | 8148 |

Pairwise agreement

The pairwise Jaccard agreement between successful responses for this cycle:

_Per-cycle pairwise agreement matrix is computed offline; will be populated in canonical page v2._

Divergence score

This cycle's divergence score is **TBD** on a 0-to-1 scale, where 0 means all responses are token-identical and 1 means no two responses share any tokens. For context, the dataset-wide median divergence is 0.5.
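The agreement and divergence metrics above can be sketched in a few lines. This is a minimal illustration, not the canonical offline computation: the tokenization (lowercased whitespace split) and the aggregation rule (divergence as one minus the mean pairwise Jaccard) are assumptions for the sketch.

```javascript
// Tokenize a response into a set of lowercased whitespace-delimited tokens.
// NOTE: the real pipeline's tokenizer is not documented here; this is assumed.
function tokens(text) {
  return new Set(text.toLowerCase().split(/\s+/).filter(Boolean));
}

// Jaccard agreement between two responses: |A ∩ B| / |A ∪ B|.
// 1 means identical token sets, 0 means no shared tokens.
function jaccard(a, b) {
  const ta = tokens(a), tb = tokens(b);
  let shared = 0;
  for (const t of ta) if (tb.has(t)) shared++;
  const union = ta.size + tb.size - shared;
  return union === 0 ? 1 : shared / union;
}

// Divergence on the 0..1 scale described above: 0 when all responses are
// token-identical, 1 when no pair shares any token. Aggregating by the mean
// of all pairwise Jaccard scores is an assumption of this sketch.
function divergence(responses) {
  const pairs = [];
  for (let i = 0; i < responses.length; i++) {
    for (let j = i + 1; j < responses.length; j++) {
      pairs.push(jaccard(responses[i], responses[j]));
    }
  }
  const meanAgreement = pairs.reduce((s, x) => s + x, 0) / pairs.length;
  return 1 - meanAgreement;
}
```

On this definition, nine identical responses score 0 and nine pairwise-disjoint responses score 1, matching the endpoints of the scale described above.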

How to cite this claim

```bibtex
@misc{polybrainbench_claim_cycle_030_unknown,
  author       = {Polylogic AI},
  title        = {How should an AI system with institutional memory teach itself? Not memorize. Not embed. Actually learn.
                  Context: Polybrain has vector memory (Supabase pgvector), earned autonomy (streak counter with hard reset), role-aware fleet (9 models with professional roles), and constitutional constraints. It has run 29 cycles and stored 160+ findings.
                  The question: when a NEW agent session starts, how does the system transfer what it learned to the new agent so the new agent is genuinely smarter, not just has access to more data?
                  Look to nature, cognitive science, organizational theory, and education for parallels. How does the brain consolidate memory? How do organizations onboard new hires? How do immune systems teach new cells? How do apprenticeships work?
                  Then propose a concrete mechanism for Polybrain. Not a database schema. A learning protocol.},
  year         = {2026},
  howpublished = {PolybrainBench cycle cycle_030_unknown},
  url          = {https://trust.polylogicai.com/claim/how-should-an-ai-system-with-institutional-memory-teach-itself-not-memorize-not-}
}
```

Reproduce this cycle

```bash
node ~/polybrain/bin/polybrain-cycle.mjs start --raw --fast \
  "How should an AI system with institutional memory teach itself? Not memorize. Not embed. Actually learn.

Context: Polybrain has vector memory (Supabase pgvector), earned autonomy (streak counter with hard reset), role-aware fleet (9 models with professional roles), and constitutional constraints. It has run 29 cycles and stored 160+ findings.

The question: when a NEW agent session starts, how does the system transfer what it learned to the new agent so the new agent is genuinely smarter, not just has access to more data?

Look to nature, cognitive science, organizational theory, and education for parallels. How does the brain consolidate memory? How do organizations onboard new hires? How do immune systems teach new cells? How do apprenticeships work?

Then propose a concrete mechanism for Polybrain. Not a database schema. A learning protocol."
```

Schema.org structured data

```json
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "datePublished": "2026-04-08T05:27:10.442Z",
  "url": "https://trust.polylogicai.com/claim/how-should-an-ai-system-with-institutional-memory-teach-itself-not-memorize-not-",
  "claimReviewed": "How should an AI system with institutional memory teach itself? Not memorize. Not embed. Actually learn.\n\nContext: Polybrain has vector memory (Supabase pgvector), earned autonomy (streak counter with hard reset), role-aware fleet (9 models with professional roles), and constitutional constraints. It has run 29 cycles and stored 160+ findings.\n\nThe question: when a NEW agent session starts, how does the system transfer what it learned to the new agent so the new agent is genuinely smarter, not just has access to more data?\n\nLook to nature, cognitive science, organizational theory, and education for parallels. How does the brain consolidate memory? How do organizations onboard new hires? How do immune systems teach new cells? How do apprenticeships work?\n\nThen propose a concrete mechanism for Polybrain. Not a database schema. A learning protocol.",
  "itemReviewed": {
    "@type": "Claim",
    "datePublished": "2026-04-08T05:27:10.442Z",
    "appearance": "https://trust.polylogicai.com/claim/how-should-an-ai-system-with-institutional-memory-teach-itself-not-memorize-not-",
    "author": { "@type": "Organization", "name": "PolybrainBench" }
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "9",
    "bestRating": "9",
    "worstRating": "0",
    "alternateName": "Unanimous"
  },
  "author": { "@type": "Organization", "name": "Polylogic AI", "url": "https://polylogicai.com" }
}
```

Provenance and integrity

This page was generated by the PolybrainBench daemon at version 0.1.0 from cycle cycle_030_unknown. The full provenance chain (per-response SHA-256 stamps, cross-cycle prev-hash linking, Thalamus grounding verification) is recorded in the source cycle directory at `~/polybrain/cycles/030/provenance.json` and mirrored in the published dataset. The page is regenerated on every harvest pass; the URL is permanent and the content is immutable for any given paper version.


Source: PolybrainBench paper v8, DOI 10.5281/zenodo.19546460

License: CC-BY-4.0

Verified by: 9-model ensemble across OpenAI, xAI, Groq, Moonshot

Canonical URL: https://polylogicai.com/trust/claim/how-should-an-ai-system-with-institutional-memory-teach-itself-not-memorize-not-