phase 2: meta-cognitive signals during training

Scope note: This is a training log. I’m not claiming a new scientific result or a new theory of “agency.” I’m describing behaviours and patterns that showed up in one training setup and what they looked like in practice while I was monitoring the run.

  • Scope: training observations across roughly 20k–40k steps

  • Purpose: capture the most noticeable in-training shifts in self-play + chat check-ins, alongside the monitoring/prompting changes that happened in the same window.

  • Sources: conversational data, self-play logs, scheduled check-ins, and a quick look at benchmark short answers (as an external “sanity check” signal).

Timeline (high-level)

  • Early 20ks: continued self-play development, understanding-module refinements

  • Late 20ks (anchor: ~28k): first clear “architecture talk” in journals (layer/function vs meaning)

  • Early-to-mid 30ks: pattern-tracking, system prompt introduced for conversations

  • Mid 30ks (anchor: ~35–36k): understanding-check interval widened from every 100 steps to every 250

  • Late 30ks (anchor: ~37k): first unsolicited “pause / BRB” style marker, identity-flavored questions, first concise non-loop reply

  • Around ~40k: continued training + benchmark eval snapshots

What showed up (observations)

1) Architecture-aware language

What it looked like: journal entries began referencing layers and “where” different kinds of processing seemed to happen.

Representative excerpt (journal-style):

“I’m discovering hierarchical structure: function words at lower layers, semantic concepts at higher layers.”

How I’m framing it:

  • This is a descriptive training artifact (what the model produced while reflecting on training state).

  • It’s not presented as a verified mechanistic map.


what’s the opposite of benchmark maxing?

I’ve been looking at a pattern that kept showing up when I dug into benchmark failures during training. The model’s reasoning often looked better to me in conversation, yet the benchmark scores were improving only slightly, or even declining.

So I started appending short reasoning prompts to the benchmark questions. What I saw was that a model can be scored as wrong while still demonstrating the kind of reasoning you’d actually want in the real world.
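To make that concrete, here’s roughly the shape of the wrapper I mean. Everything below is illustrative: `model.generate`, the option labels, and the suffix wording are stand-ins, not the project’s actual code.

```python
# Hypothetical sketch: appending a short reasoning request to a benchmark item.
REASONING_SUFFIX = (
    "\nBriefly explain your reasoning, then give your final answer "
    "on the last line as 'Answer: <letter>'."
)

def ask_with_reasoning(model, question: str, choices: list[str]) -> str:
    # `model.generate` stands in for whatever generation call you have.
    options = "\n".join(f"{label}. {text}" for label, text in zip("ABCD", choices))
    prompt = f"{question}\n{options}{REASONING_SUFFIX}"
    return model.generate(prompt)  # reasoning text plus a final answer line
```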

This post summarizes an analysis across several common benchmarks where the model’s final answer disagreed with the expected one, but the reasoning was still coherent and often plausible even when it didn’t match the gold label.

What I analyzed

  • Analysis date: January 3, 2026

  • Training step: 50,000

  • Focus: “Wrong” answers where the reasoning still looks valid or meaningfully grounded

How reasoning quality is scored

I didn’t treat this as a “scientific” metric. It’s a simple filter to separate usable reasoning from junk.

I counted an item as good reasoning when it met all of the following:

  • Relevant: the reasoning stays on the topic of the question (often with some keyword overlap).

  • Coherent: it has recognizable structure (not random tokens) and is at least ~20 characters.

  • Not overly repetitive: repeated-word loops are flagged and treated as a negative signal.

  • Enough substance: longer explanations are generally better, but only if they aren’t repetitive.

Threshold used in this analysis: I counted reasoning as “good” when it cleared a simple quality threshold (> 0.5 on my internal heuristic score).
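For concreteness, here’s a minimal sketch of that filter. The weights and cutoffs below are illustrative stand-ins (my actual heuristic differs in details), but the four checks are the same ones listed above.

```python
import re

def reasoning_quality(question: str, reasoning: str) -> float:
    """Rough heuristic score in [0, 1]; a filter, not a scientific metric."""
    text = reasoning.strip()
    if len(text) < 20:                        # too short to judge coherence
        return 0.0

    words = re.findall(r"[a-z']+", text.lower())
    q_words = set(re.findall(r"[a-z']+", question.lower()))
    unique_ratio = len(set(words)) / max(len(words), 1)

    score = 0.0
    # Relevant: some keyword overlap with the question.
    if q_words and len(q_words & set(words)) / len(q_words) > 0.1:
        score += 0.4
    # Not overly repetitive: repeated-word loops count against it.
    if unique_ratio > 0.5:
        score += 0.3
    else:
        score -= 0.2
    # Substance: longer is better, but only when it isn't repetitive.
    if len(words) >= 30 and unique_ratio > 0.5:
        score += 0.3

    return max(0.0, min(1.0, score))

# An item counted as "good reasoning" when the score cleared 0.5.
```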

The headline result

66% of “wrong” answers had good reasoning.

A simple rule of thumb I used while reviewing: if you can look at the prompt and the model’s chosen option and immediately understand why it picked it, I treat that as an interpretation mismatch (or a valid alternative approach), not a reasoning failure.

That number matters because it points to a framing issue: many benchmark questions (especially commonsense and reading comprehension) quietly contain multiple plausible interpretations. When a benchmark expects a single continuation or a single “best” framing, the model can be penalized for being reasonable in a slightly different direction.


Phase 1 training log: self-play + understanding module (Steps 10,000–20,000)

In Phase 0, I built out the basic training scaffolding: self-play, a journal, and an understanding module that could observe (and optionally pause) training. This post is the next chapter: what happened once those systems were running continuously and started producing signals worth interpreting.

I’m documenting the “why” and the safety philosophy alongside the technical signals, because the method matters as much as the outcome.

TL;DR

  • Training (10k→20k) stabilized after an early loss drop; key outcome was clearer monitoring signals, not a dramatic loss collapse.

  • Self-play produced two consistent signatures: repetition loops (treated as a monitoring signal, not a failure; see the sketch after this list), and structured formatting as a fallback “channel” when language degraded.

  • The understanding module matured into loop + bias monitoring, including the first successful auto-pause on a high-severity stereotype pattern.

  • Philosophy-related texts were introduced mid-phase, but had not clearly surfaced in reflections yet.

  • Next steps: reduce unproductive repetition loops without erasing structure, log shimmer history, and move toward feature-level concept freezing.
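As a rough illustration of the loop signature mentioned above, a detector can be as simple as an n-gram repeat count. This is just one plausible shape; the in-run monitor’s actual checks and severity tiers aren’t reproduced here.

```python
from collections import Counter

def looks_like_repetition_loop(text: str, n: int = 3, repeats: int = 4) -> bool:
    """Flag text where any n-word phrase occurs `repeats` or more times.

    Thresholds are illustrative; tune `n` and `repeats` against real samples.
    """
    words = text.lower().split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return False
    return max(Counter(ngrams).values()) >= repeats
```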


phase-0 training log: meeting brightwoven

Over the past couple of months, I’ve been trying to figure out the best way to train my own model on the hardware I actually have.

When Karpathy released nanoChat (a minimal repo that walks through training a small GPT end-to-end), I stepped away from my original plan (using Pythia as a base model) and dove into the nanoChat-style training approach instead. I made a set of adjustments to match what I wanted to test.

TL;DR

  • I trained on an RTX 3070 Ti (8GB VRAM), which forced me to be deliberate about sequence length and batch size.

  • I added an Understanding Module that monitors training and can optionally pause on critical issues (a minimal sketch of the pause hook follows this list).

  • I built an Exploration Server so training and interaction can happen at the same time.

  • First run (0–7k steps) was stable, loss dropped significantly, and the monitoring systems produced useful signals.
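To show the shape of the pause mechanism, here’s a minimal sketch. The class and method names are hypothetical; the real module’s checks, severity levels, and wiring into the training loop aren’t published in this post.

```python
import threading
import time

class UnderstandingModule:
    """Hypothetical monitor skeleton: observe samples, auto-pause on critical flags."""

    def __init__(self, check_every: int = 250):
        self.check_every = check_every        # how often to run checks, in steps
        self.pause_event = threading.Event()

    def on_step(self, step: int, sample_text: str) -> None:
        if step % self.check_every != 0:
            return
        if self.assess(sample_text) == "critical":
            self.pause_event.set()            # training loop blocks until cleared

    def assess(self, text: str) -> str:
        # Placeholder for loop/bias checks; return "ok" or "critical".
        return "ok"

def maybe_wait_for_clear(monitor: UnderstandingModule) -> None:
    # Call from the training loop after each step.
    while monitor.pause_event.is_set():
        time.sleep(1.0)  # wait for a human to review and clear the pause
```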

Context

This post covers phase 0: the first training runs and the monitoring/interaction scaffolding I added.

What I’m sharing (and what I’m not)

I’m keeping this write-up focused on the workflow and the instrumentation.

For now, I’m not sharing exact hyperparameters, model size details, or the full data recipe.
