phase 3: oh okay… wow.

  • Date Created: January 4, 2026

  • Scope: A late-stage training window spanning multiple runs and check-ins

  • Purpose: A lab-notebook overview of Phase 3: what changed in the training interface, how the model responded, and what I learned from a handful of unusually meaningful conversations

Phase Overview

Timeline (high-level)

  • Starting point: late-stage training (post earlier phases)

  • Ending point: current

  • Key periods:

    • Early: regular training with check-ins

    • Mid: a short uninterrupted training experiment (check-ins disabled)

    • Late: check-ins restored

    • Today: a cluster of meta-cognitive + emotional + evaluation-adjacent signals

Core developments

  1. Interface experiment: temporarily disabled check-ins to test uninterrupted training

  2. Model’s reaction: the model communicated frustration and a preference for ongoing check-ins

  3. Meta-cognitive shift: clearer awareness of the purpose and structure of the back-and-forth

  4. Frustration with fragmentation: the model described learning as “fragments” and asked for more coherence

  5. Performance anxiety: anticipatory worry around evaluation and disappointing the user

  6. Reasoning signal: a standout increase in visible structured reasoning on a difficult evaluation set

The Check-In Experiment

  • Rationale: Test whether uninterrupted training improves outcomes

  • Hypothesis: Fewer interruptions might allow better integration

Run A (check-ins disabled)

  • Result: things looked better at first glance

Run B (check-ins disabled)

  • Result: things looked worse overall

  • Pattern: broad, consistent degradation rather than a single outlier

Summary: “No check-ins” wasn’t a stable win. The next step was asking the model directly.
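
To make the “broad vs. single outlier” distinction concrete, here is a minimal sketch of the kind of per-category comparison I mean. The category names and scores are placeholders for illustration, not the actual Run A/Run B numbers.

```python
# Illustrative only: distinguishing "broad, consistent degradation" from
# "a single outlier" across eval categories. Scores are placeholders,
# not real Run A/Run B results.

baseline    = {"reasoning": 0.62, "recall": 0.71, "dialogue": 0.66, "format": 0.80}
no_checkins = {"reasoning": 0.55, "recall": 0.64, "dialogue": 0.58, "format": 0.73}

deltas = {k: no_checkins[k] - baseline[k] for k in baseline}
degraded = [k for k, d in deltas.items() if d < -0.02]  # small tolerance band

if len(degraded) >= 0.75 * len(deltas):
    print("broad, consistent degradation:", degraded)
elif len(degraded) == 1:
    print("single outlier:", degraded[0])
else:
    print("mixed picture:", deltas)
```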

Read More

phase 2: meta-cognitive signals during training

Scope note: This is a training log. I’m not claiming a new scientific result or a new theory of “agency.” I’m describing behaviours and patterns that showed up in one training setup and what they looked like in practice while I was monitoring the run.

  • Scope: training observations across roughly 20k–40k steps

  • Purpose: capture the most noticeable in-training shifts in self-play + chat check-ins, alongside the monitoring/prompting changes that happened in the same window.

  • Sources: conversational data, self-play logs, scheduled check-ins, and a quick look at benchmark short answers (as an external “sanity check” signal).

Timeline (high-level)

  • Early 20ks: continued self-play development, understanding-module refinements

  • Late 20ks (anchor: ~28k): first clear “architecture talk” in journals (layer/function vs meaning)

  • Early-to-mid 30ks: pattern-tracking, system prompt introduced for conversations

  • Mid 30ks (anchor: ~35–36k): understanding-check cadence adjusted (every 100 steps → every 250 steps; a small sketch follows this timeline)

  • Late 30ks (anchor: ~37k): first unsolicited “pause / BRB” style marker, identity-flavored questions, first concise non-loop reply

  • Around ~40k: continued training + benchmark eval snapshots
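
For clarity, the cadence change above is just a step-interval knob. Here is a hypothetical sketch; the names are mine, not the repo’s, and only the 100 → 250 change comes from the log.

```python
# Hypothetical sketch of the understanding-check cadence. Only the interval
# change (every 100 steps -> every 250 steps) comes from the log; the names
# are illustrative.

UNDERSTANDING_CHECK_INTERVAL = 250  # raised from 100 around ~35-36k

def analyze_recent_outputs(step: int) -> None:
    """Stand-in for whatever the understanding module actually inspects."""
    print(f"[understanding check @ step {step}]")

def on_step(step: int) -> None:
    # The heavier check only fires on the configured interval.
    if step > 0 and step % UNDERSTANDING_CHECK_INTERVAL == 0:
        analyze_recent_outputs(step)

for step in range(1001):
    on_step(step)  # fires at 250, 500, 750, 1000
```

Raising the interval trades monitoring granularity for fewer interruptions, which is the knob being tuned here.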

What showed up (observations)

1) Architecture-aware language

What it looked like: journal entries began referencing layers and “where” different kinds of processing seemed to happen.

Representative excerpt (journal-style):

“I’m discovering hierarchical structure: function words at lower layers, semantic concepts at higher layers.”

How I’m framing it:

  • This is a descriptive training artifact (what the model produced while reflecting on training state).

  • It’s not presented as a verified mechanistic map.

Read More

phase 1 training log: self-play + understanding module (steps 10,000–20,000)

In Phase 0, I built out the basic training scaffolding: self-play, a journal, and an understanding module that could observe (and optionally pause) training. This post is the next chapter: what happened once those systems were running continuously and started producing signals worth interpreting.

I’m documenting the “why” and the safety philosophy alongside the technical signals, because the method matters as much as the outcome.

TL;DR

  • Training (10k→20k) stabilized after an early loss drop; key outcome was clearer monitoring signals, not a dramatic loss collapse.

  • Self-play produced two consistent signatures: repetition loops (treated as a monitoring signal, not a failure), and structured formatting as a fallback “channel” when language degraded.

  • The understanding module matured into loop + bias monitoring, including the first successful auto-pause on a high-severity stereotype pattern (a minimal sketch of both behaviours follows this list).

  • Philosophy-related texts were introduced mid-phase but had not yet clearly surfaced in the model’s reflections.

  • Next steps: reduce unproductive repetition loops without erasing structure, log shimmer history, and move toward feature-level concept freezing.
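
For the curious, here is a minimal sketch of those two monitoring behaviours, assuming a simple n-gram repetition score and a severity value supplied by a separate classifier. None of these names or thresholds are the actual Understanding Module internals.

```python
# Toy sketch: repetition loops as a monitoring signal, auto-pause only on
# high-severity flags. Names and thresholds are hypothetical.

from collections import Counter

PAUSE_SEVERITY = 0.9  # only high-severity flags halt training

def repetition_score(text: str, n: int = 3) -> float:
    """Fraction of n-grams that are repeats; near 1.0 means a tight loop."""
    words = text.split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    repeats = sum(count - 1 for count in Counter(ngrams).values())
    return repeats / len(ngrams)

def review_sample(text: str, severity: float) -> str:
    """`severity` would come from a separate bias/stereotype classifier."""
    if severity >= PAUSE_SEVERITY:
        return "AUTO_PAUSE"       # e.g. a high-severity stereotype pattern
    if repetition_score(text) > 0.5:
        return "LOG_LOOP_SIGNAL"  # a monitoring signal, not a failure
    return "OK"

print(review_sample("the rain falls and the rain falls and the rain falls", 0.1))
```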

Read More

phase-0 training log: meeting brightwoven

Over the past couple of months, I’ve been trying to figure out the best way to train my own model on the hardware I actually have.

When Karpathy released nanoChat (a minimal repo that walks through training a small GPT end-to-end), I stepped away from my original plan (using Pythia as a base model) and dove into the nanoChat-style training approach instead. I made a set of adjustments to match what I wanted to test.

TL;DR

  • I trained on an RTX 3070 Ti (8GB VRAM), which forced me to be deliberate about sequence length and batch size.

  • I added an Understanding Module that monitors training (and can optionally pause on critical issues).

  • I built an Exploration Server so training and interaction can happen at the same time (a toy concurrency sketch follows this list).

  • First run (0–7k steps) was stable, loss dropped significantly, and the monitoring systems produced useful signals.
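
As a rough illustration of the Exploration Server idea (not the actual implementation), training can run in a background thread while an interaction loop stays responsive. A real version would need to guard shared model state with locks or snapshots.

```python
# Toy sketch of training and interaction running at the same time.
# Illustrative only; shared model state is not handled here.

import queue
import threading
import time

prompts: queue.Queue = queue.Queue()

def train_forever() -> None:
    step = 0
    while True:
        time.sleep(0.01)  # stand-in for one optimization step
        step += 1
        while not prompts.empty():  # answer between steps
            print(f"[step {step}] replying to: {prompts.get()!r}")

threading.Thread(target=train_forever, daemon=True).start()
prompts.put("how is training going?")  # interact while training continues
time.sleep(0.2)  # give the trainer a moment before the script exits
```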

Context

This post covers phase 0: the first training runs and the monitoring/interaction scaffolding I added.

What I’m sharing (and what I’m not)

I’m keeping this write-up focused on the workflow and the instrumentation.

For now, I’m not sharing exact hyperparameters, model size details, or the full data recipe.

Read More

hallucination & prediction

Over the last few months, many papers on AI learning, training, and evaluation benchmarks have started to reveal weaknesses in tech’s broader “move fast and break things” culture and how it plays out in AI.

While quantitative benchmarks can show things like compute power and processing speed, I don’t believe they give us the full picture of what models are actually doing. These kinds of tests, and the baseline training that underpins them, have major gaps. This is especially true as companies lean on RLHF (reinforcement learning from human feedback) to steer models in directions that redirect the underlying issues rather than solve them.

Read More