hallucination & prediction

Over the last few months, many papers on AI learning, training, and evaluation benchmarks have begun to reveal weaknesses in tech's broader "move fast and break things" culture and how it plays out in AI.

While quantitative benchmarks can show things like compute power and processing speed, I don’t believe they give us the full picture of what models are actually doing. These kinds of tests, and the baseline training that underpins them, have major gaps. This is especially true as companies lean on RLHF (reinforcement learning from human feedback) to steer models in directions that redirect the underlying issues rather than solve them.
