A developer at OpenAI who posts as "Roon" on X explains why large language models never behave exactly the same way twice. According to Roon, a model's "personality" can shift with every training run even when the dataset stays the same, because training involves stochastic elements, such as the randomness in reinforcement learning, that steer each run toward a slightly different point in what he calls "model space." As a result, every training pass produces slightly different behavior, and Roon adds that even repeating the same training run makes it nearly impossible to recreate a particular personality.
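
To make the mechanism concrete, here is a minimal sketch in plain NumPy, not anything from OpenAI or Roon: two training runs see exactly the same data, but each run shuffles its update order with its own random seed, and the final weights (and therefore the model's behavior) come out slightly different.

```python
# Minimal sketch: identical data, different random seed per run,
# different final model. This mirrors (in miniature) the kind of
# stochasticity in large-scale training that Roon describes.
import numpy as np

def train(seed: int, epochs: int = 50) -> np.ndarray:
    rng = np.random.default_rng(seed)          # run-specific randomness
    data_rng = np.random.default_rng(0)        # dataset is fixed across runs
    X = data_rng.normal(size=(200, 5))
    y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) > 0).astype(float)

    w = np.zeros(5)
    for _ in range(epochs):
        order = rng.permutation(len(X))        # only the shuffling differs
        for i in order:
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))  # logistic prediction
            w -= 0.1 * (p - y[i]) * X[i]         # SGD update
    return w

w_a = train(seed=1)
w_b = train(seed=2)
print("run A weights:", np.round(w_a, 3))
print("run B weights:", np.round(w_b, 3))
print("max difference:", np.abs(w_a - w_b).max())  # nonzero: the runs diverge
```

In a toy model the divergence is tiny; in a large model trained with reinforcement learning, small differences in which updates are applied, and in what order, can compound into the noticeable shifts in tone and behavior that users perceive as a different "personality."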
OpenAI tries to keep these "personality drifts" in check, since users often grow attached to a model's particular quirks. That was especially true of the earlier, notoriously sycophantic version of GPT-4o, which some users still miss. Roon, however, was not a fan: he publicly wished for the "insufficiently aligned" model's "death" before deleting the post.
