A four-part series about what happens when AI moves from controlled demonstrations into ordinary use. Each paper takes a borrowed word — memory, compaction, reasoning, agency — names the gap between what the word implies and what the feature delivers, and describes the infrastructure required before users can safely rely on it.
Memory: You are doing the work of believing it. (Read →)
Compaction: What the friendly progress bar is actually doing to your conversation. (Coming)
Reasoning: Why generated chains of thought are not what the word "reasoning" implies. (Coming)
Agency: Why scripted tool-use loops are not what the word "agent" implies. (Coming)