How a Token Becomes Meaning Inside an LLM – A Frank-said / GPT-said dialogue in plain physical cause-and-effect language
Frank said: Let me see if I’ve finally got the picture. The word “cat” goes into the system, gets turned into a token, and then somehow the artificial neural network turns that into meaning. But I want this explained like a physical process. Not abstract handwaving. I want to know: what actually causes what? GPT said: Good.…
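A minimal sketch of the cause-and-effect chain the dialogue opens with, using numpy and a toy vocabulary; the names `vocab` and `embedding_table`, the ids, and the random values are illustrative stand-ins, not the model’s actual data:

```python
import numpy as np

# Toy vocabulary: a real tokenizer has tens of thousands of learned
# entries; these ids are made up for illustration.
vocab = {"The": 0, "cat": 1, "sat": 2, "on": 3, "the": 4, "mat": 5}

d_model = 8                          # embedding width (real models: 1024+)
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(vocab), d_model))  # learned in training

token_id = vocab["cat"]              # cause 1: the string becomes an integer
vector = embedding_table[token_id]   # cause 2: the integer selects a row

# Nothing mystical happens: the token id indexes one stored row of numbers,
# and that row is the only thing later layers ever see of the word "cat".
print(token_id, vector.shape)        # -> 1 (8,)
```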
-
Inside an LLM: From Prompt to Prediction
Course Description: This course explains what happens inside a large language model after a user enters a prompt. Students move step by step through the inference pipeline: from raw text to tokens, embeddings, positional structure, self-attention, multilayer perceptrons, residual accumulation, final hidden states, logits, probabilities, and next-token generation. The course treats the LLM not as…
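For readers who want that whole pipeline in one place, here is a single-layer, numpy-only sketch of the forward pass; all weights are random stand-ins for trained parameters, layer normalization is omitted, and names like `Wq` and `Wu` belong to this sketch, not to the course:

```python
import numpy as np

rng = np.random.default_rng(0)
V, T, D = 50, 6, 16               # vocab size, sequence length, model width

# Randomly initialized stand-ins for trained weights.
E = rng.normal(size=(V, D))       # token embedding table
P = rng.normal(size=(T, D))       # positional embeddings
Wq, Wk, Wv, Wo = (rng.normal(size=(D, D)) * 0.1 for _ in range(4))
W1 = rng.normal(size=(D, 4 * D)) * 0.1   # MLP expand
W2 = rng.normal(size=(4 * D, D)) * 0.1   # MLP contract
Wu = E                            # tied unembedding: hidden state -> logits

ids = rng.integers(0, V, size=T)  # pretend these came from the tokenizer
x = E[ids] + P                    # tokens + positions -> residual stream

# Self-attention: each position mixes in information from earlier ones.
q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.T / np.sqrt(D)
scores += np.triu(np.full((T, T), -np.inf), k=1)   # causal mask
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
x = x + (weights @ v) @ Wo        # residual accumulation

# MLP: position-wise transformation of each hidden state.
x = x + np.maximum(0, x @ W1) @ W2

# Final hidden state of the last position -> logits -> probabilities.
logits = x[-1] @ Wu.T
probs = np.exp(logits - logits.max())
probs /= probs.sum()
next_token = int(probs.argmax())  # greedy next-token choice
```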
-
INSIDE AN LLM (EXPANDED)
Block 1: USER PROMPT. This is the human-facing beginning of the entire inference event. A user types something in ordinary language, such as “The cat sat on the mat,” and to the human mind that already feels like meaning. But to the model, at this instant, it is not yet meaning in any computational sense.…
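To make the “not yet meaning” point concrete, here is a toy version of that first step; real systems use learned subword vocabularies (BPE or similar), and the word-level ids below are invented for illustration:

```python
# Toy whitespace tokenizer. Real models use learned subword vocabularies;
# this word-level version only shows the shape of the transformation.
toy_vocab = {"The": 17, "cat": 3042, "sat": 511, "on": 25, "the": 11, "mat": 7801}

prompt = "The cat sat on the mat"
token_ids = [toy_vocab[word] for word in prompt.split()]

# This is all the network receives: a list of integers, with no meaning
# attached yet in any computational sense.
print(token_ids)   # -> [17, 3042, 511, 25, 11, 7801]
```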
-
Inside the LLM
Here are explanations for each labeled processing block. Block 1: USER PROMPT. This block marks the input stage of the inference process. The entire pipeline begins when natural language text, such as “The cat sat on the mat,” is provided. This raw data cannot be understood directly by a neural network. At this point, no…
-
Beyond the Simple Feature Detector Story – A Frank-said / GPT-said sequel on individual neurons, Multi-Layer Perceptrons, superposition, and polysemanticity
Frank-said: So far we have been saying that the Multi-Layer Perceptron, or MLP, behaves like compressed semantic circuitry and that its feature detectors fire when the incoming hidden state matches certain learned patterns. That feels right. But I also suspect it is too clean. Real neural networks are probably messier than the phrase “feature detector” makes…
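One way to see why the clean story breaks: when a layer must represent more features than it has neurons, feature directions overlap and single neurons respond to several of them. A small numpy sketch of that crowding, with random directions standing in for learned features:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_features = 8, 32    # more features than neurons forces sharing

# Random unit directions stand in for learned feature directions.
features = rng.normal(size=(n_features, n_neurons))
features /= np.linalg.norm(features, axis=1, keepdims=True)

# How strongly each feature drives neuron 0: just that neuron's coordinate
# in every feature direction.
neuron_0_response = features[:, 0]

# With 32 directions packed into 8 dimensions, many features overlap the
# same unit: this is polysemanticity, one neuron carrying many meanings.
strong = np.argsort(-np.abs(neuron_0_response))[:5]
print(strong, np.round(neuron_0_response[strong], 2))
```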
-
The Weight Landscape and the Living Signal – A Frank-said / GPT-said Dialogue on Feature Detectors, Activation Flow, and Why ANN Weights Behave Like Compressed Semantic Circuitry
The model does not merely contain knowledge. It contains learned ways of transforming patterns into behavior. Frank-said: You said something important a moment ago: the weights are the stored pattern structure, and the matrix math is the way the current input interacts with that stored structure during inference. I want to push that much further.…
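A compact numpy sketch of that claim; the weights below are random stand-ins (with one row deliberately planted to match the input) rather than anything a real model learned:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 16
hidden_state = rng.normal(size=d)          # the "living signal" at inference

# Each row of W is a stored pattern; unit-norm random rows stand in for
# patterns learned during training.
W = rng.normal(size=(4, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)
W[2] = hidden_state / np.linalg.norm(hidden_state)  # plant one matching row

# The matrix math is the interaction: each dot product scores how well the
# incoming signal matches a stored row, and ReLU keeps only genuine matches.
activations = np.maximum(0, W @ hidden_state)
print(np.round(activations, 2))   # row 2 fires strongly; the random rows barely respond
```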