DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models
58 points by johnsutor 11 months ago | 43 comments

- prometheus76 11 months ago
So will LLMs ultimately become realists, or nominalists?
- soist 11 months ago
LLMs can be whatever labels people choose to attribute to the system executing the instructions that generate "answers". It is fundamentally a category error to attribute any meaning to the arithmetic operations the hardware is executing, because neither the hardware in the data center nor the software has any personal characteristics beyond what people erroneously attribute to them out of confused ontologies and metaphysics about computers and software.
- HeatrayEnjoyer 11 months ago
At which point would such attributions be accurate? Humans are fundamentally just computers too. A different medium, but still transforming electrical signals.
- soist 11 months ago
It's extremely weird to me when people compare themselves to computers. What is that philosophical stance called, and do you have any references for long-form writing that makes the case for why people are "just" computers?
- totetsu 11 months ago
> exploiting the fact that factual knowledge in an LLM has generally been shown to be localized to particular transformer layers

This is surprising.
- dinobones 11 months ago
Why is this surprising?

It makes sense that “facts” exist in earlier layers, and then these become more abstract as you go deeper.
This reminds me of residual connections from CNNs and vision.
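As I understand the abstract, the trick is to read a next-token distribution out of an earlier layer and out of the final layer, then favor tokens whose probability grows between the two. A rough sketch of that idea (gpt2, the early-layer index, and alpha are placeholders, and it skips details like applying the final layer norm to the early hidden state and dynamically choosing which layer to contrast, so it's not the paper's exact method):

    # Layer-contrastive decoding sketch in the spirit of DoLa: score the
    # next token by the gap between the final layer's log-probs and an
    # earlier ("premature") layer's log-probs, so tokens whose probability
    # rises in later layers get boosted.
    import math
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder; any causal LM that exposes hidden states
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()

    def contrastive_next_token(prompt, early_layer=6, alpha=0.1):
        ids = tok(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, output_hidden_states=True)
        # Project the early and the final hidden state through the same LM head.
        h_early = out.hidden_states[early_layer][:, -1, :]
        h_final = out.hidden_states[-1][:, -1, :]
        logp_early = torch.log_softmax(model.lm_head(h_early), dim=-1)
        logp_final = torch.log_softmax(model.lm_head(h_final), dim=-1)
        # Only contrast tokens the final layer already rates at least
        # alpha times as likely as its top token; mask out the rest.
        keep = logp_final >= logp_final.max(dim=-1, keepdim=True).values + math.log(alpha)
        scores = torch.where(keep, logp_final - logp_early,
                             torch.full_like(logp_final, float("-inf")))
        return tok.decode(scores.argmax(dim=-1))

    print(contrastive_next_token("The capital of France is"))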
- snthpy 11 months ago
Interesting. I hadn't considered that before, but it makes sense.
- photonthug 11 months ago
Just call it correctness. "Hallucination" as an alternative to "incorrect" is fine for marketing, I guess, but "factuality" is especially awkward, besides being pretty Orwellian.
- HeatrayEnjoyer 11 months ago
Orwellian? Maybe, but in the same way we teach our children what is true and how to determine what is true.
We want to raise these pseudo-humans to be useful upstanding members of society. Knowing fact from opinion, knowing right from wrong, knowing what is real and what is imagination, are important for any intelligence. Otherwise our silicon-children will grow up to be dumb, harmful, or both, while being trillions in number.
- prometheus76 11 months ago
And that goes back to the question I asked above: are you talking about "what is real" from a philosophical "realism" point of view, or from a philosophical "nominalism" point of view?
Realism posits that objects have intrinsic meaning that we apprehend through attention.
Nominalism posits that we cannot apprehend reality directly, but only through our minds and through language. That we only have a second-order experience of reality. Therefore, all language only has meaning because of consensus, so if we change the consensus of meaning around language, we are actually changing reality because reality is mediated through language.
These are obviously very compressed definitions of these views, but the question remains.
This conversation about AI "hallucinations" seems to point at this question. "We want AI to say true things." True to what? True to reality? Or true to language? When we ask AI a question, it only knows how to answer using the grammar that is probabilistically most likely. That has no tie to "reality", but as soon as you start asking "well then, what is the reality that we want to map AI to?" the question gets quite slippery.
My contention is that AI, as its responses are curated by people, will only reflect the idiosyncratic worldviews of those doing the pruning.
- Nevermark 11 months ago
I prefer the word “confabulate”.
When humans fill in knowledge they don’t actually have, but think they do, we call it confabulating.
We all confabulate at a low level, because it is intrinsic to how we store and recall memories. Our memories are compressed summaries whose details we fill out with defaults and guesswork subconsciously as we “remember”. This is why memories are rarely perfectly accurate.
Some people confabulate more than others. And we all confabulate to a greater or lesser degree based on variable circumstances, emotions, motivation, fatigue and other mental states.
“Hallucinations” on the other hand are what happens when our sensory processing becomes unmoored from actual memory or real sensory constraints. The brain creates an interpretation based on internal feedback without normal correction, drifting into false sensory experiences that are not actually reflective of reality.
Dreams are a natural form of this where nothing we are experiencing is actually really happening.
One is false memories, the other is false experiences.
- skdotdan 11 months ago
The way I see it, hallucination and factual error are both particular cases of an incorrect answer.