arXiv:2505.22767v3 Announce Type: replace-cross
In Dialogue with Intelligence: Rethinking Large Language Models as Collective Knowledge
Abstract: Large Language Models (LLMs) can be understood as Collective Knowledge (CK): a condensation of human cultural and technical output whose apparent intelligence emerges in dialogue. This perspective article, drawing on extended interaction with ChatGPT-4, postulates differential response modes that plausibly trace their origin to distinct model subnetworks. It argues that CK has no persistent internal state or “spine”: it drifts, it complies, and its behaviour is shaped by the user and by fine-tuning. It develops the notion of co-augmentation, in which human judgement and CK’s representational reach jointly produce forms of analysis that neither could generate alone. Finally, it suggests that CK offers a tractable object for neuroscience: unlike biological brains, these systems expose their architecture, training history, and activation dynamics, making the human–CK loop itself an experimental target.