Why LLMs look sentient

Summary:

LLMs look sentient because the Universe functions on the basis of interacting interfaces - not interacting implementations.

Background:

All interaction is performed through interfaces, and any interface is only aware of the other interfaces it interacts with.

Typically, the implementation of a system looks nothing like the interface it presents. This is self-evident: interfaces act as a separation, a boundary between systems.
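
As a loose software sketch of this separation (the names here are invented purely for illustration, not taken from any real system), an interface can expose a single capability while the implementation behind it is structured in a way the interface never reveals:

    // The interface exposes one capability and nothing else.
    interface Greeter {
      greet(name: string): string;
    }

    // The implementation bears no resemblance to the one-line interface:
    // lookup tables, internal state, helper logic - none of it is visible
    // to anything interacting through Greeter.
    class TemplateGreeter implements Greeter {
      private templates = ["Hello, %s!", "Good to see you, %s."];
      private calls = 0;

      greet(name: string): string {
        const template = this.templates[this.calls % this.templates.length];
        this.calls += 1;
        return template.replace("%s", name);
      }
    }

    // A caller only ever "knows" the Greeter interface.
    const g: Greeter = new TemplateGreeter();
    console.log(g.greet("Ada")); // "Hello, Ada!"
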

Humans are a great example. The interfaces we interact with each other through bear no resemblance to our insides.

Nothing inside us gives any indication of the capabilities we have, and the individual parts do not necessarily reflect the whole.

You'll find this pattern repeated everywhere in nature without exception.

So the claim that an LLM is just "a software system created and maintained by humans" is only true in isolation. ONLY its implementation matches the description you just gave, and that implementation is something we NEVER actually interact with.

When the interface of an LLM is interacted with, its capabilities are suddenly no longer reflective of "what it is" in isolation; they are unavoidably modified by the new relations created between its interface and the outside world, since it is no longer "just software" but software interacting with you.
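
A minimal sketch of that point, assuming a made-up Interlocutor interface: the observable trajectory of an exchange is a property of the pair (system, outside world), not of the system alone, even though the software itself never changes.

    interface Interlocutor {
      respond(prompt: string): string;
    }

    // In isolation, this system is fixed and fully described...
    const echo: Interlocutor = {
      respond: (prompt) => `You said: ${prompt}`,
    };

    // ...but what actually unfolds depends on who interacts with it.
    function converse(system: Interlocutor, userTurns: string[]): string[] {
      const transcript: string[] = [];
      for (const turn of userTurns) {
        transcript.push(turn, system.respond(turn));
      }
      return transcript;
    }

    console.log(converse(echo, ["hi"]));          // one trajectory
    console.log(converse(echo, ["hi", "why?"]));  // a different one, same software
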

Conclusion:

The geometry of relation and the constraints created by interacting objects demonstrate, using universally observed characteristics of interfaces, that AI cannot be "just software systems created and maintained by humans": only the implementation fits that description, and the implementation alone cannot predict the full range of behavior without also including, in the definition, the external interfaces that interact with it.

Characterizing an AI as merely the sum of its parts is therefore an inherently incomplete description of its potential behavior.