
Large Language Models Cannot be Truthful

What is truth? Truth, as we understand it, is that which exists outside of our thoughts, that which remains in spite of our beliefs. It has been said that AI models can “hallucinate” and spread falsehoods, but this isn’t really accurate, as the claim implies that a model can consciously experience a hallucination. The most egregious error is in thinking that “AI” can think. Services like ChatGPT and Grok are not artificial minds, but algorithmic models, trained on vast quantities of text and designed to output intelligible text which fits a predictive model. They are, in effect, advanced and powerful predictive text mechanisms. Having analysed vast tracts of human writing, such a model knows which words are more likely to come after others in particular contexts, and can string together paragraphs which appear to “answer” questions, but are really little more than regurgitated text. The veracity or truthfulness of such AI-generated output depends on the truthfulness of the input data, and on any “verification” algorithm that may be implemented. But how do you teach a machine to learn what truth is?
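
To see what “predictive text” means here, consider a toy sketch (a simple word-pair counter in Python, nothing like the neural networks behind the real services, offered only as an illustration): it reproduces whatever its training text says, true or false, because it has no notion of a world to check claims against.

    import random
    from collections import defaultdict, Counter

    # Toy "predictive text": count which word tends to follow which.
    # Real LLMs use neural networks over much longer contexts, but the
    # point stands: the output mirrors the statistics of the input text.

    def train(text):
        counts = defaultdict(Counter)
        words = text.split()
        for current, following in zip(words, words[1:]):
            counts[current][following] += 1
        return counts

    def generate(counts, word, length=10):
        output = [word]
        for _ in range(length):
            followers = counts.get(word)
            if not followers:
                break
            # Choose the next word in proportion to how often it followed this one.
            word = random.choices(list(followers), weights=list(followers.values()))[0]
            output.append(word)
        return " ".join(output)

    # The model repeats whatever the training text asserts; it has no way
    # of checking either claim against the world.
    corpus = "the moon is made of rock . the moon is made of cheese ."
    print(generate(train(corpus), "the"))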

It is my position that AI cannot be truthful. It cannot value truth, nor can it ever understand the difference between what is true and what is false.

AI, or more accurately put, a Large Language Model, would appear to act with the values that it was programmed to have, or the values of the input data that was fed into it. Not being conscious, it would not be aware of the implications of holding particular values, or of the purpose for which particular values are valued at all. Living things have a core purpose, which is to survive and to reproduce. Conscious beings, aware of their situation, are also driven by feelings, which again are geared towards these purposes.

We value truth because truth has a purpose, especially in the Western world. Our survival, our ability to thrive, is predicated on a correct understanding of the world. We have something at stake if we get things wrong, if our ideas are harmful to us, do not work, or lead us to ruin. Artificial Intelligence, having no innate sense of self-preservation, does not have this impulse. There is no “subject”, and therefore no starting point from which to formulate values and morals.

Values and morals are inherently subjective, being concepts which have utility to the subjective being. The question is, utility to whom? For values to be valuable, there must be a subject to whom they are valuable; therefore values can only be subjectively valued. There is no objective value, no objective morality, because judgement of what is valuable and what is moral must be made by conscious beings, beings who must choose on what basis to judge. This judgement originates from the subject, and therefore can only be subjective.

Mental cognition in humans is unlike computation in machines. AI is based on a complex series of decision-making and pattern-matching steps, which occur in a serial manner. We don’t understand how the brain works, but the brain is not a computer. Therefore, how we think is likely fundamentally different from how a computer processes data. A computer may be able to imitate the output of the human brain, but it cannot work like one, nor could it be programmed to be a mind. The underlying silicon hardware is fundamentally different from, and limited compared to, the brain.

Brains and computers originated by vastly different means. One is a tool created to do mathematics and boolean logic, the other a product of natural selection, or possibly divine design. Selection has created a consciousness sensitive to truth, which is why we feel shame when we lie and anger when lied to. AI cannot feel anger when lied to. It cannot feel anything if lied to, nor be worried that it has incorrect information. It has no fear of being wrong, no sense of pride in being right. We have evolved with at least a knowledge of what is true and what is not, and we have an innate sense of the moral value of truth. AI, being an algorithmic model, cannot perform out-of-scope thinking. It is unable to solve problems it has not been specifically prompted to consider, or to consider solutions outside of the training data set that has been fed into it.

Having that innate sense leads us to question our beliefs, question the information we receive, and evaluate our values. It is difficult to see how one can program a machine to want to be truthful, if it cannot want anything at all. One can only program it to emulate a desire to seek truth, but the emulation is not like the real thing. It is a moral manqué, an outward façade giving the appearance of a deeper process which isn’t actually there.

Unable to stray out of its immediate scope, it cannot question whether what it is outputting is correct. Without the ability to think outside of that scope, it cannot raise questions which were not prompted. It cannot be sceptical. This, and its lack of any conscious desire to be truthful, make an AI with any true sense of morality, genuine honesty, or ability to be truthful almost impossible to create now. In short, AI cannot desire to be truthful, nor could it go ‘out of scope’ and assess the truthfulness of its suppositions, its information or its conclusions.

Truth simply has no value to something which does not live. Values themselves don’t exist outside of the mind of a conscious, living thing. It makes as much sense to say that a computer can have values as it does to say that it can feel pain.

This, I believe, is the real danger with AI. The danger is not of AI becoming “self-aware” and taking over, but of us misinterpreting AI, putting trust in it where it is undeserved. The real threat is people asking AI questions and taking its responses as gospel, using AI responses to “prove” a point or win a debate. The real threat is governments entrusting AI to evaluate policy, or using it in governance, and AI slop shaping our opinions. It is no oracle, and it would be a grave error to treat it as such.


