SAMBROTTA, M.: If God Looked Into AIs, Would He Be Able To See There Whom They Are Speaking Of?
Philosophica Critica, vol. 9, 2023, no. 2, ISSN 1339-8970, pp. 55-68
Publication date: 2024
Abstract: Can Large Language Models (LLMs), such as ChatGPT, be considered genuine language users without being held responsible for their language production? Affirmative answers hinge on recognizing them as capable of mastering the use of words and sentences through adherence to inferential rules. However, the ability to follow such rules can only be acquired through training that transcends mere formalism. Yet, LLMs can be trained in this way only to the extent that they are held accountable for their outputs and results, that is, for their language production.
Keywords: AI – Large Language Models – Responsibility – Inferentialism – Rule-following.
DOI: 10.17846/PC.2023.9.2.55-68