Philosophica Critica
International Scientific Philosophical Journal

SAMBROTTA, M.: If God Looked Into AIs, Would He Be Able To See There Whom They Are Speaking Of?

Philosophica Critica, vol. 9, 2023, no. 2, ISSN 1339-8970, pp. 42-54

Publication date: 2024
Abstract: Can Large Language Models (LLMs), such as ChatGPT, be considered genuine language users without being held responsible for their language production? Affirmative answers hinge on recognizing them as capable of mastering the use of words and sentences through adherence to inferential rules. However, the ability to follow such rules can only be acquired through training that transcends mere formalism. Yet, LLMs can be trained in this way only to the extent that they are held accountable for their outputs and results, that is, for their language production.
 
Keywords: AI – Large Language Models – Responsibility – Inferentialism – Rule-following.

DOI: 10.17846/PC.2023.9.2.55-68
© Slovenské filozofické združenie pri SAV
Klemensova 19, 811 09 Bratislava,
Slovak Republic


ISSN 1339-8970 (printed)
ISSN 2585-7479 (online)
EV 5141/15

e-mail: andreaj.sfz@gmail.com
Design and administration:
Juraj Skačan