This is an unusually long post, but also, in my humble opinion, quite an interesting one. The main takeaway is that large language models (LLMs), like any other computer programs, have an insurmountable problem with meaning. What does a word or phrase truly mean? Can meaning be derived by a program?