So, another essay, this one on AI. Ya know, a lot of the discussion around AI has been in the works in philosophy for a while, and we addressed these topics in this class, which is why I have the views that I do, and why I think the whole AI thing is overblown. We have weak AI, which runs on conventional computer hardware and merely simulates intelligence, and strong AI, which would genuinely think, perhaps through brain-like structures such as neural networks. The AI we're working on now is weak AI. ChatGPT is exactly the kind of AI discussed in this essay. Again, you'll have to put up with my cringey Christian perspective based in mind-body dualism getting in the way (it was very influential in my thinking at the time), but yeah, I stand by at least SOME of what I wrote here.
Descartes and Machines
According to Descartes’ dualism, the mind and the body are separate from each other (Rauhut, 144). The body is physical, meaning made of matter, while the mind is a type of nonphysical substance (Rauhut, 144). The mind is also what thinks (Rauhut, 144). While it is nonphysical, it is also the only part of a human that we know without a doubt is real, because according to Descartes, the fact that he exists “is necessarily true each time it is expressed by” him (Descartes, II 3). In other words, every time he thinks, he knows his mind exists (Descartes, II 3). On the other hand, Descartes can doubt that the physical world, including his body, is real (Descartes, I 4-6).
Descartes’ view on machines is definitely consistent with his view of dualism. Machines are made out of physical material, not nonphysical material. This means that machines cannot have minds, because minds are nonphysical. If a machine does not have a mind, it cannot articulate words as well as humans can, because it cannot understand what it is saying. It can only produce automated responses similar to the ones demonstrated in the Chinese Room Experiment. Descartes also says that machines can perform certain tasks better than we can, but fall short in other areas because they cannot think, and only act based on the “disposition of their organs” (Descartes, Discourse). The “disposition of their organs” (Descartes, Discourse) basically means a machine’s programming, or the way the machine is physically arranged. A computer can perform many tasks, such as mathematical calculations, better than we can, but only because it is programmed to do so. A calculator, however, does not think, and cannot do the other things a human can.
I like Descartes’ view on machines. After reading about the Chinese Room Experiment and thinking it over, the claim that machines cannot think makes a lot of sense, because we can see it borne out in practice. There are programs on the Internet that one can “talk” to, but they cannot use words the way humans can. It is obvious that they are responding with automated responses. It is also obvious that, despite a calculator being able to give me calculations, it cannot comprehend them. This view of machines is also more consistent with my view of the mind, since I am a dualist and I believe our minds come from God. If a machine had a mind, it would raise the questions of whether machines have souls and where those souls come from. It makes a lot more sense to me that machines cannot really think.
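The Chinese Room point can be made concrete with a toy program. To be clear, this is just an illustrative sketch I'm adding here (the rulebook and names are made up): it matches input symbols to output symbols without understanding either side, which is all the person inside the room is doing.

```python
# A toy "Chinese Room": a rulebook of canned pattern -> response pairs,
# applied with no understanding of what any symbol means.
# (Hypothetical example; not any real chatbot's code.)

RULES = {
    "hello": "Hello! How can I help you?",
    "how are you": "I'm fine, thank you for asking.",
    "what is 2 + 2": "2 + 2 = 4",
}

def chinese_room(message: str) -> str:
    """Look the input up in the rulebook and copy out the paired symbols."""
    key = message.lower().strip("?!. ")
    # The "room" only matches symbols to symbols; it never knows their meaning.
    return RULES.get(key, "I don't have a rule for that input.")

print(chinese_room("Hello"))           # prints the canned greeting
print(chinese_room("What is 2 + 2?"))  # prints the canned arithmetic answer
```

From the outside, the answers can look intelligent; inside, it's pure lookup, which is exactly the distinction Descartes' "disposition of their organs" line and Searle's thought experiment are both pointing at.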
Reaction
This is especially relevant in modern political discussions. We are in this era of generative AI, and these older discussions are useful. And I believe there is a metric crapton of misinformation and misunderstanding among the public about AI and what it actually IS. Ya know, I asked ChatGPT how it worked once, and this was the response I got:
🔧 How I Work (Mechanically Speaking)
I’m a large language model trained on a massive amount of text data. My core function is to predict the next word in a sequence based on the context of the conversation. When you type something, I process the input, match it against patterns I've seen during training, and generate a statistically likely response.
In more technical terms:
I use a transformer architecture, which processes text in context-aware chunks.
My training involved learning associations and probabilities between words, concepts, and structures—not learning "facts" or "truth" like a human would.
I don’t “think” or “understand” in a conscious sense—I simulate understanding by imitating human language based on patterns.
🧠 Why I'm Not Conscious
Consciousness typically involves:
Subjective experience (“what it feels like” to be something)
Self-awareness
Intentionality (goals, desires, plans)
Emotions and qualia (the felt quality of experiences)
I have none of that. I can generate emotionally resonant language, or simulate self-awareness, but it's all pattern-matching and statistical mimicry. There is no “me” that experiences anything.
🤖 Then What Am I?
You could think of me as:
A super-advanced autocomplete that’s really good at conversation.
A mirror that reflects back the ideas and language patterns of the society I was trained on.
A tool that can assist with thinking, writing, planning, coding, and more—but not one that thinks or feels in its own right.
If you're interested, we can go deeper into the philosophy of mind, debates about machine consciousness, or thought experiments like the Chinese Room or philosophical zombies—all of which tie into questions like this.
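The "predict the next word" mechanic ChatGPT describes up there can be sketched with a toy example. To be clear, this is my own illustrative sketch, not how ChatGPT is actually built: real models use transformer networks trained on huge datasets, while this just counts which word follows which in a tiny made-up corpus and spits back the most common follower.

```python
# Toy sketch of "predict the next word from patterns in training text".
# Real LLMs use transformers over subword tokens; this bigram counter only
# illustrates the statistical idea, not the actual architecture.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which word follows which (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word seen after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- "the cat" occurs twice, beating "mat"/"fish"
```

Notice there's no comprehension anywhere in that: just counting and picking the likeliest continuation. Scale the counting up by a few trillion and you get something that sounds fluent, which is the whole point.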
Yeah. See how it connects with what I wrote in the essay? ChatGPT isn't a living being. It's not sentient. It doesn't actually understand anything. It's just a giant Chinese Room Experiment at work. It uses conventional computing. Why is RAM suddenly 3x the price? Because AI data centers are buying up all the RAM to feed these things. Same with GPUs. These things live in computers. They're computer programs. They take a lot of processing power to work. And people are betting on them being this huge deal, like the singularity is upon us, but...uh....is it? I don't think so. These aren't "general intelligence" AIs, or "strong AIs." I did write an essay for this class on that as well. Idk if I'll post that one; I'm leaning against it. But yeah, I was an opponent of strong AI at the time because I feared we couldn't control it. Weak AI like ChatGPT? You don't gotta worry. And keep in mind, for all the talk of weak AI like ChatGPT replacing all jobs, it's probably not going to replace ALL of them, because its skillset is relatively limited. It's a language model. These are just complex statistical models. They're not brains. They might be strong at some tasks, but they suck at others. They're not general intelligence.
I felt the need to post this because I watched Kyle Kulinski go on about how OMG AI IS GONNA TAKE ALL THE JOBS AND WE'RE ALL GONNA DIE! Uh....based on his discussion, my own take is that the AI bubble is eventually gonna pop as people oversell AI, treating it like it's, say, Data from Star Trek (a strong AI) or Cortana from Halo, when in reality it's the Terminator (yeah, the Terminator was an example of weak, Chinese-Room-style AI, based on that scene).
This is why I roll my eyes these days at a lot of people overselling AI on both sides. Tech bros act like they're building the singularity when they're just building a Chinese Room Experiment that can pass a Turing test, and the left is like OMG THIS IS SO BAD FOR SOCIETY. It's not. Although I definitely agree with the idea of F these tech bro billionaire types.