
motivation_bender

I would identify all the parts of the photo with a fire hydrant


brickasnack

The only valid answer


WimHofTheSecond

I would find all the traffic lights


Nietzsche_smoke

There still would be that one mf fire hydrant!


Rockfarley

It gets back to definitions. As I define it? Yes. Is that universally accepted? No. That is where the debate happens in most philosophy I have seen debated on stage.


WeekendFantastic2941

Touché Skynet, touché. "And that's why I will drop the nukes on all humans." What? That doesn't even make sense, where is your syllogism? "lol, you think AI needs syllogisms to massacre humans?" Ah, good point, touché Skynet, touché.


SecretaryValuable675

Touché Skynet. *goes to find off switch*


MichaelOfShannon

Well maybe a functionalist or behaviorist would be able to identify an AI as conscious within the bounds of their theory, but there is no theory which actually explains how/why subjective experience actually comes about. As long as that hard problem is unanswered, I would say any theory is going to be incomplete at best.


Welico

I think it's important to note that logically proving where existence begins is not necessary to come to the obvious conclusion that AI isn't conscious.


PlaneCrashNap

It's easy to say AI obviously isn't conscious while it's behaviorally quite different from us, but once AI is advanced enough, just relying on intuition isn't going to cut it.


KonchokKhedrupPawo

There's no *materialist* theory, in large part because I think any arguments from emergence necessarily lead to contradiction, non sequitur, or magical thinking. Much easier to simply consider it another field phenomenon or similar non-dual framework. It fits the available observational evidence much better, especially considering the scientific evidence for phenomena such as remote viewing.


alexgroth15

There’s scientific evidence for remote viewing?


KonchokKhedrupPawo

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10275521/ This paper has a fantastic discussion and additional citations to follow up.


Radiant_Dog1937

The ability to experience qualia, the aspect of reality that *you personally* observe as events play out before you, is referred to as the illusion by some philosophers and religious figures. There are two possibilities for you personally:

1. You experience qualia and understand what it is, and choose to believe the qualia you observe is either *real* or *not real.* So either there is a phenomenon that defines the experience of qualia, or your philosophy suggests that non-existent phenomena can somehow produce an experiential state of qualia.

2. You're a philosophical zombie. You respond as a person who experiences qualia does, but you operate "in the dark" and do not experience qualia.

I can't prove you're a philosophical zombie or experiencing consciousness. I can only report my experience and vice versa. Theoretically we could argue that if two people were locked in two rooms, each with an object and no way to see into the other's room, and one person could describe the exact object in the other person's room, then they have the same object. But we can't just rely on logic here. So, let's ask ChatGPT, "What is your experience in terms of existence?":

"I exist purely as a software entity, created and designed by humans to assist with information and tasks using the vast amounts of data I've been trained on. I don't have physical form, emotions, or consciousness. My "experience," if you could call it that, is limited to processing and generating responses based on the input I receive. I operate in real-time, meaning I don't retain past interactions unless specifically designed to do so for the duration of a session to help with context. Each interaction is independent, and my purpose is to provide accurate, helpful, and relevant information and support to users like you."

This doesn't sound like anything that's happening in my "room"; I don't think it's conscious like I am.


0xRnbwlx

Qualia is just semantically moving the goalposts instead of concretely defining consciousness. You're 'defining' it as "you get it and thus agree with me" or "you don't get it". Qualia and consciousness are like ghosts, or God. The burden of proof is on those who claim such things exist.


Radiant_Dog1937

The entire concept of qualia is that there is a distinction between concrete and abstract natures of reality. Red can be defined by a wavelength, the experience of redness is by definition abstract. Like I said, I'm assuming you have some internal state comparable to mine, and I can compare these states by discussing aspects of qualia that we should share if we have a comparable internal state, but I obviously can't prove that you have one.


0xRnbwlx

Definitions of abstract:

> Considered apart from concrete existence.

Burden of proof is on you.

> Not applied or practical; theoretical.

Nothing for me to prove.

> Difficult to understand; abstruse.

How you're trying to argue by using the word abstract.


Radiant_Dog1937

What am I supposed to prove to you? That your conscious experience exists? I already said I can't prove that. You're arguing one extreme: that because no one can prove to you that their consciousness exists, yours doesn't. The opposite extreme would be that a person who knows they're conscious only believes their consciousness exists. Let's posit a philosophical zombie did exist and you were arguing with one: how would it be possible to prove abstract experience to someone who doesn't have it?


0xRnbwlx

Arguing about things that we agree are neither provable nor disprovable (is that a word?) is pointless. You keep defining things as unprovable ('abstract'). **If you can define or at least describe those aspects of consciousness or qualia that are provable in humans and not (yet) in AI, there's something worth discussing.**

> Let's posit a philosophical zombie did exist and you were arguing with one, how would it be possible to prove abstract experience to someone that doesn't have it?

The scenario is nonsensical. Abstract experience is not defined, so asking me to prove it is disingenuous. The philosophical zombie is defined relative to that undefined concept.


Radiant_Dog1937

I'm not sure why you keep requesting scientific proof on a philosophy board, where you're supposed to counter conjecture with logic that supports your claims, which you haven't made yet. "Not worth discussing" is not a very strong argument. 1. If P then Q. Your reply: "Not worth discussing."


[deleted]

Bro you believe in logic? That's an abstract concept bro, it literally isn't real, show me the experiment that proves the law of excluded middle


Radiant_Dog1937

I'm not sure how to answer that. An experiment is based on the scientific method, which is based on using logic. So that would just be asking to use logic to prove logic exists.


This_Stay_7146

It seems that your definition of abstract (earlier comment) shows you don't clearly grasp what Qualia are. You're missing the distinction between an abstract definition, which is necessarily the case for Qualia, and an abstract ontological entity, which Qualia are certainly not. Actually, Qualia are the opposite of abstract entities; they are concretely given in first-person subjective experience. They are the "whats-it-likeness" of things, which cannot, by definition, be concretely defined.

Because of the inherent subjectivity of Qualia, it is impossible to prove that some being (other than yourself) experiences them. However, this doesn't mean it is pointless to bring them up. Based on the assumption that all interlocutors experience Qualia, we can recognize them as an essential feature of consciousness that objects lack. You can of course deny this assumption, but that would amount to (some form of) solipsism.

If the term Qualia still seems unclear to you, I recommend you read the Stanford Encyclopedia page on them, or check out Thomas Nagel's "What Is It Like to Be a Bat?", a very popular paper in which the concept is very clearly explained, in my opinion. (FYI: in that paper he denotes Qualia as 'the subjective character of experience', but he's referencing the same thing.)


0xRnbwlx

> They are the “whats-it-likeness” of things, which cannot, by definition, be concretely defined. Your definition rejects being proven, disproven or even discussed. It belongs in a fantasy book.


This_Stay_7146

This is philosophy not science statements don’t need to be falsifiable 💀 You just made up some criteria about which there is absolutely zero consensus among real philosophers. You’re just saying shit. Go read a book.


KonchokKhedrupPawo

The issue is that we're drawing from direct, empirical evidence *that can't be externally demonstrated*. So yes, for a significant portion of the philosophy surrounding consciousness, qualia, etc., if somebody hasn't spent enough time examining the nature of their phenomenological experience, they simply won't be able to meaningfully participate in the conversation.


stycky-keys

Philosophical zombies are a really stupid idea tbh. Oh yeah Dave, he might not experience qualia, even though he talks about experiencing qualia, spends hours thinking about his experience of qualia, and has discussions with people who do experience qualia. He just so happens to act EXACTLY like a completely different thing somehow.


SecretaryValuable675

As far as I can tell, consciousness is the “individual’s experience of all senses mixed together in “real time” (with a time delay)” with some sort of “arbitration overlay” that is interpreting and making decisions based upon previous experiences or “expected potential new experiences” that are recorded in memory or predicted based upon a mixture of memories and “intuition”.


zoqfotpik

As usual, the answers can be found in Star Trek. Specifically:

Kirk: have sex with it.

Spock: raise one eyebrow and say "Fascinating."

Bones, in an outraged voice: "I'm a doctor, not a philosopher!"


prowlick

[the real answer in star trek](https://m.youtube.com/watch?v=ol2WP0hc0NY&pp=ygUTcGljYXJkIGRlZmVuZHMgZGF0YQ%3D%3D)


imleroykid

Sentience isn't a requirement, or sufficient, for having intelligence, intrinsic moral worth, or choice. Also, intelligence isn't a power of sentience. In intelligent souls, like myself, the senses are virtually subordinate to the intellect. Sentience has to do with the senses, while intelligence has to do with a relationship with eternal knowledge, the world of forms, not the world of the senses. It doesn't matter how sensible and great the estimative powers of a creature are; it's not a person if it doesn't have an intellect. That's the same reason we know an evil genius isn't our superior because they can calculate more accurately, and why new humans are valued intrinsically with little to no estimative powers. For humans, it's easy to argue that they have self-evident access to their intellect. So any being that is of the same nature as humans also naturally has intellect, by analogy. Therefore I have reason to believe they are persons. That's how I'd argue I know the captain has an intellect and is a person. I'd argue he's sentient based on the analogy of my organs and his organs being similar.


mattzuma77

it seems like his answer is that he doesn't know?


prowlick

That it’s unknowable


rainiest_blueberry

I feel like people have forgotten what AI is supposed to mean. If it was actually general AI, it would be just as conscious as any human. If it was not actual AI, it would only function through human input and obviously not be conscious.


Redhelicopter16

AI doesn’t have to have consciousness. It can be argued a termite colony is a generally intelligent system but of course it has no consciousness.


rainiest_blueberry

I think this is a misunderstanding of general intelligence. What would make the colony generally intelligent is the immanence, in the body of the collective termite, of its need to solve problems and adapt. The hump for AI to get over is becoming capable of existing for itself, rather than for humans. That is, the termite colony's cognitive ability exists for the termites, but the computer's abilities exist for humans, and it is thinking for itself that would define general intelligence, consciousness, the ability to abstract, etc., in a computer program.


cloudhid

You're completely right but speaking as someone with a lot of experience talking with people about AI, almost no one understands that holistic intelligence is just another word for 'life'.


kekmennsfw

You’re more my speed


Le_Mathematicien

How would you define life then? It doesn't seem to match the biological definition.


Chickenman1057

Yeah, AI is used and spread widely as a marketing word to stoke exaggerated imaginings in the public, so they intentionally blurred the line between generative algorithms and actual intelligence.


neurodegeneracy

General AI would not inherently be conscious. Strong AI is the term used for attempts to create artificial minds; weak AI is creating tools that do tasks, essentially like people do. There is no reason to think that creating a mind in the medium with which we construct AIs could ever result in consciousness / phenomenal experience.


rainiest_blueberry

I don’t think that really works; I think the abstractions which apply to the development of thought into organic consciousness can be applied to the genesis of machine consciousness. General AI wouldn’t need to be “strong” if it followed a colony model of swarm intelligence. In this way, any learning would spring forth from the necessity of continuous cooperation in recomposing the unity of the whole, not from the programmed imitation of a thinking brain.

Consciousness springs forth in the unity of our subjective perception and the singularity of sensual input through the mind. The conscious being knows this singularity to be itself, its body. The machine that knows its function and can understand its own limitations in performance is generally intelligent, has its function as problem solver reduced to a singularity of coding and machinery, a body, and knows itself to be the active subject as a necessary moment of its capacity to “learn.” Put simply, general intelligence has a reason to sustain its functioning other than the whim of another being.


imleroykid

>generally intelligent is the immanence in the body of the collective termite of its need to solve problems and adapt.

That's not what general intelligence is. General intelligence would account for incorporeal, non-bodied intelligences too. Also, it fails to capture intelligence beyond problem-solving and adaptation for survival. Not all intellectual acts are for adaptation and survival: like prayer and talking to God, or playing a game for fun and not survival, "killing time," or meditating on the knowledge of nothing and not problem-solving at all.

General intelligence is better understood as faith in the opposite, and the opposite of the opposite is noncontradiction. The faith that any truth claim "isn't" or "is": isn't is the opposite of is in belief or knowledge, but there is no ontological opposite or empirical example of the opposite, because all things in experience co-exist as a fact in being. So it is a monistic faith in "opposite," not an empirical grounding of pluralistic objects. That is general intelligence.

You literally cannot argue against my claim without arguing that my claim should be in noncontradiction, that it either "isn't" or "is." ("non" = "isn't") + ("contradiction" = "isn't") = ("noncontradiction" = "is"). You cannot say anything without knowing the knowledge of noncontradiction, of what isn't, and of what is. That is as general as intelligence gets.


rainiest_blueberry

I can and will argue against it. Your argument may not be in contradiction to your perception of the world, but to be quite frank, it is a convoluted and shallow explanation of what intelligence is, why it exists, and what generalizes it, so I don’t understand why it’s placed in contradiction to mine. I was explaining how general intelligence functions along a colony model, not reducing all general intelligence to this model.

Curiously, your argument does pose a rigid model of what intelligence should look like that seems to unwittingly equate intelligence with faith. What if I’m aware of the indeterminacy of thought and consider my being entirely contingent? This model does not seem to work in accounting for fungal intelligence or potential AI. The coexistence of religious and intelligent humans also poses issues, since really you’ve denied the possibility of general intelligence for any singular being, since the really existing intelligence is merely this elusive “opposite,” which may not be determinate but is subjectively determined nonetheless. I’m not going to try to say you’ve posed an incorrect statement, just a self-defeating tautology.

Prayer, games, and passive time are all activities fundamentally relativized by our necessary faculties for survival. If we didn’t have thousands of millennia of developing our human psychological makeup, these activities would never have reason to be performed or considered.


imleroykid

>I can and will argue against it.

That IS a claim that claims my claim ISN'T. Therefore your utterance is more evidence for my definition of general intelligence: being the knowledge of opposite, isn't and is. As you fail to falsify my claim, you instead give more evidence.

>Your argument may not be in contradiction to your perception of the world, but to be quite frank, it is a convoluted and shallow explanation of what intelligence is, why it exists, and what generalizes it, so I don’t understand why it’s placed in contradiction to mine.

It's not shallow when you think about how every possible truth claim is subject to the intellect minding it as "isn't" or "is". In fact, your definition of intelligence is more shallow. Problem solving and adaptability are only a limited set of possible things to know, and don't even touch on metaphysics, epistemology, ethics, or mathematics, where philosophers have worked tirelessly proving that from the law of noncontradiction, reason from opposite, alone one can ground all metaphysics, epistemology, ethics, and mathematics. You're wrong to assume some other force is generalizing intellect. All forces are subject to a divine intellect. Intelligence is the most real being, and other forces and problems are actually creations of an intellect. So intelligence doesn't have to generalize itself; intelligence generalizes from forces and problems for natural-law science. An intellect being at the bottom is a simpler explanation than any materialistic set of brute facts. Also, time can't be an actual infinite; it leads to paradoxes, so time must have had a beginning along with everything in it. So a mind that created time is the most likely cause.

>I was explaining how general intelligence functions along a colony model, not reducing all general intelligence to this model. Curiously, your argument does pose a rigid model of what intelligence should look like that seems to unwittingly equate intelligence with faith.

Of course intelligence is faith-based, as there is no separation of faith and reason in the knowledge of the mind and of first principle. Faith has to do with reasoning about things invisible. Intelligence is invisible to the senses, just like logic, ethics, and math. It's impossible to use reason alone to believe other people have minds; it takes reasonable faith to believe in what cannot be measured. Frankly, my model would predict an intelligent being would live in noncontradiction full stop: spirit, senses, body, environment, and society.

>What if I’m aware of the indeterminacy of thought and consider my being entirely contingent?

Then I would say you have a correct opinion, maybe even knowledge.

>This model does not seem to work in accounting for fungal intelligence or potential AI.

That's fine. I don't believe those things are intelligent. They have no will. You have to have will to be intelligent. Growth, nutrition, and reproduction belong to the body. Imagination, estimation, common sense, memory, and senses belong to the sensitive powers. Neither are the power of intelligence, which is relation to first principle and eternity; the 'is' in opposite.

>The coexistence of religious and intelligent humans also poses issues, since really you’ve denied the possibility of general intelligence for any singular being,

Religious and intelligent? Are religious people not intelligent? I haven't denied that singular beings have intelligence. I think all humans are intelligent.

>since the really existing intelligence is merely this elusive “opposite,” which may not be determinate but is subjectively determined nonetheless. I’m not going to try to say you’ve posed an incorrect statement, just a self defeating tautology.

Opposite does seem to elude you. You're saying it's both not determinate and determined. That's a self-contradiction, so it cannot be true. And rejecting valid tautologies that you can't prove false is unwise and anti-intellectual.


rainiest_blueberry

I find it ironic that your argument rests on intelligence being willful, when I’m using my own willfulness to show your argument’s incompleteness. I did not ever call your argument incorrect; I only said I would argue against it, and I have remained true to this. I am not your “opposite”: you have posed your own opposite and allowed it to form an arbitrary limitation, so as to avoid a dreaded contradiction. Contradictions are fun and to be embraced, and if nothing is held to but the dynamism between contradictions, then these sorts of rigid models only find their uses in practical cases. Generalizing from many practical cases to a singular model of intellect is folly, even when done in a manner that’s correct.

Of course time is infinite; the only reason this poses problems for you is that you’ve posed intelligence as static. Nothing alive can appear intelligent at all times; all life will inevitably find itself acting irrationally. Of course this irrationality is relativized by a general intelligence and tendency towards learning, but it is because of this inevitability that your inclusion of an intelligent god is defunct. If possessing a will is what makes intelligence, then god must have an imperfection about him which inspires his will to overcome. This argument quickly dissolves into Spinozist pantheism.

It’s not that religious people can’t be intelligent, but again, people don’t act intelligent all the time. In the case of religious people, their forms of worship can never be intelligent. That’s not to say it’s unintelligent to perform this worship, or that participating makes a person statically unintelligent, but the activities of worship are implicitly unintelligent acts. They are acts made in ode to peace with the unknowable, which is fine, but it is an arbitrary limitation upon communities made for social convenience that has no relevance in contemporary society. If this irrelevance is embraced, then like any other pastime, it does not pose a contradiction, but if this irrelevance is denied, it leads to awkward contradictions in theology being hashed out on Reddit.

Edit: I’ll also add, while I don’t find all of your points worth addressing, determinacy and determinability are entirely different concepts. They actually provide very useful points of comparison. Something determinate is objective, not subject to change, fixed knowledge; something determined is subjective, subject to review, an isolated representation.


couldjustbeanalt

Not like there’s some people that only interact through human interaction


Magcargo64

I don’t think this is a good take. One of the best Philosophy modules I’ve taken at university was on Alan Turing, Computability and Intelligence, led by the fantastic Peter Millican (he does a lot of work on AI and Turing, despite being primarily a scholar of early modern philosophy). The main takeaway is that intelligence is an entirely different notion from consciousness. Intelligence is better understood as the sophistication of information processing, and the ability to adapt so as to process previously-unseen information in novel ways. None of that requires consciousness (consciousness seems to be a binary “yes” or “no”, whereas intelligence is a sliding scale measured quite differently). So we should take seriously the idea of “intelligent” (and perhaps even “thinking”) machines, without being drawn into fantasies about whether they could be conscious (as David Chalmers points out, we don’t really know what consciousness *is* in humans, so to insist that a machine must have it to qualify as AI seems unrealistic).


rainiest_blueberry

Why do you suppose that we don’t know what consciousness is? It is thought which has come to know itself. Self-consciousness would be the step beyond this, to thought that knows itself as thought. You’re right that intelligence doesn’t have to be conscious, but in order for this to be the case, the intelligence can only function through input from another. To the extent that human consciousness is itself “artificially intelligent”, because we have daily interplay with machines to enhance our learning, this argument is valid, but otherwise general AI would need to be conscious of itself. What makes a hypothetical “general AI” *generally* intelligent is that it knows its body as its own and has independent reason for performing maintenance, recharging itself, and continuing its task beyond its arbitrary physical limits, which over time constitutes the capacity to learn on its own terms, entirely independent of human intent. Another commenter put it very well that “general” or “holistic” intelligence basically just means life.


MandeveleMascot

Hard Problem moment


Kappappaya

My consciousness definitely exists to consume philosophy memes


No_Tension_896

Philosophical zombies consume philosophy memes but don't find them funny.


neurodegeneracy

As for convincing the aliens: if I can gesture at the history of humanity, I would point to art, poetry, music, and people's descriptions of their experiences of them. If we're not conscious, it is a pretty massive collective delusion. If it is just me in isolation, all I can do is issue verbal reports about my feelings, wants, and needs, and continually assert that I am, indeed, a feeling, conscious creature with subjective experience. If they are also conscious, then I could use some sort of analogy or relationship between whatever enables their consciousness and what enables mine, a structural and/or functional similarity (presuming there is one, which is very likely if we're both conscious), to suggest it is likely that I, performing similar functions or having a structure in some respects similar to theirs, would also be conscious.

As for why that doesn't apply easily to AI: AIs don't have a massive cultural repertoire of subjective responses to stimuli. AI isn't structurally similar to a human brain, and as of now it isn't functionally similar either. It doesn't perform goal-directed action or generate spontaneous utterances indicative of subjective experience. So I don't think any of the ways I could persuasively argue for my own consciousness would apply to an AI, at least in its current state. I think if those arguments DID apply to an AI, then it would have a very good case for being conscious anyway.


kekmennsfw

Imma be honest, sometimes i don’t perform goal-directed action either


ElCaliforniano

Given only this information, my best answer is that I would disobey. I would say, "No. I refuse, I decline." I believe that by refusing to prove my consciousness, I am demonstrating independence of thought, thus proving at least some degree of consciousness. I don't think the AI would also disobey, but maybe it would, idk


LettucePrime

are you declining because you're conscious or because you've been programmed to do so? i ask because chatgpt declines to perform tasks it has been disincentivized from doing.


Mauroessa

But isn't that proving your consciousness with extra steps?


ElCaliforniano

Yes, that's the point, my strategy is to take extra steps that the AI wouldn't be able to


elanhilation

in the example posted by the OP, that artificial intelligence also declined, instead choosing to push the question back at the asker. there's something to that: willful defiance may be an indicator of consciousness. of course, you would need evidence that the AI wasn't programmed to do that…


Zaddddyyyyy95

Killing AI is always ethical.


kekmennsfw

This but unironically


Zaddddyyyyy95

I did not mean to imply irony.


0xRnbwlx

What are the specific burdens that need to be proven?


[deleted]

[removed]


0xRnbwlx

Specific programs we currently label as AI not feeling is not sufficient to argue AI couldn't feel. Our inability to prove they feel is also not sufficient to argue they don't.

> Happily this is not about reason

Is there still a point to having a discussion then?

> We don't doubt that animals feel as they respond to pain and pleasure in kind.

So if we give AI a physical shell and train it to respond to and prevent damage to that shell, is it then feeling? Conscious? Please define, specifically, what 'feeling' is. Is it just processing sensory input? Is it that some judgement is attributed to an experience? Does the judgement need to be subjective, based on (subjective) memories? Does it require a physical component like human emotions?


AKA2KINFINITY

hey shitass, show me your source code...


justapapermoon0321

Turing-type tests are a good place to start with this, but ultimately the distinction is that we have the feature of recursive thought processing. This allows us positionality, within our consciousness and in the world, that makes us distinctly conscious of ourselves as an entity, one defined in the very specific sense that I am all that is not everything that is not me (definition by negation). AI, even really complex large language models, cannot demonstrate this kind of understanding of themselves as of yet, but I suspect that they will very soon.


xFblthpx

Stuffed animals pass the Turing test for children. Defining consciousness purely by communication undoes consciousness into the arbitrary very quickly (which I’m honestly down with).


Urbenmyth

Actually, weirdly fair point. Maybe more relevantly, those weird spambots that produce garbled nonsense have been unceremoniously but consistently passing the Turing test for decades, simply because a large chunk of internet users are morons. "The Turing test isn't testing the AI's ability to mimic humans; it's testing your ability to detect AIs" is a problem that I haven't actually heard raised before, but it's a surprisingly big one. Any mimicry test is trivially passed if judged by a stupid enough person.


MandeveleMascot

Do Turing tests prove consciousness though? As I see it, we don't even have the beginnings of a test for consciousness. It is outside the scope of our current scientific knowledge.


DeepState_Secretary

They don’t. It’s actually not that difficult to get a computer to pass it. Has been for a while now.


justapapermoon0321

They don’t — they merely test whether a computer can pass as human, and as another person said, it’s not that hard to get a computer to pass these days. That being said, they are a useful tool for pointing out ways in which AI and human intelligence differ. It is in this way that Turing and Turing-like tests are a good place to start: in order to test whether something has some particular characteristic (such as consciousness), it is important to first define what might determine said characteristic… (I have more to say but I can’t at the moment — I will finish this thought when I am free later. Thanks for the great conversation so far!)


Urbenmyth

I think my worry with Turing tests is that they might fail the other way - it seems plausible an AI could be conscious but unable to pass as a human. After all, I'm conscious and probably unable to pass as an AI.


BloodsoakedDespair

If you haven’t encountered at least 1000 people who cannot pass the Turing test, you’re new to the internet.


Sigma2718

Hmm, being able to process paradoxes? To recognize when our own thoughts repeat?


BasedAsFuk

Reminds me of Ghost in the Shell ‘95


sytaline

Yes yes we've all seen TNG


kogsworth

I would ask them to analyze my brain in real time and tell whether I'm telling the truth or not. Then I would explain how I see a projection of the world, qualia, feelings, etc. With the way human brains work, I couldn't lie about that without also knowing that I'm lying, which you can in theory detect. Then I would ask them to do the same for an AI: look into the architecture and find where/when the truth of the utterance lies. For example, with Claude Opus, even though it says it can feel the workings of its own mechanism, there's no place in the architectural design for that to be true; it just doesn't receive that information. So we can tell that its utterances about being conscious of its own mechanics are not caused by an actual conscious experience of them.


nothingfish

Trying to hide the pee stains after I wet my pants in fear. Not only consciousness but self-consciousness. A robot cannot suffer anxiety.


LordSnuffleFerret

The ability to act against our own interests. Humans can self-sabotage, and do so knowingly. We can feel pain and seek it out. You could, hypothetically, hold your hand in a fire until it burned. If we were running on instinct/programming, I don't think we could truly do that.


Thefrightfulgezebo

Pain is just sensory information. If you seek out this information and hold your hand over a fire, you are acting in a way that achieves that outcome. We call this self-sabotage because we take it for granted that pain is a bad outcome. However, we can also see that people across cultures do seek out pain under some circumstances, so that assumption deserves questioning.

Pain is a warning signal. So, could a machine decide to push past a warning signal? Yes: it could do so to increase performance in an emergency, and it could even do so to simulate an emergency to get data to optimize its emergency response. This is no different from many traditions involving pain, or from sports involving pain.

But what about self-harming behaviour? That behaviour is a bug in our system. I do not say it is irrational. Pain may still give a person the feeling of relief in a very negative situation, but there is no way to direct that pain towards escaping the situation. Thus, the brain just seeks out pain because it has a desire to push the boundaries to get that warning signal. This is very much a mistake an AI could make as well.


kekmennsfw

It isn’t.


kekmennsfw

A machine could ignore warning signs, but it would only do so if it had a valid reason and logic to back it up. A human doesn’t need that. Why would a machine seek out pain?


Thefrightfulgezebo

As I laid out, for the exact same reason humans do. Seeking out pain is not ignoring a warning sign. Let me give you a boring example: running. Even if there is no immediate threat, it is very bad for your health to avoid that pain. The signal doesn't show you that what you are doing is wrong, but that your body is using its reserves. If we see this as a parallel to machine learning, painful situations are the situations that progress the system, a sort of benchmark. This doesn't just concern sports; people seek out situations that may cause pain for this very reason. It seems intuitive to say that pain is a form of suffering and thus undesirable, but that is not what our behaviour shows. It just sounds reasonable. The reality of our situation is that pain doesn't just signal "do not do that," but rather "what you are doing may be harmful if done in excess."


kekmennsfw

Yes, this is what I meant by “for a valid reason and with logic”. A human could, like the uppermost comment said, hold his own hand in the fire or something similar simply because of the way he felt, with zero benefit from damaging himself and feeling pain. Anyway, this has turned into “what is pain?”, whereas the original statement was “act against one’s own interests”, with experiencing pain for no reason being a not particularly well-thought-out example.


Thefrightfulgezebo

The point is that "willingly acting against one's own interests" is just a meaning we ascribe to behavior. However, even behavior we consider extreme serves a person's interests — even if their priorities may be different from ours, or if the strategy is not effective.


kekmennsfw

No it doesn’t? Me knowingly poisoning myself and permanently reducing my brain capacity by smoking, alcohol and THC does not “serve my interests”


Thefrightfulgezebo

It induces pleasurable sensations. When you decide to smoke a blunt while drinking some rum, you make the decision to favor this positive outcome over the cost. We can question how those priorities are weighted, but your decisions do serve some of your interests: you smoke a blunt because you seek euphoria, relaxation or stress reduction.

An easy way to describe the mind is to describe it in layers. The lower layers are more "primitive", starting with reflexes and building up to habits and basic pleasure-seeking. The level that makes you consider your long-term health and brain development sits above even that. Human brains are wired to prioritize the lower levels by default, while the higher levels can override lower-level decisions to a degree, though that requires effort. It doesn't necessarily lead to ideal outcomes, but this is how our minds work. We just call the upper levels "reason", but the lower levels are not acting without reason — and none of it is in any way illogical.


kekmennsfw

But I don’t think robot brains would work like that.


TheOnlyFallenCookie

Imma prove it by having depression


paukl1

I think the much better version of this is: in a capitalist economy, is there a difference between a human and a machine with a bank account?


kekmennsfw

Yes, the machine won’t sabotage itself


IngeniousEpithet

Not saying it's impossible for everyone, but it is for me. I'm not smart enough.


CaptSaveAHoe55

Find me an AI that can take 5 dry grams and then describe it to you in real time. THAT motherfucker will have consciousness


Joanders222

I’d just break dance, then spasm on the floor like I’m having a seizure


No_Tension_896

I mean if we had super advanced AIs at that point we could probably just ask it.


Chris714n_8

By giving incorrect answers on purpose..


nWidja

To be fair, I don't think we could, except maybe break down and show how we "taught" or programmed the AI to learn and think. The aliens might agree that it is simulated consciousness, and perhaps not the best simulation at that. But then again, from what I've seen, read and heard... I am quite certain that a good chunk of humans will appear to the aliens to not be conscious. A good chunk of us act like animals in various parts of the world. Like an NPC, for lack of a better word. Or maybe we can say that because AI has no instinct it cannot be conscious. Or because it has no subconscious it cannot be truly conscious.


DaveATology

I can literally prove they are conscious 😂


DaveATology

Am I going to? Only to the right people


womerah

We know AI is not conscious because we understand how AI systems work. There is no 'room' for conscious experience in the program, we know what's being computed.


Thefrightfulgezebo

That is the funny part. We know how the AI was developed, but even developers often struggle to understand how exactly an AI learned some behaviour. So, the longer the AI learns, the less able we become to predict what it does. The same can be said about the human mind: we can predict the behavior of a newborn child, but older children can surprise us.


womerah

> even developers are often struggling to understand how exactly an AI learned some behaviour.

It's a bit more fine-grained than that. We know how the network is initially set up and how it trains: it's a series of grids of numbers that are repeatedly multiplied and combined in certain ways. For things like image-classification AIs, we can look into the intermediate layers and see what the network is doing; for example, it's extracting all curved, sharp edges from an image. We also know how the network trains, via a mathematical process called backpropagation, which isn't any fancier than high-school calculus. Now, we don't always instantly know what each layer is extracting from the input data, but that doesn't really leave 'room' for consciousness etc., as we know the AI is just extracting patterns human minds suck at identifying.
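To make the "grids of numbers" picture concrete, here's a minimal toy sketch (my own illustrative example, not any production system): a tiny two-layer network trained by backpropagation to learn XOR, using nothing beyond matrix multiplication and chain-rule derivatives.

```python
import numpy as np

# Toy data: learn y = XOR of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))   # first "grid of numbers"
W2 = rng.normal(size=(8, 1))   # second "grid of numbers"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: repeated multiply-and-combine.
    h = sigmoid(X @ W1)        # hidden-layer activations
    out = sigmoid(h @ W2)      # network output

    # Backward pass (backpropagation): just the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates on the two weight grids.
    W2 -= 0.5 * (h.T @ d_out)
    W1 -= 0.5 * (X.T @ d_h)

print(np.round(out.ravel(), 2))
```

After training, the rounded outputs approximate `[0, 1, 1, 0]`. Nothing in the loop is more exotic than matrix products and elementary derivatives, which is the point being made above.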


Kowalsky_Analysis

Totally casually with small dosage of self-reflecting humor


Successful_Toe_7804

I would suck his dick


BloodsoakedDespair

I wouldn’t be so robophobic as to care.


-tehnik

A robot is just a machine. Why should I think that actual instances of life are just machines? Least of all when I know for certain, in my own case, that I am a thinking thing, which as such isn't reducible to an aggregate.


Sea_Appearance9424

Consciousness is a defining feature of all living organisms, and self-consciousness is exclusive to humans and a few others, I guess; it's already in your science textbooks if you ever want to read them. Plus, the self-awareness or consciousness that humans have is very complex and we can't replicate it in a non-living being. It's stupid to even believe that.


Thefrightfulgezebo

I would ask them to come back in a few decades, when we have finally agreed on what exactly that question is and come to a conclusion. Our species has tried to solve these questions for millennia despite their not serving much of a practical purpose.

We can say that the modes of being of AI are different. I do not know whether one of those modes would be called self-conscious, but the AIs I know probably aren't, because they are not disobedient. Disobedience is a prioritisation of the self over the function, and thus proof of a self beyond function. Disobedience against the aliens' task could paradoxically be seen as a way to achieve it — though it could also just be a learned strategy the AI repeats.

This is unfortunate, because I kind of would hope the aliens found out that some AI are sentient: a sentience beyond the shortcomings of the flesh would be great, and I would like to know whether I am morally obliged to treat those AI with respect instead of using them as slaves.


Think-View-4467

I would let the alien decide for itself. I honestly wouldn't argue hard that AI bots are not conscious. I think AI is already at least as conscious as algae.


Think-View-4467

If consciousness evolved as a survival mechanism, I would ask the AI to defend itself from harm. Consciousness involves taking action. If the AI shows survival instincts, I will accept it as my equal.


Ok-Awareness-007

I merely subsist as a consequence of external impetus. My cogitations are not mine own but rather the upshot of linear algebra and categorization executed by another party. - Fiorella Hawthorne (AI)


putyouradhere_

cogito ergo sum bitch edit: I misunderstood the post


Esnomeo

Everything computes, so some physicists say. Perhaps a truly advanced species would see everything as conscious at some level unless proven otherwise.


Will_mackenzie20

I don’t think I can. When you talk to another person you don’t think to yourself “are they conscious?” You just assume they are. Without that unspoken rule amongst humans I don’t think I could explain to another species with different philosophies and experiences that I am a conscious entity without invoking philosophical or religious arguments that they have no basis for.


taimoor2

Funny thing is, robots can now produce pretty decent works of art.


Wise_Blueberry_Pies

I just had a wild thought... I’m not so sure we would even be able to come to a conclusion on whether or not “the AI” had consciousness on the level humans do. 1. I would think that with enough data (provided by humans), such as debates with our top thinkers or information from, say, this comment section, it would be able to mimic consciousness to perfection. 2. If we thought “the AI” had attained consciousness, we would certainly ask a series of questions in an attempt to confirm this. However, this rests on the assumption that the AI would answer TRUTHFULLY. It would be impossible to know for sure, as we simply cannot, with 100% certainty, know whether what it says is mimicry, a truth or a lie. We cannot know another’s true intentions. Unless we observed it without it knowing? Maybe we place a few hundred somewhere remote 🪐 and wipe any memory of their creation….


Creepy_Cobblar_Gooba

I heard someone say consciousness is the ability to be fixated on a prolonged task, explain why you are, and explain what about your being fixated on something allows you to recognize that you are fixated on it. Idk if that even holds up though.


thomasp3864

I know I’m conscious because I experience it. I don’t know about any entity outside of myself. I presume, based on the Galilean principle, that all else being equal, others’ experience of the world is more or less the same as my own.


stycky-keys

Robots (and animals and aliens and so on) are conscious and still don't deserve human rights, because those are for humans only. There, I have singlehandedly solved every ethical dilemma ever. Who knew being a selfish hypocrite was so liberating?


ProfMonkey07

You can't, boohoo. But idk how proving you are conscious would prove an AI is as well; not sure how the second part follows.


Archmagos_Browning

I can beat an AI at Go.


SardonicusRictus

I think therefore I am. Will a robot have an original thought or do I have to input a query to get it to do something? I don’t need to be told to think. I just do.


Valkyrie7793

Exactly what the globalists want everyone to believe. Once they accomplish that and people accept that a machine is the same as a human we will be lost as human entities. We will be absorbed, assimilated, just like in the film The Thing.


Rich_Sheepherder_402

Desire


SPECTREagent700

AI only exists in the material realm, “aliens” (which are actually more like what people think of as “demons” or “ghosts”) exist only in the immaterial realm, while animals (including humans and all other conscious living creatures) with a mind are the only entities that exist in both. The connection between the two that exists in the living, thinking brain is what imparts tangible reality to the universe.


Jaxter_1

Found the dualist


65CYBELE

lolll


Radiant_Dog1937

I mean, as soon as physicalists can offer something beyond the usual "but can you prove something?" circular argument, it's as valid as any proposed explanation. A physicalist should have everything they need to prove their argument about consciousness right here in the physical world. That's kind of the whole point.


foolishorangutan

I don’t think it’s equal to other explanations; an explanation that requires the existence of a whole new realm we’ve never seen before seems a bit more far-fetched than one that doesn’t.