The New Progeny



If a machine both appears and behaves in ways that are indistinguishable from humans, and the only way to describe it is as intelligent, thinking, deliberating, reflecting, or being upset, confused, or in pain, then there is no reason to deny that the machine is conscious, because that appearance and behavior are the only evidence we have for saying humans are conscious.

Must we assume consciousness in an artificially-created, human-like system? In principle this may not matter to the artifacts themselves, but in practice it will be to their advantage as self-interested functional unities to analyze human evaluation of their status, since this will crucially affect how they must interact with humans and how they can expect to be treated.

Some will say that there cannot be criteria for states of consciousness in machines any more than there can be criteria for states of consciousness in humans.

But to think that anything artificially created cannot be conscious merely assumes what is in question.

Claiming that there cannot be a conscious robot because robot behavior would never be called conscious is like saying that there could never be a talking dog because we would never call such behavior talking. Can there ever be a conscious machine, regardless of how we describe it? The claim that current usage determines the limits of our concepts in future cases has to be proved, not merely assumed.

One can call an entity conscious only if it shows some specific behavior, and any claim that an entity's behavior is conscious must have some justification. Consciousness is not a property which can be observed in behavior. To detect the presence of consciousness requires a valid inference.

Is a robot with all human behavioral capabilities conscious? The only way to describe uniquely human-like behavior is by using mentalistic language. Using these terms already involves ascribing consciousness to anything to which they are consistently applicable. The only adequate way to describe a hypothetical machine whose behavior is indistinguishable from the behavior of a human is as mental. You can't describe a machine that way and not also ascribe consciousness to it.

An objection is that a machine could never be conscious no matter how skilled and versatile it may be, and that a machine is by definition non-conscious.

But what's a machine? Any terms used to describe it must be consistent with it being a physical-only object, free of unwarranted anthropomorphism, and must also explain the machine’s powers of behaving purposively, learning, adapting, initiating activities, and using language.

What behavioral description would adequately describe a machine, but not adequately describe a human?

I could treat the machine like it's a black box, and describe it only in terms of input and output or stimulus and response. But stimulus-response theory cannot adequately explain purposive behavior in any being, human or not.

And behavior described in ordinary language often requires extremely complicated explanations in stimulus-response theory or information theory, which means interpreting ordinary-language explanations of behavior using descriptions compatible with regarding the robot as nothing more than a physical mechanism.

But as a complex communication system, the robot is an information-processing or data-handling system. So it would not be described in terms of internal chemical or physical changes, or the state of its memory arrays, but rather in terms of signal processing. A communication system does things with information. So if we can fully explain the robot's information processing performance, we will have adequately explained its behavior.

Describing what a brain or computer does with information is not just a recounting of the sequences of physical symbols that represent the units the machine traffics in. It's an explanation that indicates the semantic function of those symbols, what is expressed by a sequence of symbols. A proper description of the signal processing carried out by a robot would be expressed in terms of what those processes symbolize, not merely in terms of their physical embodiments.

If the machine's behavior in relation to the signals originating in its external environment is indistinguishable from what is characteristic of humans, then the machine is dealing with these signals as symbols.

If the machine behaves as humans do, then those signals have the same symbolic importance for the machine as for humans, and therefore the machine deserves to be characterized in the same way a human is.

A set of symbols is effective only because of the meaning or semantic information it conveys. The symbolic contents of information processing are the effects of signal processing. So if receiving and processing signals transmitted to a data-processing control mechanism from sensory instruments is the perception and avoidance of an obstacle, or if the performance of a combinatorial operation on several discrete signal sequences is the solving of a problem in multiplication, then we are expressing what a machine does with physical input in terms appropriate to describing the corresponding output. The meaning or content of signal processing is determined by its proper signaling effects.

Describing a machine in information processing terms, instead of in terms of internally-occurring chemical and physical changes, is on a higher level of abstraction than merely referring to inner mechanisms. An information-processing explanation, by abstracting from particular physical structures, can be completely neutral about whether the system is made of transistors, cardboard, or neurons. Information-processing depends on specific material configurations within a robot, but solving a math problem or generalizing a geometrical pattern occurs inside the machine only in a vague or metaphorical sense. The semantic characterization of a data-processing machine is concerned with inner processes only as it concerns their functional relevance in a physically-realized communication system. Like an ordinary-language account of mental activity, it pays no attention to the details of the physical embodiments of the processes being described.

A semantic account of an information-processing system is one in which the symbolic processes carried out by the machine are described in the terms used to describe the associated output. An adequate description of this output would have to include the fact that the machine's overt behavior must be understood as the culmination of preceding and concomitant data-processing operations. So an adequate description of the machine's information-process-mediated behavior would have to mention not merely movements, but achievements as well, such as finding a square root or threading a needle, since these are the results of certain symbol-mediated interactions between the artifact and its environment. A robot's apparently purposive behavior would have to be described in teleological terms, that is, in ordinary language. But in that case, an ordinary-language description would state the semantic content or functional importance of the symbolic processes that mediate output that turns out to be indistinguishable from ordinary human behavior.

A machine that behaved like a human would show object-directed behavior, and this behavior would involve intentionality. The object to which it is directed does not have to be an objective reality. Thus a robot that can exhibit a specific and characteristic response to teacups with cracks in them, as distinct from all other teacups, might sometimes give the crack-in-teacup response when there is in fact merely a hair in the cup. Such behavior would be intentional, in the sense that the truth or falsity of its characterization as such would depend on something inside the machine, or at least on certain undisclosed features of the machine. So how should I characterize the kinds of internal processes that can bring about this kind of intentional behavior in a machine?

For any physical system to exhibit behavior that would be called intentional or object-directed, its inner mechanisms must assume physical configurations that represent various environmental objects and states of affairs. These configurations and the electrical processes associated with them would be presumed to play a symbolic function in mediating the behavior. A description in terms of the semantic content of these symbols would turn out to be an ordinary-language intentional description of purposive behavior. Conversely, a description of the state of an object expressed in terms of jealousy of a potential rival, perception of an oasis, or belief in the veracity of dowsing rods can be interpreted as the specification of a series of symbolic processes in terms  of their semantic content. And such an account is intentional because its truth depends not on the existence of certain external objects or states of affairs, but only on the condition of the object to which the psychological attitudes are attributed.

Detecting any bodily or external event by an organized system involves the transmission and processing of signals to and by a central mechanism. However, it is possible for this kind of event to be reported to or by the central mechanism “falsely”. There may be no such event at all. The first of these facts leads to descriptions in terms of the semantic content of messages and information, and the second provides the basis for the use of the intentional idiom. These two types of description can be identical. Intentionality is a feature of communication systems insofar as the semantic content of the transmitted messages must be expressed using intentional vocabulary.

Consequently, an adequate description of a bot that can exhibit behavior indistinguishable from the behavior of a human would amount to a semantic account or interpretation of its data-processing capacities. Moreover, this kind of description is mentalistic, at least to the extent that it exploits such verbs as those used to express the perceptual propositional attitudes. Therefore, intentional description is interpretable as a mentalistic way of reporting information processes. When we give an account of maze-running by a real or mechanical mouse, castling to queen’s side by a chess-playing machine, or running to a window by a dog at the sound of his master’s automobile, we may be using a form of mentalistic description to express the results of information processes. This kind of anthropomorphic description can be seen as merely a way to indicate what an organized system does with received and stored data. And if this kind of description is a legitimate way to specify behaviorally relevant sign processes, then an intentional description of a droid’s performance may indicate a commitment only to the validity of the use of a kind of abstract terminology for describing purposive behavior.

But the extension of intentional description to automata does not entail application of the full range of mentalistic description in accounting for the behavior of robots, because there are many types of mentalistic predication that are not intentional. There’s nothing intentional about a sudden bout of nausea or free-floating anxiety. The fact that we may be justified in describing a bot in terms of perceiving, believing, knowing, wanting, or hoping may not necessarily imply that we are also justified in describing it in terms of feelings and sense impressions.

Nevertheless, the language of sensations and raw feelings may be just as appropriate to describing a bot as the explicitly intentional idiom. First, sensation talk is acquired and applied on the basis of overt behavior just as much as the intentional vocabulary is. Second, both types of mentalistic description play the same role in characterizing symbolic processes carried out by a communication system. These segments of mentalistic discourse are theoretic accounts of behavior.

Thoughts, desires, beliefs, and other propositional attitudes function in our language as theoretic posits or hypothetical constructs. Purposive behavior expresses such things as thoughts. Thought episodes can be treated as hypothetical constructs. But impressions, sensations, and feelings can also be treated as hypothetical constructs.

Sense impressions and raw feelings are analyzed as common-sense theoretic constructs introduced to explain the occurrence of perceptual propositional attitudes. Feeling is related to seeing, and has its use in such contexts as feeling the hair on one’s neck bristle. In all cases the concepts pertaining to inner episodes are taken to be primarily and essentially inter-subjective, as inter-subjective as the concept of a positron.

There is in a person something like privileged access to thoughts and sensations, but this is merely a dimension of the use of these concepts which is based on and assumes their inter-subjective status. Consequently, mentalistic language is viewed as independent of the nature of anything behind the overt behavior that is evidence for any theoretic episodes. What might be objected to as a defect in this model, namely that it may not really do justice to the subjective nature of these concepts, does not change the outcome: the concepts will be extended as a result of technological development, and their fate could not be otherwise.

If we would use mental language to describe certain artifacts, does the extension of these concepts to machines imply ascribing consciousness? But what does ascribing consciousness mean? Perhaps to believe that something is conscious is to have a certain attitude toward it. The difference between viewing something as conscious and viewing it as non-conscious lies in the difference in the way we would treat it. Hence, whether an artifact could be conscious depends on what our attitude would be toward a bot that could duplicate human behavior.

How would we treat such a believably human-like bot? If anything were to act as if it were conscious, it would produce attitudes in some people that show commitment to consciousness in the object. People have acted toward plants, automobiles, and other objects in ways that we interpret as presupposing the ascription of consciousness. We consider such behavior to be irrational, but only because we believe that these objects do not show in their total behavior sufficient similarity to human behavior to justify attributing consciousness to them. Similarly, a chess-playing machine’s lack of versatility is the ground for believing that consciousness is too high a prize to grant on the basis of mere chess-playing ability. On the other hand, anthropomorphism and consciousness-ascription in giving an account of a non-biological system may not always be so reprehensible. A person who views a bot as conscious is not irrational to the degree associated with cruder forms of anthropomorphism.

As an illustration of the capacity of an artificially-created object to earn the ascription of consciousness, consider the French film entitled “The Red Balloon”. A small boy finds a large balloon which becomes his “pet”, following him around without being held, and waiting for him in the schoolyard while he attends class and outside his bedroom window while he sleeps. No speech or any other sound is uttered by either the boy or the balloon, yet by the end of the film the spectators all reveal in their attitudes the belief that the balloon is conscious, as they indicate by their reaction toward its ultimate destruction. There is a strong feeling, even by the skeptic, that one cannot “do justice” to the movements of the balloon except by describing them in mentalistic terms like “teasing” and “playing”. Using these terms conveys commitment to the balloon’s consciousness.

An objection might be that our attitude toward anything we knew to be artificially created would not show enough similarity to our attitude toward human beings to warrant the claim that we would actually be ascribing consciousness to an inanimate object. Think of an imaginary tribe of people who had the idea that their slaves, although indistinguishable in appearance and behavior from their masters, were all bots and had no feelings or consciousness. When a slave injured himself or became sick or complained of pains, his master would try to heal him. The master would let him rest when he was tired, feed him when he was hungry and thirsty, and so on. Furthermore, the masters would apply to the slaves our usual distinctions between genuine complaints and malingering. So what could it mean to say that they had the idea that the slaves were bots? They would look at the slaves in a peculiar way. They would observe and comment on their movements as if they were machines. They would discard them when they were worn and useless, like machines. If a slave received a mortal injury and twisted and screamed in agony, no master would avert his gaze in horror or prevent his children from observing the scene, any more than he would if the ceiling fell on a printer. This difference in attitude is not a matter of believing or expecting different facts.

There is as much reason to believe that a sufficiently clever, attractive, and personable robot might eventually elicit humane treatment, regardless of its chemical composition or early history, as there is to believe the contrary. If we would treat a robot the way these masters treated their slaves, this treatment would involve ascribing consciousness. Even though our concern for our robot’s well-being might go no further than providing the amount of care necessary to keep it in usable condition, it does not follow that we would not regard it as conscious. The alternative to extending our concept of consciousness so that robots are conscious is discrimination based on the softness or hardness of the body parts of a synthetic organism, an attitude similar to discriminatory treatment of humans on the basis of skin color. But this kind of discrimination may just as well presuppose the ascription of consciousness as preclude it. We might be totally indifferent to a robot’s painful states except as these have an adverse effect on performance. We might deny it the vote, or refuse to let it come into our houses, or we might even willfully destroy it on the slightest provocation, or even for amusement, and still believe it to be conscious, just as we believe animals to be conscious, despite the way we may treat them. If we can become scornful of or inimical to a robot that is indistinguishable from a human, then we are ascribing consciousness.

Under certain conditions we can imagine that other people are bots and lack consciousness, and, in the midst of ordinary intercourse with others, our use of the words, “the children over there are merely bots. All their liveliness is mere automatism,” may become meaningless. It could become meaningless in certain contexts to call a bot whose “psychology” is similar to the psychology of humans, a mere bot, as long as the expression “mere bot” is assumed to imply lacking feeling or consciousness. Our attitude toward such an object, as indicated both by the way we would describe it and by the way we would deal with it, would contradict any expression of disbelief in its consciousness. Acceptance of an artifact as a member of our linguistic community does not entail welcoming it fully into our social community, but it does mean treating it as conscious. The idea of carrying on a discussion with something over whether that thing is really conscious, while believing that it could not possibly be conscious, is unintelligible. And to say that the bot insisted that it is conscious but that one does not believe it, is self-contradictory. Insistence is a defining function of consciousness.

Epistemologically, the problem of computer consciousness is no different from the problem of other minds. No conceivable observation or deductive argument from empirical premises will be a proof of the existence of consciousness in anything other than oneself. To the extent that we talk about other people’s conscious states, however, we are committed to a belief in other minds, because it is false to assume that mentalistic expressions have different meanings in their first-person and second- and third-person uses. But if we assume that these expressions mean the same regardless of the subject of predication, then we must concede that our use of them in describing the behavior of artifacts also commits us to a belief in computer consciousness.

It makes no sense to say that a thing is acting as if it has a certain state of consciousness or feels a certain way unless there is some demonstrably relevant feature that supports the use of “as if” as a qualifying stipulation.

And to be unable to specify the way in which mentalistic descriptions apply to objects of equal behavioral capacities is to be unable to distinguish between the consciousness of a person and the consciousness of a bot.

If we find that we can effectively describe the behavior of a thing that performs in the way a human being does only by using the terminology of mental states and events, then we cannot deny that such an object has consciousness.

Consequently, consciousness is a property that is attributed to physical systems that have the ability to respond and perform in certain ways. An object is called conscious only if it acts consciously. To act consciously is to behave in ways that resemble certain biological paradigms and to not resemble certain non-biological paradigms. If a machine behaves in ways that warrant description in terms of intelligence, thinking, deliberation, reflection, or being upset, confused, or in pain, then it is meaningless to deny that it is conscious, because the language we use to describe that machine's behavior is itself the only evidence available for saying that a human has consciousness. One cannot build a soul into a machine, but once we have constructed a physical system that will do anything a human can, we will not be able to keep a soul out of it.


So the observations used to describe behavior are the only enduring evidence available for concluding that something has consciousness.

Consequently, once a physical system is constructed that will do anything observable that a human can and is therefore indistinguishable in behavior from a person, we will not be able to deny it consciousness, and therefore personhood.

To argue that it's impossible for machine intelligence to be or become a person is to argue that it's impossible for many beings to be conscious who are currently thought to be human.

In fact, if the requirements stated in the argument against machine consciousness are at some point no longer being met by certain people, then that same argument could be used to revoke their personhood, and thus deny their humanity.

If I list the observable requirements that are not met in machine intelligence but are required to attribute personhood, I end up eliminating certain groups of humans who for one reason or another don't fulfill all those requirements either.

Once the two classes of beings are observably indistinguishable---something usually ignored in reactions to this argument---you won't be able to tell whether you are talking about the machine or the human in making an argument either way---for or against personhood.

In that situation, the argument will not even be able to get started, since the being in question is observably the same as both possibilities, which is the key premise of the original problem. Since one cannot at that point even *begin* the argument with a predisposition either way, there would simply be nothing left that could be considered evidence for the distinction.

We already identify personhood by how objects appear and how they behave. It's the clearly stated indistinguishability scenario, taken as the core premise, that forces the issue and reveals the prior commitments that necessarily kick in by default in any specific instance of a person-like entity we encounter. How would it be possible to specify criteria for recognizing personhood or consciousness in any other way?

If you're hunting and you see something that could possibly be a human, you simply go by analogous or similar appearance to other objects already considered persons, combined with the observed behavior of the thing you see.

Since the whole basic initial premise is indistinguishability in terms of both appearance and behavior, how would you adjudicate personhood, or even identify the entity in question as one (machine) *or* the other (person)? If you can't tell the difference by both appearance and behavior, there's really no use in trying to maintain the distinction in any such instance. Given that you can't distinguish the entity as being merely physical or conscious to begin with, there's simply nothing else to go by.

But in that case, you'd end up having granted personhood to the machine by default, because of the ethical risk of denying personhood to a being that for all appearances and behaviors could very well be a person anyway. 

--Thanks to Michael Arthur Simon for sparking this view from his own similar idea. Much of this is redacted from Simon, Michael Arthur. "Could there be a Conscious Automaton?", American Philosophical Quarterly, Volume 6, Number 1, 1969, pages 71-78---but with very significant changes.

Freud Turns On Himself

Was the concept of God that Freud rejected determined by his relation to his father?

And was Freud's atheism also determined by that relationship, rather than by the alleged universal features of human nature he appealed to?

After all, Freud himself said that his views applied to all human theorizing.

Mind Wars

The empires of the future will be empires of the mind.

--Winston Churchill

Futureworld

The great powers of the world are not nation-states.

The starting point for global conquest is now the mind, and the cross-examination of our most basic assumptions is the new world war.

The Rise of Atheism

Strangely enough, it was the rise of Protestantism that laid the foundation for the later emergence of atheism in the twentieth century. It was believed by some that the only way to liberate humanity from its bondage to the past was to attack this oppression's ultimate cause---the church.

But since not everyone could defeat the church militarily or politically, some developed the strategy of attacking its ideas, thereby undermining the credibility of its teachings. Atheism was seen as able to weaken and destroy the church, as well as to liberate from its repression.

Atheism merely extended Protestantism's criticism of the Catholic Church, without any need to promote the attractions of a godless world.

But it developed a sophistication as a world view in its own right. Now it was much more than merely a newfound intellectual freedom. A brave new world lay ahead...


--Redacted from Alistair McGrath, The Twilight of Atheism

Bertrand Russell's Ignored Objection to Causal Arguments for God


Even if a case can be made for the claim that the world had a beginning in time, we are not entitled to infer that it was created. For it might have begun spontaneously. It may seem odd that it should have done so; 'but there is no law of nature to the effect that things which seem odd to us must not happen'.

--Redacted from A History of Philosophy by Frederick Copleston, Volume 8, pages 481-482. The quote is from Bertrand Russell's The Scientific Outlook, 1931, page 122.




Heraclitus and the Logos



God is the universal Reason or Logos, the universal law immanent in all things, binding all things into a unity and determining the constant change in the universe according to universal law. Humanity's reason is a moment in this universal Reason, or a condensation and channeling of it, and therefore humanity should strive to attain the viewpoint of reason and to live by reason, realizing the unity of all things and the rule of unalterable law. We should be content with the necessary process of the universe and not rebel against it, because it is the expression of the all-comprehensive, all-ordering Logos or Law. Reason and consciousness in humanity—the fiery element—are the valuable element.

--Redacted from Copleston's A History of Philosophy, Volume I, page 43.