Thursday, November 29, 2007

ARCHNblog Basics: The Dualism Between Facts and Decisions

In which we briefly outline some of the philosophic underpinnings of the ARCHNblog's critiques.

"(Unlike natural laws) norms and normative laws can be made and changed by man, more especially by a decision or convention to alter them...This decision can never be derived from facts (or from statements of facts), although they pertain to facts...Hence, the "critical dualism" is properly a dualism of facts and decisions....This dualism of facts and decisions is, I believe, fundamental. Facts as such have no meaning; they can gain it only through our decisions."

"It must, of course, be admitted that the view that norms are conventional or artifical indicates that there will be a certain element of arbitrariness involved, i.e. that there may be different systems of norms between which there is not much to choose...But artificiality by no means implies full arbitrariness. Mathematical calculi, for instance, or symphonies, or plays, are highly artificial, yet it does not follow that one calculus or symphony or play is just as good as any other. Man has created new worlds--of language, of music, of poetry, of science, and the most important of these is the world of the moral demands for equality, for freedom, and for helping the weak....Our comparison is only intended to show that the view that moral decisions rest with us does not imply that they are entirely arbitrary." - Karl Popper, Chapter 5, Nature and Convention, "The Open Society And Its Enemies"

Wednesday, November 28, 2007

Objet d'Art of the Week: "Beginnings"

In which the ARCHNblog takes a regular look at the products of Rand's aesthetic theories.

This week: "Beginnings" by Sylvia Bokor

Tuesday, November 27, 2007

Rand's Style of Argument 1: Epistemology

Guest blogger Neil Parille from Objectiblog takes a two-part look at Rand's typical standards of argument.

Ayn Rand’s two most important philosophic works in essay form are her “The Objectivist Ethics” and the essays on concepts that form the "Introduction to Objectivist Epistemology." In their critiques of these works, Gary Merrill and Michael Huemer have drawn attention to an important technique in Rand’s argumentation. Rand defends her position using as a background the supposedly failed attempts of previous philosophers, arguing that the credibility of her position is advanced because their positions are so blatantly false (if not pure evil). To the extent that Rand fails to accurately describe these opposing views, her case for Objectivism becomes that much less credible. (Some of what I say is indebted to the discussions of Merrill and Huemer.)

Rand begins her discussion in ITOE with a review of various philosophical traditions on the question of universals, outlining five schools: extreme realism, moderate realism, nominalism, extreme nominalism and conceptualism. (p. 2.) We are, however, given only two philosophers (Plato and Aristotle) as representatives of any of these positions (extreme realism and moderate realism, respectively). Not a single representative is named for the nominalist, extreme nominalist or conceptualist schools. This makes it difficult for the reader to judge the accuracy of Rand’s descriptions. It might be the case that these philosophers were wrestling with problems, or encountered difficulties, that Rand’s theory shares. Her readers will never know.

Rand returns to these schools later with slightly more elaboration. Rand says the following about nominalists and conceptualists: “The nominalist and conceptualist schools regard concepts as subjective, i.e., as products of man’s consciousness, unrelated to the facts of reality, as mere ‘names’ or notions arbitrarily assigned to arbitrary groupings of concretes on the ground of vague, inexplicable resemblances.” (p. 53.) This is interesting because Rand’s position that only particulars exist is (in the view of many commentators) a version of nominalism or conceptualism. Is it really the case that all nominalists and conceptualists consider concepts “unrelated to the facts of reality”? Is there not a single significant thinker in this tradition who considered concepts objective? Reading a bit of John Dewey lately (who probably falls in the conceptualist camp), I came across the following from his Experience and Nature: “Meaning is objective and universal . . . . It requires the discipline of ordered and deliberate experimentation to teach us that some meanings, as delightful or horrendous as they are, are meanings communally developed in the process of communal festivity or control, and do not represent the polities, and ways and means of nature apart from social control . . . the truth in classical philosophy in assigning objectivity to meanings, essences, ideas remains unassailable.” (Experience and Nature, pp. 188-89.) Maybe Dewey and the like are mistaken, but it hardly seems fair to imply that their motivation is the destruction of the human mind without some evidence.

Even if the various positions with respect to universals are sufficiently well known as to justify Rand’s cursory discussion, there is much in ITOE that calls out for explanation. Merrill points to an example which has become somewhat famous: “As an illustration, observe what Bertrand Russell was able to perpetrate because people thought they ‘kinda knew’ the meaning of the concept of ‘number’ . . . .” (pp. 50-51.) Because of Rand’s unwillingness to provide a citation or any elaboration concerning what Russell perpetrated, her very point gets lost.

There are many other jabs in ITOE which are almost as egregious. Rand occasionally objects to “Linguistic Analysis,” without much of a description of this diverse movement. (pp. 47-48, 50 and 77.) She does, at least, name Ludwig Wittgenstein’s theory of family resemblance as an example of what is supposedly wrong with it. (p. 78.)

Curiously, Kant does not loom large in ITOE, or at least not in the way one would expect. Since Kant was the most evil man in history and universals the most important problem in philosophy, one might expect that Rand would discuss Kant’s theory of universals. When Rand does get around to discussing Kant, she attacks him for inspiring pragmatists, logical positivists and Linguistic Analysts (“mini-Kantians”). Her two sources for Kant are herself (a quotation from For the New Intellectual) and a quote from the now obscure Kantian Henry Mansel. (pp. 77, 80-81.)

What David Gordon says of Peikoff’s The Ominous Parallels is even more true of ITOE: it is “the history of philosophy with the arguments left out.”

Saturday, November 24, 2007

The Cognitive Revolution & Objectivism, Part 6

Reason. One of the greatest challenges for the critic of Objectivism involves trying to make sense of Rand's conception of reason. The chief difficulty is that Rand never explained, in empirical, testable terms, how to distinguish valid from invalid reasoning. This is, to be sure, the problem with all reasoning that goes beyond deductive logic. In deduction, the critic may examine the form of the argument in order to determine validity. Not so with non-deductive reasoning. In such reasoning, there is no reliable method for testing the conclusion by examining the structure or form of the argument.

For this reason, Rand's assertion that "Reason is man's only means of grasping reality and acquiring knowledge" is empirically empty. Since Rand provides us no way of distinguishing between what she regarded as valid and invalid reasoning, there is no way to evaluate the extent to which her conception of reason clashes with the "natural reasoning" discovered by cognitive science.

By studying how people actually reason to solve problems of everyday life, cognitive science has discovered that deductive logic has little to do with the inferences people actually make:
[F]ormal logic is not a good description of how our minds usually work. Logic tells us how we should reason when we are trying to reason logically, but it does not tell us how to think about reality as we encounter it most of the time....
Logic enables us to judge the validity of our own deductive reasoning, but much of the time we need to reason non-deductively — either inductively, or in terms of likelihoods, or of causes and effects, none of which fits within the rules of formal logic. The archetype of everyday realistic reasoning might be something like this: This object (or situation) reminds me a lot of another that I experienced before, so probably I can expect much to be true of this one that was true of that one. Such reasoning is natural and utilitarian — but logically invalid....
Our natural reasoning is thus a kind of puttering around — the intellectual equivalent of what a child does when it messes around with some new toy or unfamiliar object. After a spell of mental messing around, we may put our reasoning into logical terms, but the explanation comes after the fact; the actual process took place according to some other method... [The Universe Within, pp. 132-136]
In my JARS article, I quoted psychologist Paul E. Johnson's description of how experts in medicine, engineering, and other skilled professions think when solving professional problems. Here's the entire quotation:
I’m continually struck by the fact that the experts in our studies very seldom engage in formal logical thinking. Most of what they do is plausible-inferential thinking based on the recognition of similarities. That kind of thinking calls for a great deal of experience, as we say, a large data base. If anybody’s going to be logical in a task, it’s the neophyte, who’s desperate for some way to generate answers, but the expert finds logical thinking a pain in the neck and far too slow. So the medical specialist, for instance, doesn’t do hypothetical-deductive, step-by-step diagnosis, the way he was taught in medical school. Instead, by means of his wealth of experience he recognizes some symptom or syndrome, he quickly gets an idea, he suspects a possibility, and he starts right in looking for data that will confirm or disconfirm his guess.
I further suggested in the JARS article that the way in which professionals solve problems does not accord with Rand's notion of reason. Seddon responded to this by claiming that I equate Rand's view of logic with deductive logic. But had he read the passage above more attentively, he would see that this claim misses the point. When medical specialists "quickly get an idea" based on their wealth of experience, they are reasoning deductively, as follows:
Disease X is known to produce various symptoms, including A, B, and C.
This patient has symptoms A, B, and C.
Therefore, (I suspect) this patient has disease X.

This is clearly not an inductive inference. It's a deductive inference, but an invalid one: it commits the fallacy of the undistributed middle.
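In schematic form (with D(x) for "x has disease X," S(x) for "x shows symptoms A, B, and C," and p for the patient; the symbols are mine, introduced only for illustration), the pattern is:

```latex
% All D are S; p is S; therefore (invalidly) p is D.
\forall x\,\bigl(D(x) \rightarrow S(x)\bigr),\quad S(p)\ \nvdash\ D(p)
```

The middle term ("shows symptoms A, B, and C") is distributed in neither premise, so the conclusion does not follow; other diseases may produce the same symptoms. The inference yields only a plausible suspicion, not a valid conclusion.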

Now whether Rand's "reason" includes invalid deductions or not, I will leave for Rand's apologists to decide. The only point I wish to make is that human intelligence apparently cannot do without making use of invalid inferences. Indeed, the whole question of "validity," in everyday life, is of no very great concern. What is important is: (1) the experiential data base from which the invalid inferences are made; (2) the extent to which our conclusions are subjected to criticism (particularly empirical criticism, when that is possible).

This is not to say that we must always make use of invalid inferences, coupled with criticism of conclusions. There are fields of inquiry, particularly in the sciences, where strict deductive logic remains an important tool of analysis. But this does not apply across the board. Many of the problems of everyday life cannot be solved, in whole or in part, using only logically valid forms of reasoning. As John R. Anderson, a "leading theoretician of thinking" (according to Morton Hunt), puts it: "Naturalistic reasoning doesn't use the rules of general inference, but tends to be 'content-specific' — it uses rules that work in a particular area of experience." Morton Hunt sums up the position as follows:
[W]e are pragmatists by nature; what feels right we take to be right; were this not so, we would long ago have disappeared from the earth. Our pragmatism, our natural mode of reasoning, is not anti-intellectual but is the kind of effective intellectuality that was forged in the evolutionary furnace. (The Universe Within, 138)

Wednesday, November 21, 2007

The Cognitive Revolution & Objectivism, Part 5

Wittgenstein's "family resemblance." A philosophical idea that has proved influential in cognitive science is Wittgenstein's "family resemblance" critique of the classical view of concepts (of which Objectivism is a variant):
Consider for example the proceedings that we call "games." I mean board games, card games, ball games, Olympic games, and so on. What is common to them all? Don't say, "There must be something common, or they would not be called 'games' " - but look and see whether there is anything common to all. For if you look at them you will not see something common to all, but similarities, relationships, and a whole series of them at that. To repeat: don't think, but look! Look for example at board games, with their multifarious relationships. Now pass to card games; here you find many correspondences with the first group, but many common features drop out, and others appear. When we pass next to ball games, much that is common is retained, but much is lost. Are they all "amusing"? Compare chess with noughts and crosses. Or is there always winning and losing, or competition between players? Think of patience. In ball games there is winning and losing; but when a child throws his ball at the wall and catches it again, this feature has disappeared. Look at the parts played by skill and luck; and at the difference between skill in chess and skill in tennis. Think now of games like ring-a-ring-a-roses; here is the element of amusement, but how many other characteristic features have disappeared! And we can go through the many, many other groups of games in the same way; can see how similarities crop up and disappear. And the result of this examination is: we see a complicated network of similarities overlapping and criss-crossing: sometimes overall similarities, sometimes similarities of detail.
Now consider Rand's "essentialism" (i.e., her belief that all concepts have essential characteristics):
When a given group of existents has more than one characteristic distinguishing it from other existents, man must observe the relationships among these various characteristics and discover the one on which all the others (or the greatest number of others) depend, i.e., the fundamental characteristic without which the others would not be possible.
This view gratuitously assumes that all of reality is easily categorized into objects that have easily identified essential characteristics. But why should this be so? Was reality created for the convenience of the human mind? The Randian view not only oversimplifies reality to the point of serious distortion, it also seriously underestimates the ability of the mind (particularly the unconscious mind) to handle complexity. The mind has no difficulty grasping the idea of games, even if it can't explain why.
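Wittgenstein's point can be mimicked in a toy sketch. The feature lists below are invented for illustration (they are not drawn from the Investigations): no single feature is shared by all the "games," yet overlapping similarities chain them together.

```python
# Toy model of "family resemblance": games share criss-crossing
# similarities, but no one feature is common to all of them.
# (Feature assignments are hypothetical, chosen for the example.)

games = {
    "chess":        {"board", "competition", "winning", "skill"},
    "tennis":       {"ball", "competition", "winning", "skill"},
    "patience":     {"cards", "skill"},     # solitaire: no competition
    "catch":        {"ball", "amusement"},  # no winning or losing
    "ring-a-roses": {"amusement", "singing"},
}

# No "essential characteristic" in the classical sense: the
# intersection across all games is empty.
common = set.intersection(*games.values())
print(common)  # -> set()

# Yet neighbouring games overlap, forming a chain of similarities
# that holds the category together.
chain = ["chess", "tennis", "catch", "ring-a-roses"]
for a, b in zip(chain, chain[1:]):
    print(a, "~", b, "share", games[a] & games[b])
```

The design point is that category coherence here comes from the overlap structure, not from any defining feature every member possesses.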


A while ago, Greg Nyquist wondered "Is Objectivism Dangerous?" He concluded that the answer was no, despite the often fantastic theorising and overblown rhetoric of some of its proponents.

However, here in New Zealand, Objectivist Lindsay Perigo, former leader of the Libertarianz, on Tuesday took the quite unprecedented step of calling for the violent overthrow of the sitting government, a coalition of several parties dominated by the Labour Party. Comparing the situation to the American War of Independence, he said:
"...the time for mere marches is past...It is the right and duty of New Zealand citizens to throw off this government, which has long evinced a desire—nay, a compulsion—to subjugate us to absolute despotism. We should not wait for the 2008 election...New Zealanders must now ask themselves if they are a free people—and if so, are they prepared to act accordingly? Which is to say, are they prepared forcibly to evict all tyranny-mongers from their positions of power?"
In a followup comment, "So, now to overthrow!" Perigo continues:
"Private feedback this evening indicates overwhelming acceptance that the time has come...Everything from this point until Showtime must of necessity be clandestine..."
He then seems to try to soften his original position with some of his trademark outré humour, before returning to his theme:
"All right, I'm getting flippant already. Seriously, don't fret at the absence of progress reports. Needs must that this project go underground. But help is on the way."
One commenter described himself as "shocked" yet applauded Perigo's stand, to which Perigo replied "Bravo to you! At least you took in its import. And yes, this is where we've got to..."

This is the first time in my recollection that the former leader of a longstanding political party has called for the overthrow by force of a sitting government. The issue that has apparently brought him to make these threats is the Electoral Finance Bill, a controversial attempt to stop anonymous election funding that is currently before Parliament. The bill appears flawed in many ways, but it does not seem to pose a serious enough threat to free speech to warrant revolutionary action.

Should we take this sort of literal call to arms seriously? Probably not. These days Perigo seems largely to act as a figurehead in the Objectivist/Libertarian movement, issuing various provocative pronouncements and leaving others to do the organizational donkey work behind the scenes. The Libertarianz themselves, while composed of some highly competent and intelligent people, do not have much of a reputation for organizational prowess, and have struggled to gain political traction. A former media personality, Perigo has seen his star fall significantly in the decade or more since he left employment at state-run Television New Zealand. His naturally contrarian tendencies have made him an interesting figure in the political landscape, but apart from sporadic fill-in media gigs, inflammatory web posts and a short run in a student newspaper column, his actual output has dwindled to a trickle. How seriously he is taken in Objectivist circles is also uncertain.

It is most likely that this is either a simple cri de coeur, to be quietly regretted in the cold light of day, or a kind of publicity stunt intended to re-ignite media interest in his views. The danger in these sorts of things, however, lies not so much with Perigo himself as with some of the figures on the fringes of these movements, who might take calls for some kind of "Showtime" more seriously than their author intended.

At any rate, I will email Bernard Darnton, current leader of the Libertarianz, for a reaction to these statements.

UPDATE: Perigo has now deleted his later "So, now to overthrow!" comment. However, I had archived it, and will put up a link to it later.

Tuesday, November 20, 2007

Site Update: The Categories

Due to the rapidly growing site content and visitor stats, over the next week or two we're going to sort the blooming, buzzing manifold that is the ARCHNblog into more easily accessible categories in the sidebar. If there are any particular areas y'all want featured, let us know.

Monday, November 19, 2007

Objectivist Quote of the Week: Einstein "Corrupted"

"Since I'm not a scientist, I'm not competent to judge whether Dave Harriman's criticisms of Einstein are ultimately right or not. I can say with some confidence that although brilliant, Einstein was undoubtedly corrupted by Kantian philosophy. That's no small matter."
- Diana Mertz Hsieh, Noodlefood

(hat tip to Physicist Dave)

Sunday, November 18, 2007

Response to Anon76

Anon76 begins his extraordinary series of posts by trying to diminish the claims of cognitive science on the grounds that there exists no consensus on the prototype theory and that, as a matter of fact, there are "many, many" competing theories. This, however, is a palpable exaggeration. If you were to count every subtle variation of each of the main theories, then you might get "many, many," but what would be the point of that? And, indeed, even the main theories are hardly mutually exclusive. Exemplar theories are often considered a kind of prototype theory, while the theory-theory view, which comes close to Popper's suggestion that concepts are "theory-laden," could easily be dovetailed into the prototype theory (thus, the hybrid views). Atomistic theories are a bit more complicated. But they're even further from Rand's view (in their strong version, they hold that many concepts are innate), so why Anon76 would want to bring them up is anyone's guess.

One theory Anon didn't bring up is the Classical view of concepts, sometimes called Definitionism, of which Objectivism is a variant. Definitionism dominated thinking in the West until the last thirty years or so, during which it has taken mortal blows from cognitive science and the growing appreciation of Wittgenstein's notion of family resemblances (which I will discuss in a future post).

Anon76, I fear, simply has no clue as to the point of my Cognitive Revolution posts. He seems to be under the illusion that I am presenting these excerpts from books on the Cognitive Revolution as irrefragable proofs which conflict with nearly everything Rand believed. But that is not my intention at all. I am well aware that the theories developed by the Cognitive Revolution are conjectural, as are all scientific theories. To be sure, they are conjectures based on extensive empirical research that has been held up to the scrutiny of peer review. Contrast this with the conjectures brought forth by Rand, which were presented to the world without a whiff of evidence and have never faced the critical rigors of scientific peer review.

Now none of us at ARCHNBlog are experts in the field of cognitive science. So the question arises: if we want to know something about human cognition, where should we turn? Should we look to Rand, who is not a recognized expert on cognition, whose views are largely speculative, who shunned debates with other experts and who provided no scientific evidence to support her speculations? Or should we turn to those who have spent their entire adult lives specializing in doing scientific research on human cognition and whose findings are subject to peer review? I don't think there can be any doubt as to how a reality-centric person would answer these questions. Whatever shortcomings cognitive science may or may not have, there is every reason to believe that its findings are sounder and closer to the truth than those of Rand.

Keeping this in mind, I was sorry to find Anon76 denigrating Morton Hunt as a mere "popular science writer." In his book The Universe Within, Hunt has presented his research on cognitive science in a way that can be understood by the intelligent layman. His book includes a 17-page list of cited sources, containing over 150 citations. He interviewed dozens of cognitive scientists, some of whom even went so far as to demonstrate their research to Hunt. Two cognitive scientists read Hunt's book in manuscript and offered corrections. Contrast this with Rand's ITOE, which has no list of cited sources, no citations of scientific studies, and which was written without the assistance of any cognitive scientists. (Incidentally, the problem of Rand's poor scholarship has been devastatingly criticized by Gary Merrill here.)

The core of Anon76's response is a rather confused defense of "invalid concepts" bundled with an attack on Popper's rejection of essentialist definitions. Let us consider "invalid concepts" first. Anon76 writes: "One would think that [Greg Nyquist's] admission that there are such things as inefficient concepts would tend to support AR's theory, since it is precisely the inefficiency that she cites in her allegation that a concept like Blue Eyed Blondes, 5'11" tall would be an invalid concept." Unfortunately for Anon76, this view does not easily accord with what Rand wrote elsewhere about invalid concepts. According to Rand, invalid concepts are "attempts to integrate errors, contradictions or false propositions." Even worse, Rand claimed that "An invalid concept invalidates every proposition or process of thought in which it is used as a cognitive assertion." (ITOE, 65) This view, however, is self-contradictory, as can easily be demonstrated by the following proposition:
Blue Eyed Blondes, 5'11" tall are mortal.
Now Rand claims that any assertion made with invalid concepts is itself "invalid." Yet this proposition, though it uses a concept which, at least according to Anon76, Rand regarded as "invalid," is both intelligible and true. The error here is to assume that what I have called an "inefficient" concept can't refer to something in reality.

Even more problematic are the kinds of concepts that Rand considered "invalid"—concepts such as polarization, consumerism, extremism, isolationism, meritocracy, and simplistic. To suggest that these concepts have the same epistemological status as "all bachelors and non-returnable bottles" is preposterous. Try submitting such a view to a peer-reviewed journal in mainstream philosophy or cognitive science and see how far you get.

We get into even murkier waters when Anon76 turns to Popper's critique of definitions. Now it is important to understand that Popper's main point of attack is against essentialist definitions, i.e., the sort of definitions advocated by Aristotle and Rand. Popper does not deny that science uses definitions, but these are "nominalist definitions," and as such are only shorthand symbols or labels for some proposed phenomenon or theory. Popper's two main arguments against essentialist definitions are (1) that they lead to "verbalism," that is, "specious and insignificant" arguments about words; and (2) that they lead either to an infinite regress or to circularity. Since Daniel Barnes has already addressed issues with (2), I will confine my comments to the first argument.

Verbalism. Essentialist definitions lead to verbalism for the simple reason that there exists no objective, formalized method for distinguishing between true and false, good and bad, or proper and "improper" definitions. Consequently, when there is a disagreement over definitions, it cannot but lead to an argument about words.

Anon76, in trying to explain what distinguishes a "valid" from an "invalid" definition, writes: "A definition is valid when it highlights an essential distinguishing characteristic, i.e., the characteristic that explains the greatest number of other distinguishing characteristics of a class of things." But as a practical matter, this simply doesn't cut it. Among other things, it fails to address the problems raised by Wittgenstein's notion of family resemblances (to be discussed in a future post). It also commits the naive blunder of overemphasizing distinguishing characteristics at the expense of all other characteristics, thereby making the utterly gratuitous assumption that what distinguishes one referent from another is more important, in terms of understanding the referent, than the non-distinguishing characteristics. This is clearly seen in the Objectivist definition of man as a rational animal. Yet as anyone who reads history or knows anything about social psychology can tell you, non-rational motivation is at least as important, if not more so, in understanding human nature than is rationality.

Anon76 in another place makes the following extraordinary confession: "Once we have definitions, then we use them to know what we mean," he writes. "But we know what we mean before we define a concept--otherwise we wouldn't know what to define. There are a thousand philosophical paradoxes that result without this fairly obvious point." But if we already know what we mean before we have the definition, then what is the point of having the definition? On this account, it's not so much paradoxical as redundant. If you know what you mean and other people know what you mean, trotting out a definition is merely an exercise in barren pleonasm.

We can guess what the point is by observing how Rand used definitions in practice. Albert Ellis long ago pointed out the problem of what he called "definitional thinking" in Objectivism — that is, assuming the point at issue in your definitions as a tactic of debate. We see this sort of "definist fallacy" in its clearest form in Rand's discussion of the term selfish, which she insists has only one meaning (i.e., her meaning). But as a matter of fact, a term's meaning is determined by the person using it. If a person uses the word selfish to mean "not having consideration for other people," then that is what he means by it, and there's an end of it. There is no such thing as a "one and only true meaning" for a term, because terms are instrumental. If you have any doubt about this, just look at any unabridged dictionary — the Oxford English Dictionary, for example. There you will find that most words have multiple definitions and can be used in many different senses. In the very best dictionaries, there will be examples, taken from literature, illustrating the particular usage defined.

Wednesday, November 14, 2007

The Cognitive Revolution & Objectivism, Part 4

Concept-Formation. Rand's theory of concept-formation could be summed up as follows: in the pursuit of one insight, Rand committed a number of errors. The insight is something Rand called "unit economy," which cognitive scientists refer to as "achieving human scale" or attaining "cognitive efficiency." Rand's errors have two main sources: her blank slate view of man's "cognitive mechanism" and her denial of the originative functions of the "cognitive unconscious."

First, let us examine a mainstream cognitive view of concept formation, compliments of science writer Morton Hunt:
People often think it sophisticated to speak disparagingly of "pigeonholing," but our tendency to classify new experience is an essential component of human intellectual behavior.... And that's because we pigeonhole our experience in ways that prove functional. We do so, however, not by conscious design and not even because we are taught to do so, but naturally and inevitably; that is the message of current research on concept-formation. Daniel Osherson of MIT, who is currently exploring the nature of the differences between natural categories and artificial ones, put it to me this way: "We can program a computer to deal with any concept, no matter how bizarre or unrealistic—such as, for instance, the class 'all bachelors and nonreturnable bottles'—and the computer can handle it. But the human mind can't. The computers in our heads simply aren't hard-wired to form or make use of concepts of that sort but only of natural and realistic ones."
"Hard-wired" is the key word. It's a computer-science term that refers to built-in characteristics—those resulting from fixed circuitry rather than programming. Applied to human beings, hard wiring refers to innate abilities or, at least, predispositions, as opposed to learned behavior. Osherman was saying, in effect, that the circuitry of the brain develops, under the biochemical guidance of the genes, in such a way as to assemble incoming experience into realistic, useful concepts, and not the converse....
The new view of categorization [arising from cognitive science] emerged from several lines of research, among them studies of how children acquire concept words. When they first begin to talk, they use words for specific objects or actions, but soon begin to form categories — not by subtracting or extracting traits but by generalizing on the traits they see. In fact, overgeneralizing on them. The child between one and two will, typically, learn a word like "ball" or "dog" and then call all round things "ball" and all furry, four-legged creatures "dog"; the child is categorizing, and simply needs to have its categories refined....
Many such experiments have made it clear that concepts are learned, and that ... what is learned is not whatever we choose to teach the child; it is the product of an interaction between incoming experience and the brain's circuitry. We do not possess concepts a priori; rather, the concepts we make of our experience are largely predetermined by neural structures. The human brain is concept-prone — but prone to conceptualize experience in certain ways....
It took a revolution in cognitive theory for scientists to see two related facts that now seem obvious... First, we do not perceive an object as a set or list of distinct attributes (such as having two squares, shaded, and one border), but as wholes (a particular person, chair, house). Second, as a consequence, we do not naturally group objects in categories with sharp boundaries but in clusters that have a dense center and thin out to fuzzy, indeterminate edges, overlapping other categories....
Our method of making categories has a simple and obvious biological rationale: it is the mind's way of representing reality in the most cognitively economical form. In the real world, writes Eleanor Rosch, traits occur in "correlational structures"; observable characteristics tend to go together in bunches. "It is an empirical fact provided by the perceived world," she says, "that wings co-occur with feathers more than with fur," or that things that look like chairs have more sit-on-able traits than things that look like cats. The prototypes and categories that our minds naturally make are isomorphs of clusters of traits in the external world.
This is so simple a notion that you might wonder why it took the revolution of cognitive science to discover it. But such was the awesome reputation of Aristotle that his fiats concerning categories — they should be logically defined, clearly bounded, and nonoverlapping — kept us until now from seeing the plain facts about the mind's natural method of sorting out its impressions of the world. And that method, though neither logical nor tidy, is an evolutionary design that maximizes our ability to make sense of our experiences and to interact effectively with the environment. (The Universe Within, 163-173)
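The contrast Hunt draws above can be put in a few lines of code. The sketch below is a toy illustration of my own (the category names and feature sets are hypothetical, not from the text): a computer handles an arbitrary "artificial" concept like Osherson's "all bachelors and nonreturnable bottles" as easily as a natural one, while prototype theory models natural categories as graded similarity to a dense center rather than as sharply bounded definitions.

```python
# Toy illustration (hypothetical data): an arbitrary "artificial" concept
# is trivial for a computer to represent, even though the human mind
# resists it.
def artificial_concept(x):
    """Osherson-style arbitrary class: 'all bachelors and nonreturnable bottles'."""
    return x.get("kind") in ("bachelor", "nonreturnable bottle")

# Prototype-style categorization: membership is the proportion of
# prototype features an instance shares -- graded, with fuzzy edges,
# mirroring Rosch's "correlational structures" of co-occurring traits.
BIRD_PROTOTYPE = {"wings", "feathers", "beak", "flies", "lays eggs"}

def bird_typicality(features):
    return len(features & BIRD_PROTOTYPE) / len(BIRD_PROTOTYPE)

robin = {"wings", "feathers", "beak", "flies", "lays eggs"}
penguin = {"wings", "feathers", "beak", "lays eggs", "swims"}
bat = {"wings", "flies", "fur"}

print(bird_typicality(robin))    # 1.0 -- the dense center of the category
print(bird_typicality(penguin))  # 0.8 -- less typical, still a bird
print(bird_typicality(bat))      # 0.4 -- fuzzy edge, overlapping "mammal"
```

The point of the sketch is only that graded typicality, not an Aristotelian checklist of necessary and sufficient conditions, is what the experimental literature finds in natural categorization.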
As long as Rand sticks to her "unit-economy," she manages to stay pretty close to the cognitive straight and narrow. When she strays from it, she tends to get into trouble. Consider her "definition" of a concept: "A concept is a mental integration of two or more units which are isolated according to a specific characteristic(s) and united by a specific definition." Alas, this doesn't quite hit the nail on the head. There is no integration of two or more units: merely the development of a prototype, which requires only one "unit"! Nor is there any evidence that definitions play any role in the development of most concepts. Since the overwhelming majority of concepts are formed unconsciously and on the basis of intuitive prototypes, there's no need for definitions, which mostly describe word usage. (That's what you will find in any dictionary: details of word usage, often with specific examples, so that there's some truth in Popper's dictum that "definitions never convey information—except to those in need of a dictionary.") Nor is there any such thing as an "invalid concept." There are concepts (e.g., unicorns, centaurs) that refer to things that don't exist; there are so-called "artificial" concepts (e.g., all bachelors and nonreturnable bottles) that are cognitively inefficient and useless to the human mind—unmemorable and deserving to be forgotten; and there are concepts that refer to confused theories (e.g., the concepts of Marxism, Hegelianism, Objectivism, etc.): none of these concepts is "invalid" (perhaps their referents are "invalid," but that's another story). Since the mind forms most of its concepts unconsciously, on the basis of innate cognitive predispositions originally developed in the crucible of evolution, questions about the "validity" of concepts are misplaced.
If our concepts never referred to anything in reality or were perversely inefficient, we wouldn't be around to argue about them: natural selection would have taken us out a long time ago. It is how concepts are used, the assertions and arguments and theories made about their referents, that can be true or false, valid or invalid.

The Binswanger Loyalty Oath

A commenter on this thread reminds us of ARIan Harry Binswanger's odd requirement for joining his email list:

The HBL Loyalty Oath
I have created this list for those who are deeply and sincerely interested in Ayn Rand's philosophy of Objectivism and its application to cultural-political issues.

It is understood that Objectivism is limited to the philosophic principles expounded by Ayn Rand in the writings published during her lifetime plus those articles by other authors that she published in her own periodicals (e.g., The Objectivist) or included in her anthologies. Applications, implications, developments, and extensions of Objectivism--though they are to be encouraged and will be discussed on my list--are not, even if entirely valid, part of Objectivism. (Objectivism does not exhaust the field of rational philosophic identifications.)

I do not make full agreement with Objectivism a condition of joining my list. However, I do exclude anyone who is sanctioning or supporting the enemies of Ayn Rand and Objectivism. "Enemies" include: "libertarians," moral agnostics or "tolerationists," anarchists, and those whom Ayn Rand condemned morally or who have written books or articles attacking Ayn Rand. I do not wish to publicize the myriad of anti-Objectivist individuals and organizations by giving names, so if you have questions about any such, email me privately and I will be glad to discuss it with you.

If you bristle at the very idea of a "loyalty oath" and declaring certain ideological movements and individuals as "enemies," then my list is probably not for you. To join my list while concealing your sanction or support of these enemies, would be to commit a fraud. Again, if you have any questions on this policy, please let me know.

Tuesday, November 13, 2007

Are Values Hardwired?

Apropos of Greg's recent series examining the subconscious or "hardwired" part of human nature, YahooNews reports further evidence suggesting reciprocity (or what Objectivists call "the trader principle") is hardwired even into monkeys:

The latest findings suggest that a sense of fairness is deeply ingrained in human evolutionary history rather than the idea that it's a more cultural response, and thus, learned from other humans....Brosnan, along with lead author Megan van Wolkenten and Frans B. M. de Waal, both at Emory University in Georgia, trained 13 tufted capuchin monkeys (Cebus apella) at Emory's Yerkes National Primate Research Center to play a no-fair game. In the game, each of a pair of monkeys would hand a small granite rock to a human in exchange for a reward, either a cucumber slice or the more preferable grape. When both monkeys received cucumber rewards, all was fine in primate land. But when one monkey handed over the granite stone and landed a grape, while monkey number two got a cucumber, madness ensued.

Saturday, November 10, 2007

The Cognitive Revolution & Objectivism, Part 3

Cognitive Unconscious. Rand wanted to believe that every aspect of cognition and willing could be controlled, directly or indirectly, by the conscious mind. Thus the Objectivist contention that the unconsciousness (or "subconsciousness") "is simply a name for the content of your mind that you are not focused on at any given moment. It is simply a repository for past information or conclusions that you were once conscious of in some form, but that are now stored beneath the threshold of consciousness. There is nothing in the subconscious besides what you acquired by conscious means. The subconscious does perform automatically certain important integrations (sometimes these are correct, sometimes not) but the conscious mind is always able to know what these are (and to correct them, if necessary)." (Ayn Rand Lexicon, p. 484)

Rand never denied that unconscious (or "subconscious") processes occur in human thought. What she appears to have denied is that these processes could ever originate, in whole or in part, from below the threshold of consciousness. Hence, for Rand, "subconscious" thinking is simply conscious thinking that has been automatized, or "programmed," as with a computer. Since human beings, in the Randian view, are blank slates, nothing can ever get into the "subconsciousness" that wasn't first in the individual's conscious mind.

This view, at the very least, is a gross exaggeration. Consider what George Lakoff and Mark Johnson have to say about the cognitive unconscious:
Conscious thought is the tip of an enormous iceberg. It is the rule of thumb among cognitive scientists that unconscious thought is 95 percent of all thought—and that may be a serious underestimate. Moreover, the 95 percent below the surface of conscious awareness shapes and structures all conscious thought. If the cognitive unconscious were not there doing this shaping, there could be no conscious thinking....
The cognitive unconscious is posited in order to explain conscious experience and behavior that cannot be directly understood on its own terms.... The details of these unconscious structures and processes are arrived at through convergent evidence, gathered from various methodologies used in studying the mind. What has been concluded on the basis of such studies is that there exists a highly structured level of mental organization and processing that functions unconsciously and is inaccessible to conscious awareness. (Philosophy in the Flesh, pp. 13, 103-104)

Or consider what the neuroscientist Antonio Damasio has to say on the unconscious:
The field of social psychology has produced massive evidence for non-conscious influences in human mind and behavior.... Cognitive science and linguistics have produced their own evidence. For example, by the age of three, children make amazing usage of the rules of construction of their language, but they are not aware of this "knowledge," and neither are their parents. A good example comes from the manner in which three-year-olds form the following plurals perfectly:
dog + plural = dog z
cat + plural = cat s
bee + plural = bee z
The children add the voiced z, or the voiceless s, at the end of the right word but the selection does not depend on a conscious survey of that knowledge. The selection is unconscious. (The Feeling of What Happens, p. 297)
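The rule those three-year-olds apply without any conscious access to it can be written out explicitly. Here is a rough sketch of my own (a simplification, not Damasio's formulation; the sound classes are crudely keyed to spelling rather than real phonetics): the plural ending is the voiceless /s/ after a voiceless final sound, /iz/ after a sibilant, and the voiced /z/ everywhere else.

```python
# Simplified English plural allomorph rule (my sketch, not from Damasio):
# sibilant final sound -> "iz", voiceless final sound -> "s", else -> "z".
SIBILANTS = {"s", "z", "sh", "ch", "zh", "j"}
VOICELESS = {"p", "t", "k", "f", "th"}

def plural_ending(final_sound):
    """Map a word's final sound (crudely, its final spelling) to its plural allomorph."""
    if final_sound in SIBILANTS:
        return "iz"   # bus -> bus-iz, wish -> wish-iz
    if final_sound in VOICELESS:
        return "s"    # cat -> cat-s
    return "z"        # dog -> dog-z, bee -> bee-z

for word, final in [("dog", "g"), ("cat", "t"), ("bee", "ee"), ("bus", "s")]:
    print(word, "+ plural ->", plural_ending(final))
```

The point, of course, is that a child reliably produces these outputs years before anyone could state the rule to them, which is just what an unconscious, structured level of processing predicts.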

Experiments in cognitive science have demonstrated that unconscious processes accompany and underlie every facet of thinking. Indeed, some experiments suggest that most conscious thinking may be the result of unconscious processes, so that Rand nearly got it backwards. As cognitive scientist Richard Nisbett explained:
In one of our experiments, we asked people to pick out the nylon panty hose of the best quality from a set of four pairs that were arranged in front of them from left to right. Actually, the panty hose were identical, but our subjects picked the right-most pair four times as often as the left-most pair. When we asked them why they picked that pair, they gave reasons of one sort or another, but never mentioned position. And when we asked them if position could have influenced them, they denied it and sounded annoyed, or, in some cases, seemed to think they were dealing with madmen. The evidence of a number of related experiments suggests that most of the time we don't know why we think as we do, even when we feel certain we do. (Morton Hunt, The Universe Within, pp. 276-277)

Finally, consider the testimony of Gilles Fauconnier and Mark Turner:
Nearly all important thinking takes place outside of consciousness and is not available on introspection; the mental feats we think of as most impressive are trivial compared to everyday capacities; the imagination is always at work in ways that consciousness does not apprehend; consciousness can glimpse only a few vestiges of what the mind is doing; the scientist, the engineer, the mathematician, and the economist, impressive as their knowledge and techniques may be, are also unaware of how they are thinking and, even though they are experts, will not find out just by asking themselves. Evolution seems to have built us to be constrained from looking directly into the nature of our cognition, which puts cognitive science in the difficult position of trying to use mental abilities to reveal what those very abilities are built to hide. (The Way We Think, pp. 33-34)

Oh Yes, They Called Him The Streak

Guest poster Neil Parille from Objectiblog tells the tale of Ayn Rand's odd reaction to a famous prank at the 1974 Academy Awards.

Objectivists often accuse non-Objectivists, anti-Objectivists, and apostates from ARI Objectivism of suffering from “rationalism.” This term appears to mean something like applying principles to situations without taking into account the facts of experience. A recent example is Leonard Peikoff’s 2006 statement that anyone who considers voting Republican or abstaining from voting “does not understand the philosophy of Objectivism, except perhaps as a rationalistic system detached from the world.” Incidentally, the term does not appear in this sense in either The Ayn Rand Lexicon or the index to Leonard Peikoff’s Objectivism: The Philosophy of Ayn Rand.

Ellen Stuttle has drawn attention to the following from Leonard Peikoff’s 1987 talk “My Thirty Years With Ayn Rand,” reprinted in The Voice of Reason:
About a dozen years ago, Ayn Rand and I were watching the Academy Awards on television; it was the evening when a streaker flashed by during the ceremonies. Most people probably dismissed the incident with some remark like: "He's just a kid" or "It's a high-spirited prank" or "He wants to get on TV." But not Ayn Rand. Why, her mind wanted to know, does this "kid" act in this particular fashion? What is the difference between his "prank" and that of college students on a lark who swallow goldfish or stuff themselves into telephone booths? How does his desire to appear on TV differ from that of a typical game-show contestant? In other words, Ayn Rand swept aside from the outset the superficial aspects of the incident and the standard irrelevant comments in order to reach the essence, which has to pertain to this specific action in this distinctive setting.

"Here," she said to me in effect, "is a nationally acclaimed occasion replete with celebrities, jeweled ballgowns, coveted prizes, and breathless cameras, an occasion offered to the country as the height of excitement, elegance, glamor--and what this creature wants to do is drop his pants in the middle of it all and thrust his bare buttocks into everybody's face. What then is his motive? Not high spirits or TV coverage, but destruction--the satisfaction of sneering at and undercutting that which the rest of the country looks up to and admires." In essence, she concluded, the incident was an example of nihilism, which is the desire not to have or enjoy values, but to nullify and eradicate them.

[. . .]

Having grasped the streaker's nihilism, therefore, she was eager to point out to me some very different examples of the same attitude. Modern literature, she observed, is distinguished by its creators' passion not to offer something new and positive, but to wipe out: to eliminate plots, heroes, motivation, even grammar and syntax; in other words, their brazen desire to destroy their own field along with the great writers of the past by stripping away from literature every one of its cardinal attributes. Just as Progressive education is the desire for education stripped of lessons, reading, facts, teaching, and learning. Just as avant-garde physics is the gleeful cry that there is no order in nature, no law, no predictability, no causality. That streaker, in short, was the very opposite of an isolated phenomenon. He was a microcosm of the principle ruling modern culture, a fleeting representative of that corrupt motivation which Ayn Rand has described so eloquently as "hatred of the good for being the good." And what accounts for such widespread hatred? she asked at the end. Her answer brings us back to the philosophy we referred to earlier, the one that attacks reason and reality wholesale and thus makes all values impossible: the philosophy of Immanuel Kant.
The event in question was the 1974 Academy Awards. By that time, streaking had become the national prank. Ray Stevens’ song “The Streak” had been written but not published. Based on the little evidence available to Rand that night, the most likely explanation was that the streaker was just another “kid” pulling a prank, and that the Academy Awards program was chosen because it would give him maximum “exposure.”

In fact, the streaker was one Robert Opel, a thirty-three-year-old variously described as a photographer and an advertising executive. Opel wanted to make a statement about public nudity and sexual freedom (he was for it) as well as jump-start his career. His motive, then, does not appear to have been nihilism or tearing down the Academy Awards.

Rand’s discussion of the streaker incident highlights a couple of recurring problems with her analysis of historical and cultural events. First, she tends to draw conclusions in the absence of evidence. Second, she tends to ascribe philosophical motivations to individuals without considering more mundane explanations. In short, it was Rand who was guilty of rationalism in this case.

In the above excerpt, Peikoff continues that hearing Rand that night inspired him to write the chapter on Weimar culture in The Ominous Parallels. This misguided work, in which Peikoff all but blames Kant for Auschwitz, illustrates the streaker problem in reverse: the facts available to the historian are so vast that determining the one philosophic principle explaining it all (if there is just one) is close to impossible. It is more likely that a number of philosophical trends converged in 1933 which, when combined with the German public’s frustration over the economy and the humiliation of the Treaty of Versailles, resulted in the Nazi takeover. As Greg Nyquist argues in his book, if Hitler’s adversaries had adopted a better strategy, it is possible that the Nazis might not have seized power.

- Neil Parille

Thursday, November 08, 2007

Van Damme Replies (Sort of)

While cleaning up some comment spam I came across this brief note that may have been lost in the flurry. In this post Greg criticised a passage from Lisa Van Damme's essay "The False Promise of Classical Education."

In a kind of proxy comment, an anonymous contributor passed this reply on, allegedly from Van Damme:
From Lisa VanDamme:

What defines the hierarchical order of concepts is distance from the perceptual level, not "wideness." "Doberman" is, unquestionably, more abstract than "dog": Ayn Rand addresses this issue in ITOE, and nothing I have ever said contradicts it.
Leaving aside the oddness of replying by anonymous proxy - surely it would be easier just to reply directly - this also fails to address the issue. Here's what Greg quoted from Van Damme's original essay:
Van Damme: "There is a necessary order to the formation of concepts and generalizations. A child cannot form the concept of “organism” until he has first formed the concepts of “plant” and “animal”; he cannot grasp the concept of “animal” until he has first formed concepts such as “dog” and “cat”; and so on. The pedagogical implication of the fact that there is a necessary order to the formation of abstract knowledge is that you must teach concepts and generalizations in their proper order. An abstract idea—whether a concept, generalization, principle, or theory—should never be taught to a child unless he has already grasped those ideas that necessarily precede it in the hierarchy, all the way down to the perceptual level."
Greg's point in the post was that if this was true, one would start with "Doberman", go up to "dog", then to "organism" etc. Yet this seems unlikely.

Van Damme and other commenters do not seem to have a direct response to this, and merely reply to the effect that Greg's demonstration is a "straw man" because Rand said otherwise in the ITOE. But if Rand's theory (or Van Damme's expression of it) does not stand up to its own derivable consequences, this is hardly Greg's problem! Rather it speaks to the basic wooliness and what I regard as the outright emptiness of most of the speculations expressed in the ITOE.

At any rate, it would be interesting if Van Damme wished to reply in a bit more detail - even by proxy - to Greg's original post, and we cordially invite her to do so.

Wednesday, November 07, 2007

Conjectural Notes on Free Will

The intense interest on this blog in the question of free will has persuaded me to interrupt my series on the Cognitive Revolution and Objectivism to expound on this contentious topic.

Most criticisms of Objectivism focus on the alleged contradiction between Rand's acceptance of causality on the one side and her insistence on free will on the other. "Ayn Rand's views on causation contradict her views on free will," writes one commentator. "The reason is very simple: her views on causation are those of a determinist; her views on free will, however, make her a libertarian. And those two positions are, by definition, incompatible." Oh, well, if they are incompatible "by definition," that settles the question! Such reasoning, however, is circular: the terms have been defined in such a way as to reach the conclusion desired. This is rationalistic verbalism—and is nothing to the purpose. Neither is Peikoff's assertion that "one must accept [free will] in order to deny it" very helpful. Surely the question is every bit as empirical as any other concerning matters of fact. So why not attend to the relevant facts, thereby supplying one's reasonings with the useful check of experience?

Causal determinism asserts "that future events are necessitated by past and present events combined with the laws of nature." The doctrine is very convincing when applied to physical objects. The material world does appear to be deterministic in just this sense, as innumerable experiments have shown. Things get a bit more dicey once we go down to a quantum level. But a propensity interpretation of quantum mechanics, as suggested by Karl Popper, may turn out to be a convincing way around these issues. In any case, the question arises: does the determinism found in the physical world (at least above the quantum level) hold good in the mental world? And if so, why?

One cannot simply assume, a priori, that all of existence is fashioned after a single plan, so that if determinism is found operable in the material world, it must also be found operable in the mental world. Do the facts support such a supposition? Not entirely. Do physical entities or events possess the same kinds of characters or properties as percepts and images? Do they exist in the same spatial and temporal order with them? Our experience suggests otherwise. Ideas, concepts, percepts, mental images are one thing; the material world is something else. The mental and physical orders are clearly worth distinguishing.

The next question is: how do these two planes of existence relate? Scientific determinism generally favors the view that the mind is solely determined by physical processes. Superficially, this view seems to be supported by the fact that the mental world depends so enormously on the brain. Yet here we must be careful. There are some facts that simply don't square with the idea that all mental processes are really only material processes in disguise. In the first place, this view strongly suggests that consciousness is epiphenomenal and, ipso facto, inefficacious. But if so, why did evolution breed and select it? Then there's the problem of our own introspective experience. We experience ourselves making decisions and choosing between alternatives. Is our own experience to be ignored as a mere illusion? But if that experience is illusory, why isn't our experience of the material world as deterministic also illusory?

If consciousness is efficacious, if the mind provides a contribution to human life that goes beyond merely physical processes, then free will, in some measure or form, has been established. In what precise measure or form is a question best answered, again, by consultation of the relevant facts. Here neuroscience can provide fascinating perspectives, particularly when combined with evolutionary psychology. If the brain is a product of evolution, then free will must also be a product of evolution. But if free will has evolved, then this suggests that it has in the past, and could still, exist in degrees. A strange notion, to be sure. Is there any evidence beyond evolutionary speculation on its behalf? Yes, there is. People with damage to the ventromedial frontal cortex of the brain are no longer able to engage in effective decision making. It is as if their free will is impaired. Decision making, initiative, and planning all rely on the frontal cortex of the brain. If there exist innate factors that increase or decrease how well this part of the brain functions, it's plausible that differences in the degree of free will could be partially innate. Indeed, we know intelligence is partially innate, and individuals with greater intelligence probably have a greater degree of free will than those with less intelligence.

This view of the mind suggests the following conjecture: that free will is perhaps best conceived as a kind of module of the brain, working in tandem with other modules, such as those providing motivation and thought. There is a certain level of innateness running through all these modules, particularly in the affect system, but there is also a certain amount of plasticity in both experience and thought, so that the mind is best envisioned as a complex web of drives, thoughts, and self-initiative which interact with one another, perhaps even slightly altering one another in the process, and issuing in volitions that are influenced by thoughts, innate drives, memory, social training, and self-initiative, all working at different degrees and intensities. Under this model, there is none of the sort of unsaddled free will imagined by Rand. The majority of human beings are what we find them to be in everyday life, in history, in great literature, and in scientific research: inherently limited, imperfect, yet still capable, through initiative, discipline, and hard work, of attaining a modest level of dignity and self-efficacy. Some may be able to attain more along these lines, some less, depending on the amount of intelligence, initiative, and emotional stability that they are born with.

Tuesday, November 06, 2007

ARCHNblog Flashback: "Rand and Empirical Responsibility"

Given the recent discussion it seems apropos to re-run Greg Nyquist's Journal of Ayn Rand Studies article - which was a reply to Fred Seddon's review of "ARCHN" - on this very topic.

Sunday, November 04, 2007

Objectivism and the Descent Into Pseudoscience

The debate over at vs "The Passion of Ayn Rand's Critics" author Jim Valliant now appears to be over, as it seems Valliant has retired. It has, however, been most revealing as an example of what seems to be a trend within Objectivism towards the embrace of pseudoscience. Among other rather amazing claims, Valliant attempted to pass off his own "introspections" as a compelling empirical basis for Ayn Rand's theory of volition, and when pressed to name any of the numerous "scientists" he initially claimed were Objectivists, could only offer the anti-relativity crank Petr Beckmann*. We will explore this apparent trend further here at ARCHNblog, as it corroborates Nyquist's thesis that Objectivism in practice only pays lip service to scientific standards. But what are the underlying drivers in the philosophy that explain this? In a post at I offer one speculation as to why:

Dave, I'd like to touch on a central point that risks being overlooked.

Yes, Rand and her followers are generally - and as Mr Valliant demonstrates with his "Objecto-empiricism", even hilariously - ignorant of science. This is basically admitted every time they attempt to ring-fence some of her (and their) absurdities by appealing to a "philosophic" context, rather than a scientific one.

But the key point is this: basically Rand believed philosophy should quite literally set the terms for science (she says this quite clearly in the ITOE, in a remarkable passage where she insists that philosophers like herself will tell scientists the "proper" meaning of the words they use**). Philosophy is always in control; it might refute science; but it can never be the other way around. Philosophy is the master discipline - with Objectivism at the summit of that - from which "true" science (as well as ethics, politics, aesthetics etc) flows.

One can easily see the seduction here. The dream of the kind of intellectual power that allows you to make pronouncements over what is "proper" even to disciplines that you are ignorant of has, like other more obvious dreams of power, perennial appeal. Further, it is a power that seems remarkably easily achieved; for it is basically very easy to play with words, as the Aristotelian method of definition that Rand adopted inevitably leads to (just as it led to the scholasticism of the Middle Ages). This in turn empowers people like Mr Valliant to rock up to a science forum and begin instructing all and sundry as to what does and does not constitute "empirical" evidence. (And sure enough, with sufficiently determined sophistry, we see the determinist arguments from the massive empirical successes of physics and chemistry dismissed as "empty arguments from authority", and his personal "introspections" promoted as both "empirical" and even "objective.")

It could be that Mr Valliant merely thinks it is Opposites Day here at However, I think there's a bigger story here. I conjecture that it is this underlying dream, and its accompanying scholastic methodology, that accounts for this peculiar picture of a movement ostensibly dedicated to reason, freedom, science, and productivity exhibiting, as you have already noted, so many alarming tendencies in the reverse direction.

*Even this appears to be incorrect, as Beckmann was apparently merely influenced by Rand. More about Beckmann here
**See pp. 289-290, "Introduction To Objectivist Epistemology"

Friday, November 02, 2007

Cognitive Revolution & Objectivism, Part 2

Behavioral Genetics. From the modular theory of the mind, we proceed to behavioral genetics. It is here that the first really serious challenge to Rand's man-as-self-creator view of human nature commences. Steven Pinker outlines behavioral genetics as follows:
[T]he findings of behavioral genetics are highly damaging to the Blank Slate and its companion doctrines. The slate cannot be blank if different genes can make it more or less smart, articulate, shy, happy, conscientious, neurotic, open, introverted, giggly, spatially challenged, or likely to dip buttered toast in coffee. For genes to affect the mind in all these ways, the mind must have many parts and features for the genes to affect [i.e., the mind must be modular]. Similarly, if the mutation or deletion of a gene can target a cognitive ability as specific as spatial construction or a personality-trait as specific as sensation-seeking, that trait may be a distinct component of the psyche.
Moreover, many of the traits affected by genes are far from noble. Psychologists have discovered that our personalities differ in five major ways: we are to varying degrees introverted or extroverted, neurotic or stable, incurious or open to experience, agreeable or antagonistic, and conscientious or undirected.... All five of the major personality dimensions are heritable, with perhaps 40 to 50 percent of variation in a typical population tied to differences in their genes. The unfortunate wretch who is introverted, neurotic, narrow, selfish, and undependable is probably that way in part because of his genes, and so, most likely, are the rest of us who have tendencies in any of those directions as compared with our fellows.
It's not just unpleasant temperaments that are partly heritable, but actual behavior with real consequences. Study after study has shown that a willingness to commit antisocial acts, including lying, stealing, starting fights, and destroying property, is partly heritable.... People who commit truly heinous acts, such as bilking elderly people out of their life savings, raping a succession of women, or shooting convenience store clerks lying on the floor during a robbery, are often diagnosed with "psychopathy" or "antisocial personality disorder." Most psychopaths showed signs of malice from the time they were children. They bullied smaller children, tortured animals, lied habitually, and were incapable of empathy or remorse, often despite normal family backgrounds and the best efforts of their distraught parents. Most experts on psychopathy believe that it comes from a genetic predisposition, though in some cases it may come from early brain damage. (Blank Slate, 50-51)
If behavioral genetics is largely correct, then Rand's tabula rasa view of human nature is wrong. Man is not a being of a self-made soul; his volition is in fact saddled with tendencies; and his emotions are not entirely the product of his conclusions. Now the evidence for behavioral genetics is very compelling. "The effects of differences in genes in minds can be measured, and the same rough estimate — substantially greater than zero, but substantially less than 100 percent — pops out of the data no matter what measuring stick is used," writes Pinker. Whether we're talking about identical twins raised apart or experiments involving isolation of genes, it all points to one conclusion: that genes influence (though they don't determine!) the mind and the behavior that emerges from it.
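For readers curious where figures like Pinker's "40 to 50 percent" come from, here is a back-of-envelope sketch. It uses Falconer's formula, a standard first approximation in behavioral genetics (my addition, not from Pinker's text): identical (MZ) twins share essentially all their genes, fraternal (DZ) twins about half, so the heritability of a trait can be roughly estimated as twice the difference between the two twin correlations. The correlation values below are hypothetical.

```python
# Falconer's formula: h^2 = 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are
# the trait correlations for identical and fraternal twin pairs.
def falconer_heritability(r_mz, r_dz):
    """Rough narrow-sense heritability estimate from twin correlations."""
    return 2.0 * (r_mz - r_dz)

# Hypothetical twin-study correlations for a personality trait:
r_mz = 0.50  # similarity of identical twins
r_dz = 0.28  # similarity of fraternal twins

h2 = falconer_heritability(r_mz, r_dz)
print(f"Estimated heritability: {h2:.0%}")  # in the 40-50% range cited above
```

This is only a first-pass estimate; it assumes, among other things, that twins raised together share environments to the same degree whether identical or fraternal, which is why modern studies also lean on twins raised apart and adoption designs.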