Review Article
The Future of Stupidity and Vice Versa
James F Welles*
Corresponding Author: James F Welles, P.O. Box 17, East Marion, New York, USA, Tel: 631-323-8153; E-mail: JWelles103@aol.com
Received: June 11, 2018; Revised: November 21, 2018; Accepted: June 20, 2018
Citation: Welles JF. (2016) The Future of Stupidity and Vice Versa. J Psychiatry Psychol Res, 1(1): 24-45.
Copyrights: ©2018 Welles JF. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

For all stupidity’s cognitive inconsistency, there is an unethical element which characterizes deliberate, informed, maladaptive behavior. Simply put, there is invariably a common sense code of conduct which clearly delineates what one should do which then is overridden by an ego-centered and/or socially gratifying urge toward impropriety. It is particularly grievous that such activity characterizes those in positions of political power. Suggestions are made to reduce the probability of such deleterious actions.


Keywords: Failure, Limitations, Reformers, Crises, Science, Myths, Freedom, Morality, Humanity

INTRODUCTION

Failure: America and the Western world in general comprise only the most recent example of a civilization failing to live up to its own standards. In this respect, we are but typical of the civilized tendency of failing to fulfill a presumed destiny. In fact, with or without expectations or destinies, one of the most consistent characteristics of civilizations is failure [1]. Archaeologists have built a profession on studying failures. Historians build careers by explaining failures [2] in books typically entitled “Decline and Fall...” or “Rise and Fall...” [3]. Every day, we are immersed in ignorable warnings (e.g. an unmanageable federal debt) that we too may fail as have those who have gone before. The paradox is that when studying the past, archaeologists (and historians) assume the societies they study were composed of rational, cognitively integrated and sane human beings who made sense. However, when observing contemporary civilizations, social scientists have found this assumption impossible to support [4].

The cause of this intellectual fault line is that the fundamental human constant across the ages and throughout modern society is not integrated rationality but stupidity. Unfortunately, no amount of information, learning or technological expertise seems to alter this subtle tangle of unresolved internal contradictions one iota. A basic cause of this is that we have ready-made, socially condoned, psychologically correct explanations for practical problems. Unexplained is the curiosity that such problems arise without evident cause and remain unresolved despite everyone's best efforts. The trouble really is, of course, with the explanations, which contribute to failure by explaining away not only the inexplicable but the explicable as well. We need the assurance of having answers, so if necessary, we make them up. These myths, in turn, can prevent us from discovering valid answers to our questions. Particularly elusive is the answer to the perpetual human riddle: why are our best efforts not good enough? The answer offered here is that swapping one myth for another is not progress.

Put functionally, our efforts may not be our best [5] because we are biased toward the particular schema which, as both a perceptual template and a set of rules for action [6], defines our inability to cope. Not only does this bias inhibit cultural improvement by limiting competence, but the majority of people, with marginal abilities, support those who goof up, feeling that they will then get similar support when their turn comes. Thus, the weak support the corrupt because, just as efficiency is regarded as a threat by the inept, accuracy of perception is regarded as a threat by the mighty, who are invariably corrupted by their capacity to transcend ethical limitations.

LIMITATIONS

If we want to escape this self-constructed impasse, we would do well to make fresh inquiries into our shortcomings and imperfections. Our cultural liabilities are so decisive in the way they undermine our institutions that we are compelled to understand them if we intend to be exceptions to the rule of civilized failures. Thus far, the balance sheet on Western Civilization is more extensive but no more favorable than that of any society that has passed before us. As fast as wealth piles up here, poverty springs up there. Increases in material abundance are matched by increases in bitter resentment as production and success beget scarcity and jealousy. Scientific advances are matched by spiritual failings, construction by decay and happiness by misery [7]. These balanced equations are maintained by the characteristic errors, ignorance, ill will and general stupidity of civilized people as we struggle to overcome cultural traps we have created for ourselves. Incompetence, dishonesty, corruption and unaccountability may not be fatal flaws, but stupidity is, in the long run, definitely maladaptive. Confronting us is the very real possibility that the human experiment was a grand mistake [8].

Western Civilization owes its technological predominance to the application of reason to the study and control of nature, but a major stumbling block to the study and control of ourselves has been the assumption that, since we can use reason, we are reasonable. We have had 100 years since Freud to acknowledge that we are basically irrational, but the super-egoish models for human behavior proposed by the methodical scientific community [9] are invariably idealized constructs which are much more self-consistent and orderly than people would ever want to be. The problem for behavioral scientists is that logic must be used to explain irrationality.

Although reason is useful for extending a line of thought to the next point, it is of limited value in untangling complexity. Logic is certainly a valuable analytical tool, but the overall physiological condition of an organism, for example, is not particularly rational and cannot be comprehended by anyone limiting his thinking to linear logic. (e.g. there is no logic in balancing hunger and thirst, sleep or sex. These are drives or states by which competing physiological systems cooperate and compensate with each other to maintain the dynamic imbalance we call life). The best that can be done in analyzing such phenomena is to use polygraphs to provide data for statistical models which allow us to predict the probability of normal activity. In fact, approximation is the best way to represent matters of such uncertain complexity.

This basic principle is even more important when one attempts to understand human behavior. Behavior is very much a compromise phenomenon. It may be analyzed logically, but as a functional whole, it is comprehensible only in terms of relationships among interacting systems. Only by accepting a compromise model of the human being in all its inconsistent ineptitude based on misperceptions of the environment can one begin to understand what being human means because the world cannot be separated from our perception of it [10]. Although we gather a lot of information, it is all imperfect [11] because errors are inherent in subjective observation and inextricably bound up in the nature of human knowledge [12]. We also ignore a lot and may even be pointedly agnostic in matters of great importance to us simply because our schema directs us to be ourselves. Likewise, the information people possess may be used inappropriately because of perception management by others or because certain behavioral patterns are preprogrammed into or excluded from the response repertoire. This is as human as it is stupid.

As all indications are that there was and now is more than enough stupidity to go around, to the extent that the past is a guide to the future, we should expect stupidity to continue to be our constant companion as the vast swamp of history unfolds. Certainly, it has been an integral component of Western Civilization since the beginning. The ancient Greeks indicated their firsthand familiarity with it when they formulated Cassandra's Curse [13]: that those who prophesy the truth will not be believed. There have been numerous examples throughout history of accurate warnings wasted (e.g. Stalin ignoring warnings of the German invasion of Russia in WWII) because recipients were not disposed to accommodate their beliefs to better information. Alternatively, most Cassandras are considered Chicken Littles who happened to be right.

In his last plays, Euripides paired moral evil with folly and asserted that people would have to confront both as part of their being [14], but we have been very reluctant to do so. The problem seems to be that however brilliant the human mind may be in other ways, it is not geared to compensate for its own deficiencies. The reason for this is that cognitive deficiencies (which take the form of opposition to intellectual integrity) are expressions of the psychosocial dimension of life. It is this which shapes the schema as an individual becomes a member of a reference group.

The condemnation of idealism is a constant theme coursing through the history of Western stupidity. Socrates was a case study in the stupidity of civil obedience. As a dying refutation of the theory of cognitive dissonance, Christ was crucified for living up to ideals. Jan Huss was a religious reformer burned at the stake as a heretic in 1415 [15]. As a Lutheran Catholic [16], Giordano Bruno was perhaps a little too philosophical a philosopher to have profited from Huss's experience and so followed his fate in 1600. [1]

Not long thereafter, Galileo was forced, under threat of torture, to disavow the truth about motion in the solar system.

As shameful as all this was, it is embarrassing to note that for all our sophistication and technological expertise, contemporary civilization is at least as morally retarded and ethically handicapped as any that ever existed. In this sense, there has been at best negative progress in Western history. Worse yet, there is no prospect for anything better because, with our churches attended by “Seventh Day Christians”, no one in the research/dollar oriented educational establishment is remotely aware of the problem, much less addressing the issue that we need more virtue along with more accurately perceived knowledge. Unfortunately, just as science and technology have made wars inhumanly brutal beyond comprehension, so have mass media dehumanized us [17] to the point of, at best, numbed indifference. [2] In that context, education is not an amassing of facts but a matter of how they are gathered and processed [18], a matter of learning how rather than what to think. Intelligence is a matter of modifying an operative schema according to new, valid, relevant data: the open-to-new-info fox rather than the schema-bound hedgehog model [19].

We, like all others, are still imprisoned in our belief systems. For millennia, Western Civilization was enslaved by its belief in God. Her death in the eighteenth century was followed by a period of enlightened rationalism [20] when Europeans sank by their own encumbering bootstraps into revolutions and intercontinental wars. [3] During the nineteenth century, Darwin seemed to suggest that, although people could modify their environment, that which was innate would remain beyond human control [21]. The only thing people could do about their stupidity was to ignore it or label it something else and make light of it. [4]

Around the turn of the 20th century, Dr. Freud reinforced Darwin, brought us back full circle to Euripides–and reconfirmed the Skeptics' contention that we cannot know anything [22], much less ourselves–by burying the controlling forces of human motivation deep within the mind/soul (psyche), far beyond the good intentions of rational will [23]. Now the suggestion is that the controlling power of humanity is not within us but around us: not in our natural environment but in our culture. Human nature is not coded into DNA: it is structured by schemas which shape behavior by the way people use language to structure perceptions and explain their lives to themselves. In addition, social norms reinforce culturally correct convention as the truth is sacrificed for control.

Sometimes not only truth is sacrificed but the general supporting culture as well. This willingness to write off one's extended human environment for the benefit of the self-aggrandizing in-group is most obvious in the mighty. In America of the recent past, the Johnson administration made this point clear in a backhanded way with its occasional lapses into realism about the grave political and moral ramifications that escalating the Vietnam War would have on the country. These moments of temporary lucidity served to underline the sad fact that the Johnson clique was quite willing to sacrifice national and party interests for the sake of presidential image. The exceptional moments of brilliant insight contrasted starkly with the prevailing mood of gloomy fantasy and served to demonstrate only that every silver lining has a cloud.

This catchy image expresses little more than that two contrasting trends have coexisted throughout history. One is the tendency of people to accept their fate, the other is the tendency to rebel against it, and the history of Italy in the 20th century provides examples of both trends. On Sicily, the Mafia (certainly one of the most successful organizations ever) flourishes among people pretty much resigned to accept it as a fact of life and death. On the mainland, the glory of a mad egocentric was doomed by his magnificent stupidity. Mussolini personified a fool rebelling against the limitations of his world. For example, his population policy-Brats for Glory-made the Catholic Church look like the Institute for Planned Parenthood. No leader could survive such reckless disregard for the realities of resources, no matter how charismatic he might be [24].

REFORMERS

Whatever its superficial appeal, the missionary complex is often darkened by a deliberate effort to create fate. Those determined to remake the world in their own image cannot accept the stupidity of the world as it is: they feel compelled to add to it. In 1961, the Kennedy administration suffered a crusading compulsion to guide the Vietnamese away from their own objectives and toward those of American policy [25]. This mission was doomed because we could not perceive the native anti-colonial sentiments as anything but Communist threats to democratic capitalism.

In the case of Vietnam, no amount of information would serve to reform American reformers. The efforts of the American intelligence community to gather data were generally quite successful. The problem was that policy makers had closed their minds to the evidence and its implications [26]. American stupidity in Vietnam was not founded on agnosticism or ignorance but misinterpretation—a determined refusal by those in power to acknowledge as valid any views conflicting with the prevailing official misperceptions which confused nationalism with Communism.

When events fail to confirm beliefs, the mental condition of cognitive dissonance exists while some adjustment of or to the incoming data can be made. Usually, in the face of challenge, the schema becomes rigid [27] and data conflicting with it are sacrificed for the sake of emotional and ideological stability. During the Johnson years, the administration was frozen in a dream world of political dissonance completely at odds with clear evidence that official policies were not just ineffective but counter-productive. As would happen again five years later, it remained for the media and the people to save the country from the government, since those loyal to the President[1] had become incapable of making objective assessments of the effects of their actions on the real world [28].

CRISES

It is noteworthy that the Kennedy team liked to refer to themselves as "Crisis managers" [29], and in fact JFK noted that “Great crises produce great men” [30], so one might wonder if he created crises to find out how great he was. Indeed, his Undersecretary of State Chester Bowles noted the men in Kennedy’s inner circle were looking for a chance to prove their muscle [31]. In a similar vein, before becoming President, Richard Nixon wrote a book which covered six crises up to that time and then furthered the theme in his presidency [32] by overreaching himself. From Theodore Roosevelt to Bush II, our presidents have openly expressed a desire to test their mettle in a national crisis like a war [33] to discover how much control they have, what their limits are and who they are.[2] Much as they like to pose as macho gunslingers, the leaders who are most successful in resolving crises are those who cope best with ignorance and error [34]. Indeed, successful crisis managers are often mentally unbalanced [35]. Too often, rulers give themselves the choice of the disastrous or the incorrigible and then choose both.

Since many of the major, specific problems cum crises confronting contemporary civilizations are not cultural universals, they should be (theoretically at least) solvable. Poverty, racism, sexism, family disorganization, political exploitation, ideological oppression and war are not defining characteristics of the human condition: they are all products of certain circumstances which could be altered [36]. Whether they will be altered or not is the solemn matter we address here [37]. The great human tragedy is that we know which conditions to alter and how to alter them in order to eliminate most of the concerns mentioned above. Nevertheless, we fail to do so because our macho leaders are too stupid to adapt to the novel conditions they create because their schemas keep them from finding new ways to look at the latest concocted versions of age-old problems.

SCIENCE

Indeed, it is the mark of a truly wise person to be able to put himself in his own place-to view the world accurately from his own perspective. Making due allowances for one's own values permits accuracy in perception so that behavior may be based on relevant considerations. However, it is most difficult for people to penetrate their religious myths, comprehend their plight and then apply their cognitive skills objectively so as to deal successfully with their problems. In fact, what we need is as little humanity as possible in the process of gathering and analyzing data and as much as possible in the technological process of applying knowledge and understanding. Generally, however, there is no clear distinction between the two processes of gathering and using information. As we interact with our environment, we monitor the results of our behavior, apply what we know toward the solution of problems and then, presumably, learn how effective we have been in achieving our preset goals.

There is something of a God/Satan, Yin/Yang dichotomy here. On a given issue at a given time, people are on the side of learning and correcting or the side of ignorance and error. Unfortunately, society is better set up to learn what it believes than how ineffective it is. If lessons of life cannot be massaged into conformity with ideology, they will be rejected for the good of the directing schema. It was this very human commitment of cultural priests to theological consistency which led such stubborn visionaries as Huss and Bruno to the stake and Galileo to humiliation [38].

There is nothing so unnerving for established powers as having their assumptions challenged, but challenging assumptions has been the stock in trade of great scientific revolutionaries throughout the ages. Copernicus was the first. In fact, the term "Revolutionary" is derived from his notion that the earth revolves around the sun. A major step in the development of this insight was his realization that the prevailing astronomical assumption of his day-that the earth was a fixed point around which everything else rotated-was just a subjective view which everyone took for granted [39]. Although this view was fundamentally incorrect, it was part of an ideology which was considered a consistent whole by the religious establishment. An attack on any part was construed as an attack on Christianity in general and was met with determined resistance in the form of extraneous criticisms. Basically, the gist of the counter-argument was that the earth had to be the center of the universe because that was where God obviously would have placed the home of important creatures like us.

Although objective observations and rational theories count for little when one attempts to refute the absurdities which sustain the power structure, science can help us understand our universe and our place in it. As a heuristic device, it has been remarkably successful, but as both a schema breaker and maker, its potential was and is invariably affected by the human need for a positive self-image. This may very much affect the selection of research projects and evaluation of gathered data. The ubiquitous and eternal human reluctance to know who we really are is, nevertheless, yielding to behavioral scientists committed to finding out. Naturally, the success of science has often been at the expense of those wishful fantasies which stifled our cognitive development for centuries. The essence of this uphill battle was expressed in Voltaire’s canny aphorism, “It is difficult to free fools from the chains they revere” [40].

Science nevertheless dealt humanity’s narcissistic reverence three[3] devastating blows courtesy of Copernicus, Darwin and Freud [41]. In all three cases, scientific explanations were resented and resisted by all those who favored the more flattering established notions that we were rational beings created by God and placed in the center of Her universe. The scientific theories survived despite the fact that they lacked any intrinsic appeal to people in love with their image if not themselves.

Scientific theories are appealing only in an icy, intellectual way. Science is really a system of established rules for gathering and analyzing data and is supposedly accepting of conclusions derived by the process regardless of their emotional appeal. In fact, the success of science is due to the institutional establishment of the means of schema formation so that the popularity of a particular interpretation will have minimal impact on the evaluation of experimental results. As the end of science is understanding, not the establishment or perpetuation of any particular idea, it is something of a contradictory institution, being set up to both confirm and refute prevailing theory. In an ideal form, science calls on practitioners to observe, compare, reason, reflect, and do all this without quarreling [42]. In their ideal moments, scientists are totally objective and replace bias, gut feeling and prejudice with accuracy and integrity.

Unfortunately, real scientists (and physicians) [43] are so human [44] that they quarrel in their subjective pettiness and embrace bias and prejudice to the detriment of accuracy and integrity. Thus, the personalized institution of science is too encrusted with stupidity for it to save people from themselves. A classic if benign and tragic example of scientific stupidity was the vacuum of indifference which greeted Mendel's work on the genetics of pea plants in 1865. Scientists of the day simply were not able to appreciate his findings. He would have had greater impact had he been able to generate some controversy. As it was, he simply presented his results, which were roundly ignored by everyone else as irrelevant to what they were doing and thinking [45]-until, thirty-five years later, biologists were doing things and thinking about problems which allowed them to appreciate the significance of his contribution [46].

In the same year as Mendel’s presentation, the general problem of “The tyranny [of a] hypothesis once formulated” [47] in science was aptly described by Claude Bernard, who wrote: “Men who have excessive faith in their theories or ideas are not only ill prepared for making discoveries, they also make very poor observations [because] they observe with a preconceived idea, and when they devise an experiment, they can see, in its results, only a confirmation of their theory... It happens, further, quite naturally that men who believe too firmly in their theories, do not believe enough in the theories of others...” [48].

Although it may take a generation or two, the scientific community will eventually catch up with its unnatural selection of ideas and correct the markedly unscientific tendency of laboratory priests to adhere to familiar theories. Their typically human reaction to a new revelation is to compare it to the prevailing schema (i.e., theory). However, this is usually a one-way process, with the entrenched explanation being accepted as the defining standard of reference to which data and new hypotheses are expected to conform. Scientists are quite human [49] in their propensity to protect their intellectual turf by ignoring or rejecting, for as long as possible, findings inconsistent with theories and paradigms [50] proposed and supported by themselves and the mighty (as an example, the battle over the germ theory in medicine pops to mind).

The bottom line is that science is really a religion, with the devoted believers sticking to dogma whether it makes good sense or not. Every difficulty is placed in the path of the heretic who dares challenge a sacred tenet of the faith [51]. Research which might disprove an established theory may not be funded because it would prove to be at best a waste of money and at worst rather disturbing. Experimental results which are at odds with holy expectation are scrutinized very carefully if they cannot be rejected outright. If valid, disquieting results still may not be published by journal editors indoctrinated in revered theory and likely to perceive novel findings only as threats deserving of suppression. If published, original interpretations and hypotheses can always be ignored by practitioners of old-time religion or slammed by biased reviewers.

It is rather tragic to note some of the works which were not even ignored. A case in point was John J Waterston's paper on the kinetic theory of gases. This was rejected by the Royal Society of London in 1845 as being "Nothing but nonsense". It was finally published in 1892 when it no longer posed a threat to the establishment [52]. One can but wonder how many possible advances in scientific thought have been thwarted by professionals imposing their conventional expectations and values onto new, inventive ideas. For all their training and sanctimonious pronouncements about objectivity, scientists are no more tolerant than any other people when their self-evident, hallowed, unassailably correct and righteous views are challenged.

In fact, a Young Turk starting out in science (or any other field for that matter) should keep to himself any good ideas of importance which might threaten to advance his profession or improve his reference group.[1] Specifically, the young scientist is well advised to begin his career by contributing some bricks of knowledge to the wall of ignorance. Initial research proposals should not challenge the major theories of the day.

Revolutionary/unpopular ideas should be put on "Hold" for a few years until the initiate is clearly a member of the club. Then he will have the prestige needed to get any offbeat ideas he might still entertain accepted for publication. Of course, this is all good advice well wasted on anyone cursed with an ounce of integrity or a passion for understanding [53].

It will come as no surprise to cynics that the payoff in science is not fallible knowledge [54] but money, with most going to those who publish most. These tend to be ideological conservatives who concoct little research projects and fulfill Daniel’s prophecy (12:4): “Many shall run to and fro and knowledge shall be increased” by adding factual bricks to walls which inhibit thinking [55]. Coupled with this tendency toward financial support for the orthodox is an organizational trend toward teamwork in research groups at the expense of the individual go-getter. The scientist is becoming decreasingly an independent thinker and increasingly a fellow worker who fits in and gets along with the team [56].

Outside the lab, the relationship of science to the community is supposed to be one of mutual support. Scientists are really specialists at converting money into culturally acceptable knowledge, so from the standpoint of the scientist, the need for political and financial support can be a corrupting influence on the questions which are asked and the answers which are permitted or encouraged [57]. Psychologist Danny Kahneman made this point with his otherwise pointless research on light intensity in stimulation of the human eye. It was relevant research only in the sense of providing journal editors with articles they would publish: psychologists were measuring things, regardless of how irrelevant they–the things or psychologists–were. [1] Commonplace observations like the fact that corrupt cops completely shatter the theory of legal dissonance go unpublished although/because they show how detached scientific psychological research can be from understanding dysfunctional human behavior [58].

Further, while anthropologists in the Third Reich produced research which supported policies of racial supremacy, Arthur R Jensen found modern American culture is generally hostile to his suggestion that there is a genetic basis for the difficulties black children have in academics [59]. Fortunately, our interest here is not in the validity of this or any other study but in the cultural attitudes which cause scientific findings to be embraced or rejected by a given group. It is simply irrelevant to evaluate the validity of a theory or research results in terms of the effects they might have on a particular social or institutional cause. Still, that is how societies judge which research programs will be supported and what results will be accepted.

With the overwhelming success of science in the last 200 years, metaphysics has fallen into disrepute, and one might be prompted to call for its revival but for the overwhelming success of science. After all, it is no longer proper to refer to innate behavior in a species if genes can be manipulated so as to remove a behavioral pattern from the species’ repertoire [60].

MENTAL HEALTH

In a reverse way, a basic concept like "Mental health" has been shaped by two stupid cultural factors. The first of these is confusion as to just what kind of world it is to which the mentally ill are supposed to adjust. The second is the tendency of those who use and define labels to take them and themselves a bit too seriously.

In terms of mental health and illness, the problem confronting all of us is that we are expected to adjust to an idiotic society. This is what makes the goal of most psychotherapy so tautologically self-defeating. As therapy proceeds, the individual is to become more self-accepting and more realistic [61], which is just fine, if the "Self" is grounded in reality. However, what is to be done when realism leads one to the overwhelming conclusion that the self is a bundle of contradictory needs and emotions, maniacal and depressing drives, brilliant and stupid ideas? The problem then becomes a matter of accepting this while trying to adjust to a world of contradictory organizations and institutions.

When the problem of adapting to ourselves boils down to the prayer of Alcoholics Anonymous to have the serenity to accept what cannot be changed, the courage to change what can be changed and the wisdom to know the difference, one is practically driven to drink. Even professional staff members in mental hospitals do not know the difference when they attempt to distinguish sane from normal people or sick from healthy patients or whatever it is they are so subjectively doing. David Rosenhan demonstrated this with a study in which seven "Normal" people managed to get themselves admitted to mental hospitals by complaining that they were hearing voices – which all but deaf people do whenever anyone speaks within earshot. After being diagnosed as schizophrenics, they behaved as normally as possible and never were detected as impostors by anyone on the staffs of the institutions, although some of the patients became suspicious [62]. The disturbing lesson of this study is that mental patients are better at diagnosis of mental illness than those trained to believe the labels they happen to stick on people.

By fiddling with definitions, epidemics can be created or ended [63] just as labels can be used not only to make people look sick to doctors but to cure them as well. This was accomplished by the trustees of the American Psychiatric Association on December 5, 1973, when they voted to remove homosexuality from the psychiatric classification system [64]. In one deft stroke, millions of people formerly labeled as mentally ill were deemed healthy. It is sad to note that the general medical community has not picked up on this method of defining health. Cancer is so common that it could be voted a "Normal condition" so that cancer victims would no longer be considered sick. Likewise, heart attacks are common enough to be voted "Routine events", so anyone suffering one would not have to be treated as ill. Perhaps the trustees of the Psychiatric Association should have their heads examined by some of their patients.

1973 was a good year for cosmetics, as that was also the year in which "Mental retardation" in America was redefined from an IQ of 85 and under to one of 70 and under. This automatically cured 14% of the population of retardation. Just think how normal homosexuals with IQs between 70 and 85 must have felt. It certainly must have been nice for such people to have been officially accepted within the bounds of general society. It is even more comforting to know that by this simple expedient of inflating standards, we could produce any number of geniuses desired [65]. All we would have to do is drop the defining IQ level of genius a number of points, and we would have that many more eggheads to create problems for us.
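That 14% figure is easy to check. The following is a minimal sketch, assuming only the conventional scaling of IQ scores as a normal distribution with mean 100 and standard deviation 15 (an assumption about test norms, not a claim from the article's sources):

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

MEAN, SD = 100.0, 15.0              # conventional IQ scaling (assumed here)
old_cutoff, new_cutoff = 85.0, 70.0

share_old = phi((old_cutoff - MEAN) / SD)   # ~15.9% fell at or below the old cutoff
share_new = phi((new_cutoff - MEAN) / SD)   # ~2.3% fall at or below the new one
print(f"Reclassified as normal: {share_old - share_new:.1%}")   # ~13.6%, roughly the 14% cited
```

Under those assumptions, the band between the two cutoffs holds about 13.6% of the population, which rounds to the 14% "cured" by the vote.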

PHYSICS

Physicists must envy social scientists who can cure the ill by voting and convert the abnormal to acceptability by redefining terms, but physical scientists are actually busy playing their own subjective games with nature. In fact, they have gone overboard to the point of giving up on "Reality" as a limiting condition in research. Modern physics is built on the principle that anyone's version of reality is so structured by his schema that there are as many realities as there are observers.

In the good old days of Newtonian mechanics, physicists worked in a precise, objective, determined universe which ran along like some grand celestial clock [66]. Quantum mechanics has changed the clock into something even Dali could not have recognized.[1]  The universe is now perceived as a grand expression of undetermined micro events from which humans can garner only generalized statistical conclusions [67].

If anything, modern physicists seem a bit too willing to dismiss reality as simply a field for subjective impressions. There is reason to believe physicists find what they seek because they create conditions which will produce results supporting their assumptions. If they expect to observe particles, they find particles; if waves, they get waves. Indeed, in Gottingen, Germany in the 1920's, the word was that on Monday, Wednesday and Friday, electrons behaved like particles; on Tuesday, Thursday and Saturday, like waves [68]. On Sunday, they may have rested, but when spinning, they dutifully spun as expected, with the axis of spin conforming to the investigator's prediction. Such findings prove little except that the subatomic world is as determinate as it is accommodating to experimenters.

There is, thus, an alternative explanation for what physicists assert are undetermined micro events – it may be they are determined by methods of investigation which are too crude to permit objective studies of subatomic phenomena. On the other hand, maybe this whole matter should be left to the experts: as Niels Bohr noted, anyone who was not made “Dizzy” by “Quantum weirdness” did not understand it [69]. However, one who did was Ernest Rutherford who, nevertheless, in 1933, predicted that no one should expect a practical application like a new source of energy from the atom [70]. He had been approached by Hungarian physicist Leo Szilard who tried to convince him of the potential of atomic energy, which Rutherford dismissed as the “Merest moonshine” [2] and threw Szilard out of his office [71].

Fortunately, about that time, the Last Reich was likewise turning away from what it termed “Jewish physics” [72]. On the other hand, some investigated phenomena were not determined or dismissed by biased means but created by wishful thinking. Such was the case in 1903, when René Blondlot at the University of Nancy discovered a new type of radiation called N-rays. This was all the more remarkable if not quasi-theological in that they did not exist at all. It was not a hoax – just an honest mistake. Even more remarkable, however, were the confirming reports which poured in from around the world, with some “Scientists” vying for the honor of prior discovery of the non-existent rays. Eventually, the self-corrective mechanisms of science caught up with its more imaginative practitioners, whose “Believing is seeing” perceptual apparatus created a great deal of embarrassment all to no effect [73].

On a grander scale, some physicists claim we live in not a universe but a multiverse and condemn their skeptical colleagues for their reluctance to consider possibilities which would require them to alter their conceptions of reality [74]. As Walter Lippmann put it, in another context, most thinking is done in a pseudo-environment (i.e., culture) in which a gang of brutal facts murders a beautiful theory [75]. Skeptics, in turn, properly counter that they do not want to waste their time and energy investigating things that do not exist. So, someone is being stupid, but we do not know who it is. Perhaps this is the place to note Dr. Nima Arkani-Hamed’s admonition that one should never underestimate the feebleness of the human imagination [76].

CAUSE

In that vein, it is one of the great ironies of science that, as David Hume pointed out, the assumption of cause/effect cannot be proven. Events may be correlated, but all a true scientist can assert is that under certain conditions, particular, specified couplings are more or less probable. For example, there is a good correlation between mangled bodies and car wrecks, but, even with the temporal sequence considered, which causes which cannot be proven. However, the fact that cause/effect cannot be proven logically does not mean there is no causation: It simply means there is a limit as to what logic can prove. [1] An unfortunate result of this philosophical limitation is a tendency to disregard the obvious fact and basic tenet of science that events are caused [77], much to the benefit of many of our stupider myths.

The classic case was the former controversy over the effects of tobacco on smokers. It is now generally conceded that smoking causes cancer, heart disease, strokes, bad breath, etc., but even when industry insiders knew better, spokespeople for tobacco made careers of pointing out the possibility that both smoking and ill health might be due to a common cause. For example, it was suggested hypertension might make one prone to disease (by lowering systemic resistance) and given to smoking cigarettes merely for the release of tension provided by oral gratification.
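The common-cause argument is easy to illustrate. In the toy simulation below (a sketch only; the variables and effect sizes are invented for illustration and are not drawn from the tobacco literature), a latent "tension" factor drives both smoking and disease, and the two end up solidly correlated even though neither causes the other:

```python
import random
from statistics import mean

random.seed(0)
N = 10_000

# A latent "tension" factor drives BOTH habits and health; there is no direct
# link between smoking and disease anywhere in this toy model.
tension = [random.gauss(0, 1) for _ in range(N)]
smoking = [t + random.gauss(0, 1) for t in tension]   # propensity to smoke
disease = [t + random.gauss(0, 1) for t in tension]   # susceptibility to illness

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Prints roughly 0.5: a substantial correlation produced entirely by the common cause.
print(f"r(smoking, disease) = {pearson(smoking, disease):.2f}")
```

The correlation alone cannot distinguish this contrived scenario from genuine causation; it took decades of converging evidence, not statistics in isolation, to settle the tobacco question.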

FREEDOM

Of all the myths which thrive in the face of scientific encroachments, however, "Free will" is the most fundamental [78]. Although study after study confirms that human behavior is caused by the interactions of the environment and people on each other, the Western belief in freedom cannot be laid to rest because Catholicism was based on freedom [79]. Although every successful experiment in the behavioral sciences theoretically undercuts the notion of freedom, there is no great soul-searching confrontation developing on this issue. Just as God adapted to Charles Darwin, freedom is adapting to Skinner and his behaviorist colleagues so that our traditional schema may be retained. In this great unacknowledged battle between science and our favorite secular religion, our cultural priests play "Mindguards", ignoring and interpreting accumulating evidence so as to minimize our awareness and anxiety as to just who and what we are. In this sense, the concept of human freedom is to the contemporary Western world what the Ptolemaic planetary system was to medieval culture: an idea that makes us feel important although it makes no sense whatsoever.

This myth is sustained not only by Spinozans, who hype understanding as freedom [80], and those who revel in the limitations of statistical analysis, but also by the Existential-Humanists. These are behavioral philosophers who sort of play the sad clowns in the circus of psychology. They are very much in love with the illusion of human freedom and feel the behaviorists' assertion that humans respond predictably to combinations of internal and environmental factors robs people of their dignity. They prefer to view people as creative and inherently good beings who are striving to fulfill their potential [81]. According to them, Adolf and Attila the Huns were essentially good people just trying to realize themselves. [1] Collectively, Existential-Humanists constitute the "Aw, shucks..." school of psychology, and if there ever was a religious myth masquerading as philosophical idiocy, this is it.

By way of sympathy, it should be noted that the existential philosophical movement developed not as an attempt to understand how people, during the horrors of World War II, could "Rise above themselves" and find meaning in their lives but actually as a rationale for how France rolled over and dropped dead when the Germans invaded [82]. Following in the mind prints of Nietzsche, Kierkegaard and Heidegger, Marxist Sartre made a career of finding catchy ways to tell people what they wanted to hear, thought they knew and already believed. As the scientific equivalent of spam, he emphasized the standard bromides of self-determination, choice and responsibility for rising above one’s immediate circumstances [83] - in his case, those of a swivel-eyed gnome [84]. The first sound bite philosopher [85], his maxim was "We are our choices", as not only existence but meaning was in our own minds. We alone are supposed to decide freely what our attitudes and behavior will be, based presumably on our own, individual life-determining experiences [86]. Fortunately for him, he spouted his supraintellectual nonsense in an age when the great philosophical minds were falling all over themselves and each other trying and failing to explain the inexplicable war psyche.

Specifically, his nonsense was nonscientific, subjective nonsense. It may have made good religion, but it was lousy philosophy and no kind of psychology at all. The phrase "Rise above themselves" may sound better in French, but it is meaningless in any language. Self-control, choice and responsibility are elements of a conceptual schema people can learn, and it may be awesome but not totally surprising that some people clung to them during their desperate experiences of the war [87] in their determination to rise above their circumstances. In toto, the Holocaust, the world’s poverty, starvation, sickness and cruelty led theologian Jackie Mason to conclude that if there is a God, he is an idiot [88]. As for people, a pat on their collective heads by Humanists might make them feel good, but it will not help anyone understand anything other than man’s alienation from alienation and that God thinks of man as slime [89].

The one thing we do not want to understand is that our self-control is so patently superficial. Self-control is the ability to change behavior by consciously directing actions to achieve specific goals. However, this whole notion is rendered irrelevant by the realization that the selection of the specific goals is predetermined by a person's cultural background and individual experience. As Tolstoy noted in an historical context, “Every action [of great men], that seems to them an act of their own free will, is in an historical sense not free at all, but in bondage to the whole course of previous history, and predestined from all eternity” [90], i.e., caused by one’s behavioral milieu and history, although it pleases him/her to think otherwise.

Although self-control may be illusory if not impossible, belief in it and in personal freedom have been, are and probably will continue to be major contributing factors to the normal malfunctioning of Western society. This belief-as opposed to a fatalistic belief in determinism-is easy for us to accept because the English language is so implicitly moral in connotation: e.g. courage, pride, innocence and guilt, and countless other words imply a sense of "Free responsibility" [91]. However, the concept of guilt, for example, is generally inappropriate in our legal system. We may punish those who break laws, but we should leave it at that – that their backgrounds, education, cultural values, personal development and circumstances made them rule breakers. In such cases, a coercive penal system may act to the benefit of society in general by being conducive to civil behavior, but the concept of guilt is irrelevant and unnecessary even in cases where intent is an essential factor.

Put the other way, determinism and amorality would be easy to accept if we lived in a simple universe in which A causes B and C causes D each and every time, and that is all there is to it. However, we live in incredibly complex multiverses which are so complicated that it is easy to slip in the notion that we are free to control ourselves. Nevertheless, the complexity of multiverses does not change their essential causal nature; it just makes figuring out causal relationships so difficult that we take preferential refuge in the smoke of probability.

More important, determinism invalidates an essential criterion for determining stupidity. "Knowing" and "Maladaptiveness" are much too subjective to be reliable guides to stupidity. Now we find that people cannot even choose to be stupid: they just are or are not stupid, depending on circumstances with which they interact but cannot control. Further, people usually are and wish to remain unaware of them and thus may unwittingly create more problems than they solve while trying deliberately to achieve their subconsciously determined goals [92]. One of the major problems with people, of course, is the selection of their subconscious goal of finding meaning in their lives. As apologist Jackie Mason (mentioned above) so scatologically put it, “Subconsciously, [1] you know you’re full of shit” [93]. Oy!

MORALITY

Nevertheless, and as nonsensical as it seems, there remains a moral dimension to Western stupidity simply because of our ability-imperfect though it may be-to anticipate results of our actions. We must accept responsibility for our behavior, regardless of external and subconscious factors, when we knowingly, wittingly and consciously direct our behavior toward certain ends. That places a moral burden on us to be accountable for the future because our actions cause actions of people affected by or aware of them [94]. We must transcend our past in order to promote a better future for everyone.

The Western ethic based on individual responsibility is simply our specific form of the universal human requisite for a moral code. Although the particular code will differ from group to group, within the microcosm of a given society, its system of ethics has significance and meaning. Every group has behavioral guidelines-both formal laws and informal norms and morals. All of these systems reflect the cultural imperative of people to pass judgment upon each other and their id-driven selves.

The odd thing is that we are so often “Wrong”-that is, we are stupid according to our own standards of judgment. Often, we are wrong because we really cannot perceive what is right or wrong when we are actively and emotionally involved in a situation. The cause of this perceptual difficulty obviously is that we have schemas which guide the misapplication of misinformation by misconstruing our behavioral context.

It is all too human to know better but still do something wrong sometimes just to get away with it. The drug addict knows what his habit costs him day to day, just as we all know the price of deficit spending in terms of both personal credit cards and the national debt. Nevertheless, to the extent that personal and official stupidity of the future will be the result of conscious, unethical efforts on our part to permit our schemas to keep us unaware of the dangers our behavior poses for us, we will be stupid for the worst of all reasons: because we want to be.

One of the reasons people seem to want to be stupid is that they are trying to achieve inappropriate goals rather than those defined by society. For example, a public official may indulge in graft for his own short-term best interest and counter to his role of public trustee. Likewise, your "Pig" policeman may eschew law and order for the immediate satisfaction of pushing around some hapless soul. On the other hand, and on a grander scale, the Watergate and Vietnam debacles might not have occurred had the irresponsible megalomaniacs involved restricted themselves to acts which were both legal and conscionable.

More to the point, the presidents and their advisors would have fared better had they limited their behavior to what the average American considered conscionable. The real problem with the insiders of both the Johnson and Nixon administrations – as well as those involved in the Irangate scandal under Reagan, who had great difficulty admitting what happened [95] – was that they considered their actions conscionable. According to their standards of evaluation (covered by such catch labels as "National security" and "Executive privilege"), their behavior was at least acceptable if not correct. Even more telling, in the case of the Watergate cover-up, acts known to be illegal were not considered illegal. Instead, they were simply deemed political tricks or public relations ploys [96]. Somehow, the country managed to survive those leaders who considered their acts to be both legal and moral and were sure no one would catch them anyway.

Not so with 43 and the abuses of human rights which characterize our war on terror. America had indeed always stood for something – justice. In particular, Washington, D.C. had led the fight for freedom of slaves and promoted civil rights for minorities. After 9/11, that changed. To our shame we abandoned our commitment to human rights, embraced torture [1] and made a mockery of the principle of habeas corpus – the right of a detainee to appear before a judge. We abandoned that legal principle by unlawfully detaining suspects indefinitely, without charge and without recourse to our legal system. In this context, if we stand for anything, it is hypocrisy [97].

In their sordid way, Nixon's advisors and 43's advisors were simply striking examples of people who let loyalty to a person or reference group replace intellectual honesty as a higher form of morality. Followers and members may prove their loyalty and gain the immediate social reward of group support by inventing, falsifying and distorting information. In such cases, personal integrity is not so much sacrificed as it is redefined by group values, which become the standards for judging everyone and everything. Members may come to believe in their leader or reference group with religious devotion to the point that even attempts to improve him or it may be construed as attacks. [2] Of course, anyone who questions group assumptions or actually subscribes to the explicit values of the general culture is regarded as a heretic and treated as an outsider. A whistle blower who asserts that any leader or organization that suppresses truth and punishes virtue is not worthy of loyalty is rare enough, but rarer still is the whistle blower who insists the community leaders abide by the rules and laws they are supposed to be embodying, living by and, in the case of the police, enforcing.

Those who simply get fed up with the whole scene and process [3] should appreciate the timeless comment “I have become tired of hypocrisy, stupidity, gross arbitrariness, and of our bowing and scraping, dodging and hair-splitting over words” made by capitalist Karl Marx [98] as he departed over-regulated Germany eventually for overrated Britain. Alternatively, the current non-debate contrasting virtue with vice calls to mind an argument between two color blind people about red and green. The sad fact is, republics thrive on virtuous citizens and leaders and are currently getting fewer and fewer of both.

HUMANITY

It is sad enough that stupidity is built into the human condition by language and social reinforcement. Much of this is effected subconsciously and must be accepted as a given of human life. However, if we have contributed anything to the cosmic design of stupidity, it is that we have converted innocent animal stupidity into conscious immorality. In the zoological kingdom, concentrated neural systems (i.e., brains) have always blocked relevancies and some have paired irrelevancies. We have compounded subjective stupidity with rational, arbitrary invented irrelevancies as we engage in calculated efforts to be unfair and dishonest. When lying and distorting information became a conscious, witting effort, stupidity became a problem with a moral dimension. [1] We became the first and only species to take pride in and credit for knowingly blundering into disaster after disaster. If we can survive ourselves, stupidity is all but assured of a bright future by leaders who insult our intelligence in order to gain support for their nefarious schemes by making themselves appear sanctified and righteous while being vague about dysfunctional specifics.

TECHNOLOGY

As disturbing as it is that any leader presumes to play God, it is all the more disturbing in a culture which has coupled the most awesome technology with a general indifference toward the human problems that technology creates. In the simple world of the !Kung tribes, the technology of bows and arrows and spears is complemented by knowledge of the total environment [99]. In the sophisticated world of modern, computerized stupidity, technology is the environment. We have created an artificial, shallow cyber verse grounded in instantaneous appeal. We no longer need wisdom, community or enlightenment: all we need is information [100] in predigestible bytes. We believe culture floats above and independent of nature: Telephones call each other up, machines talk to each other, computers amuse themselves with chess matches, and the robots are delighted, as evidenced by canned laughter.

While we glory in our fallible hardware, what has become of people? They starve by the millions in Africa while we marvel at the focused, technical quality of the pictures of their misery [2] on newscasts. Our slums are accepted as givens, our prisons are filled beyond capacity, and our children are spaced out on drugs and idiotic ’puter games. Only fools like I believe the truth will save us in an age when fake news can be tweeted or twittered or whatever [101].

These are but some examples of a general and disturbing trend in the world today. Clearly, our cultural compromise between technology and humanology is imbalanced. Not only the individual but humanity itself is obsolete. [1] In the American political tradition, there is an amusing myth that the government exists for the people. In our technological tradition, we do not even have such a myth. We exist for our machines-not the other way around. We do not have computers, they have us.

As a cultural force, technology is narrowing and dehumanizing in its methodology. Ironically, the “Transhuman” movement aims at embracing technology to the extent that we transcend our humanity [102], and in one sense it is already triumphant. We call it “a robot”: as engineers create robots, we are culturally becoming them.

Generally, however, technology is very effective in its limited range, but computers tend to limit the range of those devoted to them. Although the scientific method in the form of the social sciences has been successfully applied to human affairs, this success has been confined to what we can learn about ourselves-which is all science can and should do anyway. What we do with the knowledge we gain from science is another matter entirely and it is on this point that we are floundering. The problem is that all our scientific and technological know-how and knowledge, all our machines and computers cannot tell us what we should do. Scientific methods may project what results we can expect if we select a particular course of action, but that is not the same as indicating whether we should or should not do it, meaning science is not a directing religion [103].

ETHICS

Thus, our faith in and commitment to scientific research are misplaced because no amount of information will make us better people. No amount of data would have made Hitler or Nixon better leaders: more knowledge might have made them more efficient but not better. Hence, at the most basic and general level, the crisis in Western Civilization is due not to a need for more knowledge and research data but to a failure of our ethics of action and shortcomings of our informational morality, i.e., our stupidity. This problem reveals the limitations inherent in game theory, which explicitly ignores non-quantifiable values and ethics but can predict what one can get and how to get it [104].
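
To make the author's point about game theory concrete, the sketch below is a hypothetical illustration (not drawn from the article or its sources) of a textbook best-response calculation: given a payoff matrix and a belief about the opponent, it returns whichever action has the highest expected payoff. All labels and numbers are invented; the thing to notice is that no variable anywhere in the calculation stands for fairness, honesty or any other ethical value.

```python
# Minimal sketch of a game-theoretic best-response calculation.
# Hypothetical payoffs for a one-shot, prisoner's-dilemma-style choice;
# the numbers are illustrative only.
payoffs = {
    # (my_action, their_action): my_payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,   # exploiting the other side pays best
    ("defect",    "defect"):    1,
}

def best_response(belief_they_cooperate: float) -> str:
    """Return the action with the highest expected payoff, given a belief
    (0.0-1.0) that the opponent will cooperate. Nothing here asks whether
    an action is decent - only what it is expected to yield."""
    def expected(action: str) -> float:
        return (belief_they_cooperate * payoffs[(action, "cooperate")]
                + (1 - belief_they_cooperate) * payoffs[(action, "defect")])
    return max(("cooperate", "defect"), key=expected)

if __name__ == "__main__":
    for p in (0.0, 0.5, 1.0):
        print(f"belief opponent cooperates = {p}: play {best_response(p)}")
```

Under any belief about the opponent, the routine recommends defection, which is exactly the kind of answer game theory is built to give: what one can get and how to get it, with ethics nowhere in the ledger.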

As for our ethics of action, there is good news and bad news. Currently, we are in a phase of consolidating, organizing and institutionalizing stupidity – concentrating it in a technoelitist computer/communication complex whose effects are broadly distributed democratically to the long-suffering public. Even if, as Bronx philosopher Lawrence Berra posited, “The future ain’t what it used to be” [105] (and probably never was), we should expect more and more planned stupidity, as centralized, standardized bureaucrats base blunders and design disasters upon our ever-deepening foundation of amorality and for an ever-expanding base of dependent victims. If this is not to be, if this prognosis proves false, it will be because we finally recognize that science and technology are ethically barren and morally neutral. That is the good news.

The bad news is that through poverty, disease, illiteracy and stupidity, our used and abused moral values have provided the ethical guidelines, rationalizations and justifications for all the political corruption, social ills and idiotic wars we have forced ourselves to endure [106]. If the past is any guide, it will not be much of a guide for the future. Nazis aside, if our past (im)morality brought us to the brink of nuclear war, created slums, fostered crime, starvation and misery, how will those values help us cope with the new challenges technology imposes upon us? Now that we can transplant organs, someone has to decide when the donor is “Ready”. Euthanasia will become more common as an alternative release from the lingering suffering of those afflicted with incurable but non-fatal conditions which modern medicine can prolong indefinitely. Unfortunately, none of the advances of modern science have reduced poverty or even provided a basic safety net for the poor and needy [107].

If we are to maintain our historic tradition of stupidity, we are going to have to devote more time and energy to planning our immorality. Further, we will have to develop new forms of stupidity to prevent us from coping with the problems we are creating: our worship of money as the standard of success, special interests, sensationalist media and ideological attack groups [108] pop to mind. Futurists should note that stupidity will be one of the more dynamic fields in our coming cultural development, but one of our great saving hopes will be that we avoid programming our cultural biases into objective computers.

Nevertheless, genetic engineering and eugenics are but two fields which will pose increasing problems for society. For years, people have selectively bred birds, dogs, horses and cattle and peas, beans and melons. Is it or is it not stupid to improve our own species by similar methods? Whatever the answer, it is based on morality, if not intelligence. Historically, the answer has been “No” to the suggestion of selective human breeding. It is considered immoral to use the knowledge we possess in this field to improve ourselves by deliberate planning. The basic problem is that of finding broad agreement as to just what would constitute “Improvement” other than the universally accepted “More people like me”. While this is a difficult matter, it should be possible to find some general principles to which everyone would agree, if we were to but try.

Such principles will themselves be determined by the values used when we judge the application of knowledge in the cause of humanity. Unfortunately, “Sci-tech” will not be much help in this regard and, as suggested earlier, may even be limiting the ethical development of Western culture by its very success with “Quantitative reductionism” [109]. Science helps us learn about nature by breaking down complex phenomena into measurable units. However, all the essential complexities of biological and social systems do not lend themselves to being reduced to quantifiable bits of information. Nor do these complexities of life readily lend themselves to the stepwise logic of linear analysis by logical extension. Computers which can help analyze simultaneous interactions of phenomena help overcome this limitation of dealing with, at most, one thing at a time, but they are limited to handling information which can be reduced to computable form. Nerds are understandably unable to grasp the importance of the human element which cannot be translated into their language.

A tragic example of this was the failure of the American military to calculate morale as a factor in the Vietnam War. Secretary of Defense Robert McNamara was the consummate computer man-the avatar of the Harvard Business School and Rand Corporation approach to war [110] as “Rational gamesmanship” [111], and everything that could be measured was measured and analyzed: number of troops, amount of equipment, tons of supplies, etc. Further, every quantitative measure from 1962 on showed we were winning. Not only on print-outs but in reality as well, the government forces enjoyed a ten-to-one ratio in everything calculable over the Vietcong. However, all this was outweighed by the fact that the North Vietnamese were more than ten times as willing to fight the war as the American public was to support it. The inability of the Pentagon to appreciate this unquantifiable but crucial element of motivation and incorporate it into its intensely statistical schema was a major contributing cause of the American loss of the conflict [112]. Worse yet, the objective/bean-counting school of intelligence concluded that human factors like motivation and determination should be excluded from military evaluations because including them can lead to false conclusions: but so can excluding them. [1] The basic problem with including intangibles is that the analysts tend to make judgments about them which confirm their beliefs [113]. In a nutshell, we are pretty good at knowing what potential adversaries can do but not at knowing what they will do [114].

Looking forward in more general terms, it is with discouraged resignation that we must accept our fate of a future shaped by all kinds of stupidity, with the specific dominant form depending primarily on the evolving relationship of technology to the society it creates. As life becomes reduced from DNA to a silicon chip, knowledge will become an end in itself to the point that society is dehumanized. The best that we might hope for is that scientists will honor their own ethics for gathering information and, secondarily, promote a humane technology when applying knowledge to the creation of problems. In any event, stupidity will be an integral part of the compromise condition of social life in the future, with its precise role and style being shaped by what we expect of and can accept from ourselves.

QUESTIONS

If we want to make our expectations a bit more realistic, there are a number of questions we can ask when analyzing our stupid behavior. Was it an individual or group effort? Who made the crucial decision? Did he know what he was doing? Was he trying to find out? What made it a defective decision? Did external conditions contribute to make it stupid?

ANSWERS

For such clear-cut questions, there are ambiguous answers. To the extent that stupidity is behavioral irrelevance, one source may be found in the subjectivity of decision makers. They may be excessively concerned with their own status (maintaining or advancing it), the social cohesion of their reference group or denigrating the opposition. An example of this last point was Nazi propaganda minister Joseph Goebbels’ somewhat biased assessment of Winston Churchill as belonging “To that category of criminals who in their stupid brutality are unteachable” [115]. On the other hand, one can be stupid by pushing objectivity to the point of social disruption, as when pointing out the silliness of someone else's religion. Normally, stupidity tailored to enhance a leader's status or a group's cohesion tends to be conservative, with relief provided when some crackpot devises a new and better way to be idiotic.

To the extent that future stupidity will be caused by individuals making defective decisions, an understanding of individual stupidity will help us appreciate the irrationality of the years ahead. Unlike corporations and institutions, which are incapable of feelings, a person may be emotionally subjective. Further, an individual invariably has developed blind spots due to the specifics of his particular life experiences. In that regard, we might pay heed to the self-insight of a Jane Austen character who realized the way to overcome folly was to compensate for prejudice, avoid ignorance and apply logic [116]. Finally, shortcomings of information processing by any single mind prevent an individual from comprehending all the complexities of any but the simplest decisions [117].

Unfortunately, the growing trend toward institutionalized stupidity will not change the essential fact that it will still be stupidity. Only the type will change somewhat, as the past predominance of individual idiocy created by enthusiastic bursts of brilliant lunacy will be overshadowed by plodding committees which can draw upon the collective and compounded drawbacks and limitations of their members. While being unemotional may encourage institutional logic, the resultant rationality may run over people's feelings and moral sensibilities. Finally, perceiving the complexities of a situation could lead to no decision at all. After all, very few policies are pleasing to everyone. At some point, action must be taken, and it is stupefying to analyze and debate every possible ramification of each and every possible act under all possible contingencies.

Nor will computers really help us avoid stupidity in the future. First, much of the human experience cannot be programmed. Feelings, hopes and emotions are not reducible to quantified bits of computerese. Neither can any program work out all possible costs and benefits of contemplated actions. Worse yet, although computers can help us deal accurately with the data we deem relevant to a given problem, those data suffer deification once they are entered. Computers have become our sacred cows, and their contents and pronouncements are now holy beyond critique. Disputes are considered settled when the computer speaks and, to many priests in the field, the “Garbage in-garbage out” problem is secondary to the systematic processing of garbage. Indeed, a variant of Gresham’s Law is taking hold in data processing as good info on the net is being rendered suspect if not driven out by that which is dubious if not bad. [1] Seldom do we find computer operators enthusiastically rushing to make corrections of either input or programs so that they can improve the quality of their faster and faster garbage. Electronic garbage clearly poses a true threat to democracy [118] by converting it into an idiocracy via invalidity promoted by anonymity: that is, invalidity varies directly with anonymity, while Ethics × Anonymity = K. Just as the ethical controls of small-town 19th century America were overcome by the PR imagery of the big city of the 20th century, they will be all but eliminated by the anonymity of the internet of the 21st. [2] So in the future, we can expect more, higher-quality stupidity as the vagueness of the net promotes invalidity by, and reduces ethical constraints on, all-too-human users, who become more emotional, irrational, mobbish and susceptible to nonsense [119].
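
Taken literally (and the relation above is offered only half-seriously), Ethics × Anonymity = K describes an inverse proportionality. A minimal rendering of that reading, with E for ethics, A for anonymity and K an assumed constant, is:

\[
E \times A = K \;\Longrightarrow\; E = \frac{K}{A}, \qquad \frac{dE}{dA} = -\frac{K}{A^{2}} < 0,
\]

so every doubling of anonymity halves the ethical constraint, which restates as arithmetic the claim that invalidity varies directly with anonymity.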

Those humans who use language will find that it, too, will make its contribution to stupidity in the future. That is, as long as we communicate by language and use it to construct our cognitive schemas, we will misperceive events, misinterpret data and misapply principles. After all, that is what being human is all about, and, ultimately, like it or not, we must confront ourselves.

If computers and language need an ally in frustrating informational morality, the basic commitment of people to preserve, protect and defend their self-images and reference groups all but guarantees stupidity a rosy future. While there is no iron law of stupidity which dictates that people have to wreck their own civilizations, it just turns out that they always do, and nothing in the contemporary world indicates that we are going to be exceptions to this rule. It might help if we established an "Information ethic" (i.e., let the facts speak), but society probably could not stand the strain of cognitive consistency and cultural honesty. In this vein, Auguste Comte proposed objectivity as the antidote to bias [120], but it is a difficult if not impossible ideal to achieve. A demand for intellectual integrity might reduce the establishment's abusive application of information possessed, but no one can claim to be objective: every schema is a composite synthesis of the obliquely interrelated worlds of factual data [121], social cohesion and political power, so any information ethic must be intrinsically compromised by our inherent subjectivity. We are becoming detached from reality as facts become overwhelmed by image and spin.

Although philosopher Ron White has opined, “You can’t fix stupid” [122], we can try. The devout can ask God for wisdom, and it will be given to them [123]. For all others, at the personal level, idiocy often will result from the misguided efforts of people trying to avoid the psychic discomfort of cognitive dissonance, e.g. a whistle-blower expecting to be rewarded for telling the truth. [3] It is unfortunate that an adopted strategy of deliberate ignorance usually results in a maladaptive schema being preserved at the expense of crucial, adaptive information about the environment. An unfortunate, specific example of this occurred in the White House during the depths of the Lewinsky scandal when Hillary Clinton stopped reading the newspapers and watching TV. Her rationale was that the stories were written by hacks trashing her beloved Bill. The result of this emotion-saving tactic was that she knew less about what was going on than did her staff, friends and enemies [124].

While induced ignorance may reduce cognitive dissonance, it is hardly an effective coping strategy: When warnings go unheeded and facts are ignored, behavior becomes less and less relevant to reality. Alternatively, there may be a Cartesian duality of mind and behavior, with an action program at odds with an individual’s philosophy of life. [4] The two exist side by side with no attendant emotional tension whatsoever. People quite comfortably do one thing and believe another. An emotional conflict comes, however, when someone else points out the mismatch between creed and deed or, worst of all, lives up to the creed. That can really get true believers bent out of shape.

Although education should and could be a way to develop in people effective ways of dealing with such challenges to their schemas, the history of modern science indicates that academic training as currently practiced is no guarantee against stupidity. In fact, most educational institutions seem to inculcate specific belief systems rather than train people to find their own when traditional schemas bring themselves and their devotees into intellectual disrepute. This process is even more pronounced in the social sciences, which make a point of informing students of the way society is not [125] but the way someone wishes it were – with all democratic subgroups being presumably equal in ability.

However, once one’s schema is created, it should always be subject to confirmation, revision or refutation, but when a positive feedback system takes insulation to an extreme, the belief system is beyond reach: it is then a case of terminal stupidity immune to correction. Those who wish to fix Mr. White’s stupid should bear in mind the apocryphal epitaph of a prescient, if unlucky, fictional late skydiver whose final observation was, “A parachute is like the human mind, it functions best when open” [126]. However, the best guideline for preventing stupidity is: The more you want something to be true, the harder you should check to see that it is. Beware wishful thinking [5] because the easiest person to fool is you [127]. As Eleanor Roosevelt noted, “It is funny how hard it is to be honest with yourself and not be swayed by your own wishes...” [128].

Another way to inhibit individual stupidity is for a person to anticipate what others would think of him/her if they knew what (s)he was about to do. If someone is trying to get away with something, it probably is something (s)he should not be thinking about, much less doing, in the first place.

In institutions, stupidity can be inhibited by breaking down the isolation and compensating for the bias which contribute so much to the idiocy of groupthink, as was exemplified by FDR’s management style: he usually reached decisions via huddles and bull sessions in which he welcomed opposing views, encouraged dissent and deliberately brought outsiders into the mix [129]. However decisions are reached, they can be corrected if someone in the power web [6] will recognize the advisability of so doing and act accordingly [130]. Unfortunately, egos often trip on images, as devotees become so committed to a course of action blessed by the leader that perception of any obviously negative consequences is inhibited to the degree that mere knowledge of unexpected problems cannot induce a reconsideration of the matter.

The well-known Peter Principle, whereby people are promoted one grade above their ability to function effectively [131], is another example of institutional stupidity which can be corrected if options remain open. If promotions were made provisional for a short period of time so that performance could be evaluated, there might be fewer people put permanently into positions beyond their abilities to cope. (The military's "Brevet" promotional system is a step in this direction, but it is usually used to save money by paying a person the salary of his lower rank while he assumes greater responsibilities). There would be, of course, some loss of face for any workers who were returned to their earlier positions after provisional trials, but their short-term disappointment would be the price paid for finding the level at which they could function effectively. In the long run, this probably would be best for everyone-the more efficient institution as well as the crestfallen individuals.

The likelihood of institutional stupidity can also be reduced if decision makers acknowledge the dangers or negative consequences which may result from their actions [132]. There often is a tendency to minimize risks (and maximize possible rewards) inherent in a given policy. This penchant to ignore risks can be an open invitation to disaster. Risks should be neither minimized nor maximized, just recognized. They should be given probability and severity ratings which then should be multiplied, with the product granted due consideration in ensuing deliberations.
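
As a worked illustration of the rating scheme just described (the risk names, probabilities and severities below are hypothetical, not taken from any source), a few lines of Python suffice to multiply each risk's probability by its severity and rank the products for deliberation:

```python
# Hypothetical risk register: probability (0-1) and severity (1-10) per risk.
risks = {
    "public backlash": {"probability": 0.6, "severity": 4},
    "budget overrun":  {"probability": 0.3, "severity": 7},
    "legal challenge": {"probability": 0.1, "severity": 9},
}

# Score = probability x severity; rank highest first for discussion.
for name, r in sorted(risks.items(),
                      key=lambda kv: kv[1]["probability"] * kv[1]["severity"],
                      reverse=True):
    score = r["probability"] * r["severity"]
    print(f"{name}: expected-impact score = {score:.2f}")
```

The arithmetic is trivial; the discipline lies in writing both ratings down and letting their product, rather than wishful optimism, set the order of discussion.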

In addition, an explicit discussion of the morality of a contemplated act might also prevent stupid behavior. [7] Along with the legal, political, economic and social consequences of an act, its morality should be considered as well [133]. Morality is an underlying, defining factor in any controversial endeavor, and anyone who ignores it may well wish (s)he had not.

In fact, many people might have profited from the advice a former country lawyer gave a young man starting out in the legal profession. "Strive to be an honest lawyer," he said. "If you can't be an honest lawyer, be honest." The former country lawyer was, of course, Abraham Lincoln, who made something of a career out of embodying the mores of society beyond petty role playing.

At the institutional level, the best way to promote honesty is publicity. As awkward as it would be for major political and corporate figures to conduct their business in goldfish bowls, steps in that direction would induce them to behave responsibly when considering the ethically gathered data at hand and attendant options. Certainly, we would not have had the Bay of Pigs and Vietnam fiascos or the Watergate and the Iran-Contra scandals had our politicos been required to plan their policies under public scrutiny.

As idealistic as it is to suggest our leaders abide by God's first words "Let there be light" [134], it is reasonable to contend that there is an inverse correlation between public knowledge and leaders' immorality if not stupidity, and its vehicle is “Secrecy”: It allows intelligent people to continue counter-productivity unabated because it is undetectable. The less known about what they are doing, the more likely they are to indulge in corruption. Conversely, the more known, the less likely they will do something naughty.☺ To put it a third way, you will have corruption to the same degree to which you permit secrecy. Although an information ethic may not be a cure-all for stupidity, it could be a first line of defense against public malfeasance: It should start with the people's right to know what their governments are doing and end by promoting official responsibility and efficiency via accountability.

Finally, although we must use language, jargon should be avoided or at least minimized. The use of loaded terms can distort judgment by inducing a sense of self-righteous overconfidence in one’s cause. On the other hand, when referring to an enemy, use of respectful labels may prevent an underestimation of the opponents' capacities and abilities.

While it is nice to have a list of strategies for reducing the role of stupidity in the future, it is appropriate to ask whether it is really possible for any organization to protect itself from something so characteristically human. Is it possible, for example, to have an intelligent, enlightened government? The answer is, apparently, "Not really" – although Ashoka came pretty close to being the ideal philosopher-king in India in the third century B.C. He ruled benevolently by persuasion rather than coercion, helped the weak and poor, encouraged religious and ethnic diversity and protected the rights of animals [135]. Muslims claim Mohammed led one [136], and the enlightened Frederick the Great of 18th century Prussia deserves honorable mention as the nearest Europe ever saw to this laudable but apparently unobtainable ideal [137]. Hitler, inter alia, thought himself one [138]. Oy!

More broadly, Plato's ideal of breeding and nurturing an elite of rational and wise leaders for government service was never tried in its purest form, although he gave it an aborted shot in Syracuse and the Roman emperors in the second century [139] took steps in that direction. Indeed, Emperor Gallienus (ca. 250) agreed to help pagan philosopher and court favorite Plotinus establish an ideal Platonopolis to be governed on the principles of the Republic, but he later withdrew his consent, perhaps to prevent a failure [140]. Later, the medieval Catholic Church came pretty close to the order Plato envisaged [141], and China's Mandarins were justly noted primarily for their platonic stability coupled with intellectual sterility. However, their failure to deal with the novelties of science, technology and industry (combined with corruption and inefficiency) contributed to their eventual deterioration and demise in decadent, effete incompetence [142]. (By way of pragmatic contrast, Plato is faintly alive if ill in contemporary China, North Korea and Cuba, all of which owe their ideological foundation to Marx and Engels, who promulgated the perfect, centralized society).

If we are justly concerned with how to reduce stupidity, we must also consider by how much it should be reduced. After all, stupidity lets us live together and express ourselves through our influence on others while making it difficult for us to live with each other. The stupidest thing of all would be to eliminate it completely, as we would soon be breaking down and/or at each other’s throats in rages of realism, rationality and cognitive consonance.

Thus, future reformers who aspire to get people to live up to or (in the idiotic terms of the Existentialists) transcend their potential would do well to bear in mind the plight of Nietzsche's Superman as well as that of Nietzsche himself. In order to be happy, his Superman had to overcome his Will to Power-that obsession with dominance and control which usually nets disdain and resentment. [8] In short, he had to overcome himself [143]. As the mighty rarely choose to exercise this option, idealists may have to accept that, for better and worse, people are going to be themselves.

The question for Americans in the 21st century is which “Self” will we be? In answering this question, we should bear in mind an observation made in 1835 that the source of genius and power is righteousness: we are great to the degree that we are good, and if America ceases to be good, it will cease to be great [144].

As for Nietzsche, he was happiest when he was clearly insane - thus calling into question “Happiness” as the goal of enlightenment. The Will to Truth was for him, and still is, something of a terrifying, destructive principle [145] to disturbed or distracted minds because in such cases we do not want to know who, what and why we are. Generally, like the physicists who create the phenomena they want to observe, we create the perceptions we want to hold. Thus, we have to ask whether a self-induced myth could be better for society than the truth [146]. Be it to our advantage or not, in this context, we must bear in mind we can create anything we want out of human nature because it is, and we are, so subjective [147].

It is this subjectivity which makes operational definitions of stupidity (and so many other behavioral attributes-aggression, intelligence, etc.) so elusive. While there is a temptation to throw up our hands in dismay at the confusion inherent in the ambiguity of subjective phenomena, we must realize that this is not an end point for us but a beginning. It is our subjectivity which makes it not only possible but probable that we can and will be stupid, since it permits us to rationalize our behavior with unlikely explanations which are psychologically gratifying and socially acceptable. In our relativistic culture, both our abstract art and absurd theater indicate that the answer to the human riddle is not that there is no answer but that there is any answer we want.

The questions are rather simple and clear: what is the purpose of life? Why are we here? Who are we? What are we? Why are we? The answer we need is not framed in terms of material progress but in an acknowledgment of our moral obligation to bring comfort to the human soul. In the presence of scientific advancements, our commitment must be to match technological progress with concomitant developments in moral philosophy and spiritual evolution [148], both of which appear to be on hold for lack of a recognized standard of judgment. For philosophy in general, the best system is the one which helps us learn better than any of the others. Generally, we learn enough not to repeat the exact same mistake, but we rarely learn enough to avoid making similar (i.e., new) mistakes [149].

Overall, the bottom line is that there is no bottom line—just a number of fuzzy borders, each of which provides a suitable perspective for a given person or reference group. Subjectivity has triumphed, and all things being considered equal (whether they are or not), humanity will both flourish and fail, and knowing how will not matter a bit any more than knowing about gravity will keep one from falling.

As for stupidity, we may as well accept it as a limitation language and society place on our intellect. Like death, which clears away the old for the new, stupidity is an incongruity inherent in life. Humans have certainly developed, expanded and promoted it. We do so each time we endeavor to construct yet another flimsy utopia while doing our worst to keep the power structure ever more entrenched within itself. What we cannot acknowledge is that ideals are the rainbows of life - only the pursuit of illusion is real. It is an ultimate of human stupidity that we must seek what we cannot attain in a manner which prevents us from attaining it. Essentially, culture is Lamarckian, with competing factors selected not by nature but by the prevailing culture itself, which acts in its short-term best interest at the expense of long-term adaptability. Specifically, it prevents us from recognizing that there are eternal things that matter and that we can know what they are [150].

In The Ascent of Man (1973), Bronowski expands upon what he calls “The human dilemma”. He laments our “Deliberate deafness to suffering” and “The betrayal of the human spirit: the assertion of dogma which closes the mind...” Unfortunately, “Deliberate deafness” does not cover the efforts by terrorists and military maniacs to cause murder and mayhem among civilian populations of cultures with whom they differ. Further, “The assertion of dogma which closes the mind” is not a betrayal of the human spirit: It is the human spirit. The human dilemma is that we have to overcome ourselves–our psychic desire to be unique and our social need to belong.

What we need in order to survive are systems which are not too systematic. They must be both functional and credible. This is the great human trade-off. A functional system is unacceptable to superego standards which require inspiring beliefs. On the other hand, trying to live according to a static moral system leads to insurmountable, pragmatic problems. Fortunately, stupidity permits us a compromise blending so that we can entertain beliefs in all kinds of self-contradictory, conflicting but cognitively consonant systems while coping with some problems, creating others and ignoring our essence. [9]

While we are capable of all kinds of compromise blendings, that needed for survival is fortunately not one of trading off the conflicting opposites of nihilists and realists. Nihilists aver there exists no eternal standard by which to judge and live, while traditional realists have argued society must be based on some universal, absolute truth which invariably turns out to be a subjective viewpoint at best. What we all need is an eternal moral compounded from a respect for intellectual ethics and a commitment to human rights [151]. Such a moral would be compatible with academic integrity, consistent with individual dignity and based on the compelling need for people to find meaning in their lives [152]. The search for this moral can unify the scientist, theologian, politician, artist and conscientious citizen [153], and it will lead to a schema which is so broad as to include moderation within itself.

Equally compelling as the search for a universal moral is the need to find meaning for the deaths squandered in all the bloody crusades of the past and the lives wasted in the quiet despair of our ghettos. If experience gives us the opportunity and wisdom the ability to recognize mistakes when we repeat them, we must be very stupid indeed to have been party to so much carnage and indifference so that we can create more. Part of this is what we make difficult for ourselves to learn, part is what we do not want to know [154]. In honor of all those who have been sacrificed so pointlessly at the altar of stupidity, we can resurrect meaning by reflecting on our behavior and re-examining ourselves. There is no shame in admitting that our basic flaw is our need to belong-that our greatest fault is our need to be loved.

1.       Cline EH (2014) 1177 B.C.: The Year Civilization Collapsed. NJ: Princeton University Press, Princeton, p: 178.

2.       Cleva GD (1989) Henry Kissinger and the American Approach to Foreign Policy. Lewisburg, PA: Bucknell University Press, p: 40.

3.       Maier CS (2016) Once Within Borders. Cambridge, MA: Harvard University Press, p: 19.

4.       Flanagan L (1998) Ancient Ireland.

5.       Novella S (2012) Your Deceptive Mind. Chantilly, VA: The Great Courses, p: 3.

6.       James W (1907) Pragmatism.

7.       Chambers W (1952) Witness. Washington DC, Regency Gateway, p: 506.

8.       Mishra P (2017) Age of Anger. New York, Farrar, Straus and Giroux, pp: 114-115.

9.       Fishbein M, Ajzen I (1975) Belief, Attitude, Intention and Behavior. Reading MA: Addison-Wesley.

10.    Bronowski J (1973). The Ascent of Man. Boston, MA: Little, Brown & Co, p: 364.

11.    Ibid. 353.

12.    Ibid. 358 and 360 (Heisenberg☺).

13.    Virgil P (1967) 19 B.C. Aeneid, Bk. II. A beautiful if mythical daughter of the King of Troy, Cassandra caught the attention of Apollo, who bestowed on her the gift of prophesy. In: Varki and Brower, p: 237.

14.    Tuchman BW (1984) The March of Folly. NY: Knopf, p: 381.

15.    Durant W (1957) The Reformation. New York: Simon and Schuster, p: 167.

16.    Cahill T (2013) Heretics and Heroes. New York: Anchor Books, p: 296.

17.    Arnn LP (2015) Churchill’s Trial. Nashville, TN: Nelson Books, p: 119.

18.    Dawkins R (2017) Science in the Soul. New York: Penguin Random House, p: 4.

19.    Tetlock PE (2005) Expert Political Judgment. Princeton, NJ: Princeton University Press.

20.    Tuchman. op. cit. 381.

21.    Mosley N (1979) Human beings desire happiness. In: The Encyclopedia of Delusions edited by Duncan R and Weston-Smith M. Wallaby, New York, p: 216.

22.    Hecht J (2003) Doubt: A History. New York: Harper One, p: 448.

23.    Tuchman. op. cit. 382.

24.    Pitkin WB (1932) A Short Introduction to the History of Human Stupidity. New York: Simon and Schuster, pp: 258-263.

25.    Lansdale E (1960) In the Pentagon Papers: History of United States Decision making on Vietnam. Senator Gravel edition. Boston, MA. 1971-1972. 2: 440-441.

26.    Tuchman. op. cit. 234.

27.    Ibid. 347.

28.    Kennan G (1969) Quoted in The Limits of Intervention by Hoopes T. McKay, New York, p: 178.

29.    Tuchman. op. cit. 286.

30.    Kennedy JF (1956) Profiles in Courage. New York: Harper & Row.

31.    Bowles C (Undated) Quoted on page 19 of Paterson’s Kennedy’s Quest.

32.    Haldeman HR (1970) The Haldeman Diaries: Inside the Nixon White House. New York: Putnam.

33.    Murphy C (2007) Are We Rome? Houghton Mifflin, New York. On Bush: Mayer J, The Dark Side, p: 48.

34.    Brooks D (2009) Wise Muddling Through. The New York Times.

35.    Ghaemi N (2012) A First-Rate Madness. New York: Penguin, p: 2.

36.    Hammond PB (1978) An Introduction of Social and Cultural Anthropology. New York: 2nd Edn. Macmillan, pp: 27-28.

37.    Bell D (1975) The end of American exceptionalism. The Public Interest, p: 197.

38.    Cottrell A (1979) Science is objective. In: Duncan and Weston-Smith. p: 162.

39.    Ibid. 163.

40.    Voltaire. Le dîner du comte de Boulainvilliers (1767) Troisième Entretien (The dinner of count Boulainvilliers. Third talk).

41.    Skinner BF (1971) Beyond Freedom and Dignity. New York: Bantam Books, p: 202.

42.    Wright FC (1840) “Divisions of Knowledge” in Life, Letters and Lectures, 1834/1844. Arno, New York.

43.    Francis G (2015) Adventures in Being Human, NY: Basic Books, p: 67.

44.    Mooney C (2005) The Republican War on Science. Basic Books New York, p: 14.

45.    Sturtevant A (1965) A History of Genetics. New York: Harper & Row, p: 2.

46.    Gillispie C (1960) The Edge of Objectivity. Princeton, NJ: Princeton University Press, p: 336.

47.    Friedman M (1969) Pathogenesis of Coronary Artery Disease. New York: McGraw-Hill, p: 77.

48.    Bernard C (1865) Introduction to the Study of Experimental Medicine. New York: Trans. H. Green. Dover Publications, p: 38.

49.    Degler C (1991) In Search of Human Nature. New York: Oxford University Press, viii.

50.    Moran G (1998) Silencing Scientists and Scholars in Other Fields. CT: Ablex, Greenwich, p: 12.

51.    Lyttleton R (1979) The Gold Effect. In: Duncan and Weston-Smith, p: 189.

52.    Ibid. 194. Specific values change, but the schematic model defines humanity.

53.    Strutt J () (Lord Rayleigh) Quoted by Lyttleton. p: 194.

54.    Campbell D (1966) Pattern Matching as an Essential in Distal Knowing. In: The Psychology of Egon Brunswik edited by Hammond KR. New York: Holt, Rinehart & Winston, p: 103.

55.    Jaroff L (1991) Crisis in the Labs. Time 138: 46, 48-49.

56.    Wiener N () Quoted by Lyttleton. p: 196.

57.    Soley L (1995) Leasing the Ivory Tower. Boston, MA: South End Press, Chapter 7.

58.    Kahneman D (1967) Quoted in Lewis. p: 130.

59.    Jensen AR (1969) How much can we boost IQ and scholastic achievement? Harvard Educational Review 39: 1-123.

60.    Erickson SA (2006) Philosophy as a Guide to Living. Chantilly, VA: The Teaching Company, Part 2, p: 39.

61.    Smith R, Sarason I, Sarason B (1982) Psychology: The Frontiers of Behavior. New York: Harper & Row, p: 426.

62.    Rosenhan DL (1973) On being sane in insane places. Science 179: 250-258.

63.    Schweikart L (2010) 7 Events that Made America America. New York: Sentinel, p: 98.

64.    Smith et al., op. cit. 305.

65.    Allen R (2011) How to be a Genius? London: Collins & Brown.

66.    Crier C (2005) Contempt: How the Right is Wronging American Justice? New York: Rugged Land, p: 136.

67.    Cottrell. op. cit. 167.

68.    Bronowski. op. cit. 364.

69.    Watson P (2010) The German Genius. New York: Harper Collins, p: 483.

70.    Snow CP (1981) The Physicists. Little. Boston: Brown & Company, pp: 90-91.

71.    Bernstein BJ (1987) Toward a Livable World. Edited by Hawkins HS, Greb GA and Szilard GS. Cambridge, MA: MIT Press, p: xxvi.

72.    Gillon S (2006) 10 Days that Unexpectedly Changed America. New York: Broadway Books, p: 178.

73.    Klotz IM (1980) The N-Ray affair. Sci Am 242: 168-175.

74.    Hopkins B, Rainey C (2003) Sight Unseen. New York: Atria, p: 50.

75.    Lippmann W (1922) Public Opinion. Quoted on p. 118 of Bartels, who spends 398 other pages proving statistically that John and Jane Q. Public are misinformed, ignorant fools who do not know what is in their own best interest.

76.    Hopkins and Rainey. op. cit. 49.

77.    Laplace P (1812) Théorie Analytique des Probabilités. In: Oeuvres Complètes de Laplace. Paris: Gauthier-Villars, 1878-1912.

78.    Buckle HT (1857) History of Civilization in England. London: Parker JW, p: 27.

79.    Jones GS (2016) Karl Marx: Greatness and Illusion. Cambridge, MA: Harvard University Press, p: 73.

80.    Hodgkinson TW, Bergh H (2015) How to Sound Cultured? Berkeley, CA: Publishers Group West, p: 13.

81.    Rogers CR (1981) A Way of Being. Boston, MA: Houghton Mifflin.

82.    Scruton R (2015) Fools, Frauds and Firebrands. New York: Bloomsbury, p: 80.

83.    Sartre JP (1943) Being and Nothingness. In: Trans Barnes H, Methuen, London.

84.    Hodgkinson and Bergh. op. cit. 47.

85.    Watson P (2001) A Terrible Beauty: The People and Ideas That Shaped the Modern Mind–A History. San Diego, CA: Phoenix, p: 410.

86.    Sartre JP (1975) “Existentialism” in Existentialism from Dostoevsky to Sartre edited by W. Kaufman. New York: Meridian. Abnorm Soc Psychol 38: 417-452.

87.    Mason J (1988) Taped interview with E. Wohlgelernter.

88.    Mason J Oral History Library. (American Jewish Committee, New York Public Library), Tape 3: 199.

89.    Sartre JP (1943) Being and Nothingness. (Trans. Barnes H, Methuen, London, England, pp: 606-611.)

90.    Tolstoy L (1867) War and Peace. Book IX, Chap. 1.

91.    Diderot D (1989) Réfutation d’Helvétius in Diderot, Oeuvres. Philosophie 1: 855.

92.    Smith, et al. op. cit. 437.

93.    Mason J (2003) Undated quotation on page 466 of Hecht J. Doubt: A History. Harper One, New York.

94.    Peterson P (2009) Personal communication from Mike Wood. p: 203.

95.    Brooks D (2005) Patching the Presidency. The New York Times, A33.

96.    Janis IL (1982) Groupthink. Boston, MA: Houghton Mifflin, p: 227.

97.    Clarke R (2008) Your Government Failed You. New York: HarperCollins. p: 254-255.

98.    Marx K (1845) Letter to A. Ruge. Karl Marx and Friedreich Engels Collected Works. 1: 397-398.

99.    Hammond. op. cit. 37.

100.Mamet D (1996) Make-Believe: Essays and Remembrances. Boston, MA: Faber, pp: 159-160.

101.Luce E (2017) The Retreat of Western Liberalism. New York: Atlantic Monthly Press, p: 130.

102.Bartlett J (2017) Radicals Chasing Utopia. New York: Perseus, Chapter 1.

103.James PD (1989) Devices and Desires. Boston, MA: Hall GK, p: 76.

104.McDonald J (1950) Strategy in Poker, Business and War. New York: Norton, p: 126.

105.Berra L (1965) Quoted on page 46 of Firestein.

106.Howe F (1920) Quoted in America Enters the World by Page Smith. New York: McGraw-Hill, p: 744.

107.Churchill W (1901) Unpublished book review. Quoted on p. 187 of Arnn. Oddly, Winny seemed to justify the need for a social program to aid the poor so the underclass could provide soldiers for an imperial army, and apparently he was semi-serious. He advocated a better sewer system so the poor would have something to fight for. (Ibid. Arnn. 188.)

108.Butler-Bowdon T (2015) 50 Politics Classics. Boston, MA: Nicholas Brealey, p: 314.

109.Koestler A (1979) Nothing but...? In Duncan and Weston-Smith. op. cit. p: 200.

110.LeMay C (1997) Cited on p. 20 of H. McMaster’s Dereliction of Duty. New York: HarperCollins.

111.Potter P (1965) In a speech at the Washington Monument. WDC.

112.Halberstam D (1969) The Best and the Brightest. New York: Random House, p: 464.

113.Bolger D (2015) Why We Lost? New York: Houghton Mifflin Harcourt, p: xxviii.

114.Ibid. xxvii.

115.Holland J (2010) The Battle of Britain. New York: St. Martin’s Griffin, p: 577.

116.Austen J (1813) Pride and Prejudice. 2: xiii.

117.Janis. op. cit. 2.

118.Belin D (1996) Testimony to the Assassination Records Review Board. Los Angeles, CA. Quoted on page 375 of Ayton.

119.Bartlett J (2017) Radicals Chasing Utopia. New York: Perseus, p: 2.

120.Comte A (1865) A General View of Positivism.

121.Morey D (2017) Quoted on p. 26 of M. Lewis’s The Undoing Project. Norton, New York.

122.White R (2008) Undated but ca. Redneck Comedy Tour. Nevertheless, Louisiana governor Republican Bobby Jindal called upon the GOP somehow to stop being the “Party of Stupid”, although he did not specify exactly how to do so.

123.James 1:5. Pre-50 A.D.

124.Bernstein C (2007) A Woman in Charge. New York: Knopf, pp: 507, 524.

125.Reagan R (1969) Testimony to the House Subcommittee on Education and Labor. WDC.

126.Dewar Sir J. Ca (1920) A twist on a remark attributed to him.

127.Clarke R, Eddy R (2017) Warnings. New York: Harper Collins, p: 240.

128.Lash J (1984) A World of Love. Garden City, NY: Doubleday, p: 150.

129.Shesol J (2010) Supreme Power. Norton, NY.  p: 272.

130.Janis. op. cit. 151.

131.Peter L, Hull R (1969) The Peter Principle. New York: William Morrow & Co.

132.Janis. op. cit. 148.

133.Ibid. 150.

134.Genesis. ca. 950 B.C. 1:3.

135.Sen A (2005) The Argumentative Indian: Writings on Indian History, Culture and Identity. London: Allen Lane.

136.Caryl C (2014) Strange Rebels. NY: Basic Books, p: 86.

137.Blanning T. (2007) The Pursuit of Glory. New York: Penguin, p: 623.

138.Range PR (2016) 1924: The Year That Made Hitler. New York: Little, Brown and Company, p: 234.

139.Durant W (1943) Caesar and Christ. New York: Simon and Schuster. Chap. XIX.

140.Ibid. 608.

141.Durant W (1926/1961) The Story of Philosophy. New York: Simon and Schuster, p: 34.

142.Durant W (1935) Our Oriental Heritage. New York: Simon and Schuster, p: 799-802.

143.Mosley. op. cit. 215.

144.de Tocqueville A (1835) Democracy in America. Trans. by Reeve H, ed. by Knopf PB, NY.

145.Kaufmann W (1968) Nietzsche: Philosopher, Psychologist, Antichrist. 3rd Edn. Vintage, New York, p: 358.

146.Glazer N (1995) Scientific Truth and the American Dilemma. The Bell Curve Wars: Race, Intelligence and the Future of America. Fraser S, ed. NY: Basic Books, p: 147.

147.Geertz C (1983) Local Knowledge.

148.Churchill W (1925) Mass Effects in Modern Life. Thoughts and Adventures, p: 295.

149.Tushnet M (2003) Defending Korematsu? Wisconsin Law Review, p: 292.

150.Erickson. op. cit. Part 1, p: 166.

151.Thoreau H (1849) Resistance to Civil Government.

152.Rue L (1994) By the Grace of Guile: The Role of Deception in Natural History and Human Affairs. New York: Oxford University Press, pp: 284-302.

153.Williamson R (1947) Anti-relativism. The Journal of Politics 9.

154.Bolger. op. cit. 419.