Credential Inflation Makes College Degree Not Worth The Cost

Belief in the value of college education was sacrosanct throughout most of the 20th century. In the early 2000s, the question began to be raised whether the payoff in terms of a better-paying job was worth the cost. For several generations it had been almost a taboo topic; but once out in the open, an increasing percentage of the US population has concluded that a college degree is not worth it.

The first big hit was the 2008 recession, when graduates found it hard to get jobs. But even as the economy recovered and grew, faith in college degrees continued to decline.

In 2013, 53% of the population—a slim majority—agreed that a 4-year degree gives “a better chance to get a good job and earn more income over their lifetime.” In 2023, education-believers had fallen to 42%, while 56% said it was not worth the cost. Both women and men had turned negative in the latest survey, even though women had overtaken men in college enrollments in previous decades. The youngest generation was the most negative: 60% of those aged 18-34. Not surprisingly: they are the ones who had to apply to dozens of schools, in a rat-race of test scores, scrambling for grades, and amassing extra-curricular activities; most did not get into their school of choice, while paying constantly rising tuition and fees, and remaining burdened with student-loan debt into middle age. Not to mention the near-impossibility of buying a house at hugely inflated prices, with many still living with their parents; meanwhile, all generations now agree that the young will not enjoy the standard of living of their parents.

The only demographic that still thinks college has career value is men with a college degree or higher who earn over $100,000 a year. They are the only winners in the tournament. Every level of education—high school, junior college, 4-year college, M.B.A., PhD, or professional credential in law, medicine, etc.—has value as an entry ticket to the next level of competition for credentials. The financial payoff comes when you get to the big time, the Final Four so to speak; striving through the lower levels is motivated by a combination of American cultural habits and wishful thinking.

The boom-or-bust pattern of rising education makes more sense in long-term perspective. For 100 years, the USA has led the world in the proportion of the population in schools at all levels. In 1900, 6% of the youth cohort finished high school, and less than 2% had a college degree. High school started taking off in the 1920s, and after a big push in the 1950s to keep kids in school, reached 77% in 1970. Like passing the baton, as high school became commonplace, college attendance rocketed, jumping to 53% at the end of the 1960s—there was a reason for all those student protests of the Sixties: students were suddenly a big slice of the American population. By 2017, 30% of those over age 24 had a college degree; another 27% had some years of college. It has been a long-standing pattern that only about half of all college students finish their degree—dropping out of college has always been prevalent, and still is.

The growing number of students at all levels has been a process of credential inflation. The value of any particular diploma—high school, college, M.A., PhD—is not constant; it depends on the labor market at the time, and on the amount of competition from others who have the same degree. In the 1930s, only 12% of employers required a college degree for managers; by the late 1960s, it was up to 40%. By the 1990s, an M.B.A. was the preferred degree for managerial employment; and even police departments were hiring college-educated cops. In other words, as college attendance has become almost as common as high school attendance, it no longer conveys much social status. To get ahead in the elite labor market, one needs advanced and specialized degrees. In the medical professions, the process of credential-seeking goes on past age 30; for scientists, a PhD needs to be supplemented by a couple of years in a post-doctoral fellowship, doing grunt-work in somebody else’s laboratory. In principle, credential inflation has no end in sight.

An educational diploma is like money: a piece of paper whose value depends inversely on how much of it is in circulation. In the monetary world, printing more money reduces its purchasing power. The same thing happens with turning out more educational credentials—with one important difference. Printing money is relatively cheap (and so is the equivalent process of changing banking policies so that more credit is issued). But minting a college degree is expensive: someone has to pay for the teachers, the administrators, the buildings, and whatever entertainments and luxuries (such as sports and student activities) the school offers—and which make up a big part of its attraction for American students. And all this degree-printing apparatus has become steadily more expensive over the decades, far outpacing monetary inflation since the 1980s. Colleges and universities (as well as high schools and elementary schools) keep increasing the proportion of administrators and staff. At the top end of the college market, the professors whose research gives the school its reputation command top salaries.

Credential-minting institutions have been able to charge whatever they can get away with, because of the high level of competition among students for admission. Not all families can afford it; but enough of them can that schools can charge many multiples of what they charged (in constant dollars) even 30 years ago. The result has been a huge expansion in student debt: averaging $38,000 among 45 million borrowers (roughly $1.7 trillion in all), and including 70% of all holders of B.A. degrees. Total student debt tripled between 2007 and 2022.

These three kinds of inflation reinforce each other: inflation in the amount of credential currency chasing jobs in the job market; inflation in the cost of getting a degree; and inflation in student debt. We could add grade inflation as a fourth part of the spiral: intensifying pressure to get into college, and if possible beyond, has motivated students to put pressure on their teachers to grade more easily; in public schools, to pass them along to the next grade no matter their performance (holding students back a grade, which was common in the early 1900s, has virtually disappeared); in college, GPA-striving has a similar effect. Grades are higher than ever, but the measured value of the contents of education, ranging from writing skills to how long the course material is remembered after the course is over, is low (Arum and Roksa 2011, 2014). College degrees are not only inflated as to job-purchasing power; they are also inflated as a measure of what skills they actually represent.

The remedies suggested for some of these problems, such as canceling student debt by government action, would temporarily relieve some ex-students of the burden of paying for not-so-valuable degrees. But canceling student debt would not solve the underlying dynamic of credential inflation; it would exacerbate it. If college education became free (either by government directly picking up the tab, or by canceling student debts), we can expect even more students to seek higher degrees. If 100% of the population has a college degree, its advantage on the labor market is exactly zero; you would have to get some further degree to get a competitive edge.

Scandals in college admissions are just one more sign of the pressures corroding the value of education. College employees collude with wealthy parents to fake athletic credentials, in a time when students apply to dozens of schools and even top grades don’t guarantee admission. Since athletics are a big part of schools’ prestige, and are considered a legitimate pathway to admission outside the grade-inflation tournament, it is hardly surprising that some try that side-door entry. There is not only grade inflation, but inflation in competition over the pseudo-credentials of extracurricular activities and community service. Efforts at increasing race and class equity in admissions increase the pressure among the affluent and the non-minority populations. Since sociological evidence shows that tests and grades favor children of the higher classes (whose families provide them with what Bourdieu called cultural capital), there are moves to eliminate test scores and/or grades as criteria of admission. What is left may be letters of recommendation and self-extolling essays—what we might call “rhetorical inflation”—plus skin color or other demographic markers; but the result will do nothing to reduce the inflation of credentials. The underlying hope is that giving everybody a college degree will somehow bring about social equality. In reality, it will just add another chapter to the history of credential inflation.

Except for the small percentage of really good students who will take the tournament all the way to the most advanced degrees and become well-paid scientists and professionals, the growing disillusionment with the value of college degrees will result in more and more people looking for alternative routes to making a living. The big fortunes of the last 40 years—the age of information technology—have been made by entrepreneurs who dropped out to pursue opportunities just opening up, instead of waiting to finish a degree. The path to fame and fortune is not monopolized by the education tournament. For the rest of us, finding more immediate ways of making a living (or living off someone else) will become more important.

P.S. The advent of Artificial Intelligence to write students’ papers, and of other AI to grade them (not to mention to write their application essays and to read them for admission), will do nothing to raise the honesty and status of the educational credential chase.

References

“More Say Colleges Aren’t Worth the Cost.” Wall Street Journal, April 1, 2023 (NORC-Wall Street Journal survey).

www.bestcolleges.com/research/average-student-loan-debt/

U.S. Bureau of the Census.

Randall Collins. 2019. The Credential Society. 2nd edition. New York: Columbia University Press.

Richard Arum and Josipa Roksa. 2011. Academically Adrift: Limited Learning on College Campuses. Chicago: University of Chicago Press.

Richard Arum and Josipa Roksa. 2014. Aspiring Adults Adrift: Tentative Transitions of College Graduates. Chicago: University of Chicago Press.

Three Police Tactics Led To Memphis Killing of Tyre Nichols

The fatal beating of Tyre Nichols by a group of Memphis police officers on January 7, 2023, shows the same patterns as other police atrocities.

 

Three police tactics and procedures seen in Memphis greatly increase the risk of cops becoming so aggressive and emotional that they lose self-control. The result is prolonged violence continuing long after the suspect is incapacitated; officers making frenzied, loud, even joyous noises, egging each other on; mocking the victim, joking, and bragging about the incident for almost an hour afterwards. These are all signs of collective adrenaline surge-- like a group of excited sports fans-- at a level where perception, cognition, and moral restraints are impaired.

 

The three factors are:

(1) Police anonymity: unmarked cars, no uniforms, wearing hoods, an ominous and threatening self-presentation.

(2) Large numbers of officers on the scene-- the crowd-multiplier of violence.

(3) Rumor transmission among police and support personnel, amplifying false beliefs about the dangerousness of the suspect.

 

(1) A pair of cops driving an unmarked car stop Tyre Nichols in the dark for an unspecified traffic violation-- "driving recklessly" in the initial report. It is a high-crime area in a city with a very high murder rate. The cops are part of a special unit, ominously titled SCORPION, proclaiming their intention to fight fire with fire. The officer who approaches Nichols' car (Haley) is wearing all-black clothes and a black hoodie, and displays no police insignia. *

 

* In a similar incident on January 4 (three nights earlier in the same neighbourhood), 22-year-old Monterrious Harris, while visiting a cousin, "was suddenly swarmed by a large group of assailants wearing black ski-masks, dressed in black clothing, brandishing guns and other weapons, hurling expletives and making threats to end his life if he did not exit his car." According to his lawsuit, "Harris thought the men were trying to rob him, and tried to back up his car... He then reluctantly exited with his hands raised and was grabbed, punched, kicked and assaulted for up to two minutes." He was arrested for being a convicted felon in possession of a handgun, criminal trespass, and evading arrest; the lawsuit accuses officers of fabricating the charges. [Associated Press, Feb. 9, 2023] Details are unverified at this time, but the incident suggests what an anonymous police stop by SCORPION looked like from the point of view of the victim.

 

Officer Haley did not have his body camera on, but he was on a phone call at the time of the stop and was overheard cursing at Nichols, without telling him why he was being stopped or that he was under arrest. Nichols initially would not leave his car. He had no police record, and was a FedEx worker on his way home from his shift. Other officers (already on the scene, or soon arriving) pulled him from the car and beat him. After he was subdued, an officer used a Taser on him. Nichols broke free (more on this below), setting off a chase on foot. He lived just a few blocks away, and according to his cries, was trying to reach his mother to protect him from the assault.

 

(2) There were at least two police cars at the initial traffic stop. This would be in keeping with SCORPION organization in 4-to-10 person teams. Officer Hemphill (the only white officer among those identified) drove with Haley, used a Taser on Nichols, and on a body camera recording is heard saying "I hope they stomp his ass." At the second scene, after Nichols is recaptured and severely beaten, there are at least 5 officers taking part, including Haley and several others from the traffic stop; plus further officers called to the scene. Video shows "a number of other officers standing around after the beating." Altogether thirteen persons have been charged: 10 Memphis police officers or sheriff's deputies, and 3 emergency medical technicians who connived with the assaulting officers, acting more like a cheering section.

 

Officers acted throughout as teams, pulling and restraining Nichols; egging each other on to further attacks; holding and moving him bodily into position for further beatings. Usually only two or three attacked at a time; but the crowd-multiplier increases with the number of bystanders, who provide vocal encouragement and heighten the emotional mood.

 

Look at the time-line: Nichols was stopped around 8 p.m. Haley pulls him from the car. Nichols says "I didn't do anything" as a group of officers begin to wrestle him to the ground. One officer yells "Tase him! Tase him!" Nichols calmly says, "OK, I'm on the ground." Video shows he is passive.  "You guys are doing a lot right now. I'm just trying to go home." Shortly after, he yells "Stop, I'm not doing anything." An officer fires a Taser while the others back off temporarily; Nichols breaks free and runs off. This enrages the cops, who chase after him, calling for more backup. They catch up with him a few blocks away (within a couple of minutes). A pole camera video shows "two officers standing over Nichols and striking him as he lies on the street. As he tries to get to his feet, a third officer kicks him in the head. Nichols resists the officers, and a fourth strikes him as he is brought to his feet. One of the officers then repeatedly swings and strikes Nichols in the head with his fist while other officers hold Nichols' arms back before he falls to the ground. Officers restrain his hands behind his back, then drag and prop him up beside a police vehicle." [WSJ, AP, NY Times, Jan. 28]

 

"Three officers surround Nichols as he lies in the street cornered between police cars with a fourth officer nearby. Two officers hold Nichols to the ground as he moves about, and then a third appears to kick him in the head. Nichols slumps more fully onto the pavement with all three officers surrounding him. The same officer kicks him again. The fourth officer then walks over, unfurls a baton and holds it up to shoulder level as two officers hold Nichols upright. "I'm going to baton the shit out of you," one officer can be heard saying. His body camera shows him raise his baton while at least one other officer holds Nichols. The officer strikes Nichols on the back with the baton three times. The other officers than hoist Nichols to his feet, with him flopping like a doll, barely able to stay upright. An officer than punches him in the face, as the officer with the baton continues to menace him. Nichols stumbles and turns, still held up by two officers. The officer who punched him then walks around to Nichols' front and punches him three more times. Then Nichols collapses.

 

"Two officers can then be seen atop Nichols on the ground, with a third nearby for about 40 seonds. Three more officers then run up and one can be seen kicking Nichols on the ground." [Bystanders joining in at the end.]

"Recording showed police beating Nichols for three minutes while screaming profanities throughout the attack."

 

In the aftermath, the cops are still pumped. "Videos showed officers leaving him on the pavement propped against a squad car as they fist-bumped and celebrated." A police call describing a "person who had been pepper-sprayed" led to emergency medical responders arriving about 10 minutes later (8:41 p.m.); the EMTs did little but join in the celebration, summoning an ambulance which arrived at 8:55 and left for the hospital at 9:08. Apparently they bought the cops' version of what happened. During this period "Haley took photos with his cell phone as [Nichols] lay propped against the police car, and sent them to other officers and a female acquaintance... Officers shouted profanities at Nichols, laughing after the beating, and bragging about their involvement." This was the same atmosphere as in the beating of Rodney King by the LAPD in 1991: 21 officers ringed the captured car, cheering while four of them did the beating. Driving back to the station, police radio traffic boasted "we really hit some home runs out there tonight, didn't we?" (Rodney King worked at Dodger Stadium.) [Collins 2008: 88-90]

 

(3) Rumor transmission in the police network:

 

In initial police reports "at least two officers said that Nichols tried to grab an officer's gun-- a claim for which there is no evidence, according to the documents, while leaving out details of the beating." (NYT Feb. 8, 2023)

 

This is a standard cliché. In the telling, it is typical to exaggerate the amount of threat posed by the suspect, if there is any hitch at all at the outset. In the same way, when large numbers of officers are called to a potential suicide-- a man threatening to jump from a freeway overpass, or holed up inside a house-- the report gets amplified as it goes around by radio traffic, dispatchers, and word of mouth to those called to the scene. A possible suicide attempt drops the "maybe" and adds the cliché that the suspect may be dangerous; morphing into armed and dangerous; morphing into armed and swearing not to go out without taking someone else with him. In 1998, a drunken white man sitting on a LA freeway ramp for an hour attracted dozens of police from various jurisdictions (highway patrol, town police forces, sheriff's deputies); during that time police radio dispatchers spread erroneous reports that he was firing at police helicopters and officers on the ground. They shot him 106 times, with many more bullets hitting houses blocks away. [Collins 2008: 113] This is another causal path by which calling large numbers of police (and for that matter, other support personnel) to the scene promotes police violence: larger numbers mean more links through which rumors can form.

 

Psychological experiments on messages repeated from one person to another find that the message loses all detail as it goes down the chain, turning into the most standard clichés. In a famous case in 2009, a Harvard professor, a black man, was dropped off at his home by a taxi; a "not sure if something is wrong" call-in by a passerby was transformed by the police dispatcher into two black men trying to break into a house; Prof. Gates became understandably upset and was arrested-- lucky for him he didn't get shot. [Collins 2022: 282-4] Whether the story that "he tried to grab an officer's gun" started from the beginning of the Nichols arrest is unclear-- the police were already primed to find a murderous suspect, get angry at any lack of cooperation, and become livid if someone tries to run away-- but the fact that the grab-the-gun story was stated by two or more officers suggests that it had emerged as the overarching story frame by the time the police and the EMTs were jovially celebrating.

 

The patterns found in the Memphis killing have been widely documented in violence research.

 

[1] The hangman phenomenon: Wearing hoods, masks, and other kinds of scary costumes is typical among mass rampage killers. The gunman who killed 12 and wounded 70 at a Batman movie in Aurora, Colorado in 2012 wore a Joker costume and opened fire under the cover of darkness [2020: 257-8]. Kids who shoot up schools often collect military equipment to wear, including shooting-range ear-plugs which create a feeling of isolation from the victims. [2020: 261-69] The underlying social psychology is that people find face-to-face contact with a victim disconcerting; above all, it is eye contact that attackers avoid, since it humanizes the encounter. Videos and photos of beatings during riots (whether by crowd-control forces, protesters, or hostile ethnic groups) show that victims are almost always turned away from their attackers; falling down in a frenzied demonstration acts like a trigger for attackers. [Collins 2008: 128-32; Nassauer 2019] Conversely, calmly facing one's potential attacker is the best way to fend off violence. Professional killers, such as the Mafia, deliberately attempt to take their victims from behind or when they are not looking. [2008: 239]

 

This is the hangman phenomenon: executions were traditionally carried out by executioners wearing hoods. Studies of military violence show that wearing a hood is associated with higher levels of violence and deliberate cruelty. [2008: 78-80] It is a way to avoid face-to-face intersubjectivity; when one's eyes are reduced to a little slit in face-covering darkness, the mutual exchange of emotions is cut off. The same psychological mechanism is found in the superior lethality of snipers operating through long-distance scopes-- the psychological security that the human victim is not looking back at you. [2008: 233-35] Ski masks, along with all-dark clothes, are used world-wide by "elite" police and military forces, essentially as a morale-booster and a deliberate attempt to terrify their victims. William James explained the psychology: just as running away triggers the emotion of being afraid, dressing oneself up in the paraphernalia of a frightening tough guy makes one feel arrogant and aggressive.

 

No doubt American cops who dress themselves in dark, frightening outfits think they are being cool (photos of FBI raids often show the same tough-cop fashion code). Cops don't want to be square; and in the antinomian youth culture of the past half-century, criminal styles, playful or otherwise, are the definition of cool. But today's police should be aware they are emulating the demeanor and the ethos of authoritarian "secret police"-- secret in the sense of plain-clothed.

 

The Gestapo (Geheime Staatspolizei-- literally "secret state police") liked to break in and make their arrests at night. But these are the bad guys! Not like us? The Nazis regarded themselves as the good guys, taking necessary measures against horrible enemies, mythological as those enemies might be. Filling in the same blanks with different details, this is the same psychological pattern as the Memphis SCORPION unit and similar plain-clothes special-operations (i.e. violence-seeking) police.

 

Besides the psychological effects of hoods and scary costumes on the perpetrators, there is a psychological effect on their targets. Individuals like Tyre Nichols, stopped by thug-like men, understandably try to escape. Even after it becomes clear that the men are police, acting the thug role makes them morph into the same thing. The Memphis killing resembles one of the first such police killings to be widely publicized: Amadou Diallo, in NYC in 1999, had the misfortune to be coming out of his apartment building when four police in a special anti-rape unit drove by; stepping back into the shallow entrance corridor set off a forward rush by the cops, who fired 41 shots at a distance of 3 meters while Diallo reached into his pocket to show his ID. [2008: 112] The overkill-- firing went on after he was down-- is an indicator of adrenaline rush, pumping up attackers for many minutes thereafter.

 

Bottom line: Police wearing masks, hoods, and gang-like clothing should be banned by law. Respect for police does not come from looking like violent thugs. Whatever tactical advantages police officials may think these practices have, more crime is prevented when the community trusts the police and cooperates with them than when it is afraid of them. *

 

* A central theme of Elijah Anderson, Code of the Street (1999) and Black in White Space (2022), is that black ghetto communities have high crime rates because residents do not trust the police to help them, so being tough-- or at least putting on the appearance of it-- becomes the local culture of self-defence. Combined with cops' paranoia, it makes a vicious circle.

 

[2] The crowd multiplier. The more police at the scene of an arrest of a single suspect, the more likely the violence becomes prolonged and emotionally out-of-control. [2022: 278-79] In all kinds of violence, a group against an individual produces the most vicious, prolonged, and out-of-control attacks. Photographic evidence from riots, brawls, and ethnic violence overwhelmingly shows the pattern of 4-to-6 persons beating an isolated individual, typically lying on the ground and unable to resist. [Collins 2008: 128-32; Nassauer 2019] The pattern is found all over the world, and in any combination of social identities; police and soldiers act the same way that ethnic rampagers do. A combination of psychological mechanisms is at work: attacking from all sides ensures the victim cannot maintain eye-contact; successful violence almost always comes from attacking a weak victim. Emotional contagion accelerates in groups; this is especially strong when there are supportive audiences [2008: 203-4, 413-30; 2022: 277], and above all when the attackers are men and there are women in the audience. [2008: 479]

 

Adrenaline rush is typical in most violent confrontations; when it intensifies to higher levels (indexed by heart rates over 170 BPM), perception blurs, and trained attackers operate on auto-pilot, ignoring the victim's cries or interpreting them scornfully. The attacking group becomes an emotional cocoon, and a cognitive cocoon as well-- a state of polarization where all good and humanity are on our side, and the victim is dehumanized. This is a mini-version of what happens in genocidal massacres. [McDoom 2021] Hence the bizarre spectacle (to outsiders) of laughter and ebullience that continues while the adrenaline rush takes time to subside. [2008: 282]

 

Bottom line: Police training needs to be thoroughly revamped. As it stands, training emphasizes that a police officer is constantly at risk; weapons drills train for "muscle memory" to maximize quick response. It would take quite a revolution to train officers to prioritize monitoring their own emotions and becoming aware of how they amplify each other into a collective mood. Officers need to be thoroughly trained in the psychology of violence, above all their own. *

 

* The best report thus far on what it is like to attempt to train officers on the social factors in their work is Jennifer C. Hunt (2010), a psychoanalyst working for the NYPD Training Division. She did not feel successful in changing the scary-macho culture.

 

[3] Transmitting stereotyped rumors. I have already noted that psychological experiments where a message is repeated through a chain find that, within very few links, the message becomes shorter and simpler, losing all nuance and context. The stereotype is in the ears-and-brain of the hearer, even when the message is repeated just a few seconds later; if more minutes intervene, the message becomes the staccato words of a cliché. The rumor-stereotyping pattern increases the more links there are in the chain; this includes both police radio dispatchers and the police themselves, in car-to-car radio links or over their computers; and it can be enhanced on the spot as more police backup (as well as medical support) arrives.

 

Bottom line: Police dispatchers need to be better trained, specifically in awareness of the rumor-stereotyping process. Dispatching is a low-paid, low-skilled job, which should be upgraded-- again, with social psychology in the foreground.

 

Most importantly, police training needs to be thoroughly investigated and reorganized. With each highly publicized incident of police violence, there are political calls for increased punishment, including removing qualified immunity. Whether or not this politically difficult reform is carried out, it should be noted that highly publicized trials and convictions of officers since the George Floyd killing have not stopped similar police atrocities from happening. Police throughout the country are acutely aware of the publicity; yet why do they keep on doing it?  The answer is that the behavior of police in action is subject to emotional forces, like the ones I have outlined. It is in the interest of police, and everybody else, that these social-psychological dangers should be very high in their awareness.

 

Sources:

 

News reports by the Associated Press, New York Times, and Wall Street Journal, Jan. 28 - Feb. 9, 2023.

 

Elijah Anderson, 1999. Code of the Street.

Elijah Anderson, 2022. Black in White Space.

Randall Collins, 2008. Violence: A Micro-sociological Theory.

2022. Explosive Conflict: Time-Dynamics of Violence.

Jennifer C. Hunt, 2010. Seven Shots.

Omar McDoom, 2021. The Path to Genocide in Rwanda.

Anne Nassauer, 2019. Situational Breakdowns: Understanding Protest Violence.

Sexual Revolutions & The Future of The Family

The family is the oldest human institution, even a pre-human institution existing among the great apes. Along with the deliberate control of fire, which Goudsblom saw as the beginning of socially-imposed self-discipline and the “civilizing process,” early humans also developed a variety of kinship institutions. These were rules about who could or could not marry whom; incest prohibitions and exogamy rules; residency rules about whose group the new wife or husband lived with; descent rules about which lines of descent were considered lineages of membership, obligation and inheritance.

 

Family and kinship have always been based on sexual behavior: the right or obligation to have intercourse is the operational definition of marriage (however sentimentalized or euphemistic the terminology might be). Intercourse reproduced the social structure from generation to generation, including status differences between children of socially recognized marriages, secondary marriages such as concubines, and illegitimate children who had no legal right to inherit. Regulated and legitimated sex was the building-block of kinship structure.

 

De-regulation of sex became a systemic change in human societies when other institutions were created that took the place, in varying degrees, of family-based economic and political alliances, child-rearing, and inheritance. Until the end of the Middle Ages, the kinship-based household was the building block of political and military power, as well as of economic production and consumption. Modernity began by replacing family-based organization with bureaucracy. States began to regulate the family household from outside, inscribing everyone on the rolls of the state as individuals. The core of the family has become personal and sexual rather than political and economic. What is personal and sexual has become freer, more a matter of individual choice; at the same time, sexual behavior in the non-family world has become subject to explicit political regulation, either restricting or permitting. From the early 20th century onwards, there have been increasingly militant movements on one side or another of what is sexually permitted, encouraged, or prohibited.

 

In this context, I will consider current disputes over sexuality and gender. Why is there an upsurge in anti-abortion movements just now? I will argue that abortion is primarily about freedom of sexual action. It is part of an overarching array of issues that includes homosexuality, which is to say, more kinds of acceptable erotic practices; also publicizing one’s sexual identity in schools, in using toilets, and in festivals and parades; not merely private freedom of sexuality but asserting it as one’s central identity. Politics has become more centered on sexuality than at any time in history.

 

These movements are allied in a united front with a struggle to eradicate gender distinctions. Both sides of the dispute mobilize movements and propose laws, each protesting against the other. In larger perspective, it is a struggle over what remains of the family and what will replace it.

 

In what follows, I will sketch the many forms of family-based societies that made up most of human history, from the tribal and band pre-state period, through the feudal-patrimonial households which were displaced by the bureaucratic revolution. This transition was the specialty of the two great historical sociologists, Max Weber and Norbert Elias. Both saw the world-historical importance of the transition, although they called it different things. Weber called it "rationalization" (while recognizing the ambiguities of the concept), but principally he saw modern society as increasingly penetrated by bureaucracy. The lesson of Foucault's cultural histories is similar, although he says nothing about bureaucracy as a driving factor.

 

Elias set out to historicize Freud: bodily repression of natural impulses is not primordial but dates from the late feudal period. Psychology is driven by geopolitics; conquering kings centralized territorial regimes by making the warlords spend time at court-- thereby acquiring manners and self-repression. Courtly manners were adopted by the middle class as moral obligations. This is the "civilizing process," the strengthening of a super-ego of self-control, taken for granted and becoming an unconscious "second nature". Elias's followers (e.g. Wouters) posit further accretions of self-inhibition through the following centuries up through today.

 

In this historical context, I will sketch the history of abortion struggles; the sexual revolution in non-marital sex; homosexual and transgender movements and the battle of pronouns; and the perceived decline of the family. This will help answer the question: why anti-abortion movements now? I will end with some sociological tools for forecasting the future of the family.

 

I hope you will excuse me for relying on American data. Some of these trends originated in Europe; on the whole it has been a world-wide trajectory (with the notable exception of the Muslim world).

 

Kin Groups versus Bureaucracy

 

Kinship was the earliest form of human organization, and a distinctive break from animal families. The history of complex organizations took off when they separated from kin-based households into distinctive organizations for politics, religion, and economy. But for many centuries these spheres remained connected in some degree with kinship and household. Big shifts in political organization during ancient and medieval times, such as recruiting warriors to join migrating and conquering hordes, were usually created by pseudo-kinship, a pretence of being descended from some mythical ancestor. Settled states were almost entirely alliance networks among armed households. They were "patrimonial households" (a Weberian term that should not be confused with "patriarchy"), with marriage connections at their core. But a household was powerful and rich to the extent it contained many non-kin servants, soldiers, guests, hostages, and apprentices, as well as prestige-giving artists and entertainers. The big break in organizational forms was the rise of bureaucracy, which as a practical matter meant that work, politics, religion, etc. were carried out somewhere other than where families lived. The change was visible in the built environment: castles and homes that were simultaneously work-places gave way to governmental and commercial buildings, containing their own furnishings, weapons and equipment, treated as property of the organization rather than of particular persons.

 

Too much emphasis has been placed on the concept of bureaucracy as a set of ideals and a form of legitimacy; it was simultaneously a form of material organization: control through written rules and records, hence a roster of who belongs to the organization, what money they collect and spend, and a record of who does what and how they did it. It is a network of behavior according to written rules and reports. Everyone is replaceable according to the rules, which means procedures, examinations, due diligence, and whatever the cliché was at the time. Schooling is another such bureaucracy, taking instruction away from the family, and thus simultaneously freeing individuals from family control while making them targets for indoctrination by whoever controls the state.

 

This is an idealization; empirical studies of bureaucracies show that the rules were often evaded or manipulated; modern research shows that bureaucrats don't just break the rules backstage, but know how to use the rules against others, when to invoke them and when to ignore them. Being maximally rule-bound ("bureaucratic") is not the most efficient way to do things; but it is an effective form of organization for breaking the power of kin groups and inherited rule. It keeps an organization going as an impersonal entity, even if inefficiently. Every revolution and every successful social movement institutionalizes itself in new rules and government agencies to enforce them. In this ironic sense, as the Weberian scholar Reinhard Bendix remarked, democracy extends bureaucracy.

 

It is in this context that we can understand the mobilization of conflicts over abortion in particular and sexual behavior in general.

 

Abortion and Sexual Behavior

 

Abortion is argued in philosophical and theological terms: on the one hand, the protection and sacredness of life; on the other, the right to choose, rights over one’s own body. But sociologically, abstract ideas and beliefs are not the ultimate explanation of what people do. That raises the questions: why do some people sometimes believe one way or the other? When and why are they vehement about their beliefs? When do they organize social and political movements about them?

 

Arguments about abortion are stated altruistically: it has nothing to do with me personally, I am concerned for the unborn children, for the right to life generally. On the pro-abortion side, there is a general argument that everyone has rights over their own body; but it is also sometimes personal: I have the right to an abortion if I want one.

 

But sociologically, the ground zero is always pragmatic: a practical matter of how people live.  What is the human action at issue behind the abortion argument? Abortion is about sex-- erotic behavior.  Why do some women want abortion? Because they have sex without marriage, in pre-marital and extra-marital sex. It is freedom to fuck without worrying about pregnancy, and thus is also a form of birth control for married couples.

 

Up through the early 20th century, an unwanted pregnancy was a fateful life event for a woman. The exception was rich women, who could keep it secret and farm out an unwanted child to a woman of the lower classes to care for it. To have a child outside of wedlock was scandalous, shameful, to be hidden away if possible. It was a badge of shame, punished by being ostracized: the Scarlet Letter in Hawthorne’s novel about 17th-century New England Puritans. Worse yet, the mother could be executed for murder if she had an abortion, or if she disposed of the infant through infanticide (this was the plot line of Goethe’s Faust).

 

That was the historical scenario. Today, some abortions happen because married women don’t want to have a child at the time; because the child is malformed; because the mother is in danger; or because a child would interrupt her career. Most abortions, however, are to unmarried women in their twenties.

 

The taboo on unmarried pregnancy fell away rapidly in some countries (first in Scandinavia, then in the US) in the 1950s and 60s. In part, this was because of much greater acceptance of sex before marriage; in part because young middle-class couples started living together without getting married-- a trend that grew very rapidly at the turn of the 1970s, and was accepted surprisingly soon by the older population. Before that time, “living in sin,” as it was called, or “shacking up” was regarded as something poor or non-white people did. But within a few years it became normal to hear someone introduced as “this is my partner” rather than “this is my husband, this is my wife.”  The further terminological shift in ordinary language was adopted by homosexual couples, who more recently have shifted to using “husband and husband” or “wife and wife,” after winning political and legal battles over gay marriage.

 

The political and legal battle for abortion happened at the same time as the revolution in unmarried cohabitation. In Scandinavia, limited abortion rights began in the 1930s and expanded; in 1973 the US Supreme Court ruled in Roe v. Wade that abortion was a right covered in the abstract language of the Constitution. The anti-abortion movement dates from that period.

 

The arguments pro and con are made on the grounds of legal philosophy. Translated into social practice, restoring the ban on abortion means that sex should be confined to marriage. This means rolling back the sexual revolution of the mid-20th century. On the other side, "my body is my own" means, in practical terms: I can have sex with whomever and whenever I want. Men traditionally had this right; why shouldn’t women?

 

We are approaching an answer to the question: why is there a resurgence of the anti-abortion movement just now? It is, in effect, a movement against casual, non-marital sex. This should be seen in the context of the sexual revolution, starting about 100 years ago.

 

Sexual Revolutions

 

Throughout human history, marriages were almost always arranged by kin groups rather than by the choices of independent individuals. Pre-state kinship structures were built around marriage rules: which group should send daughters or sons to another specified group. With the rise of large-scale warfare and alliance politics, marriages and other forms of sexual exchange came to be used as political treaties. Sending daughters of one leading family as wives or concubines to another leading family made them allies, and also set the stage for future inheritance of territories, depending on accidents of which children were born and survived into adulthood. Diplomatic marriages of this sort have continued among royal families (even among figureheads like Queen Victoria) down to the era of modern democracies (including England’s Queen Elizabeth II). At less exalted levels of social class, arranged marriages also existed among property-owning families, as an arrangement for continuity in family enterprises, and sometimes as a means of status climbing where money could be traded for ancestral status.

 

Sexual/love affairs also existed in virtually all recorded societies since ancient times, but mainly outside of marriage. They were a form of personal excitement, the thrill of a private backstage (Romeo-and-Juliet style) within the otherwise privacy-denying patrimonial household. Most of what we know about such love affairs comes from the literature or entertainment media of the time, which probably exaggerate them compared to the realities of ordinary life in pre-modern households. But as bureaucracy and democracy eroded the importance of household and inheritance for individuals' careers, marriage markets spread among the middle class. The growth of individual marriage markets-- though still heavily influenced by parents-- can be indexed by the topics of popular literature. The new ideology of marriage for love, combined with a concern for material fortune, is described in the novels of Jane Austen around 1800; it developed more slowly in French literature (long focused on adulterous adventures), and sentimentally as well as moralistically in American literature. The belief became conventional that all marriages happen by falling in love, or at least this became the normative way of speaking about it.

 

The 1920s were a revolution in courtship. Parents' steering of their children's marriage choices was replaced by dating and partying. From then on, the younger generation mixed the sexes without supervision, creating a culture where drinking, dancing, and necking were the main excitement of life rather than a transition to marriage. It was a rebellious thrill in the US, where alcohol was prohibited, but the same style emerged in England and Germany as well.

 

In the 1930s and 40s, divorce began to be common, no longer disreputable and scandalous. By the 1960s, almost 50% of US marriages were ending in divorce, a level relatively constant since then. This eroded the ideal of sexual monogamy or "purity"; a large portion of the population of both sexes were having multiple sexual partners.

 

Since the transition from childhood to adulthood involves a shift from a life-stage in which sex is officially prohibited to a stage when it is allowed, the teen years are a center for sexual regulation and its associated ideologies. The 1950s produced a new social category, the "teenager". Working-class youths no longer entered the labor force directly, as governments made them attend secondary school; with free time on their hands, teens created social clubs and gangs, and got their own styles of music and dancing, with a tone of rebellion against traditional middle-class propriety. The rise in crime rates began at this time, continuing from the 1950s into the 1990s. How to bring up children has been a topic of controversy ever since. Apart from psychological advice on home life, the social instrument for shaping and controlling the emerging generation has become the schools and the policies by which they operate. Hence a new site for political struggle.

 

The Invention of the Social Movement

 

Here we step back again to trace another offshoot of the bureaucratic revolution. The social movement is a form of organization and politics outside of the family and household, but also outside of formal bureaucracies: that is to say, it is a mode of creating new networks that did not exist before, recruiting persons wherever they might come from, generating an alliance of individuals held together by their devotion to a common cause. Social movements are a distinctively modern form. They scarcely existed in the era of kinship politics, where households might shift alliances but individuals within them could not go out to join movements on their own. The exception was religious movements, chiefly in the monastic world religions such as Buddhism and Christianity during their early phases of expansion. But as these religions achieved success, they tended to ally with the patrimonial households of the aristocracy, and religious conversion generally took place en masse through the conversion of leading aristocrats who ordered their subordinates to follow. Other large-scale religions, such as Confucianism, Hindu sects, and Islam, generally blended with and reinforced existing kinship politics.

 

Charles Tilly dates the invention of the social movement to the late 1700s in England and France. Prior to this time, there could be local protests and uprisings in periods of food scarcity and distress, but they remained localized, and when serious they were almost always put down by superior military power. The bureaucratic state changed the logistics of political activism; it promoted roads, canals, transport, postal services, and the delivery of books and newspapers; social movements were now able to organize large numbers of people across long distances. And the increasing centralization of the state gave movements a target for their grievances: the capital city and the central government itself. Movements developed a repertoire of techniques for petitioning and protesting, ranging in militancy from demanding reforms and new laws to overthrowing the state by revolution. In democracies, social movements became an alternative to struggling for power through periodic elections; one doesn't always win the vote, but protest movements can be mobilized at any time to bring pressure on the authorities to make urgent and immediate changes.

 

With the expansion of communications-- telephone, radio, film, television, computers, and the internet-- the material means for mobilizing social movements have vastly expanded. In the 19th and early 20th century, the main social movements were class-based, especially labour movements; sometimes ethnic and nationalist; sometimes humanitarian reform movements. From the mid-20th century through today, the variety of social movements has exploded into a cascade, all competing for attention.

 

Sexual Movements

 

What was different in the 1960s was that political and social movements became heavily based among the young (in contrast to labour movements, based on married adults). The shift was driven by a huge increase in university students. Again the underlying force was a combination of bureaucracy and democracy. State universities proliferated in response to popular demands for educational credentials once monopolized by the elite. Ironically, this set off a spiral of credential inflation, as once-valuable school degrees (secondary school diplomas, then undergraduate degrees) became so widespread that well-paying jobs increasingly demanded advanced professional degrees. The political side-effect, however, was that the group of young-adult "university age" students became a favourable base for organizing social movements: students have flexible hours, are freed from family supervision, and are massed together in their own spaces, and thus available for speedy communications and the emotionally engaging rituals of rallies, marches, protests, and sit-ins. With the adoption of the non-violent techniques of "civil disobedience" borrowed from Gandhi's independence campaign in India, militant social movements could both claim the moral high ground and apply pressure by disrupting public routines. Such movements could also spill over into property destruction and violence; as Tilly noted, a violent fringe has historically existed around any large public protest.

 

In the self-consciously revolutionary generation of the 1960s, we called ourselves the New Left, distinguished from the old Left by being less concerned with ideology than with lifestyle. Cultural icons were the hippies, drop-outs from school and career, living in communes where they shared psychedelic drugs and free love. In reality, most were weekend-hippies, and most of the free-love communes disintegrated rather quickly, over jealousy and status ranking. The main legacy of the "free love" period was that cohabitation-- living together without getting married-- became widespread, even becoming a census category in the 1970s.

 

The 70s were dominated by sexually-based movements. First, the feminist movement sought equal legal rights and employment opportunities for women; plus its militant lesbian branch, condemning heterosexual intercourse as the root of the problem. In the 1970s, and increasingly with each decade through the present, a chain of homosexual movements demanded not only freedom from discrimination but the recognition of a new public vocabulary-- gender rather than sex, gay rather than homosexual, and so on. This has been a cascade of movements, each building on its predecessors in tactics, ideology, and lifestyle, each finding a new issue on which to fight.

 

Counter-cultures and Culture Wars

 

Recent movements are built on prior movements of cultural rebellion, going back a century. Like the New Left, the overall ethos has been antinomian, the counter-culture of status reversal. These rebellious social movements were paralleled by shifts in self-presentation, demeanor, and the media depiction of sexuality. In the 1920s, women’s skirts became shorter; young women adopted a more mannish look. They also began to show a lot more flesh; body-covering swim suits became briefer; women athletes exercised and competed in shorts. (The trend also existed in socialist and Soviet Communist organizations, and in the nudist movement popular in Germany.) In 1946 came the bikini, created in France and named for an island where an atom bomb was exploded; eventually there were men in thongs and women going topless at beaches. The 60s and 70s were a weird melange of clothing fads: granny dresses and throw-back Sgt. Pepper uniforms; Nehru jackets, surgical smocks; men in pony-tails wearing puka-shell necklaces and jewelry earrings. Most of these styles did not last long, but the prevailing mood was change for the sake of something different. The long-term result was the casualness revolution (also called informalization), which triumphed by the 1990s: wearing blue jeans, T-shirts, and athletic clothes on all occasions; discarding neckties and business suits; calling everyone by their first name, with no more use of titles and once-polite forms of address.

 

Simultaneously with these changes, erotic heterosexuality was coming out of the closet in literature and the media. The “jazz age” of the 1920s was originally named after a slang word for having sex; novelists like Scott Fitzgerald and song-writers like Cole Porter were full of innuendo. James Joyce’s Ulysses in 1922 began the literary depiction of the bodily details of sex, followed by D.H. Lawrence, Hemingway, Henry Miller, and Anais Nin; most of these were published in Paris but censored elsewhere until 1960, when their mass publication fueled the sexual atmosphere of the counter-culture. In 1968, Hollywood film censorship changed to a rating system, marketing soft porn as PG (“parental guidance”) and hard porn as X-rated. The 70s were the era of the so-called “Pubic Wars”: glossy magazines with nude photos tested the borders of what could be displayed, moving from breasts to pubic hair to aroused genitals, and by the 1980s to penetration and oral sex. Pornographic photos had existed before, but they were cheaply produced and had a limited underground circulation; now these were some of the biggest mass-distribution magazines. Sex magazines went into decline in the 90s, replaced by porn sites on the Internet.

 

Cultural rebellion spilled over into language. Obscene words began to be used in political demonstrations; then on T-shirts, in fashion advertising, and in ordinary middle-class conversation. The remaining bastion of prohibition on obscene language is what can be said in school classrooms. Everywhere else, flaunting overt sex has been a successful form of rebellion. One might even say that the major line of conflict is no longer between economic classes, but a status division: hip and cool versus square and straight.

 

Homosexual sex came out of the closet at the same time as the porn revolution. Gay porn magazines and films followed heterosexual men’s magazines; their circulation was never as wide (Playboy and Penthouse reached peaks of 5-to-7 million), but the gay movement was more controversial and more activist. It spun off from the resistance tactics of the civil rights movement, pushing back at police raids of gay bars and meeting places. It became a cascade of movements: gay and lesbian, joined by bi-sexual, queer (militant homosexuals rejecting gay marriage), transgender, transsexual, non-binary, and more. The growth of this acronym-- now up to LGBTQIA+-- is itself a sociological phenomenon to be explained, as new identities have been added every few years, a trajectory likely to continue into the future. This is the pattern of a social movement cascade; successful movements do not retire, declaring their cause won, but spin off new branches, seeking new niches and issues. The phenomenon is sometimes referred to as extending social movement frames to new targets.

 

A related issue has been sex education in the schools, initially about contraceptives for the prevention of venereal disease (a term subsequently changed as too judgmental). Sex education grew as an official alternative to parental advice or to informal peer-group sexual culture; sex education is the bureaucratization of sex. In the early 21st century its function expanded to teach children about homosexuality as a protected status, and as a life-style choice. In recent years there have been movements among students as young as elementary school demanding to be referred to by non-gendered pronouns, and demanding government-funded sex-reassignment hormones or surgery. The fields of struggle have expanded: gender-free toilets; the battle of pronouns, banning the words “he” and “she”. In 2022, adolescent children have been charged with sexual harassment for "mispronouning" -- referring to a classmate as "she" instead of "them." In 2021, the U.S. House of Representatives passed legislation banning the use of gendered words “father, mother, brother, sister” in government documents. Federal health organizations now refer to mothers as "birthing persons" and ban the term "breast-feeding" in favor of "chest-feeding." (Wall Street Journal, May 10 and May 24, 2022) There are similar efforts to create gender-neutral pronouns in French and Spanish, although thus far they have not been very popular.

 

Why Anti-abortion Politics Now?

 

The arena of such conflicts has become increasingly political, as activists file lawsuits in the courts and demand new legislation; escalation on one side leads to counter-escalation on the other. It is in this context that we can explain why the anti-abortion movement has become much more militant in the last few years. In 2019, abortions in the US were about 20% of live births; in fact the ratio had fallen from 25% ten years earlier, largely due to teenagers having fewer children and fewer abortions, and to some extent to the growth of homosexuality in the age-group below 30. The anti-abortion movement has not intensified because abortion was becoming more prevalent; it is just the most prominent way conservative legislators can strike back at the latest waves of sexual revolution.

 

Conservatives view these developments as the decline of morality and good taste; the intrusion of government into the lives of their children; and educational policies that they regard as indoctrination. Abortion is seen as part of the sexual revolution run rampant, separating sex from the family, extolling forms of sex that turn traditional parenting into an outdated status. Militants of homosexual movements have declared that hetero-normativity is on its way out. Homosexual identification has become more widespread: it was less than 2% in the Baby Boom generation; almost 4% in the generation born before 1980; and 9% among those who became adults around the year 2000. In so-called Generation Z, now about 18 to 23 years old, identifying as LGBT has jumped to 16%. This is still far from a majority; but an expanding movement is full of aggressive confidence, looking forward to a time when the heterosexual family is a quaint minority.

 

Conservatives see the same trends but from a different point of view: the falling marriage rate; below-replacement fertility, now down to 1.6 children per woman in the US, the lowest in its history (and even lower in parts of Europe); 40% of all children born to unmarried parents. More people are living alone; proportionately most among those aged 65 and older, but in sheer numbers of households, the largest group living alone are working-age adults.

 

Strict laws in American states banning abortion have been created in a situation where the political split between conservatives and liberals leaves neither of them with a firm majority at the Federal level, while conservatives fall back on regional state legislatures which they control. Here also control over what goes on in the schools is increasingly contested.

 

Abortion is just one item in a divisive cluster of issues. Making abortion laws more restrictive will not save the family; illegal abortions would re-appear, recapitulating the conflicts of the 1960s. Conflict over abortion is a symbol of the bigger question-- what conservatives perceive as a multi-pronged assault on the family.

 

Why the Family is Not Likely to Disappear

 

But there are reasons of a different sort why the family is not likely to disappear any time soon. When the feminist revolution took off in the 1970s, men soon discovered they had an economic interest in their wives’ careers. A family with two middle-class incomes could outspend a traditional, male-headed upper-middle class household. Two working-class incomes put a family in the middle-class expenditure bracket. In the new economic hierarchy, the poorest families are those where one woman’s income has to care for her children alone. Marriage and its shared property rights continue to be the bulwark of economic stratification. From a radical left point of view, this would be a reason to abolish the family; or at least to take child-rearing away from the family.

 

The situation is complicated by gay marriage, beginning when gay couples demanded the tax and inheritance rights of marriage. It also creates wealthy households, since gay men are usually middle class or higher, and two such incomes make them big spenders-- one reason why consumer industries and advertising are so favorable to the gay movement. On the other hand, although gay couples sometimes adopt children (or use sperm donors), the number of children in gay marriages is small (only 15% of same-sex couples, married or not, have children) and unlikely to compensate for the overall decline in child-bearing. There are about 1 million same-sex households in the US; out of 128 million households, this is less than 1%. Since about 13 million Americans identify as LGBT, this implies that only about 1/6th of them are living with a same-sex partner; most of them are living alone. The big increase in living alone may even be driven by the rise of homosexuality, or perhaps vice versa. This seems to be particularly true in big US cities, such as Washington D.C., where one-quarter of the adult population live alone in apartments, making up half of all households. Washington is also the city where the largest percentage identify themselves as LGBT, at 10%.
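A back-of-the-envelope check of that arithmetic, as a minimal Python sketch (the figures are the approximate ones cited above, not fresh data):

    # Rough check of the partnership arithmetic, using the approximate
    # figures cited in the text (not fresh data).
    same_sex_households = 1_000_000      # US same-sex couple households
    total_households = 128_000_000       # all US households
    lgbt_population = 13_000_000         # Americans identifying as LGBT

    partnered = 2 * same_sex_households  # two partners per couple household
    print(f"{same_sex_households / total_households:.1%} of all households")            # ~0.8%
    print(f"{partnered / lgbt_population:.1%} of LGBT-identifiers live with a partner") # ~15.4%, about 1/6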

 

Can sociology predict the future of the family? What will happen hinges a great deal on government regulations, and these depend on the mobilization of political movements against each other. The Internet era has made it easier for all sorts of movements to mobilize. But government regulation may become a weapon by which one side can censor the other and try to keep it from mobilizing. The causes of conflict are easier to predict than the outcomes, especially when the sides are relatively evenly balanced.

 

Computerization and its offshoot, the Internet, foreshadow a future in which almost everyone works at home; manual work is done by robots; everyone spends most of their time communicating on-line, or absorbed in on-line entertainment. The generation brought up on the Internet is the shyest generation yet; they have many on-line “friends” but few friends in the flesh; they are less sexually active, more anxious and fearful. The issue of abortion may eventually decline, because there is less sexual activity in the rising generation. The immersive virtual world of the Internet, strongly promoted by today’s media capitalism, may be destroying the family by making it easy to live physically solitary lives. Thus the recent jump in identification as homosexual (16% in the youngest generation) may be largely a matter of announced identity rather than bodily erotics; a kind of fantasy ideology more than actual sexual practice.

 

Yet this may be why the family will survive-- not as the universal social institution, but as a privileged enclave. It is privileged because it is a place of physical contact; of interaction rituals, solidarity, and emotional energy. It is also a place of reliable sex (surveys show that married and cohabiting couples have much more frequent sex than unpartnered individuals-- they don’t have to spend time looking for partners). Add to that the two-earner effect on household income, an incentive for the family to survive.

 

The trajectory of the last 100 years has been to undermine the family; but the rise of the disembodied computer world may change that. I suspect we are heading towards a future where intact families-- father, mother, and their children of all ages-- are the dominant class economically; and media-networked or media-addicted isolates, living alone with their electronics, are wards of the welfare state.

 

References

 

Statistical sources:

U.S. Bureau of the Census

Centers for Disease Control and Prevention

National Center for Health Statistics

Statista.com

Williams Institute

Gallup polls

Edward O. Laumann et al. 1994. The Social Organization of Sexuality. Sexual Practices in the United States. Chicago: University of Chicago Press.

 

Historical and Sociological references:

 

Seth Abrutyn and Jonathan Turner. 2022. The First Institutional Spheres of Human Societies. Evolution and Adaptations from Foraging to the Threshold of Modernity.

Philip Blumstein and Pepper Schwartz.  1983. American Couples. New York: Morrow.

Randall Collins. 1986. “Weber’s Theory of the Family” and “Courtly Politics and the Status of Women.” In Collins, Weberian Sociological Theory. Cambridge: Cambridge University Press.

Randall Collins. 2014. “Four Theories of Informalization and How to Test Them.” Human Figurations 3(2).  http://hdl.handle.net/2027/spo.11217607.0003.207

Randall Collins. 1979/2019. The Credential Society. NY: Columbia University Press.

Norbert Elias. 1939/2000. The Civilizing Process. Oxford: Blackwell.

Johann Goudsblom. 1992. Fire and Civilization.  London: Penguin Press.

Todd Gitlin. 1987. The Sixties. New York: Bantam Books.

Robbins B., Dechter A., Kornrich S. 2022. "Assessing the Deinstitutionalization of Marriage Thesis." American Sociological Review 87: 237-274.

Charles Tilly. 2004. Social Movements, 1768-2004.  Boulder, Colorado: Paradigm Publishers.

Max Weber. 1922/1968. Economy and Society. New York: Bedminster Press.

Cas Wouters. 2007. Informalization. Manners and Emotions since 1890.    London: Sage.

Lewis Yablonsky. 1968. The Hippie Trip. Lincoln, Nebraska: Excel Press.

Benjamin Zablocki. 1980. Alienation and Charisma. A Study of Contemporary American Communes.

Multi-causal Bottom Line

Multiple causality versus simple-mindedness:

 

Glib talk is the stuff of front-stage politics

(but not of back-stage politicking).

 

Of advertisements and journalism

(but not of editorial meetings).

 

Those who are successful in the world do not

think that way, although they use talk as a weapon.

 

Yet there is advantage, even in science and intellect

in simplifying to the most powerful causes,

 

and to win the center of attention among the voices

by summing up the complexities in a term,

Kuhnian paradigm, spin, vicious and virtuous circles,

 

but not to think you've said it all

when you've only pointed where to look.

 

If one thing in the world is true, it is surely this:

everything has multiple causes.

Predictables of the Ukraine War

The progress of the Ukraine war bears out generalizations made in Explosive Conflict: Time-Dynamics of Violence, written before Putin’s invasion:

[1] Three-to-six month rise and fall in public crisis attention

[2] High-tech war reverts over time to older-style warfare

[3] Civilian atrocities in the midst of guerrilla war behind the lines

[4] Polarization and historical amnesia (Iraq, Afghanistan, Vietnam, WWII, WWI-- and Syria)

 

[1] Three-to-six month pattern

Almost every war is popular at the outset. People are outraged and energized. This goes on at a high level of intensity for about 3 months. Then enthusiasm begins to wane; more and more of the population want to return to ordinary life. By 6 months after the outbreak, enthusiastic support is down to half of its peak. A split emerges, between those who would like to end the conflict, and those who angrily and righteously press ahead for victory and vengeance, whatever the cost. Wars of course can go on much longer than 6 months, but they become carried on more by organization and compulsion than by popular enthusiasm. Unless wars are short and victorious, they increasingly divide into peace faction vs. victory faction: end-the-carnage-and-write-off-your-losses vs. sunk-costs-and-their-sacrifice-shall-not-be-in-vain.

Explosive Conflict documents the pattern for the outset of wars, enemy attacks, and domestic protests. I showed the 3-month-peak, 6-month-falling-off pattern in the flags Americans put out after 9/11/01. Enthusiasm for war swept through all the capitals of Europe in 1914, from the Sarajevo assassination in June to the stalemate of armies at Christmas, falling off thereafter into disillusionment. It is the same whether one's side feels itself the innocent victim at the outset; WWII had the same pattern of early enthusiasm for joining in, followed by much more coercive grinding it through. I have charted similar patterns of enthusiastic turn-out for protest movements in France, Hong Kong, the US, and elsewhere: the biggest demonstrations and the highest emotional level come in the early months, dwindling off in the 3-to-6 month period of falling numbers, with a tail end of violent die-hards.

The Ukraine war began with Russia's invasion on February 24, 2022. Russian advances and defeats were front-page, top-headline news in the first weeks and months.

This was the period when anti-Russian outrage spread contagiously. Those who did not join in were pressured: 2.28.22 AP "FIFA drew a swift backlash from European nations for not immediately expelling Russia from World Cup qualifying." 3.01.22 FIFA gave in and banned Russia. So did the World Curling Federation, while the International Olympic Committee moved to ban Russian athletes. Russian musicians and conductors were removed from concerts in Europe and the US. Tchaikovsky's 1812 Overture was removed from concert programs (7.07.22 NYT). This polarization overrode news in early March of anti-Putin protests in Russia, and the less publicized exodus of anti-war Russians to Armenia, Turkey and other neutral places.

Exaggerated hero-stories circulated in Ukraine in the flush of repelling the Russians from Kyiv. A fighter pilot nicknamed "Ghost of Kyiv" was said to have shot down 6 Russian fighter jets in the first days of the invasion, survived being shot down, and returned to shoot down 40 Russians before dying in an air battle in early March. Ukrainian officials joined in the publicity. But in May, the military admitted that the Ghost of Kyiv did not exist. (5.02.22 NYT)

Two months in, rallies for Ukraine in US cities like San Diego were down to 50 participants, compared to 300-500 in the early days of the war. [San Diego Union, 4.17.22]

Three months in, by May when the war shifted to sieges in the east, there was less news from the front. The same headlines repeated day after day. Ukrainian leaders called for more arms and more sanctions (previous sanctions not yet having visible effects). News stories shifted to inside pages. Reports from the front (where reporters were not allowed) consisted of official statements, claiming or denying small advances, mentions of enemy weapons destroyed, and numbers of civilian casualties (but rarely of military casualties, except for estimates of enemy casualties).

By June, morale in both armies had declined severely: 6.20.22 AP "Four months of war in Ukraine appear to be straining the morale of troops on both sides, prompting desertions and rebellion against officers' orders, British defense officials said... Ukrainian forces have suffered desertions in recent weeks... Russian morale highly likely remains especially troubled... Cases of whole Russian units refusing orders and armed stand-offs between officers and their troops continue to occur... NATO's chief warned that fighting could drag on for 'years'." 

Such stand-offs are reminiscent of widespread rebellion against US officers in remote combat zones during the later years of the Vietnam war, when over 500 incidents were reported of soldiers "fragging" them (throwing fragmentation grenades into their tent). (Gibson 1986: 211-224) This does not mean the soldiers will force a cease-fire, but rather that the war gets carried on by more coercion and material incentives. Russia announced higher pay for soldiers (6.17.22 Washington Post).

From the early days of the invasion, Western leaders pressed each other for economic sanctions as a non-violent means of punishing and deterring the enemy: cutting off oil and natural gas imports, banning all business relations with Russia, enforced through controls on international banking, and secondary sanctions on those not joining in. But within the first weeks, other parts of the world-- China, India, Indonesia, Egypt, Saudi Arabia, Turkey, Mexico, Brazil-- refused to cut economic ties with Russia, or stayed neutral. Some called it a war among white people, with little regard for anyone else's refugees (3.03.22, 3.05.22 WSJ; 3.24.22 AP). By June, poorer countries in Asia and Africa were protesting food shortages resulting from the blocked grain and fertilizer exports of Ukraine and Russia. European countries such as Germany and Italy, heavily dependent on Russian natural gas, had officially joined in the sanctions, but stipulated that they would apply only in the future. By June and July, they were beginning to forecast winter shortages of fuel for heating, and crises in industrial production. The coalition of economic sanctions was wavering, after the early months of rhetorical support.

Ukrainian war leaders began to worry publicly about "fatigue" on the part of the outside world. Allies and business commentators began to talk about negotiations and settlements. French president Macron spoke of what Russia could accept as "victory," suggesting a willingness to end combat without Russia being "humiliated" as Germany was in the Versailles treaty of 1919. He walked back these comments after Russian rockets hit civilian areas, but reiterated them in June. (6.16.22 WSJ)

At this time, most Ukrainians still supported fighting to take back all Russian-held territory; but few had faith in the support of their Western allies: 27% for France, 22% for Germany. (6.30.22 WSJ)

In Russia, despite tight government control of the media, apathy set in: 7.02.22 WSJ "In Russia's Biggest Cities, the War is Fading to Background Noise... While opinion polls suggest public support for the military campaign, it is largely passive... According to an independent pollster, the level of attention Russians pay to the conflict is declining. While in March, 64% said they were paying at least some attention, that number was down to 56% in May. Only 34% of [military-age] 18-to-24 year-olds said they were following the situation."

As of July 2022 (5 months in) a division was visible among allies between those pressing for a truce to end the damage; and advocates of a fall offensive to retake all Russian gains in the east.

 

[2] Limitations of high-tech warfare. 

The lesson was already there from the US wars in Iraq and Afghanistan. Advanced weapons combine targeting from an array of sensors-- satellites, high-flying aircraft, low-flying drones; tracking heat-signatures of vehicles, comparing photos of changes in formations on the ground, spotting electronic activity, locating radar-guided weapons and firing back at them; all coordinated by computers making high-speed precision calculations. The enemy has no place to hide and targets are always hit. What could go wrong?

Interviews and reports from lessons-learned conferences with US and UK veterans of Iraq and Afghanistan list some everyday problems. High-tech weapons are not always available when and where you want them, or in sufficient quantity. High-tech is expensive and requires frequent maintenance.  Computer-guided smart bombs and rockets are big, heavy to transport, and get used up in intense bombardments. High-tech vehicles and weapons platforms require a lot of fuel and maintenance. If war is carried on at a leisurely pace (as in counter-insurgency war), these problems are surmountable; but the cost mounts up over time (astronomical sums in the decades-long wars in Iraq and Afghanistan). If war is intense and between similarly armed large-scale armies, both sides suffer attrition of their most advanced equipment. The Eastern front in WWII began with motor vehicles and deteriorated back to horses and foot-soldiers.

In Ukraine, reversion from high-tech to traditional weaponry has been most evident on the Russian side. Their supply of long-distance precision rockets was largely exhausted in the early months, replaced by older, less precise rockets targeting Ukrainian urban areas, rather like carpet bombing in WWII. In battle zones, Russian radio communication was vulnerable and broke down early, shifting to a cell phone system shared with Ukraine, which also broke down. High-level Russian officers had to command personally at the front lines, like pre-modern warfare,  resulting in high officer casualties.

The Ukrainian military was hurriedly supplied with long-distance rockets and artillery to target their Russian counterparts, using US/NATO targeting information. But logistical difficulties made the supply slow and intermittent: the many different allies sending available weapons produced a mix of Soviet-style weapons and calibres (Ukraine had been the center of Soviet arms production); plus a variety of west European and Scandinavian weapons systems with specific maintenance needs and ammunition calibres; making it hard to connect the right ammunition, repairs and replacements with the places where particular weapons were being used. US high-tech missile defense and long-distance rocketry started filling the gap in the fifth month of war-- with the highly bureaucratized US military not being known for speedy delivery (having spent almost two years cranking up for the invasion of Iraq in 2003).

There is no guarantee that reliance on US high-tech will prove successful in a longer war of attrition. Ukraine has been most successful with small-group tactics, essentially Special Forces movements under the radar (so to speak), getting close to Russian tanks and artillery to destroy them with man-carried anti-tank rockets. This was crucial in the first weeks of war, during the Russian blitzkrieg rushing to Kyiv from the Belarus border. Pre-war US supplies and troop training made Ukrainian forces well-matched for countering this mechanized invasion. Avoiding most front-line confrontations, Ukrainian soldiers infiltrated the long and poorly-protected Russian supply convoys, hitting them with shoulder-fired Javelin missiles. In the early phase, Special Forces-style weapons and tactics defeated a traditional mass-vehicle attack; rather like Taliban attacks on far-flung US outposts in Afghanistan. The lesson of both wars: massive, spread-out forces with expensive logistics and long supply lines are vulnerable to small, dispersed hit-and-run attacks on logistics lines.

Here we have another example of what in Explosive Conflict is called a time-fork: a sudden blitzkrieg, if successful, collapses the enemy's organizational and political structure, making for a short war with relatively low casualties. This is what Putin was aiming at, assuming he could do something like what the US did in Iraq in 2003, scattering enemy forces and causing the government to abandon ship within weeks. But if a blitzkrieg does not succeed, the process shifts to a longer time-scale: attrition war, where both sides have resources to hang on and cause damage for a long time. Ending such a war victoriously requires enormous destruction of the enemy's resource base, inevitably hitting the civilian population as everything becomes a military target. Attrition war grinds down everything, high-tech and low-tech alike, raising the human and material cost until one side, or both, runs out.

In the second phase of the Ukraine war, Russia cut its losses from the failed blitzkrieg in the west, shifting to eastern fronts less vulnerable to infantry infiltration; keeping the small-arms high tech of US-supplied Ukrainian forces at a distance by massing artillery barrages in building-by-building advances through the cities of the east. Russia countered Special Forces high-tech by returning to WWII era sieges. Russia did the same in 1995 under Yeltsin, defeating break-away Chechen guerrillas by destroying their capital city, building by building.

This could be countered by the delivery of more firepower from US weapons. But here again high-tech superiority runs up against logistical limitations. Within 2 months, the US was running low on its supply of the kinds of weapons most in demand in Ukraine. (WSJ 4.29.22; 7.09.22) Cranking up production to manufacture replacements is difficult because the DOD in recent years has shifted its defense budget to future weapons systems, focusing on long-distance war with China rather than front-line combat; and because supply chains, in weaponry as in other manufactures, have deteriorated and backlogged in recent years. High-tech is no quick fix, except in some very short-run wars.

 

[3] Pattern of Atrocities 

Atrocities have been big news stories in the period between the first weeks of defeating the Russian blitzkrieg and the shift to artillery battles in later months. Atrocities, by definition, are shocking; but they are not beyond the scope of sociological explanation. There is a pattern to when atrocities happen.

Civilians get targeted particularly in two circumstances:

[a] When guerrilla fighters hide in the civilian population, and civilians are suspected of being lookouts and spotters if not non-uniformed troops. This was also the pattern of widespread US killing of civilians in the Vietnam war; and of incidents of US troops going on rampages in Iraq and Afghanistan, in houses from which they believed hidden roadside IEDs had been triggered, or in revenge for green-on-blue shootings by ostensibly allied local troops.

[b] When snipers operate in urban warfare with no-man’s-land fronts, where high-rise buildings provide protected places very close to dangerous ones, and civilians are still living in the war zone.

Russian atrocities were most publicized in the Kyiv suburbs in the early weeks, and in the siege of Mariupol from March to May. The former, especially the town of Bucha, fits [a]; the latter exemplifies [b].

[a] Russian troops expected an easy conquest of Kyiv and a rapid end to the war. In the early days they were reportedly more polite or friendly to locals, but became increasingly frustrated and angry as they bogged down; all the more so with lack of reinforcements or even food. Poor Russian logistics, and inability to defend against attacks on their supply convoys, made soldiers both paranoid and hungry. They began looting civilian homes for food, putting them in an elemental contest among the hungry. As in previous wars (graphically reported by Loyd for the Bosnia wars of the 1990s), this puts soldiers and civilians into close and abusive relations, spilling over into beatings and executions.

Ukrainian resistance to the Russians in this phase was largely guerrilla war, playing the part of the Taliban vs. the US, avoiding head-to-head battles but attacking logistics convoys; the difference in this case being that the guerrillas had high-tech man-portable anti-tank missiles.

Russians’ perception of civilians as enemies was probably accurate in many cases. News coverage from early February up through the early days of the Russian invasion was full of photos of civilians being trained to use arms. The Ukrainian government announced that weapons were being distributed to the entire population. 3.04.22 AP: "Ukrainian leaders called on the people to defend their homeland by cutting down trees, creating barricades in cities, and attacking enemy columns from the rear. In recent days, authorities have issued weapons to civilians and taught them how to make Molotov cocktails... a video message recalled guerrilla actions in Nazi-occupied Ukraine during WWII."

Retrospective accounts emerged later: 3.06.22 LATimes "... rifles were handed out to all who were able, and homemade bombs were bottled." 3.23.22 WSJ "In a war of ambushes and skirmishes, mobile Ukrainian forces have used their knowledge of the local battlefield and sought to hit Russian forces on weak points, striking armored columns on main roads and undermining their ability to fight by disrupting supplies... Tens of thousands of ordinary citizens recently have joined territorial battalions and need body armor, helmets [an official said] ... Russian troops in many places have looted stores and homes for food, according to authorities and accounts from witnesses." 4.05.22 WSJ "Civilian Volunteers and Ukraine's Secret Weapon.... When Russian troops were massing across the border, Ukrainian civilians met during weekends to learn how to administer battlefield first aid and how to handle a weapon... they have helped build barricades, patrol roads and even attack Russian convoys and capture enemy soldiers."

5.09.22 WSJ "Civilians Helped Win Kyiv Battle... Ukrainian villagers helped in their own way, calling in artillery strikes on a lifeline Russia had mapped out for its assault on the capital.... villagers shared tips and Google map locations with authorities, turning the highway between the Russian border and Kyiv into a big logistical defeat for Moscow... 'Everyone here was doing all they could to get Russian troop movements across to our boys,' said a homemaker who had called in soldier locations... [Her] own house was shelled in the exchanges... The capital's Kyiv Digital app, which once helped people pay parking tickets... was reconfigured to help users spot Russian movements and give them to the armed forces... [and] explained how to drop pins on Google maps to send to security services, and reminded users to delete their messages to prevent being caught by Russian troops... In mid-March, Russian servicemen broke into the house of [a woman] who had been sending the types and numbers of Russian armor to a Ukrainian police officer, her father. She was detained on March 24 and hasn't been heard of since."

Similar patterns were reported for other battles, such as the port city of Mykolaiv in the south: 4.16.22 WSJ "With communications jammed, Ukraine relied on an ad hoc civilian network to report Russian positions, and inflicted heavy losses on an attempted assault... [The mayor said] 'All people who can carry a gun are ready to defend ourselves.'"

Some of the “unarmed civilian victims” of Russian atrocities were probably guerrillas. Others were suspected of sending information about Russian positions to Ukrainian forces.

After the withdrawal of Russian forces from central Ukraine at the end of March, a burst of atrocity stories filled the news. 4.06.22 WSJ "Mayor Helped Resist, Then Was Slain... The lifeless body [of the mayor of Motyzhyn, a small town west of Kyiv] was found in a shallow grave, her hands bound. Her husband and son lay next to her, dead... The 50-year-old mayor held together her village, cut off and near the fighting at the front [since February 27]. She delivered food and medicine. And she was a leader of the resistance, part of an undercover effort to send Russian troop positions and movements to her country's military... Residents said Russian aggression against locals surged as the Russians came under attacks from Ukrainian artillery and ambush teams... The head of the village's volunteer defense force moved in with [the mayor's family] after his house was damaged by shelling. He and her husband would head out on scouting missions... and she shared the information with Ukrainian forces via cellphone messages. Ukrainian army scouts visited the house for updates... On March 18, a Ukrainian ambush team sneaked into the village and destroyed a Russian armored vehicle and truck with antitank weapons. The Russians responded with fury. The next day, they launched what they called a clearance operation through the village... Russian soldiers took away [the mayor and her husband], telling [her son] they would bring them back soon. [Her son] called the head of village resistance and warned him to destroy his SIM card to prevent the Russians finding it and identifying him. In the evening, the soldiers returned and took away [the son]."

Most attention was focused on Bucha, in the western suburbs of Kyiv. 4.04.22 AP "Russians Accused of New Atrocities. Reports of Tortured Bodies, Civilian Executions in Kyiv Suburbs Promote Outrage from Ukraine, Western allies. President Considering Stronger Sanctions. America's 'secondary sanctions' would target countries that continue to trade with Russia....

"Bodies with bound hands, close-range gunshot wounds and signs of torture lay scattered in a city on the outskirts of Kyiv after Russian soldiers withdrew from the area... One resident said that Russian troops went building to building and took people out of basements where they were hiding, checking their phones for any evidence of anti-Russian activity before taking them away or shooting them..."

4.10.22 AP  In Bucha "... at the beginning the Russians kept pretty much to themselves, focused on forward progress. When that stalled they went house to house looking for young men, sometimes taking documents and phones. Ukrainian resistance seemed to wear on them. The Russians seemed angrier, more impulsive. Sometimes they seemed drunk... Residents of Bucha, [now] as they venture out of cold homes and basements, offer theories... Some believe the house-to-house targeting younger men was a hunt for those who had fought the Russians in recent years in separatist-held Ukraine and had been given housing in the town. By the end, any shred of discipline broke down. Grenades were tossed into basements, bodies thrown into wells. Women in their 70s were told not to stick their heads out of their homes or they'd be killed.... At first [a 63-year-old woman said], they said they had come for three days. [They stayed a month, leaving on March 31.] Then they got hungry. They got cold. They started to loot. They shot TV screens for no reason. They feared there were spies among the Ukrainians... her nephew was detained after being spotted filming destroyed tanks with his phone. Four days later, he was found in a basement, shot in the ear.... Days later, thinking the Russians were gone, she and her neighbour slipped out to shutter nearby homes and protect them from looting. The Russians caught them and took them to a basement.... Suddenly the soldiers were called away, leaving her and her neighbour shaken but alive."

Another story emerged months later, from a town east of Kyiv: 5.27.22 WSJ On March 19, a 21-year-old farmer, walking to feed his pigs... "caught the eye of a Russian patrol. They asked if he had been giving away their positions to Ukrainian forces... 'Is that why we keep getting hit with artillery?' he remembered one of them asking as they searched him for tattoos that might give him away as a combatant. They scrolled through his phone to see if he had sent photos of Russian troops. ... He and a friend were taken to a nearby cellar, where they were beaten.... As days wore on, more civilians were brought in. A 25-year-old math teacher said she was watching in a nearby village as Russian forces trundled along the main road. Her father said he made an inventory of their equipment, peeping over their garden fence, as his daughter relayed the information to a friend in the military... On March 25, Russian soldiers broke into her family home and searched through her phone. She admitted sending information to Ukrainian forces... She was covered with bruises when she arrived at the boiler room. She upbraided the captors for invading Ukraine. 'She asked why they came here to ruin our peaceful lives. You should have seen the Russians' faces. From then on, until she was led out days later, the Russians left her alone and treated her with respect....

"On March 27, the Russian assault on Kyiv was being hampered by insurgent attacks on supply lines and frustrations were boiling. The Russians took [the math teacher and another] away. Nobody has heard of them since...

"Days later, a Russian soldier appeared to be intoxicated, and said he needed eight bodies... He gave them a shot of vodka and asked [the interviewee] to choose who among the other prisoners would die. He refused and told the soldier he wouldn't be able to live with himself. He volunteered to be next.... The Russian soldier pulled him out of the boiler room and led him to a nearby cemetary and told him to get on his knees. A shot rang out but the bullet went past his ear and hit the ground. The Russian pulled him up, telling him he never wanted him to talk that way again.... The next day the Russian soldier returned at 5.30 a.m. and said they were leaving. They listened for the troops' engines to start and fade into the distance.... Twelve prisoners were left in the boiler room. When they walked to the nearby graveyard, they found 6 of those who had been led away to execution were still alive." 

Yet another retrospective story comes from a small village in northern Ukraine, where a family sheltered in the cellar of a bombed-out house with 5 Russian soldiers. 5.17.22 WSJ "Soldiers seized villagers' phones and lined them up in front of a garage while checking their identification. [A young man] was let go after confirming he wasn't military... When his family opened the door leading down to the place where they used to store beet-root and potatoes, they found five Russian soldiers. The intruders invited them in... Elsewhere, residents said Russian soldiers threatened them and looted their homes. But in the cellar, an uneasy accommodation was reached. The Russians, [whom they guessed] were tank technicians, sometimes brought food and toiletries apparently looted from the homes of Ukrainians. [One of the Russians did all the cooking] -- 'I think they were afraid we would poison them.' The family ate Russian military rations with them, sometimes contributing potatoes and preserves from their stockpile... On March 30, the soldiers appeared downcast. [until now they had assumed they were winning; next day they retreated] The family followed them out of the cellar and saw a column of Russian vehicles preparing to depart... The five Russian soldiers said goodbye and wished the family the best. 'If you had come as guests, I would say goodbye-- but not like this,' the older man said. 'You are my enemies.'"

These detailed accounts show, paradoxically, that not everyone is killed, even in situations of anger, suspicion, and prolonged strain, where all the power is on one side. Or not paradoxically: as shown elsewhere (Violence: A Micro-sociological Theory ch. 3; Explosive Conflict), face-to-face killing is psychologically difficult; the emotions have to be intense and social supports have to be aligned to carry it off.

[b] Snipers and no-go zones in urban sightlines. Loyd's (1999) eye-witness account of the wars in Bosnia explains why some fraction of civilians stay: some are reluctant to abandon their homes and possessions, unwilling to live as refugees; some discover they can survive dangerously, especially if the lines are slow-moving or static. But they have to venture out for water and scavenge for food; often they have to cross no-man's-land, in sight of snipers who have warned them off the streets. And both snipers and civilians are tired, strung-out and careless; snipers often don't shoot, at other times shoot unexpectedly. Taking chances becomes a routine.

4.08.22 WSJ "In early March... Russian troops halted in their advance on Kyiv... Telling local residents they were worried that somebody was reporting their positions to Ukraine's military, Russian soldiers ordered people to stay off the street... But for a 68-year-old superintendent of a home for special-needs children... the only way to get to work was straight into a Russian military no-go zone... A sniper shot him in the road in front of a shrapnel-riddled green gate... By the time the Russians retreated, 17 corpses lay on the street... [A woman] who took charge of a kindergarten where several hundred locals had sought refuge in the basement, went to search for fuel for a generator when... she bumped into two tanks. 'Are you f-ing crazy? There's a sniper here,' she recalled the tank commander warning her. He siphoned fuel from an abandoned car and gave it to her. 'If my grandfather knew I was here, he'd turn in his grave,' she recalled him saying; his grandfather was born in northern Ukraine... Russian troops established a curfew, telling locals to stay indoors after 4 p.m., and placed snipers in the town's tallest buildings. Locals said they smelled alcohol on the breath of Russian soldiers at checkpoints...

[Weeks later] "Russian forces were getting bogged down. Ukrainian army detachments worked secretly in Bucha and other Russian-occupied areas. Special-forces units lobbed grenades at Russian posts, helped guide artillery strikes, and fired small arms from high windows. The Russian soldiers began to scrutinize the local population more fiercely. 'They saw a spotter in every person who lived on the fifth floor' [said a resident]. 'They saw a commando in each of us.' ... On March 10, special Russian units swept through Bucha's residential sectors, destroying doors with fire axes and storming homes, trying to root out the cause for their continuing troubles... Russian troops forced men of fighting age to strip to the skin, and scanned their bodies for military tattoos and the shoulder bruises and trigger-finger calluses that betrayed recent use of weapons... men began disappearing, their dead bodies reappearing on the street days later with their wrists fastened behind their backs.... In the afternoons, as curfew set in, Russian snipers ascended to positions in high-rises triangulated on the intersection of (main streets). 'They told us: you can't cross along the road... At all. You can't go anywhere. If you set foot on the sidewalk or the road, you will be immediately killed.' People desperate to flee still made a break for it along the road [out of town]. The first killing was a woman on a bicycle. 'First I heard a shot, then I saw her' [a resident said]. 'How could a grandmother on a bicycle interfere with anyone?'"

In Mariupol and other cities gradually taken by the Russians over a period of two months, bodies piled up in hastily excavated graves [4.23.22 WSJ]. These were not necessarily mass executions; a lot of people died, some shot by snipers; some killed in the house-by-house artillery war as the remaining Ukrainian army sheltered in deep tunnels under an abandoned steel factory. Some of these soldiers, too, made periodic forays above ground for water and food in a live battle zone. Grisly mass graves would also be the result of Russian forces cleaning up the streets after victory.

We can add a third pathway to civilian atrocities: when they are hit by indiscriminate long-distance shelling and bombing of urban targets. The psychology of such attacks is not the emotions of face-to-face confrontation, but the cold technical attitude of destroying an enemy whom one never sees. The American airmen who dropped atomic bombs on Hiroshima and Nagasaki never mentioned anything except the technical details of performing their mission. It was the same with the British pilots who fire-bombed Dresden; no doubt with the Russian artillerists who destroyed Grozny, capital of Chechnya. It is the same attitude as the US officer in Vietnam who said "in order to save [the town], it was necessary to destroy it." The technology of modern weapons of mass destruction makes no distinction between civilians and military; they are all in the path of high-powered modern war.

If we hope to avoid atrocities, we need to think more clearly about the overall pattern.

[4] Polarized perception and historical amnesia

Public figures and commentators refer to almost everything the enemy does as "barbaric" and "brutal." These words do little to explain it. From an ideal, peaceful standpoint, all fighting is brutal. On calmer reflection, we cannot accurately say that everyone of enemy nationality is a barbarian. If some of them commit atrocities, there is a causality of who, where, and when-- a causality that appears to be universal. As polarization declined after several months of war, news reports began to mention incidents where Ukrainian troops accused Russian-speaking residents of being spies for the Russian army, mirroring accusations in the other direction. 5.01.22 AP "Ukraine Cracks Down on 'Traitors' Helping Russian Troops." 6.03.22 WSJ "Security Officers Hunt Kremlin Backers, Spies."

At the beginning of any war, everything is simplified to innocent good guys and despicable bad guys. This is polarization. We forget everything that our side may have done in the past that isn't wonderful; and remember nothing but the worst about the other side.

The tendency to idealize our allies at the beginning of war leads to overlooking things that later come to light. Since the break-up of the Soviet Union, Ukraine has been one of the most corrupt countries in Europe, both among government officials and in the mafias that sprang up in all the ex-Soviet states in the transition to capitalism. Suddenly, since February 2022, the US and other Western states have shown their support by offering (in lieu of their own troops) billions of dollars to the war effort, with little accounting for what is done with it. This sets up the likelihood of discovering in future years the kind of corruption of military aid that characterized the wars in Vietnam, Iraq and Afghanistan. This aspect of polarized perception is a time-bound process. Fitting the 3-to-6-month pattern of declining enthusiasm, warnings began to appear in the US about blindly throwing money at an ally with a history of corruption (6.14.22 WSJ). Zelensky was elected president in 2019 on a platform to overcome corruption; before the war broke out, he was regarded as unsuccessful. His public leadership in the war-- especially his highly-publicized on-line appearances calling for aid from the rest of the world-- elevated his standing (84% high or medium trust, in Ukrainian-controlled areas); but this did not extend to the rest of the Ukrainian government (62% little or no trust in parliament: 6.30.22 WSJ). Ukrainians think corruption is still there.

As the period of naive enthusiasm wanes, some people around the world will see the Ukraine war in a more realistic light. Some will press for the benefits of peace, over the costs of vengeance. Some will argue that no agreement ever holds; that all aggressors are Hitlers; that no war ever ends in a compromise. That is not the universal lesson of history. To go no further with examples, WWI could have been ended in 1916, when the costly stalemate was recognized and negotiations proposed by all the major participants except France, with Woodrow Wilson offering to mediate; a cabinet coup in England replaced the war-weary Prime Minister with one determined to press the war onwards, resulting in a victory that laid the groundwork for WWII. We need better judgment about whether we are in 1939 or in 1916. And about everything else that gets fogged over in the polarized atmosphere of war.

The next few months of summer/autumn 2022 may be coming up to a switching-point. Either a cease-fire will be established, along with negotiations for a settlement; or the war will be further escalated, by an all-out campaign to retake everything that Russia has conquered since 2014. Costly as the damages of the war have been so far, they will be dwarfed by the costs in lives and livelihoods if the war is allowed to escalate, potentially for years to come, and with global entanglements yet unseen. Above I noted that after hopes for a short decisive war are dashed, a long attrition war can be carried on as long as participants' resources last. If one or another of the participants is a poor country, it is their rich allies who can choose to keep the war going indefinitely. A now-ignored example is Syria, where a multi-sided war has been going on for 11 years, sustained by arms flowing in to all sides, resulting in three-quarters of the population turned into refugees. In Ukraine, to date, about a third of the population have been displaced, either internationally or internally (6.03.22 NYT). This may not even be the worst-case scenario for continued escalation of war in Ukraine.


References

Dates and details on Ukraine war from Associated Press, New York Times, Washington Post, Wall Street Journal, Los Angeles Times

Randall Collins. 2022. Explosive Conflict: Time-Dynamics of Violence. Routledge/Taylor & Francis.

--- 2008. Violence: A Micro-sociological Theory. Princeton Univ. Press.

Anthony King. 2021. Urban Warfare in the Twenty-first Century. Polity Press.

Danilo Mandic. 2021. Gangsters and Other Statesmen. Mafias, Separatists, and Torn States in a Globalized World. Princeton Univ. Press.

Anthony Loyd. 1999. My War Gone By, I Miss It So. Grove Press. [eyewitness account of wars in Bosnia and Chechnya, 1993-95]

James William Gibson. 1986. The Perfect War: Technowar in Vietnam. Atlantic Monthly Press.

David Lane. April 2022. "What Caused Russia to Invade Ukraine?" [includes maps showing the many changes in Ukraine borders; zones of different language-speaking populations; and recent policy to make Ukrainian the exclusive language]
https://www.worldeconomicsassociation.org/files/2022/04/Issue12-1.pdf

Why States Differ On Refugees and Immigration

There are five main processes that states juggle when setting policies on immigration, including economic immigrants, refugees, and asylum-seekers.

[1] Capital accumulation vs. protectionism. Modern capitalism favours the widest possible movement of capital and labour across borders for maximizing profit. In alliance with political forces, however, it can swing towards protectionism. A pro-business party is not necessarily strong enough in its own right; often it allies with conservative and nationalist sentiments in order to get elected. 

[2] National identity rests on popular democracy. The modern state originated in a reaction against dynastic family rule and feudal alliances; we generally refer to this as the rise of democracy, but it also was a move away from internationalism (since dynastic marriages were often across linguistic and cultural borders) and towards nationalism. The replacement of feudalism with a centralized state apparatus moved towards a monopoly of legitimate force upon a bounded territory; and this too built nationalism. Along with internal pacification and policing came border guards, customs, identity checks, and passports. Since the 19th century, states have penetrated their societies with institutions of education, mass media, uniform laws, even sports leagues as well as standardization of language, all driving in the direction of greater homogenization. National identities were built, or intensified, by the territorial state. And modern states (almost all) claim legitimacy based on sovereignty of the people who live there; democracy always has a territorial referent, and democracy reinforces feelings of nationalism.

Nationalism is not necessarily xenophobic, but modern citizens cannot help being aware of distinctions between themselves and outsiders. We can call this populism, perhaps even implying that it is a dangerous form of democracy; but it is nevertheless a result of widespread public participation in politics.

[3] Internal politics in a democracy is concerned, among other things, with bread-and-butter issues of taxation, welfare expenditures (whether provided by government or by insurance), and employment. Immigration always potentially raises questions about how much it will cost, directly if refugees are given special housing and support, and indirectly in competition for jobs. These issues have different intensities depending on whether the economy is growing or stagnating; people living in regions where their own economic experience is trending downwards or upwards have correspondingly different attitudes towards immigrants.

[4] Liberal commitment to altruism and diversity. Many NGOs, and swaths of public opinion, are dedicated to the plight of refugees and immigrants. Other social movements and ground-swells of opinion see them as potential dangers (future terrorists? revolutionaries?) or as eroders of local lifestyle and shared social identity. It is an under-theorized question in sociology why such movements lean one way or the other. Media news stories of refugees and images of individual victims (especially young children) create sympathy. But this is a time-bound phenomenon, often temporary; large flows of refugees can lead to a counter-reaction; and large numbers drown individual suffering in statistics, making international audiences jaded.

Pro-immigrant or anti-immigrant policies (or some mixture in between) are the resultant of these four vectors of political influence, impinging on state policies.
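The "resultant of vectors" metaphor can be made concrete with a toy Python sketch; the signed weights below are hypothetical placeholders for illustration, not estimates drawn from any data:

    # Toy illustration: policy as the signed sum of the four vectors above.
    # All weights are hypothetical, purely for illustration.
    influences = {
        "capital accumulation":      +0.6,  # profit-seeking openness to labour flows
        "national identity":         -0.4,  # homogenizing pull of territorial democracy
        "bread-and-butter politics": -0.3,  # job and welfare competition
        "liberal altruism":          +0.5,  # NGO and public sympathy for refugees
    }
    resultant = sum(influences.values())
    print(f"net vector {resultant:+.1f}:",
          "pro-immigrant" if resultant > 0 else "anti-immigrant")

On these made-up weights the resultant is +0.4, mildly pro-immigrant; shifting any one vector (say, a stagnating economy pushing bread-and-butter politics to -0.8) flips the sign, which is the point of the vector metaphor.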

A rare example of how these four processes interact is Loyal and Quilley's (2018) account of Ireland's refugee policies over the years. During the early years of the Irish Republic (1920s and 30s), the political focus was on Gaelic nation-building, a reaction against centuries of English domination. Cultural nationalism combined with penurious welfare and job policies, resulting in the exclusion of all but a handful of refugees during the Nazi/World War period. Capitalist openness to capital and labour dominated during the Celtic Tiger period of economic boom, when the magic key was American investment in a low-tax country with entry to EU markets. And a sudden proliferation of NGOs in Ireland since the 1990s became part of an unexplained ground-swell of altruistic internationalism.

Altruistic movements tend to take themselves as the default condition, the theoretical baseline against which we study other movements. There are plenty of studies of nationalism and anti-immigrant sentiment; but altruistic/internationalist movements need a sociological explanation too. No such theory is offered here, but the Irish case and its historical context point up an important factor—not indigenous to a given state, but operating from without:

[5]  The social construction of international law. The Universal Declaration of Human Rights (1948) and the Geneva Convention on the right to asylum from evil regimes (1951) were established by international treaty. These are perhaps the strongest force favouring refugees, since none of the state-centered and domestic forces listed above are unequivocally pro-refugee, and they often act in a nativist manner. Arguably the proliferation of NGOs is a movement into the moral and conceptual niche created by international treaties. But how to improve our analysis from a recitation of arbitrary historical facts, to a theory that explains when international agreements are made, and what makes them popular? Signatories to international treaties often fail to live up to them, since implementation is left to national states.

Let me suggest some general processes. The big treaties were the result of international conferences, held among the victorious powers at the end of World War I and II, and in the Cold War period leading to the collapse of the Soviet empire. Ostensibly these treaties were created so that the causes of war and forced population movements could be remedied. The diplomats of the Great Powers tended to frame laws on human rights against the regimes they defeated or were currently opposing: genocidal regimes and ideologies, forced labor, atrocities of ethnic cleansing, stifling of political dissent.

But as new regimes and alliances have appeared, the original intent of international law found new targets: condemnation of atrocities by Nazis and Communists was shifted to critiques of colonial and post-colonial regimes, and declarations of universal human rights could be aimed at segregation or discrimination by race and religion, or yet further by gender or sexual preference. For this reason, world powers like the U.S. have backed away from agreeing to or enforcing international treaties such as those allowing prosecution of soldiers for their behavior abroad. In sum, Great-Power diplomacy is an unreliable basis for laws guaranteeing universal human rights.

This brings us to an unexpected source of moral commitment, the diplomacy of small states. The very fact of being militarily weak, or being outside of the major alliances (the situation of Ireland and the Scandinavian countries), gives an opportunity for international prestige, as a neutral arbiter, taking a fair and altruistic stance above the game of power. Humanitarian activists from the small and unaligned states became prominent in the early years of the United Nations and other international treaty organizations: one thinks of Dag Hammarskjold (an activist UN Secretary General and martyr for international mediation), and Ireland’s Conor Cruise O’Brien, sending blue helmets against insurgents as de-colonization rippled through Africa; more recently, former Irish President Mary Robinson as UN High Commissioner for Human Rights. Max Weber argued that states enter into wars largely in order to bolster their power-prestige in the international arena; even at an economic cost, they want to be seen as major players in the game. History since 1945 suggests a corollary: small states, without military power, can achieve international prestige by staking out their position as leading internationalists.

The question is: what determines the balance of the various forces pro and con refugees in the many states of the world? Theories that assume pro-refugee forces are the arc of history are not necessarily good predictors. Add the causal forces together and we will see.


References

Steven Loyal and Stephen Quilley. 2018. State Power and Asylum Seekers in Ireland. Palgrave Macmillan.

Sinisa Malesevic. 2019. Grounded Nationalisms. Cambridge Univ. Press.

Michael Mann. 2005. The Dark Side of Democracy. Cambridge Univ. Press.

WHO IS INDISPENSABLE? A CRITERION FOR THE FUNDAMENTAL DIMENSIONS OF SOCIETY

If there were no women, there would be no society. (Also if there were no men.) Therefore sex (analytically distinguishable from gender) is one of the indispensable fundamentals of society.


If there were no workers, there would be no society. If there were no capitalists, society would still exist (and in many historical instances has existed).  Therefore workers are an analytical category as fundamental as males/females. (Workers in the analytical sense include both manual and nonmanual workers.)


If there were no homosexuals, society would still exist. Sexual preference is not a fundamental category, but a development of luxury, superstructures, whatever you want to call it. Historically, it is a late-expanding movement.


If there were no white people, society would still exist. If there were no black people, society would still exist. The same is true for any racial or ethnic groups. Therefore race/ethnicity is a derivative category, not fundamental.


We could fiddle with thought experiments of technological utopias in which males were replaced by biological engineering; and also in which females were replaced. But there can be no technological utopia without workers to set it up. Therefore labor is the most fundamental of all social categories.


Until an all-robot society disposes of human labor?

Five Kinds of Friends

The word “friends” has at least five different meanings:

Allies

Backstage intimates

Fun friends

Mutual interests friends

Sociable acquaintances

Whether social media “friends” are one of these five, or a sixth distinctive type, we shall see.

Friends are sociologically important because they are the building blocks of social networks. How should we analyze these ties, considering there are so many different kinds? Which kinds of ties are we talking about when we say that social ties promote physical health and prevent suicide; that they make successful careers and are the key to happiness? Different kinds of ties can have opposite effects.

Allies.  The oldest meaning of “friends” is allies. In ancient Rome, to be a “friend of Rome” meant to be a military ally; and that meant you were required to bring your troops to fight alongside the Romans when demanded. If you didn’t, you were likely to be exterminated; Romans enlisted defeated enemies as their “friends” -- but punished them severely if they backed out. The legal term survives in amicus curiae, “friend of the court”, an outside party whose lawyers argue alongside one of the parties of a case. It was also a political term; “friends of Caesar” (or of Antony, etc.) were their political partisans. In the Clinton administration, FOB (“friends of Bill”) was a code word for privileged insiders (usually big campaign donors) who had Bill Clinton’s phone number.

Today in private life a lot of what gets called “friends” are people who help each other out: lending you money when you need it; recommending you for school admission or a job; taking your side in office politics. Sometimes these relationships are reciprocal-- a chain of gift exchanges among equals-- but often they are lop-sided, a patron and a protégé. The mentor/sponsor gets paid back by having disciples and followers, or the prestige that goes with their success. In the short run, he or she is paid back with deference, or at least attention. Parents usually launch their children’s careers and underwrite their expenses; but if that kind of alliance is what their relationship is about, deference is likely to be perfunctory and short-lived, increasingly so as children grow up. Modern Americans say they love their children, but in practice this often means they are in a one-sided alliance relationship.

Backstage intimates.  Network researchers have defined a network tie or friend as “someone you discuss important matters with.” These are supposed to be crucial personal decisions-- whether to risk an operation, whether to quit your job, whether to get married or divorced. More broadly, backstage implies privacy and secrecy; things are said that you don’t want to get out: discussing people you don’t like; girls discussing boys they have a crush on. Lovers and spouses ideally share such intimacy, bedroom talk being less about sex (which can be wordless or monosyllabic) than about events of the day when you had to keep your feelings to yourself.*

*Sir Francis Bacon, in his essay “Of Friendship,” says everyone needs someone they can unburden their heart to; and that true friends are those who loyally carry out your wishes. Thus in the Elizabethan world of 1590 he sees friends as backstage intimates and permanent allies.

Goffman viewed everyday life as alternating between play-acting frontstage roles and preparing on backstages. People who share a similar backstage are likely to be the most intimate kinds of friends. This is why celebrities-- pop stars, movie idols-- tend to marry each other (or at least their agents), because dealing with fans means dealing with persons in a state of gushy excitement, and only other insiders can be fully at ease together.

Fun friends. A shortcoming of network research is that it ignores the biggest category: the friends you like to spend your time with. Among children this is the main meaning of “friends”. At my granddaughter’s day care center, the kids had posted up their answers to “what is a friend?” Most of the answers said something like “a friend is someone you play with; a friend shares their toys with you.” A friend is someone who invites you to their birthday party (which, unlike most adult parties, is an occasion for having fun). Among teenagers and young adults, the term is “hang out with” (i.e. enjoying yourselves doing nothing serious). What is most fun are adventures, pulling pranks, getting intoxicated, carousing; as we can see because these are the stories they like to tell each other when hanging out. Sociologist Tony King observed that soccer hooligans recycle tales of their fights as the staple conversation of their drinking bouts, in what he calls “narrative gratification.” These are stories told with exaggeration and laughter, fun recapitulating fun.

Observing the leisure gatherings of adults, we generally find the successful, career-obsessed upper-middle class has little fun in this sense; their “friends” are of a different kind and their parties are mostly shop-talk. Working-class and lower-middle class people tend to be very fun-oriented when they are young, but age out of it more quickly, into passive TV watching and its surrogates. But within each social class, there is usually a division between the “fast crowd” and the boringly conventional. The first chapter of Tolstoy’s War and Peace follows his main characters first at a polite soirée where ladies and gentlemen discuss political events and gossip about appointments; then to a drunken party of elite Guards officers where they dare each other to chug-a-lug a bottle while swaying on a high windowsill, and threaten a policeman with a pet bear. This kind of division between the cool/fast/hip crowd and the nerdy/square/straight is subjectively more important than vertical class divisions for a substantial portion of the population between their teens and the onset of middle age.* Whether or not status ranking by carousing has recently changed to greater popularity of the “geeks”-- a trend not yet carefully measured-- it calls out for sociological explanation.

* David Grazian, On the Make, a multi-perspective ethnography of urban night life, concludes that most young middle-class persons today have a split personality, adopting their “nocturnal self” when they go out with their fun friends (among males, their “wing-man”).

Mutual-interests friends. These are persons who like to be with each other because they share a common interest: playing chess, or bridge, or poker; repairing old cars; comparing wines; cooking and talking about it. In Hitchcock’s Shadow of a Doubt, two of the side-characters who distract from the impending murder are fans of detective stories, spending their time at dinner telling each other how they would go about murdering each other. All sorts of shared interests can be the cultural capital for this kind of friendship.

We might include mutual-interest friends as a sub-type of fun friends; except that the former are usually considered rather square. Fun friends are noisy, carousing extroverts; mutual-interest friends are generally rather quiet and sedentary. Their interests rarely reach a peak of shared laughter or the shriek of excited children. Sociologically, we lack surveys of what proportion of the population are in the fun-friends sector and what proportion in the shared-interests zone. The latter may well be a bigger share, just less visible-- fun friends attract the most attention, like the zany fans in wild outfits stripping themselves in freezing weather who attract the gaze of TV cameras at football games.

Sociable acquaintances. Among modern Americans, the biggest stretch in using the word “friends” refers to people who invite each other to dinner or parties at their homes. Friends here means people who encounter each other in leisure, outside of work or public life. They invite each other to weddings. They meet for lunch; “we should get together and talk” implies something will be said that is to some degree exclusive. There is a continuum running from people you meet at a reception or big party venue; those you have a drink with, or a coffee, tête-à-tête; and those whom you invite into your home. The last echoes a medieval definition of marriage, commensality and connubium, sitting at table together and bedding together. It is a continuum of degrees of intimacy.

But not the intimacy of backstage confidantes, as one can tell from what sociable acquaintances talk about. They gossip about mutual acquaintances; they gossip about themselves, making little conversational melodramas, or attempts at humor, out of the ordinary events of their lives, or just filling the time with whatever they both can talk about, a shared cultural capital of the lowest denomination. This personal quality of their shared attention marks the occasion as leisure, rather than work, off-duty rather than on. Persons who violate this boundary line can get away with it if they are sufficiently important in the public world; but they sacrifice being considered sociable friends, since they are not sociable persons. As fraternity boys say about those they would never rush: they lack social skills.

Sociable acquaintances need not be allies; they are not intimates (at least during “social occasions”); they are not fun friends, nor even shared-interests friends in the sense of people with the same hobby. They are being sociable for the sake of being sociable, avoiding whatever keeps them from being so. They perform on the archetypal Goffmanian frontstage that he documented from old etiquette books. Etiquettes change, and the generations before 1960 were much more explicitly conscious of the show they were trying to keep up. There still exists a category of people we regard as friends because we take part together in the rituals of social acquaintance.

The number of such acquaintances one has varies by social class. Upper-class persons have the most social commitments, and know the most people in this superficial sense. (Rockefeller used to employ a full-time secretary just to keep track of sending out Christmas cards; John F. Kennedy had an aide file the names of spouses and children of persons he was scheduled to meet so he could mention them.) Upper-middle class professionals and managers are heavily networked among business associates; while traveling and conferencing they are expected to take a break and socialize, turning allies (or customers or rivals) temporarily into sociable acquaintances. These formally-based networks narrow toward the lower-middle and working classes, whose sociable acquaintances (and gatherings) derive less from work and more from relatives and neighbours (which would include such groups as neighbourhood gangs); most religion-based networks are found here.

Thus the balance among the five kinds of friends varies among social classes, and probably other dimensions (such as gender, sexual preference, and race). I will explore the consequences of this shortly. For now, we need to answer the question:

What kind of friend is a social media “friend”?

A first shot at an answer comes from looking at numbers. Respondents in traditional in-person network research on “friends” usually name only a few persons they “discuss important matters with.” If we go more broadly for acquaintances, sociologists found a few dozen or fewer for the working class, up through hundreds for the professional classes. When I was active in the ASA, I’d check the index of people on the annual meeting program, finding I knew a hundred or so. Facebook friends are a different order of magnitude: most young people have hundreds; many have thousands.*

* It is reported that some persons don’t want anyone who lists only 50 or so friends to be part of their neighbourhood babysitting group, since such “isolates” might be psychopaths.

My inference is that social media friends are not backstage intimates, fun friends, or allies. They might be mutual interest friends, although given the prestige of having hundreds or thousands of friends, on-line friend-seekers resort to listing everyone they can from high school yearbooks and other remote connections, suggesting that they don’t even need mutual interests to count in the total. The nearest conventional category is sociable acquaintances, except that the numbers on-line ramp everyone up to the level of the most active social butterflies or politicians of the upper classes. If true, this is quite a revolution in social prestige. 

I will revisit the question of whether sociable acquaintances are a good model for the sociology of on-line friends, after we look more closely at the micro-sociology of interaction.

Multiplex friends

Are overlapping friendship categories a good thing?  Spouses and domestic partners are lucky if they are simultaneously backstage intimates, fun friends, and mutual-interests friends. Single-dimension couples are more likely to break up.

This also applies to political allies. I once observed a bitter struggle in a sociology department over hiring a new professor. By chance, several months before the issue arose, I had done a network analysis of the faculty, charting who taught courses together, collaborated in research, ate lunch together, or invited each other into their homes. The network held together by multiple ties won the fight; those who had none of these ties lost the vote (and some angrily resigned). It even predicted the fence-sitters in the debate-- these had some friendship ties with the dominant coalition but not multiplex ties.
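The method is simple enough to sketch. Below is a minimal illustration of counting multiplex ties across layers of relations; the names and ties are hypothetical stand-ins, not data from the department described above.

```python
# A minimal sketch of a multiplex-tie count. Names and ties are hypothetical.
from collections import Counter

# One list of pairs per kind of tie observed.
tie_layers = {
    "teaches_with": [("Ann", "Ben"), ("Ann", "Cleo"), ("Ben", "Cleo")],
    "coauthors":    [("Ann", "Ben"), ("Ben", "Cleo")],
    "lunches_with": [("Ann", "Ben"), ("Dan", "Eve")],
    "home_visits":  [("Ann", "Ben")],
}

# Multiplexity of a pair = the number of layers in which it appears.
multiplexity = Counter()
for edges in tie_layers.values():
    for u, v in edges:
        multiplexity[tuple(sorted((u, v)))] += 1

# Pairs tied in several layers form the predicted winning coalition;
# single-layer pairs are the fence-sitters; untied persons are the losers.
for pair, k in multiplexity.most_common():
    print(pair, "tied in", k, "layers")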

Each person can have an array of different kinds of friends; but these separate networks can pull them apart: (I like you but I don’t like your friends. Bored with your work allies and their shop talk. Don’t hang around with that bunch of rowdy drunks after work.) Distinct friendship networks held up in the old-fashioned arrangement where male and female networks rarely met. David Halle in America’s Working Man found that men drank and watched football with their buddies, while their wives occasionally dragged them to weddings and church services. In the couples-centered social world of the middle and upper classes, fitting two whole friendship arrays together is more of a strain. This may be why they put off marrying longer; and it keeps these class networks at the superficial level of sociable acquaintances.

Are friends of my friends my friends?  [FOMFMF]

Not as much as balance theory would expect (a formal sketch of balance theory follows below). FOMFMF applies most clearly to allies. But even in international diplomacy, countries can be opportunistic. In personal life and office politics, it may hold up for a while. But retirements and new hires change the mix, and create a drift to the new winning coalition. And Young Turks, after a successful take-over, become rivals.

Does FOMFMF apply to backstage intimates? No.

Fun friends? Maybe. But even by themselves, fun friends tend to be ephemeral. Summer vacation friends.

Mutual interest friends? Could be. Not much jealousy and possessiveness among poker players or sports fans; FOMFMF is a way to expand a hobby network. But such friends are pretty much interchangeable, so the network might not expand but just shift around.

Sociable acquaintances? Yes, probably. Especially when people are actively “networking”, deliberately trying to expand their networks. Since these are superficial ties, it is easy to add them; although time pressures may make it hard to keep up with all of them.

Are enemies of my friends my enemies? [EOMFME]

This applies mainly to the world of allies. But in war, politics and business, opportunism pays off, and side-switching is not uncommon (this is the essence of the bandwagon effect).

In personal life, friends of friends often resist being drawn into others’ quarrels. [Martin 2009] Those who insist on EOMFME can wreck their own friendships. I knew a man who had a bitter quarrel with his son; a few years later he refused to attend his daughter’s wedding if his son were there; and this led to a permanent split with his daughter. When people say “It’s a matter of principle!” they are usually doing something self-destructive.

Conversely, friends of friends can result in new ties after a breakup. A substantial portion of people marry the friend, roommate, or sibling of their old boyfriend/girlfriend. Laumann [1994] asked in a survey “how did you meet your last sex partner?” Many said it was a friend of their previous sex partner. Two-step network ties are intrinsically neither positive nor negative; but they are easy opportunities for creating new ties.
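The balance theory behind both FOMFMF and EOMFME can be stated compactly: treat ties as a signed graph, and call a triad balanced when the product of its three signs is positive (Heider’s rule, as formalized by Cartwright and Harary). A minimal sketch, with hypothetical ties:

```python
# Structural balance check: a triad is balanced iff the product of its
# three signs is positive. All ties below are hypothetical.
from itertools import combinations

# +1 = friends, -1 = enemies
signs = {
    ("A", "B"): +1, ("B", "C"): +1, ("A", "C"): +1,  # all friends: balanced
    ("B", "D"): -1,  # B's enemy D
    ("A", "D"): -1,  # A sides with friend B against D: balanced
    ("C", "D"): +1,  # C befriends D anyway: two unbalanced triads result
}

def sign(u, v):
    """Look up a tie's sign regardless of the order of the pair."""
    return signs.get((u, v)) or signs.get((v, u))

people = {p for pair in signs for p in pair}
for trio in combinations(sorted(people), 3):
    s = [sign(u, v) for u, v in combinations(trio, 2)]
    if None in s:
        continue  # incomplete triad: no prediction
    print(trio, "balanced" if s[0] * s[1] * s[2] > 0 else "unbalanced")
```

The empirical point of the preceding sections is that real friendship networks satisfy this rule only in some of the five categories, most reliably among allies and sociable acquaintances.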

What is love?  Sex plus successful IRs

Love is a combination of two things. One is sex. Micro-sociologically, sex is an interaction ritual (IR) focused on bodies. The ingredients are the same as other kinds of IRs: sharing the same feeling or emotion -- in this case lust; a mutual focus of attention -- each other’s body, with reciprocal awareness, drawing the world down to a here-and-now inhabited by two bodies, and excluding all else. Like all successful IRs, sufficient ingredients intensify the turn-on into rhythmic coordination (otherwise found in fine-tuned flow of gestures or conversation), here in the rhythm of making love. This too is collective effervescence: excitement whose archetype Durkheim found in pagan religious rituals; here two bodies pulsing together.

Of course, not all sex is so intense, or reciprocal. Some sex is one-sided; but that is the formula for one-sided love.

Besides sex, the other component of love is the feeling that you “click,” an easy attraction to each other in all sorts of ways. This means successful IRs in other dimensions: smoothly flowing intimate conversation; having fun together; doing and talking about things of common interest.* A love relationship checks all the boxes: backstage intimacy, fun, mutual interests-- except sociable acquaintance, which is superficial and public, precisely what love is not. If they go on to become a couple, they necessarily become allies too; both because successful IRs create solidarity (as Durkheim said about religious rituals), and because living together creates an economic element, a shared household, and under modern marriage laws, shared property. The allies dimension is iffy, though, because disputes about money are a major source of couples’ acrimony; and low-level annoyances of living together are mainly about practical matters like heating the bedroom and picking up one’s clothes. [Emerson 2015]

*McFarland’s research [2013] on speed-dating found that couples who clicked talked less in questions-and-answers (i.e. seeking information about each other’s demographics and life story), instead finding something they liked to talk about.

Love is a continuum, depending on the strength of each of its ingredients: highest when sexual rhythms are strongly attuned; plus the degree to which all the other kinds of friendship IRs are successful. 

The history of love; and the history of friendship

Love can be based on sex alone.  After all, that is the origin of the word-- eros, amor -- in ancient and pre-modern times.* Love was recognized in every society; I can’t think of one that doesn’t have love songs or love myths. But until very recently, love was distinct from marriage. Especially so where marriage and kinship were the building blocks of society. Both tribal and feudal/aristocratic families were built on arranged marriages; formally controlled sexual relationships were at the center of alliance politics and the transfer of status and property by inheritance. Harems and mistresses could exist alongside.

* Cupid comes from the Latin cupiditas-- lust.

In modern societies, marriage and love tend to come together, at least in ideology, and at least temporarily in reality (the few weeks before a wedding). Individuals became free to choose their own partners (we see this by 1800 in the novels of Jane Austen and George Sand). At first parents exercised influence and veto power, but parents were pretty much out of the picture by the time of the dating and partying scenes of Scott Fitzgerald’s jazz age in the 1920s.

Love only became free in the modern sense because of an historic change in social structure: the state became separate from family and household, shifting from aristocratic kin alliances to bureaucracy and democracy; and this left the field of love open for individuals' erotics and friendship. The early 1800s were called the Romantic era, among other reasons because it was the time of historical shifts in personal freedom and in sex and love.

The history of friendship shifts at the same time and for the same underlying reasons. In ancient times, “friends” meant political allies, but gradually came to mean personal relationships.

Backstage friends hardly existed when there was no privacy. People lived in small villages and crowded dwellings; large castles and palaces were full of servants and retainers, with no privacy even in the royal bed-chamber. Courtiers vied for the right to hold the royal chamber-pot. We see the change in the history of architecture; corridors and hallways only started to become common in the 1700s and 1800s; before then one room just led into another, and rooms were full of people. Privacy became an expectable right only in the wealthy societies of the 20th century. Without privacy, it is hard to have a backstage.

“Fun” friends did exist, but the term did not. Cleopatra kept Antony enthralled by playing with him, such as by roaming the nights in disguise and disturbing the homes of the ordinary people. Alexander the Great was famous for wild drinking parties with his buddies. [Collins, Charisma.] There were words for carousing and jesting. But foolishness was reserved for fools; and fooling around would become valued only in the 20th century. In Shakespeare, “clown” meant a peasant; it came to mean a circus role in the 1800s; clowning around became casual entertainment only recently. In the 1600s “fun” meant to trick or hoax,* from an older word meaning to be a fool or make a fool of someone. In the late 1800s and during WWI soldiers would speak ironically of combat as “in on the fun” or “the circus.” Not until the mid-20th century does “fun” become a valued form of leisure.

*In the rural South, “you’re just funning me” still had the old meaning.

In Latin, the word for “happy” was felix, meaning lucky, fortunate, successful. The English word derives from the same root as “happen” and “happenstance” -- happening by chance or good fortune. It added the meaning, a pleasurable or contented state of mind, in the 1700s; the term “happy family” appears in the 1860s. World War II coined expressions such as flak-happy, trigger-happy, and slap-happy, meaning dazed or light-headed. “Happy hour” in bars dates from 1962. [word derivations from OED.]

“Buddy” meant a working companion, originally in a mine. In the 20th century, it was extended to the “buddy system” in the army; and eventually to friends who hang out together. “Pal” has a similar history, originally meaning “brother” or “mate.”

Mutual-interests friends are probably the category that expanded the most over the centuries. There were circles of poets among Chinese gentry in the medieval dynasties; and we see European paintings of amateur musicians playing their lutes in the 1600s; gatherings to listen to someone play the piano become common in the 1800s. But "hobby" or "hobbyist" meant a silly obsession until it acquired its current meaning around 1950.*

* A “hobby” was originally a small horse; in 1818, a toy horse with wheels for children to ride. The term was soon used to mock someone as a crank. Around 1900 it began to be extended to pastimes like stamp collecting. 

Sociable acquaintances as a category of friends existed in some form, but were not very prestigious. In Rome, “parasites” were hangers-on at a rich man’s house, hoping for a place at the lower end of the dinner table in this very status-stratified society. A similar but more exalted pattern developed in the rank-conscious court at Versailles and its imitators in the 1600s. [Elias, The Court Society] The notion that guests and host are friends of equal status dates from the 20th century.

Overall, the amount of time spent with friends of any kind other than allies has greatly increased. The era of the Internet amplifies this even more. The generation born after 1998 spends an average of nine hours a day on their smart phones, an unprecedented amount of time with “friends,” however they are defined.

What lies ahead? The meaning of friendship has shifted enormously over the centuries. Some of the biggest changes happened quite recently, as we see in the history of the words “fun,” “happy,” “buddy,” “pal,” and “hobbyist,” and the 21st century category of Internet “friend.” There is no reason to expect that such changes are going to stop now.

What difference do friendship networks make?

Now for the question about the effects of friendship on health, happiness, and career. Which kinds of networks are good for your health? Isolation, especially when old, is said to shorten your life. One possibility is that all kinds of friends keep you alive, even superficial acquaintances, a warm-bodies effect. A more refined hypothesis is that positive health effects come only from successful IRs. We have little evidence broken down in this way, but here are some likely inferences:

Allies do not have to be warm and personal; interactions can be manipulative, artificial, or subservient. This doesn’t sound like much of a support group. Alliances can be turbulent and breakable. Such breaks can be traumatic or disappointing. Is this a blow to your health?

Backstage intimates: On a personal or family level, intimates can be quarrelsome or domineering, the opposite of supportive; indeed, a formula for suicide. As IRs, they are not only unsuccessful, they are negative.*

* Christakis’s research found that persons whose friends are obese also tend to be obese. The mechanism may be that obese persons are mutual interest friends whose hobby is eating. Another possibility is they are friends because they have similar backstages, relegated to failure in the associative market for attractive friends and lovers.

Laumann found that men are particularly unlikely to tell their friends if they have a serious health problem. This can be interpreted as a lack of backstage confidantes. A possible reason is that, in many professions, to announce you are very ill is to rule yourself out of ongoing career competition. Being an object of sympathy also signals that your job is an upcoming vacancy. This is a negative trade-off between alliance friends and backstage intimates.

Mutual-interest friends are less contentious relationships. They are rarely traumatic, but are they supportive? The healthiness of these kinds of friendships remains to be tested.

Fun friends would seem to be particularly good for health and happiness. Shared laughter is supposed to be the best medicine. Contagious laughter is a bodily experience, intensely shared rhythms of a successful interaction ritual. But distinguish between spontaneous laughter and forced laughter. As one can observe, working-class men and young men commonly punctuate their conversations with laughs, and so do women when they are gushing together with praise about something. The indicator of the success of social laughter as an IR is whether it is contagious to the listener (just watch the listener’s face), or whether it is merely the speaker’s way of talking.

Even the uproarious fun of carousing together can sometimes turn negative. Laughter can be cruel, if the fun is bullying a helpless target. [Weenink 2014]  At fraternity parties, it is the isolated girl who keeps on drinking to the end who gets gang-raped; female friends get their drunk friends away, seeing beyond the moment’s fun. [Sanday, Fraternity Gang Rape]

Network effects on career success have been better studied, and we have more information on what kinds of friends are involved. Granovetter initiated a stream of research on “the strength of weak ties”, showing that casual acquaintances are better than close friends in providing information about job openings. But acquaintance ties are valuable mainly for time-bound information, where moving faster than competitors gives the advantage. Villette [2009] found that the big fortunes are made by persons who cultivate long-term ties with competitors and suppliers in their line of business. They keep close tabs on their innovations (like Steve Jobs getting the screen-mapping technique from Xerox before they knew what to do with it; and his hanger-on Bill Gates taking it away to Microsoft). Villette found that business empires were built by making loans to rivals in trouble, then taking over their business, usually by going through hardball lawsuits. Villette characterizes most ties that build fortunes as predatory.

Long-term ties from working closely together are characteristic of success in fields where winning the public over to a new style depends on creating networks of followers dedicated to spreading the style. Among modernist architects at the turn of the 20th century, Guillen and Collins [2019] found that very strong ties-- years spent as collaborator or apprentice-- produced the networks that spread success. In the intellectual world, the innovators of one generation are often pupils of the famed innovators of the previous generation; ultimate success also involves a degree of betrayal, since the younger have to break away to acquire their own reputation; what they learn is the techniques of innovation, and a key is to know the competitive field thoroughly so that one can find innovative niches with intuitive feel. [Collins 1998] These kinds of intergenerational networks are also found among famous artists and music composers.

Which of the five types of friends are these? Above all they are alliance ties, within a particular field of expertise. They don’t have to be personal friends, intimates, or fun friends; one could say they are intensely mutual-interests friends, where the obsessive interest is shop talk.

Empirical indicators

Doing research on kinds of friends is not difficult; it is only a matter of observing, or asking the right questions.

Allies:  talking about money; asking for loans; asking for letters of reference, endorsements, asking to contact further network friends for jobs or investments. In specialized fields like scientific research, talking about what journals or editors to approach, what topics are hot, giving helpful advice on drafts. In art and music: gossiping about who’s doing what, contacts with agents, galleries, venues.

Backstage intimates:  Speaking in privacy; taking care not to be overheard. Don’t tell anybody about this.

Fun friends:  Shared laughter, especially spontaneous and contagious. Facial and body indicators of genuine amusement, not forced smiles or remarks like “that’s funny” instead of laughing. Very strong body alignment, such as fans closely watching the same event and exploding in synch into cheers or curses. 

Mutual-interests friends: talking at great length about a single topic. Being unable to tear oneself away from an activity, or from conversations about it.

Sociable acquaintances:  General lack of all of the above, in situations where people expect to talk with each other about something besides practical matters (excuse me, can I get by?) Banal commonplace topics, the small change of social currency: the weather; where are you from; what do you do; foreign travels; do you know so-and-so? Answers to “how are you doing?” which avoid giving away information about one’s problems or matters of serious concern. Talking about politics can be conversational filler (when everyone assumes they’re in the same political faction), as often happens at the end of dinner parties when all other topics have been exhausted.

Two ways to collect this information:

(1) Ask people if they know someone with whom they do any of the above.

(2) Ask them to list people they know; then ask them to check the boxes for each person.

Try it yourself by making a checklist of your friends. Observe these indicators when you see them in interaction.

Finally, to give a more empirical basis for the question of what kind of friends are network friends: use the checklist to see how they interact on-line. 
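For anyone who wants to formalize the checklist, a minimal sketch follows. The names and box-checks are hypothetical, and, following the indicators above, a person with no boxes checked defaults to sociable acquaintance.

```python
# A minimal sketch of the checklist method; all names and box-checks are
# hypothetical illustrations.
INDICATORS = {
    "ally":               "money talk, references, job contacts",
    "backstage_intimate": "private talk; 'don't tell anybody'",
    "fun_friend":         "spontaneous, contagious shared laughter",
    "mutual_interests":   "endless talk about one shared topic",
}

# Method (2) above: list the people you know, then check the boxes.
checklist = {
    "J.S.": {"ally", "mutual_interests"},
    "R.T.": {"fun_friend"},
    "M.K.": set(),  # no boxes checked
}

for person, boxes in checklist.items():
    # General lack of all the other indicators marks the sociable acquaintance.
    kinds = sorted(boxes) if boxes else ["sociable_acquaintance"]
    note = "(multiplex tie)" if len(boxes) > 1 else ""
    print(person, kinds, note)
```

The same checklist, applied to transcripts of on-line interaction, would give an empirical answer to the question of which of the five kinds Internet “friends” really are.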


References

Nicholas Christakis and James Fowler. 2009. Connected: The Surprising Power of Social Networks and How They Shape Our Lives.

Randall Collins. 1998. The Sociology of Philosophies.

Randall Collins. 2020. Charisma: Micro-sociology of Power and Influence.

Norbert Elias. 1983. The Court Society.

Robert Emerson. 2015. Everyday Troubles: The Micro-politics of Interpersonal Conflict.

David Grazian. 2008. On the Make: The Hustle of Urban Nightlife.

Mauro Guillen and Randall Collins. 2019. "Movement-based influence: resource mobilization, intense interaction, and the rise of modernist architecture." Sociological Forum.

David Halle. 1984. America's Working Man.

Edward O. Laumann et al. 1994. The Social Organization of Sexuality.

John Levi Martin. 2009. Social Structures.

Daniel McFarland et al. 2013. “Making the connection: Social Bonding in Courtship Situations.” Amer. J. Sociology.

OED = The Complete Oxford English Dictionary. 1991.

Peggy Reeves Sanday. 2007.  Fraternity Gang Rape.

Michel Villette and Catherine Vuillermot. 2009. From Predators to Icons. Exposing the Myth of the Business Hero.

Don Weenink. 2014. "Frenzied attacks: emotional dynamics of extreme youth violence." Brit. J. Sociol.

COLLAPSE OF THE TUTORIAL-PROXY STATE: AFGHANISTAN 2021, PHILIPPINES 1942, VIETNAM 1975 AND OTHERS

The fall of Afghanistan within a week was met with surprise and recrimination. Yet this kind of collapse is not unprecedented. It fits the well-known pattern of a tipping point; in the sociology of crowd behavior, sometimes referred to as the theory of the critical mass.

A tipping point is especially volatile in a situation where it is risky to take part, such as a violent conflict. But when a conflict grows, it can reach the point where it becomes risky not to take part—when the danger is being on the losing side and subject to the vengeance of the winners. Tipping points are characteristic of revolutions, where the entire state breaks down suddenly, in a few exciting days when everyone’s attention is focused on the outcome. Tipping points can also occur in military battles, when one side becomes demoralized and disorganized, giving up the fight in a contagious collapse, simultaneously encouraging the intact army to launch a ferocious attack upon a fleeing enemy. Historically decisive battles have hinged on tipping points (Alexander the Great, Agincourt, etc.) compressed into a few hours. The same mechanism is seen in longer collapses of an entire army, spread over a period of weeks (the German blitzkrieg in May 1940 culminating in the Dunkirk evacuation); or months (the Japanese conquest of Malaya and Singapore during December 1941-February 1942). In both cases, the retreating army was unable to regroup in a position to stop the high-speed attack, resulting in a pervasive feeling of defeat.
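The threshold logic behind tipping points can be made concrete. Below is a minimal sketch in the spirit of Granovetter’s threshold model of collective behavior—an illustration added here, not drawn from the cases that follow, and the numbers are hypothetical. Each actor gives up the fight once the fraction who have already given up reaches his personal threshold.

```python
# A minimal sketch of a threshold cascade (in the spirit of Granovetter's
# threshold model). All thresholds below are hypothetical illustrations.

def cascade(thresholds):
    """Given sorted thresholds, return how many actors end up defecting."""
    n, defected = len(thresholds), 0
    while defected < n and thresholds[defected] <= defected / n:
        defected += 1  # one more actor's threshold is met; the cascade grows
    return defected

N = 100
# Regime 1: an even spread of resolve -- each defection triggers the next,
# and the whole force unravels.
uniform = [i / N for i in range(N)]
# Regime 2: the same force except that everyone but one instigator holds
# firm until half have quit -- the cascade stalls immediately.
solid = [0.0] + [0.5] * (N - 1)

print(cascade(uniform), "of", N, "defect")  # -> 100: total collapse
print(cascade(solid), "of", N, "defect")    # -> 1: the line holds
```

The point of the two regimes is Granovetter’s: nearly identical distributions of individual resolve can produce either total collapse or a line that holds, which is why tipping points take observers by surprise.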

Blitzkrieg is really about a feeling of emotional momentum, of one side having it and the other sinking further and further into paralysis: French commanders and government leaders in 1940 emotionally beaten to the point of being unable to conceive of continuing resistance; British forces in 1942 repeatedly outflanked by Japanese air superiority and landings behind their front as they retreated down the Malay Peninsula. In both cases huge forces surrendered, with relatively small casualties. They were not beaten materially, but emotionally.

Not all military campaigns end in tipping points, and some revolutions are fought out in long wars of attrition. What causes tipping points to arise? And what makes some of them tip faster or slower than others? These questions call for extensive comparison of historical cases. Here I will take up a more limited set of comparisons, all involving the United States:  Afghanistan 2021; Philippines 1942; Korea 1950; Vietnam 1975; Iraq 2014. In fact, all US military failures in living memory fit the pattern.

One feature they all have in common is that they are semi-proxy wars. In each case, local troops are trained and supplied by a culturally distant state from overseas. Traditionally this would have been called colonialism.  The Philippines was in fact an American colony, taken over from Spain in the Spanish-American War of 1898. But as a liberal democracy, the US has not regarded itself as a colonial power; it has called its overseas territories temporary arrangements, protectorates, tutorial periods during which we teach undemocratic cultures to become modern and eventually self-governing. We could call these tutorial-proxy states.

Some observers have called this a colonial empire in fact if not in name. (In 2003, Michael Mann called the American pattern Incoherent Empire.) Does it make any difference what we call it? In tipping point collapses, the mechanism is the same. In other respects, ideologically and economically, there are important differences. A colony tends to be economically much more valuable; its economy can be monopolized by the imperial power; natural resources and labor can be captured; jobs and careers are provided for colonists. (On this last point, there may not be much difference between a traditional colony and a tutorial-proxy state, since the latter is full of overseas NGOs and contractors.) The bottom line is that a tutorial-proxy state tends to be expensive for its patron to maintain; whereas a colony is profitable, or at least meant to be profitable. Thus the motives for having tutorial-proxy states abroad must be chiefly ideological and political: they feed national pride, a sense of doing good, and what Weber called the power-prestige of the state. For these reasons, too, the political will to keep or abandon a tutorial-proxy state is unlikely to be steadfast.

To come to the central point: tutorial-proxy states are prone to tipping point collapses, above all because of the tendency for native proxy troops to suddenly fold when their big-brother tutors are not sufficiently in control.

There are 5 main patterns in US tipping-point collapses from 1942 through 2021:

 [1] Large armies beaten by smaller or low-tech forces.

[2] Sudden collapse of native/proxy forces.

[3] Unrealistic expectations, setting up surprise.

[4] Emphasis on offense over defense, retreat unimaginable.

[5] Weak political commitment as crisis develops.


Afghanistan 2021.  On paper, Afghan forces were 350,000, against an estimated 60,000 Taliban. Many of the Afghan troops were considered of low quality or reliability; only 96,000 counted as effective. Much of the fighting was carried out by well-trained Special Forces, operating US-style in helicopter assaults against Taliban advances, with the US providing high-tech surveillance, and air support from fighter-bombers and drones. The Taliban were armed with automatic rifles, hand-held rocket launchers and rocket-propelled grenades, plus IEDs and suicide bombers; transport was by pickup trucks and motorcycles. A vast disparity in firepower; but US/proxy troops had far greater logistics problems in fuel and ammunition, as well as basic supplies of food and water, especially to remote outposts. As the Taliban came to control most of the countryside, government supplies and reinforcements had to move chiefly by air; but helicopters were often grounded for maintenance, while the number of bases where they could be serviced shrank to Kabul alone.

US forces had operated only in a support capacity since 2014. By spring 2021, US troops in country were down to 2500. More importantly, there were 18,000 military contractors, of which 6000 were American—the rest also funded by the US. When total withdrawal was announced on April 14, these maintenance personnel were among the first to be withdrawn, as US bases were closed and equipment flown out or destroyed. Another 7,000 allied troops (chiefly from NATO countries) had no choice but to cease operations as well, since all the bases were US-run. The result was that the high-tech core of the Afghan military was essentially crippled. Huge advantages in numbers and weaponry were negated by shutting down logistics.

Sudden collapse came almost entirely without fighting. On April 13, the Taliban controlled perhaps 10% of Afghanistan, the government over 20%, with the rest contested. By mid-June, the Taliban had seized about 50 of 400 districts, at an accelerating pace—the last two dozen falling in a few days around June 21. So far these were rural areas; Afghan Special Forces were still fighting back and recapturing some towns. On July 1 the US evacuated Bagram air base 40 miles north of Kabul—an enormous fortified base set up for the surge of reinforcements in 2009-10. But since the draw-down had started in 2014, the economy of the area surrounding the base had gone into decline; by closing-time it was largely unused and needlessly expensive. Looters pillaged the base before Afghan forces arrived. It was meant to be handed over to the Afghan air force, but they never used it.

The pace of collapse picked up a month later, the climax of a long crescendo, from andante pianissimo to allegro furioso.  Desertions from Afghan forces mounted; local troops still held all the major towns but felt unsupplied and cut off. On August 6, the first provincial capital was lost, in a remote area; a week later, 12 of the 34 provincial capitals were in Taliban hands. The biggest cities were still government-controlled, but the rest fell in the next 2 days, as governors and commanders made deals, fled, or changed sides. That left Kabul, where the remaining US troops had been reinforced. The Taliban had long been planning to besiege Kabul once it was isolated; they were in no hurry, but some of their locals apparently seized the opportunity for a psychological coup. On August 15 the Taliban were waving their flags in the streets – no longer detonating suicide bombs but parading openly as victors. The President, a former World Bank official with weak local roots, fled the country. It was a momentum shift, impossible to undo. The US recalibrated its goal to evacuating its remaining nationals and some of their local helpers from its one remaining airport, while keeping to its announced deadline.

Expectations among American officials (at least at the higher ranks) and top Afghan politicians remained optimistic until the next-to-the-last day, August 14.  When the withdrawal deadline of September 11 was announced in April 2021, US officials estimated the government might fall to the Taliban within three years. By early July, intelligence analysts concluded it could happen within six months (Wall St. Journal, July 3, 2021). On July 24, Defense Secretary Lloyd Austin “voiced cautious optimism that a nascent Afghan strategy to consolidate defensive positions around important cities... along with limited American airstrikes, could enable Afghan forces to hold the line.” (New York Times, Aug. 16) The Economist (August 14) assessed the situation in the headline “Big-city Afghans are defiant in the face of advancing Taliban insurgents”, reflecting the situation August 12 when the magazine went to press. “Both Afghan and withdrawing Western commanders maintain that the Taliban are not an unstoppable juggernaut. A couple of government victories, or even battles that end in stalemate, could change the dynamic.” Defense of Kabul would be the rallying-point. On the other hand, lower-level American troops as early as April had compared the situation to “Vietnam over again,” a common disconnect between perceptions at higher and lower ranks.

A deep-seated US military style is also evident. US doctrine emphasizes offense, not defense. The aim of a military campaign is to defeat the enemy by destroying it as an organization. Especially in the era of high-tech warfare, the strategy is to attack the enemy’s command centers, destroy its communications, its ability to maneuver and to put its weapons into action. A text-book illustration was the Gulf War in 1991, when US air superiority destroyed Saddam Hussein’s aircraft on the ground, while armored and airborne troops went on a 150-mile “left hook” out in the desert from Kuwait, patterned on Robert E. Lee’s rout of the Federal army at Chancellorsville in 1863. The invasion of Iraq in 2003 had the same design of lightning offensive, air superiority, and demolishing enemy organization. The priority of offense over defense goes back to World War II: the landings in North Africa, Italy, and France, most spectacularly in Patton’s armor races; MacArthur’s island-hopping strategy in the Pacific. The best defense is always an offense; the Battle of the Bulge was solved by sending Patton to amputate it.

Counter-insurgency war in Iraq and Afghanistan (and going back to Vietnam) required a modification. But although the plan called for strategic hamlets (in Vietnam) and clearing safe areas by a surge of troops (in Iraq), it remained a tactic of going on offense. City neighbourhoods would have to be cleared block-by-block, then insulated (now with high-tech surveillance such as facial recognition) to keep the guerrillas out. It was this method that the US bequeathed to its Afghan proxies, until it foundered under logistics overloads and ever-renewed resistance.

In the tipping-point swing during spring and summer 2021, there was an additional problem: the huge numbers of internal refugees fleeing Taliban conquest. Up to half a million Afghan civilians fled from one city to another; clogging up places where air strikes might be made; putting more demands on government resources to care for them; above all, adding to the emotional atmosphere of a world falling apart. (We will see this again in the fall of Vietnam.) The Afghan army melted away into a shifting population whose psychology resembled their own; while the Taliban, a distinctive identity with their long beards and anti-uniforms that make their photos unmistakeable, were the one solid point in the vortex. It is this swarm of refugees, augmented by the 300,000 Afghans who had been employed by the Americans, that created the chaotic atmosphere at Kabul airport in the two weeks of frenzied evacuation.

Part of the problem, of course, is that mass evacuation had not been planned for; and that bureaucratic procedures of vetting refugees for special entry permits had been snarled with the political conflicts of the Trump administration. But judging from past cases of mass refugee flows from undesirable places, it is likely that many of those who got to Kabul airport were people with enough money or connections to get there; the task of sorting out their motivations was impossible for the soldiers and employees at the gates. Photos of crowds chasing planes on the runway and clinging to fuselage and wings are reminiscent not just of Saigon in 1975 but of trains in India in past emergencies. It was a free-for-all; airport shops were looted; stray Taliban wandered around inside firing shots in the air. In short: destruction of routine social organization and its replacement by emotion-driven crowds is a contagious social disease. The smaller, more collectively self-disciplined group survives best in such situations, and even thrives on them. In this case, it was the Taliban.

Weak political commitment is high up in the chain of causes.  Successive US administrations since the 9/11/2001 attacks have all hoped for speedy resolution of their plans. All have faced political dissent, some in the form of public demonstrations and journalistic opinion, some from elected officials on up to the Presidency itself. From the Afghan point of view the foreign master of their proxy army must have looked like a combination of capriciousness and inertia. A series of deadlines had been set for withdrawal, modified by later plans for temporary build-ups and draw-downs. In 2003, President Bush announced an end to major combat operations in order to concentrate on the invasion of Iraq. In 2006, counter-insurgency warfare in both places led to a troop boost. In December 2009, President Obama ordered a surge adding 33,000 troops to the 67,000 in Afghanistan, with a July 2011 deadline for beginning withdrawal. In June 2011, a month after Osama bin Laden had been killed in Pakistan, Obama announced that US troops would be withdrawn and security handed over to Afghan forces by 2014. In May 2014, Obama announced that 32,800 US troops would be reduced to 9800 by the end of the year, and to zero at the end of 2016. The trend continued but slower than planned. President Trump brought the existing 14,000 down to 5000 by November 2020, which fell to 2500 by April 2021.

Throughout, US airpower, including from distant bases and aircraft carriers, and with a rising use of unmanned drones as the technology developed, continued to back up proxy forces; whether from in-country bases, or from “over-the-horizon” locations. Thus the numbers game of troop reductions was always ambiguous, as long as it was backed up by US airpower, global electronic surveillance, and a willingness to pay the bills.

This last point is an important component of political will. In addition to war weariness and war casualties (the latter having fallen to zero in the 18 months before the final pull-out), there are struggles over the federal budget and the military’s share in it. High-tech war, relatively uncostly in lives, is ultra-expensive in logistics and equipment. No doubt part of President Biden’s calculation was to wipe the slate clean of a trillion-dollar drain; and this must have played a part in withdrawing not just troops but the contractors who kept the Afghan military machine running.

Tipping points are a matter of timing and rhythm. Imperial powers have extricated themselves from proxy clients on occasion without causing a collapse. When things start going bad, they go bad in all sorts of ways—what is contagious is the emotional swing. Scrambling to recover from the lack of air bases not too far over the horizon, US diplomats found Afghanistan’s neighbours had become wary of granting anything, more intent on sizing up the new situation in which the US was looking like a loser. Nothing is permanent in the world of geopolitical power-prestige; tipping points themselves get absorbed in the long run of fluctuating forces. But in the medium run, at least, they hurt.


Philippines Dec. 1941-May 1942.  Larger forces were beaten by smaller. General MacArthur had a total of 140,000 troops, including 19,000 American; 20,000 Filipino regulars, and 100,000 low-quality Filipino reserves. He also had the biggest US air force in the Pacific, with 35 B-17 bombers and over 100 advanced fighters. And Manila Bay was the base for the U.S. Navy “Asiatic Fleet” (distinct from the Pacific Fleet based in Hawaii). MacArthur’s forces were spread throughout the Philippine islands, with the most and best in a ring around Manila on the northern island of Luzon. The Japanese invaded Luzon Dec. 22 with a main force of 43,000 north of Manila, and an encircling force of 7000 landing on the opposite coast southeast of the city. Elsewhere in the archipelago, small Japanese units quickly defeated Filipino troops.

MacArthur had a 2-to-1 advantage for the main battle in Luzon. The Japanese advanced slowly, giving him time to unite his two retreating forces: 28,000 retreating before the 10,000-strong Japanese vanguard from the north; 15,000 retreating in front of 7000 Japanese from the south-east. They united at a cross-roads outside Manila, making a 10-mile traffic jam. By Jan. 6, following 2 weeks of retreating, they took up defensive positions in the Bataan peninsula, a dense jungle comprising the southwest curve of Manila Bay. By this time there were 15,000 US and 65,000 Filipino troops, plus 26,000 civilians (more on these below). Here the defense stiffened. Japanese attacks were repulsed. Amphibious landings behind US lines on the small peninsula failed, in contrast to the early days of the invasion when few landings were even contested. Both armies had many sick with malaria. By February, the Japanese had ceased serious attacks, and even withdrew air support and 20,000 troops for an opportunistic attack on the Dutch East Indies (Borneo, Sumatra, Java, rubber plantations and oil fields that were their chief aim in South-East Asia). By early March, Japanese forces were down to 3000 on their Bataan front line.

The Americans had a 3-to-1 advantage but did not know it. Later in the month, the Japanese brought back 20,000 troops, having quickly taken the East Indies with little resistance. Their renewed offensive broke through in Bataan, where 78,000 US and Filipino troops surrendered April 9. The fortified island of Corregidor, off the tip of the Bataan peninsula, held out for another month; after 3 days of intense bombardment, a Japanese landing got ashore, and another 12,000 surrendered on May 6. It was the biggest single defeat and the largest number of prisoners taken in American history.

The proxy army of native Filipinos had a mixed record. Left to themselves, they made little resistance to the Japanese advance. Especially in the first weeks, there were many desertions. But united with US forces in Bataan, they performed just as well. The big problem in Bataan and Corregidor was food and morale. There was a shortage of supplies for the 80,000 troops, made worse by 26,000 refugees; from the beginning they were put on half rations, with the amount reduced further during the 4-month siege. Air support was lost. The US Navy had withdrawn. No reinforcements or resupplies were to be expected. President Quezon, who had fled with his family to Corregidor, proposed on Feb. 8 to declare the Philippines neutral between the US and Japan. He was overruled by MacArthur, who warned he would become a Japanese puppet; on Feb. 20 Quezon and his family were evacuated by submarine, along with the Philippine treasury gold. At the end, the US was destroying military equipment, another parallel with Afghanistan.

Unrealistic expectations led to being taken by surprise. It was widely believed that war with Japan was coming. The diplomatic situation had deteriorated to ultimatums; troop convoys were spotted; intelligence reports made clear an attack was coming. The question was where and when. MacArthur had been promised a reinforcement of 50,000 American troops by February, although ammunition would take another six months. [Manchester 192] Almost everyone underestimated Japanese weapons quality and fighting capacity, misled by the pervasive stereotype of short-stature, underfed nonwhite people centuries behind in civilization. Ignored was Japanese modernity in recognizing the tactical use of air power, aircraft carriers (Japan had built more of them than any other country), amphibious landings, and the tactic of rapidly building forward airbases. All of these would play a central role in the Japanese blitzkrieg through the first half of 1942. And no one in the US (or British or Dutch) military thought that Japan could strike simultaneously in so many directions: Hawaii, far out in the Pacific; Hong Kong and British possessions in Malaya and Singapore; the Philippines; the Dutch East Indies.

Thus the Dec. 7 attack on Pearl Harbor was a surprise; but it did not raise alarms elsewhere. In fact, it was taken to mean other potential targets had a temporary breathing spell. News of Pearl Harbor reached the Philippines at 2.30 a.m. on Dec. 8 (being on the other side of the International Date Line). MacArthur and his officers immediately turned their attention to their long-distance B-17 bomber force at a base 65 miles north of Manila. But their concern was offense, not defense: whether to retaliate with a raid on the nearest Japanese air base in Formosa. While they were discussing this and waiting for orders, a Japanese air attack arrived 9 hours later; it caught the entire air fleet on the ground refueling and destroyed most of the planes. American air superiority was destroyed even more thoroughly than the Navy had been crippled at Pearl Harbor.

During this time MacArthur went into an uncharacteristic mood of depression, even paralysis. [Manchester 206-7: “numbed; gray, ill, exhausted”] Usually so confident and dynamic, he seemed unsure what to do. No follow-up Japanese attack came for two weeks, until the landing Dec. 22. Grasping at straws, he gave ear to vague rumors of Japanese defeats; Hong Kong had been besieged on Dec. 8, but it would not fall until Dec. 25. The Japanese were leap-frogging down Malaya, but Singapore would not fall until Feb. 15, with 100,000 British prisoners taken and the Royal Navy’s capital ships in the region already sunk. What was probably the matter with MacArthur was that his deeply held strategic sense was being contradicted. Asked once by a reporter for his formula for defensive war, he had said: “Defeat.” [Manchester 168]

He well knew that the decades-old plan for the defense of the Philippines was to pull back into Bataan and the island-fortress of Corregidor, and wait until relief came. Corregidor was a labyrinth of layered tunnels, with supplies for 10,000 troops to hold out for 6 months. But MacArthur scorned the plan. Realistically, he expected the Japanese to land in various places on Luzon and the other islands; the best defense must be offense, hitting the landings in progress; at worst, holding them on the beach and driving them back into the sea. He had positioned his forces for this eventuality. His huge quartermaster depots were dispersed to where they could back up his offensive forces aimed at the beaches.

This would come back to haunt him in two different ways. When it turned out the Japanese got ashore anyway, and began converging on Manila, MacArthur had to wrench his strategic gestalt into reverse; he would have to go back to the Bataan/Corregidor defensive plan that he hated. On top of this was the problem of supplying his huge army—far more than anticipated—out of diminished stocks; plus the mass of refugees. As in Afghanistan, these must have created an atmosphere of gloom, even panic. MacArthur was close to the Philippine upper class; he and his wealthy wife had habitually socialized with them, more than with Americans. As in most disasters, the elite are always those best equipped to get out first. To be surrounded by them must have been demoralizing for MacArthur. He became overly pessimistic, over-estimating the numbers of Japanese against him; mired in defense on Bataan when he actually had an opportunity to break out by taking the offensive. Pessimism was pervasive at all levels. Already on Dec. 14 the US command had pulled out the remaining B-17s and fighters, while the Navy withdrew its ships from Manila Bay, leaving MacArthur only a few submarines and torpedo boats.

On March 10, MacArthur was personally ordered to evacuate to Australia, to take over as Commander-in-Chief. It was another psychological crisis; MacArthur was proud of his reputation for bravery under fire on the Western Front in WWI; he hated the idea of retreating, and wanted to die with his troops. Convinced that his death or capture would be a propaganda victory for the Japanese, he left, swearing to return. His departure lowered morale even further among the troops on Bataan, who lasted another month.

Which brings us to political commitment, or lack of it. Roosevelt and Churchill had already agreed that defeating Germany took priority; the Japanese war must wait. And within the Pacific strategy, the Philippines were furthest from home, hardest to defend and resupply. The decision was quickly made to cut further losses. The decision became even more adamant as Japanese conquests flowed on: Rangoon fell March 8, the British being driven out of Burma; Java fell in 7 days on March 10. Australia would be next. Strategic priorities again. The Philippines was unusual in that 20,000 Americans were lost in the final collapse, only a fraction of them surviving as prisoners. For the native proxy troops, losses were over 100,000, near total.

 

Korea 1950. We have to go back a few years before the North Korean attack on June 25, 1950, to get the picture of South Korea as a tutorial/proxy for the US. Korea had been a Japanese colony, in fact the springboard for its conquest of Manchuria and China in the 1930s. In 1945, Russia suddenly entered the war and took the half above the 38th parallel; the US quickly responded by sending troops to the southern half. The ROK (Republic of Korea) army was trained and armed by the US military government. In 1949, the US deemed the new civilian government capable of defending itself and pulled out, leaving only a few hundred officers.

The North Korean army [KPA, Korean People’s Army] originated in a guerrilla force allied with the Chinese Communist Party in their north China stronghold. After 1945, they were armed by the Soviets and encouraged by the CCP. The latter were mired in their own civil war until autumn 1948, when the Nationalist government lost a huge battle attempting to drive them from Manchuria; Nationalist forces crumbled in another big tipping point, the CCP swept south, and the remnants fled to Taiwan by the end of 1949. The Korean war was a continuation of this tide of conquest. Historically, Korea had been part of the Chinese empire, and under the banner of communism it might become so again. The new People’s Republic of China sent 50-70,000 Korean ethnic veterans to augment some 100,000 armed by the Russians. North Korea had a population of 10 million and South Korea 20 million; but for the moment the KPA was considerably bigger than the ROK. The invading force was perhaps 100,000, with tanks and artillery; the ROK total was 98,000, but lightly armed and unable to stop the armored attack. Seoul fell in 4 days, as the army collapsed. Masses of soldiers defected to the KPA, leaving the ROK with about 22,000, fleeing southward. On July 5, US forces began arriving from Japan by air, but were initially unable to stop the rout. By the end of July two US divisions had arrived, but were forced to retreat along with the ROK to a perimeter around the port of Pusan, at the south-east tip of the Korean peninsula, 250 miles south of Seoul.

The US relied heavily on its Air Force. There were no paved roads except in a few cities; only two railroad lines; only one airport with sizable paved runways, and that had been captured at Seoul. This posed a problem because the US Air Force was in the middle of a transition to jets, which required smooth runways and a lot of maintenance. MacArthur, the C-in-C, quickly determined to bring back obsolete propeller planes like the P-51 Mustang, since they could use dirt runways. The immediate problem was to stop the KPA attack; jet fighters were ill-suited to providing ground combat support: they used up more fuel at low altitudes, and, operating from bases in Japan, could remain in their target area as little as 20 minutes. (With a different arithmetic, these were the same problems as providing “over-the-horizon” air support in Afghanistan.)

By September 1, MacArthur’s reinforced army around Pusan had hundreds of tanks and outnumbered the KPA 180,000 to 100,000. He had solved the problem of poor morale among the ROK troops by integrating them with Americans in a “buddy” system, one-to-one. But instead of a grinding offensive from the bottom of the peninsula, he had already planned a landing behind enemy lines, choosing Incheon, the port of Seoul, at the narrow waist of Korea near the North Korean border. On Sept. 15 the invading force of 48,000 men (called X Corps, comprising an Army and a Marine division plus 8000 ROK) successfully landed. On Sept. 25, Seoul was recaptured. Simultaneously the Eighth Army (at Pusan) broke out of its perimeter on Sept. 16, and drove diagonally up the peninsula to reach Seoul Sept. 27. KPA forces were already suffering at the end of their logistics lines, and lost most of their tanks and artillery to air attacks; it was their turn to disintegrate, only 30,000 reaching North Korea.

At this point the numbers advantage had reversed.  U.N. forces totaled 230,000 combat troops, including 80,000 ROK (apparently reorganized since their disintegration 3 months before), with the vast majority of the rest American. Officially it was the “United Nations Command”, since the initial force had been authorized by the UN Security Council; but unlike other UN peace-keeping operations, no UN staff ever existed [Urquhart 120]. It was just another title for MacArthur’s headquarters in Tokyo. He kept the Eighth Army advancing up the west coast of North Korea, capturing the capital Pyongyang on Oct. 26. His landing force, X Corps, was reembarked and “water-lifted” around to the east side of the peninsula, landing at Wonsan and Hungnam (100 and 150 miles above the 38th Parallel), and heading towards the Yalu River 200 miles north—the Chinese border.

The Chinese now intervened again. 300,000 were already at the Yalu. Around Oct. 19, 200,000 were sent into Korea, moving stealthily by night, camouflaging by day, maintaining strict silence when aircraft appeared. Their light weapons were carried by bicycle and pack animal, relying on sheer numbers—and as we shall see, emotional momentum—to make up for deficiency in heavy weapons. The crinkled and creviced mountainous terrain helped, along with the onset of winter.

US forces were optimistic. On Nov. 24 MacArthur launched a two-pronged offensive, Eighth Army and ROK up the west side, X Corps up the east side, divided by a spiny mountain range. Soldiers called it the “Home by Christmas Offensive.”  Chinese forces hit the western prong on Nov. 25. Two days later, on the eastern side of the spine, 1st Marine Division, advancing up a narrow mountain track around the Chosin Reservoir, was ambushed and surrounded in freezing weather. It was the low point of the war. President Truman and the Joint Chiefs of Staff feared another Dunkirk, and by Dec. 3 began planning to evacuate all forces from Korea. With air support and supply drops, the Marines/X Corps were able to retreat to a defensive perimeter at the Hungnam port by Dec. 11, where they were evacuated by sea by Dec. 24, back to Pusan. Back to square one.

It was a war of sudden collapses, each side taking turns. Seoul changed hands four times. The US/ROK side was like a yo-yo up and down the 500-mile peninsula: down to the bottom in a month, up to the top in two months, collapsing again in December and January when all its forces retreated below Seoul before stabilizing its lines. After the second collapse, MacArthur was replaced by General Ridgway, who reverted to traditional straight-ahead slogging, playing defense until accumulating enough forces to retake Seoul; from February 1951 to July 1953 the front line oscillated around the 38th parallel, the yo-yo running out of momentum. By the end, US forces totaled 1,780,000; China almost 3 million. The great majority of the casualties (US 36,000 killed and missing; ROK 162,000; China half-a-million to a million; KPA 200-400,000) came during the stagnating end-game.

The US first played blitzkrieg offense, then World War One-style attrition. The Chinese countered both with guerrilla war on a massive scale.  Because of US air superiority and its tactic of using heavy bombers to destroy enemy supply lines and havens, the Chinese relied on infiltrating huge numbers of lightly armed troops across expanses of territory hundreds of miles wide—the endlessly convoluted hills of the north being ideal for this—then attacking local points with overwhelming numbers, swarming from all sides at night to the noise of bugles and gongs. The US lost ground, hundreds of miles of it in the panicky retreat around New Year’s 1951, abandoning weapons along the way; then less and less as troops became accustomed to Chinese tactics. Superior US firepower was good enough for stalemate, but not for victory. The main Chinese weapon was its willingness to take heavy casualties and continue sending reinforcements, playing the psychological game of greater perseverance until the enemy gave up.

Unrealistic assessments and over-optimism were widespread, especially in 1950. In June, the US commander in Korea said any North Korean attack would be “target practice” for the ROK. The CIA reported troop movements but interpreted them as another small-scale clash like those between communist guerrillas and the ROK in past years. After the advance into North Korea in October, it became apparent that Chinese forces were massing at the Yalu and even crossing it.  MacArthur met President Truman in the middle of the Pacific to tell him that the Chinese would be slaughtered by US air attacks if they tried to retake Pyongyang. Air Force commander Gen. Stratemeyer wrote up what he believed would be his final report on lessons learned. The offensive near the Chinese border planned for late November was known among American troops as the “Home-for-Christmas Offensive,” echoing MacArthur’s remarks to the press. [Manchester 606] MacArthur was confident enough to make a personal inspection by flying the length of the Yalu River Nov. 24. Flying at low altitude, he saw no signs of troop movements, no anti-aircraft flak, no Russian MiG-15 jets. If troops had passed that way, their tracks were covered up in the snow. His giant pincers attack was launched the next day; the Chinese ambush began two days later.

Political will was shaky throughout, but MacArthur’s successes temporarily stilled most worries. Truman and the Pentagon agreed the invasion of South Korea had to be resisted. But they had just gone through a large reduction in the military budget; the Air Force was converting to jet planes, especially their new weapon, long-range strategic bombers carrying atom bombs. Soviet Russia had the A-bomb too, and the big concern was not to provoke them into an atomic war; at the same time, a Russian invasion of western Europe was the main priority. MacArthur would have to make do with what forces he had in Japan; though some armored units were sent to Korea to counter the KPA’s Russian tanks. MacArthur aggressively pressed his advantage into the North—determined not to make the same mistake twice (as in the Philippines when he was thrown on the defensive), similar to the 2003 rationale for going all the way into Iraq to bring down Saddam Hussein’s regime, rather than a limited goal like retaking Kuwait in the 1991 Gulf War.

New restrictions were imposed: only ROK troops could go above the 38th parallel; US forces could only follow their advance; only ROK could approach the Yalu. But as logistics lines stretched into the primitive tracks of the North, the war relied on air power to supply US forces, and to destroy enemy logistics. Argument centered on Chinese territory across the river as staging area and safe haven. When MacArthur launched his “home for Christmas” offensive in November, it was with the ambiguous understanding that he would stop at a buffer zone below the Yalu.

A new element entered the picture in early November, when Soviet MiG-15 fighters began to appear over the Yalu. As if by tacit agreement, they did not come much farther south, but would dart into Korean territory, attack the B-29 bombers, and retreat to Chinese air space. MacArthur and Stratemeyer wanted the right of “hot pursuit” to chase them across the river, but this was denied by the Pentagon.  The MiG-15 was superior to any US fighters then in Korea, flying at a higher ceiling, faster, with greater firepower. The USAF had a similar plane, the F-86, but it was reserved for homeland defense against Soviet nuclear bombers. In the emergency of mid-December, the Pentagon relented and sent the F-86s; and in the latter years of the war they would prevail in the dog-fights over “MiG Alley”. For the time being, with the collapse of US forces in the north, F-86s were limited by the need for high-quality runways, and these were lost with the retreat below Seoul, leaving them with a long flight from Japan—the “over-the-horizon” issue again.

MacArthur’s insistence on taking out enemy bases beyond the Yalu led to his being fired in April 1951. By this time it was a last gasp. Truman and the Pentagon had gotten over their New Year’s panic, but their concerns morphed into worries over a Pearl Harbor-like attack by Russia, and an escalating nuclear arms race as both sides acquired the hydrogen bomb. Costly as it turned out to be, the Korean War was regarded as a side-show that shouldn’t have happened, but from which it was impossible to extricate oneself without loss of prestige and security. The election of Eisenhower in 1952 with a promise to end the war brought an armed truce lasting for the next 70 years.

One more similarity to the Iraq and Afghanistan wars is worth mentioning. The US Air Force in 1950 was badly prepared for the kind of war Korea turned out to be: it needed close air support in the age of jets, conventional bombing and escort protection in the age of the H-bomb. Thus it had to call back to active duty reservists and veterans of World War II; and to assign them long quotas of bombing missions. For Iraq and Afghanistan, the repeated tours of duty for reserves and National Guard troops extended even longer. But what they felt mattered less, since it was no longer an army of draftees but a smaller professional force, with a thin network of relatives among civilians at home. The Korean war was the first really unpopular war, setting the pattern for wars to follow.

 

Vietnam 1975.  First the numbers. South Vietnam forces grew sharply and steadily from 240,000 in 1960 to 1 million at the time of the US withdrawal in 1973. About half of these were ARVN (Army of the Republic of Vietnam), trained and equipped by the US. The other half were local self-defense militias, also trained by the US, which distributed half a million weapons to them in the late 1960s. The US took over military support after the French colonial regime pulled out in 1954. President Kennedy sent 16,000 “advisers” by 1963 and set up a Military Assistance Command, which attempted to cut off communist guerrillas from their base by moving rural populations into “Strategic Hamlets”. Lyndon B. Johnson sent the first US combat troops, 54,000 in 1965, rising to 390,000 at the end of 1966; 495,000 at the end of 1968; and a peak of 543,000 in April 1969. Thereafter “Vietnamization” of the war became official policy, numbers dropping to 335,000 in late 1970; 157,000 at the end of 1971; and 69,000 in April 1972.

On the other side, the North Vietnam Army (NVA) as of 1968 was about 500,000; the Viet Cong guerrillas about 200,000. At this point in the war, the balance of power was heavily on the US/S.Vietnam side: about 1.5 million to 700,000. Air superiority was almost total, with US heavy bombers and combat helicopters; N.Vietnam had Russian-built jet fighters that tried to protect the north but were shot down at a high ratio by US fighters.

Vietnam is a curved banana shape 1000 miles long, widening at the top around Hanoi and at the bottom below Saigon. The two halves are bordered on the west by Laos and Cambodia, a jungle and mountain range through which ran the Ho Chi Minh trail. It played a role similar to Pakistan’s in the Afghanistan war: foreign territory used for military supplies and covert troop movements; diplomatically out of range but de facto terrain of secret (really just unofficial) border violations. Vietnam became a spill-over war, as communist guerrillas gained strength in Laos and Cambodia; and as the jungle trail developed from a foot-path taking 4 months to traverse in the early 1960s, to a North Vietnamese military highway by 1975 (when US air power no longer threatened it).

The war was a combination of conventional battles between the NVA and US troops in the northern border areas (artillery, mortars, bombers, armored vehicles); and guerrilla war in the south, helicopter-supplied US outposts, strategic hamlets, and search-and-destroy missions. These were carried out by the high-tech of the time, armed helicopters, sudden troop landings, plus fire-bombing designed to destroy the jungle. Strategy and tactics were at US initiative. ARVN units never carried out coordinated operations with the US (unlike in Korea). It has been argued that ARVN was made passive by the US hogging the initiative; and its officers were split by political factions, involved in a series of coups since the 1950s. Above all, ARVN was notoriously corrupt; there was a huge black market in US weapons and supplies, in Saigon and throughout the country, where weapons changed sides. (The same problem afflicted US-supplied forces in Syria and Iraq in the period since the Arab Spring in 2011.) No doubt US officers were loath to share plans with ARVN, assuming they would reach the enemy.

Guerrilla war was transformed by the Tet offensive at the end of January 1968. It was an unprecedented, coordinated attack on cities throughout S. Vietnam, by 70-85,000 NVA and Viet Cong. The scale of the attack was a shock, following a stream of reports that the counter-insurgency war was going well. The NVA openly used their heavy weapons; the Viet Cong came out in the open. If it was intended to trigger a collapse of the regime in a revolutionary uprising, it failed; ARVN for once fought well, with US forces leading the operations. In five weeks of heavy fighting, communist forces lost half their strength, killed or captured (reported numbers are inconsistent: variously cited as 50,000; or half of their prior strength of 240,000, which would be 120,000). Whatever the numbers, communist forces must have relied heavily on psychological shock, since US/ARVN could muster 1 million against them, plus total air power. The Viet Cong was virtually annihilated; the war was now carried almost entirely by the NVA.

Tet did, however, have a political effect where it counted most: in American politics. The Tet offensive ended March 7. On March 31, LBJ announced he would not run for re-election, and simultaneously announced a pause in the three-year bombing campaign against N. Vietnam, to get negotiations going. The program of Vietnamization was announced—although it would take another 2 years to get it going. Paris peace talks began May 1968, dragging out until a truce was agreed in January 1973. As with Presidents Obama and Trump in Afghanistan, negotiations went on at the same time that troop cuts were being announced and a schedule for withdrawal was being carried out. This gave away the main leverage in negotiations; the US assumed that a combination of bombing campaigns plus ARVN strength would be enough to force an agreement.

Fast forward to 1973. The Paris Accords, signed in January, allowed both sides’ forces to remain in place at the armistice. This left 160,000 NVA inside S. Vietnam; presumably US negotiators felt this was adequately balanced by 1 million total ARVN plus militia. The US agreed to pull out all remaining troops by April; these were already down to 16,000 non-combat advisers and administrators. US air power would be replaced by the American-trained S. Vietnam air force, the fourth largest in the world. In material resources, S. Vietnam was prepared to stand on its own.

During 1974, NVA and ARVN fought intermittently, more or less to a standstill. By this time Nixon was out of office; President Gerald Ford announced on January 21, 1975 that the US would not re-enter the war. On March 10 the NVA launched a full-scale offensive, 3 days later forcing ARVN to abandon the Central Highlands. In another 3 days, the cross-roads city Pleiku was evacuated, while refugees from the big northern cities Hue and Danang began to crowd the roads. In two weeks, Hue and Danang fell. Two million refugees clogged the escape routes. Of 60,000 ARVN troops, only 20,000 got through, no longer combat effective. A general described the evacuation of Danang in terms echoed at Kabul in August 2021: “The airfield was besieged by a frantic crowd, deserters included, who trampled the security force, overwhelmed the guards, swamped the runways, and mobbed the aircraft. It became so unsafe for the jets themselves that the airlift had to be suspended.” (Summers 198) Other generals fled or committed suicide.

[Photo: Airport near Saigon, April 22, 1975 (Summers p. 200)]

Early in 1975, ARVN had outnumbered NVA two- or three-to-one in combat troops, tanks, artillery, and had 1,400 virtually unopposed aircraft. Why the collapse? One explanation, similar to Afghanistan, was that much equipment was inoperable due to lack of spare parts and maintenance personnel; but ARVN had been holding its own up to the final offensive, and it seems likely another factor was responsible for what shortages existed: the endemic black market. The military collapse from the Central Highlands to the coastal cities down to Saigon was a cascade, an emotional tipping point exacerbated by the tidal wave of refugees. Not only were the soldiers demoralized; at the time, officers noted “the family syndrome”—troops would go looking for their families to help them escape, military organization dissolving into a chaotic human traffic jam resembling what we recently observed outside Hamid Karzai International Airport.

On April 21, as remaining ARVN forces retreated into Saigon, President Thieu resigned, making a bitter speech accusing the US of betrayal. Four days later, he flew to safety in Taiwan. By this time, forces at Saigon had withered away to 30,000, while 100,000 NVA surrounded the city and closed the airport. This set off a last wave of panic to get out. The city fell with virtually no resistance; the new President—yet another general—surrendered on April 30, and the war was over.

I will not review the many unrealistic assessments made during the long war, from 1962 through the U.S. Ambassador’s last-minute claim that Saigon could still be held. The Vietnam War was unusual in how many Americans rejected the official rhetoric. There was an anti-war movement already in late 1965, intensifying in 1967, culminating in a march on the Pentagon by 50,000 protestors, including Vietnam veterans, in October 1967. This was a spill-over of civil rights demonstrations and urban uprisings during those years, which also had their effect among American troops in Vietnam. Race-riots broke out at Danang and other military bases; navy ships offshore had unofficial no-go zones on their decks between black and white sailors; in the combat zone, hundreds of officers were killed in “fragging” attacks—throwing a fragmentation grenade into an officer’s tent. The atmosphere of military discipline had broken down.

A contributing cause was the Pentagon policy of calculating the progress of the war by statistics. In a war without front lines, victory was measured by attrition, and that required estimates of how many enemy were being killed before they could be replaced. And since it was a war of guerrillas, wearing no uniforms and hiding among civilians who looked just like them, and inflicting casualties from ambushes and booby traps, troops were suspicious of everybody. American troops developed a sardonic attitude: if it’s dead, it’s red. Higher command wanted a body count; officers’ performance was judged by it; all the incentives were to count every dead body as Viet Cong, whether man, woman or child—in the experience of helicopter-transported grunts, some of them were. Aerial bombing with napalm to burn enemy hiding places produced unidentifiable bodies. Soldiers might perform altruistic acts at one moment and callous ones at another, a combination that created the most alienated military veterans in US history. One reason some officials became willing to pull troops out was the realization that their army would go to pieces if it continued to fight in Vietnam.

Anti-war demonstrations did not sway the majority of Americans; anti-war candidates lost the Democratic presidential nomination in 1968; Nixon defeated an anti-war opponent for re-election in 1972. Nevertheless, Nixon’s concern about anti-war opponents inside the political establishment led him into the Watergate scandal, and to being forced from office. LBJ’s reaction to the Tet offensive—which could have been regarded, from the military point of view, as proof of US success on the battlefield—was in keeping with the US news media swinging to the anti-war side, seeing Tet as further evidence of war atrocities and cover-ups. In 1970, the Senate repealed the 1964 resolution authorizing US involvement, and prohibited US ground troops from going into Laos (spill-over around the Ho Chi Minh trail); by 1973, it passed an amendment preventing any further US military involvement in South-East Asia. In 1974, it refused to appropriate funds requested to bolster the South Vietnam military. By 1975, President Ford was openly saying the war was over, as far as the US was concerned. He reiterated it on April 23; Saigon fell within a week.

Note: A 1977 Pentagon assessment said the US could have protected S. Vietnam with half as many divisions, if it had put them in fortified positions around its borders, instead of fighting a guerrilla war that could have been left to ARVN. (Summers 187) But this would have been abandoning offense for defense; not in American military doctrine or tradition.

 

Mosul, Iraq, 2014. June 2014 saw a sudden ISIS offensive in northern Iraq, not the usual guerrilla ambushes with IEDs, but openly aimed at capturing territory. The main target was Iraq’s second largest city, Mosul, population 1.5 million. Mosul was defended by 20,000 US-equipped Iraqi soldiers, but they melted away in the face of 1500 ISIS fighters on pickup trucks. Since 2011 the US had turned over fighting to the Iraqi army it had trained, some 300,000 strong. But these disintegrated against an ISIS force of no more than 10,000.  ISIS took over a large territory in north Iraq and Syria, proclaiming a caliphate with its capital at Mosul, collecting taxes and violently enforcing Shariah law on the population.

The sudden collapse of the Iraqi military was blamed on lack of fuel for its vehicles and ammunition for its weapons, unpaid soldiers, and low morale. The underlying problem was embezzlement of US-paid funds by Iraqi officers, and a Vietnam-style black market in weapons and fuel, which sold them off to the multiple factions fighting in Syria, and even to ISIS.

It took two years under renewed US guidance (and spending) to reconstitute the Iraqi army. In Oct. 2016 a force of 100,000 Iraqi troops started to retake Mosul, bolstered by 3000 American advisors and airpower. ISIS had about 5,000-8,000 fighters plus local militia, a total of perhaps 12,000. They held out for 9 months, fighting block-by-block. ISIS used captured drones for surveillance, and armored vehicles camouflaged as civilian but actually carrying suicide bombs: US high-tech abandoned by Iraqi proxy forces and turned against them. In the end, much of Mosul was destroyed by US air strikes, Iraqi artillery, and eventually armored bulldozers burying ISIS positions. With the US temporarily back in command, the tutorial/proxy army won by weight of numbers and equipment. Again we see rapid tipping-points where the weaker beat the stronger; and the advance of the strong by grinding attrition.

The enormous destruction at Mosul was largely overlooked by the news media, since US troops were not obviously involved on the ground. The media were another political factor in the war of hearts and minds. On the whole it was played out not for the loyalty of local civilians (in this case the Iraqis who lived there), but for insurgent recruitment, like the surge of volunteers to ISIS-held territory; and for the flux of political support, war-weariness and war-outrage among a distant American audience.

 

Take-away: Can unrealistic expectations and wavering political commitment be avoided? Over-optimistic assessments are in the nature of the beast; most engaged leaders believe in what they are doing, and successful military commanders are usually aggressive. (At least on the side of conventional armies; insurgents’ best strategies are persistence and patience, along with willingness to take enormous casualties. But we’re not playing that hand.)

Political commitments to any particular war policy waver both because conditions change, and because democracies are by definition a combination of the people, not a single personality. This is especially true when a war is distant from home, and conducted with a heavy dose of tutorial/proxy forces that don’t do well on their own. Spectacular challenges, defeats, and atrocities can bring near-unanimous political support for a war, but this lasts only 3 to 6 months before disagreements reemerge; after that the war gets carried on by organizational momentum and a certain amount of Machiavellian manipulation by officials. Diplomatic concerns about what will provoke other states to come in against us loom large in the tug-of-war between military aggressiveness and restraint: Korea and Vietnam hinged on what to do about safe havens and reinforcements from across borders, and Afghanistan was made difficult, or doomed, by unwillingness to treat Pakistan as an enemy collaborator. And there are always alternative battlefields to the one at hand, calling away resources and posing choices of which risk has greater priority.

Above all, politics in democracies are shaped by the two great eternal parties, the Ins and the Outs. The Ins always have to be more Machiavellian, more duplicitous, more culpable for surprises and dashed expectations. The Outs batten on these failings, and get to take the most virtuous line, until they get into office to implement it. The only realistic conclusion is that political wavering over foreign wars is built in; unless the war is successful, and quick.

 

References

On tipping points:

Thomas Schelling. 1960. The Strategy of Conflict.

Gerald Marwell and Pamela Oliver. 1993. The Critical Mass in Collective Action.

Sources on Afghanistan: Associated Press; New York Times; Washington Post; Los Angeles Times; San Diego Union-Tribune; The Economist.

B.H. Liddell Hart. 1970. History of the Second World War.

John Keegan. 1997. Atlas of the Second World War.

John Davison. 2011. The Pacific War Day by Day.

Saburo Hayashi. 1959. Kogun: The Japanese Army in the Pacific War.

William Manchester. 1978. American Caesar: Douglas MacArthur, 1880-1964.

Wikipedia. “Korean War.”

John Andreas Olsen and Thomas Keaney. 2013. Air Commanders. (pp. 199-222 on Korea)

Brian Urquhart. 1987. A Life in Peace and War. (Under-Secretary General of the United Nations)

Harry G. Summers, Jr. 1995. Historical Atlas of the Vietnam War.

James William Gibson. 1986. The Perfect War: Technowar in Vietnam.

Wikipedia. “Fall of Saigon.”

Military operations in Afghanistan and Iraq:

Anthony King. 2019. Command: the Twenty-first Century General.

Anthony King. 2021. Urban Warfare in the Twenty-first Century.

Does Masculinity Explain Violence?

Males predominate in every form of violence, from the micro-level up through wars and other macro-violence. Is it because of a biological universal--the testosterone theory of violence? Or an omnipresent cultural archetype of maleness? Statistics would appear to bear this out.

But frequency statistics by themselves do not answer the question. If violence is male by virtue of being a biological universal, it should have no exceptions. That females are under-represented in committing violence, rather than entirely absent from it, shows that women are capable of it. This suggests an alternative explanation: women have been historically denied opportunities to be violent.

On the anthropological and historical evidence, women have been excluded from military and weapons training and combat sports. When combat was confined to close physical proximity ("man-to-man" or "hand-to-hand"), greater male size and musculature gave men predominance, upon which cultural exclusion was built. This was true too when distance weapons (bows, spears, slings) were muscle-powered. As weapons became mechanized, size and strength made less of a difference. Even among men, the Colt revolver was called "the great equalizer", and Billy the Kid could bring down contemporary Goliaths.

Integration of women into armies and police forces has been a long time coming, but for the past two centuries their exclusion has been largely a matter of cultural tradition and male monopolization. These have been worn down in the modern era of mass democratization and the mobilization of social movements.

The ideology persists (although now argued from a liberal pacifist direction) that women are more peaceful than men. Hence society would become more peaceful at all levels to the extent that women gain political leadership. But this does not appear to be true. Throughout history, when women were rulers, their reigns were as likely to be warlike as men's: the heinous religious persecutions of Protestants under Queen Mary ("Bloody Mary") and of Catholics under Queen Elizabeth I; imperialist wars under Queen Victoria and Catherine the Great (also known for engineering assassinations of rivals at the Russian court); the Red Guards urged on by Jiang Qing (Madame Mao); Eva Peron, the Argentine dictator's wife who avenged social slights with executions; Margaret Thatcher, who repaired her lagging popularity and won re-election by launching the Falklands/Malvinas war. Violence was always easiest to unleash at the level of high command, but modern emancipated women have also been on the front line of political violence: Vera Zasulich, who used a revolver to shoot a Russian Governor, setting off the populist ("terrorist") movement that assassinated Czar Alexander II in 1881; Ulrike Meinhof of the Baader-Meinhof Gang in 1970s Germany; Patty Hearst with the machine-gun wielding Symbionese Liberation Army; numerous women in the clandestine Red Brigades of 1970s Italy; in the contemporary Middle East, women suicide bombers appear to be a way for women to gain some public status among ultra-male-dominated conservative Moslems. In the assault on the US Capitol in 2021, the only violent death was a woman protestor (formerly in the Air Force military police) shot while leading the break-in. (The others died of heart attack or stroke.) On the personal level, the proportion of women committing homicides and other violent crimes has been steadily rising, suggesting ongoing integration is moving on all fronts.

Such examples show that generalizations of the "Men are from Mars, Women are from Venus" variety are useless as explanations. Women have not had the opportunities, the training, or the cultural encouragement to enter potentially violent situations. When these barriers have been lowered, women act very much as men do. This is not to say that women are merely assimilating into masculine culture. The revelation of an empirically-detailed sociology of violence is that situational dynamics determine what happens--not the background classification of individuals.

Whether the increasing participation of women as activists in the world of violence is a good thing or not is not a question to be decided by sociology. I would point out, however, that the success or failure of violence is above all a matter of emotional domination, not sheer physical strength. Women as well as men are capable of both imposing and resisting emotional domination, including in situations of sexual violence. And that surely is a good thing.

Assault On the Capitol: 2021, 1917, 1792

The iconic image of January 6 is a protestor sitting with his feet up on Nancy Pelosi’s desk, and another in the Senate Chair. These are reminiscent of Sergei Eisenstein’s 1928 film, October, a dramatized reenactment of the Russian revolution of November 1917. Attacking the seat of government in Petersburg, the Winter Palace, revolutionary soldiers break into the Czarina’s bedroom: amused by uncovering the jeweled top of her chamber pot, then ripping through her feather-bedding with their bayonets. The same happened in the French Revolution in its many repetitions between 1789 and 1792, and in its replay in 1848, when the crowd took turns sitting on the vacant throne after the guards had collapsed and the royal family had fled.

There are differences, of course. The 1917 and 1792 revolutions were successful in overthrowing the government. The 2021 Capitol assault may have had few such ambitions in the minds of most protestors; and in any case, they occupied the outer steps of the Capitol for five hours and penetrated the corridors and chambers inside for three-and-a-half, with momentum on their side for less than an hour.

The similarities are more in short-term processes: The building guards putting up resistance at first, then losing cohesion, retreating, fading away; some fraternizing with the assaulting crowd, their sympathies wavering. They had weapons but most failed to use them.

Higher up the chain of command, widespread hesitation, confusion, conversations and messages all over the place without immediate results. Reinforcements are called for; reinforcements are promised; reinforcements are coming but they don’t arrive. Recriminations in the aftermath of January 6 have concentrated on this official hesitation and lack of cooperation, and on weakness and collusion among the police.

In fact it is a generic problem. Revolutions and their contemporary analogues all start in an atmosphere of polarization, masses mobilizing themselves, authorities trying to keep them calm and sustain everyday routine. Crowd-control forces, whether soldiers or police, are caught in the middle. At the outset of surging crowds, there is always someplace where the guards are locally outnumbered, pressed not just physically but by the noise and emotional force of the crowd. They usually know that using their superior firepower can provoke the crowds even further. Sometimes they try it; sometimes they try a soft defense; in either case they have a morale problem. If there is a tipping point where they retreat, the crowd surges to its target, and is temporarily in control.

From this point of view, the lesson of January 6 is how protective forces regain control relatively quickly. Comparing the Winter Palace on the night of October 26, 1917,* with the Tuileries Palace on August 10, 1792, tells us what makes for tipping points that wobble for a bit but then recover; or not.

* Russia in 1917 was still using the old-style (Julian) calendar, which various states of Western Europe had replaced during preceding centuries. To convert these dates to the modern international calendar, add 13 days to the date. Thus October 26 (old style) becomes November 8 (new style).
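For readers who want to automate the conversion, here is a minimal sketch in Python, assuming the fixed 13-day offset that holds for twentieth-century dates (the function name is my own, not from any source):

    from datetime import date, timedelta

    # Julian ("old style") dates in the 1900s lag the Gregorian
    # ("new style") calendar by a fixed 13 days.
    OLD_STYLE_OFFSET = timedelta(days=13)

    def old_style_to_new_style(d: date) -> date:
        # Convert an old-style (Julian) date in the 1900s to new style (Gregorian).
        return d + OLD_STYLE_OFFSET

    # October 26, 1917 (old style) becomes November 8, 1917 (new style):
    print(old_style_to_new_style(date(1917, 10, 26)))  # prints 1917-11-08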

 

The normal exercise of authority is above all a smooth and expectable rhythm. (That doesn’t mean everything goes well, but the hitches are what we are used to.) In revolutions it gets worse and worse, until psychological equilibrium is re-established only when one leadership team entirely replaces the other.

The wavering and indecisiveness of the guards and the incoherence of the chain of command higher up are connected. We see this particularly strongly in the Russian and French cases; but the same pattern exists, on a less extreme scale, in the contemporary American crisis. In the weeks and hours leading up to the afternoon of January 6, there are strong splits inside Congress, as well as among the branches of government, not to mention the lineup of states across the federation, and the anomalous local position of the authorities of the District of Columbia. Revolutions and revolts usually begin with prolonged splits at the top, moods which are transmitted to their own security forces. Add to the mix popular crowds which are more than a puppet of elite factions. The energy, enthusiasm, and hostility of crowds has a power of its own (in fact earlier theories of revolution usually focused entirely on this popular force from below). But even granting great causal significance to elite splits, how strong the popular hurricane blows at some point becomes the determining factor of events.

At the tipping point crisis, the two centers of emotional contagion-- the two places the political authority machine can wobble, the crowds-and-cops scene, and the elites quarreling and sending for reinforcements-- are both wobbling at the same time. The outcome depends on which gyroscope rights itself-- if at all. 

From this point of view, we will look at the assaults on the Winter Palace in 1917, and the Tuileries in 1792. These were both revolutions from the Left; the Capitol assault of 2021 was from the Right. But the dynamics of crowd confrontation with a center of authority are much the same, regardless of Left or Right ideologies.

 

Assault on the Winter Palace

The insurgents launched their attempt to take over the capital city on October 25th. The Bolshevik revolutionists had infiltrated and gained the support of most armed forces around Petersburg, units of sailors from ships stationed nearby and soldiers from fortresses and arsenals. Their officers had been arrested or reluctantly came over to the revolution, watched by political committees of their own troops. The Bolsheviks also had a strong base among factory workers and in the railroads and telegraphs, giving them control of communications. Armed workers were now throughout the city carrying rifles and revolutionary flags.  Acting together with the troops on the first day of the insurrection, they took over most of the major buildings and installations in the city: the railroad stations, bridges across the river, the electric plant, banks, government offices-- all except the Winter Palace.

It was the former palace of the Czars, now occupied by a coalition government of liberal reformers and former officials, since the Czar had abdicated in February. Here was concentrated what military forces the government still had in the capital city. Here they waited and sent out messages for reinforcements to put down the revolutionaries: recalling troops from the front against the Germans; Cossack cavalry long dreaded as the enforcers of the absolute monarchy; elite military units recruited from the respectable middle class; students from the military schools. The Winter Palace was the military stronghold and political command center; the center of political legitimacy, too, since it housed the Assembly that made the laws and the Ministry that made official decisions. As long as the Winter Palace held out, the revolution hung in the balance.

On the 26th, the Bolsheviks got their forces in a ring around the Winter Palace and began to close in. Both sides proceeded cautiously. 

“The court of the palace opening on the square is piled up with logs of firewood like the court of Smolny [the building across town where the Bolsheviks have their meetings]. Rifles are stacked up in several different places. The small guard of the palace clings close to the building... Inside the palace they found a lack of provisions. Some of the military cadets did sentry duty; the rest lay around inactive, uncertain and hungry. In the square before the palace, and on the river quay on the other side, little groups of apparently peaceful passers-by began to appear, and they would snatch the rifles from the sentries, threatening them with revolvers...” [Trotsky, History of the Russian Revolution, 387-8] Agitators also began to appear among the cadets, internal trouble-makers; they quarrel about who they should take orders from, the civilian ministers or their own school directors. They opt for the latter--severing the chain of command. They take their posts but are forbidden to fire first.

Outside on the river bank, thousands of soldiers and sailors are being disembarked who have gone over to the insurgency. Their remaining officers “are being taken along to fight for a cause which they hate.” The Bolshevik commissar announces: “We do not count upon your sympathy, but we demand that you be at your posts... We will spare you any unnecessary unpleasantness.” The most militant of the troops volunteer for action on their own. “The most resolute in the detachment choose themselves out automatically. These sailors in black blouses with rifles and cartridge belts will go all the way.” [390] The take-over of the city had mostly been by military units acting in regular order, encountering virtually no resistance, the token forces of the government letting themselves be disarmed. A real fight now looms ahead. The militants of armed workers meld with militants of the troops in a crowd-like surge.

“Hiding behind their piles of firewood, the cadets followed tensely the cordon forming on Palace Square, meeting every movement of the enemy with rifle and machine gun fire. They answered in kind. Towards night the firing became hotter. The first casualties occurred. The victims, however, were only a few individuals. On the square, on the quays, the besiegers hid behind projections, concealed themselves in hollows, cling along walls. Among the reserves the soldiers and Red Guards warmed themselves around campfires which they had kindled at nightfall, abusing the leaders for going so slow.

“In the palace the cadets were taking up positions in the corridors, on the stairway, at the entrances, and in the court. The outside sentries clung along the fence and walls. The building would hold thousands, now it held hundreds. The vast quarters behind the sphere of defense seemed dead. Most of the servants were scattered, or in hiding. Many of the officers took refuge in the buffet... The garrison of the palace was greatly reduced in number. If at the moment [of greatest reinforcement] it rose to a thousand and a half, or perhaps two thousand, it was now reduced to a thousand, perhaps considerably less...  With angry and frowning faces the Cossacks gathered up their saddle bags. No further arguments could move them...The Cossacks were in touch with the besiegers, and they got free passes through an exit till then unknown to the defenders. Only their machine guns they agreed to leave for the defense of a hopeless cause.

“By this same entrance, too, coming from the direction of the street, Bolsheviks before this had gotten into the palace for the purpose of demoralizing the enemy. Oftener and oftener mysterious figures began to appear in the corridors beside the cadets. It was useless to resist; the insurrectionists have captured the city and the railway stations; there are no reinforcements... What are we to do next? asked the cadets. The government refused to issue any direct commands. The ministers themselves would stand by their old decision; the rest could do as they pleased. That meant free egress from the palace for those who wanted it. The ministers passively awaited their fate. One subsequently related: “We wandered through the gigantic mousetrap, meeting occasionally, either all together or in small groups, for brief conversations... Around us vacancy, within us vacancy, and in this grew up the soulless courage of placid indifference.” [394-6, 401]

Artillery from the ships fired sporadically, the gunners unenthusiastic, hoping for an easy victory.  Of 35 shells fired in a couple of hours, only 2 hit the palace, injuring the plaster. [400]

“The inner resolution of the workers and sailors is great, but it has not yet become bitter. Lest they call down it on their heads, the besieged, being the incomparably weaker side, dare not deal severely with those agents of the enemy who have penetrated the palace. There are no executions. Uninvited guests now begin to appear no longer one by one, but in groups. The palace is getting more and more like a sieve. When the cadets fall upon these intruders, the latter permit themselves to be disarmed... These men were not cowardly; it required a high courage to make one’s way into that palace crowded with officers and cadets. In the labyrinth of an unknown building, among innumerable doors leading nobody knew where, and threatening nobody knew what, the daredevils had nothing to do but surrender. The number of captives grows. New groups break in. It is no longer quite clear who is surrendering to whom, who is disarming whom. The artillery continues to boom.”  [403]

The siege began in earnest about 6 p.m. With periodic excursions and lulls, it went on until 2 a.m. Lenin and the Bolsheviks at their headquarters are getting anxious, sending angry notes for all-out artillery fire. The commander decides to wait another quarter hour “sensing the possibility of a change in circumstances.” Time is almost up when a courier arrives: The palace is taken!

“The palace did not surrender but was taken by storm-- however, at a moment when the power of resistance of the besieged had already completely evaporated. Hundreds of enemies broke into the corridor-- not by the secret entrance this time but through the defended door-- and were taken by the demoralized defenders for a deputation [of supporters]. A considerable group of cadets got away in the confusion. The rest-- at least a number of them-- still continued to stand guard. But the barrier of bayonets and rifle fire between the attackers and the defenders was finally broken down.”  They are now confronting face to face-- psychologically the most difficult situation for effective use of weapons.

“Part of the palace is already filled with the enemy. The cadets make an attempt to come at them from the rear. In the corridors phantasmagoric meetings and clashes take place. All are armed to the teeth. Lifted hands hold revolvers. Hand grenades hang from belts. But nobody shoots and nobody throws a grenade. For they and their enemy are so mixed together that they cannot drag themselves apart. Never mind: the fate of the palace is already decided.

“Workers, sailors, soldiers are pushing up from outside in chains and groups, flinging the cadets from the barricades, bursting through the court, stumbling into the cadets on the staircase, crowding them back, toppling them over, driving them upstairs. Another wave comes on behind. The square pours into the court. The court pours into the palace, and floods up and down stairways and through corridors. On the befouled parapets, among mattresses and chunks of bread, people, rifles, hand grenades are wallowing.

“The conquerors find that Kerensky [head of government] is not there, and a momentary pang of disappointment interrupts their furious joy... Where is the government?”

They have long since abandoned the great assembly hall overlooking the river now full of gunboats. They have retreated to an inner room, as far away as possible. “That is the door-- there where the cadets stand frozen in the last pose of resistance. The head sentry rushes to the ministers with a question: Are we commanded to resist to the end? No, no, the ministers do not command that. After all, the palace is taken. There is no need for bloodshed. The ministers desire to surrender with dignity, and sit at the table in imitation of a session of the government.”  [403-4]

The last guards are disarmed. The door crashes open. Backed by the crowd, the Bolshevik commissar takes the ministers’ credentials and declares their arrest. The officers and cadets of the defense are allowed to go free. As the ministers are led away through the square, there are shouts: “Death to them! Shoot them!”  Some soldiers strike at the prisoners. The commissar and the Red Guards stick to the ritual of victory, escorting the overthrown authorities to prison, an act of taking their place.

Physically these scenes at the Winter Palace look a lot like the Capitol in January 2021: Both buildings are labyrinths, huge complexes of assembly chambers, galleries, halls, stairwells, meeting rooms, offices. There are tunnels, secret passages, escape routes, hidden doors. There are main entrances, back entrances, side entrances. Especially when some people are evacuating and others intruding, there are plenty of mix-ups; sometimes crowded clashes, standoffs with barely room to swing about; sometimes guards or protestors, one side or another, find themselves outnumbered; sometimes-- as we see in photos-- lone protestors striding through grand spaces with their flags or booty; sometimes arrestees sprawled on the floor under guard, sometimes a thin line of guards backed up against a door. Both attackers and defenders are swallowed up by the building, forces stretched thin and unable to be everywhere at once. Both sides are uncertain, confused, without chains of command on the spot; unclear what is behind a door, who has what weapons, how our forces are holding out or making inroads; how many are smashing through openings and preparing to rush inside. Members of Congress hunker down in the rows between the seats, are led away by security forces through subterranean hallways, and take refuge in a basement cafeteria like the Russian officers hiding in the buffet. Some attackers wander about in remote corridors; in 2021, getting into Congressional offices, taking selfies, rifling through desks. In 1917, Russian militants and defender cadets alike fill their pockets with expensive knick-knacks from the sprawling palace. Some are fighting; many are not. We will come back to the points of violence.

To summarize the pattern, so confusing in detail and lived experience, let us invoke the tottering gyroscopes of organization in varying levels of breakdown: first the point of view from below on the front lines, then the view of chains of authority from above. Start with 1917; then 2021.

 

Wavering among government forces: We have seen the Winter Palace guards, heavily armed but mostly tired, bored and discouraged. Sometimes they let their guns be taken away from them. Sometimes they fire across the courtyard, mostly missing (not unusual in the sociology of combat). Sometimes they are ordered not to fire first-- but who can tell who starts it? Their sympathies are not at all with the revolution; they are elite military cadets, although going into action for the first time. Nevertheless the mood and pressure of the situation determine whether they fire or not. They have moments of hope; the enemy is holding back, maybe they too are experiencing difficulties, maybe help is on its way.

The hardened Cossacks, an alien ethnic group amid the Russian population, used for administering whippings and massacres to uphold authority, are expected to be the bulwark of the defense. But now they hesitate. They will obey orders to support the Winter Palace; but-- first they need assurances they will not be alone, there should also be infantry, artillery, armored cars. The government assures them these will be there. In fact they are not; Cossacks get wind of it, or suspect it. They are preparing to move-- telephone messages go between barracks and Palace-- but they don’t move. A few Cossack units reach the Palace; after assessing the situation, the atmosphere, the lack of chain of command, they negotiate with the besiegers a retreat through a secret exit.

And so it goes with reinforcements from the front. The government wants to send unreliable units out from the capital, and bring back reliable units. But ministers can talk only to officers who are their sympathizers, or at least their yes-men. Chains of command are poor in the army as well; and the railroads are not under their control. Within the military units we know most about, mainly the naval forces who have mutinied and gone over to the Bolsheviks or have cowed their officers into going along with them, there remains hesitation about using force. Artillery assault is called for; but the gunners complain their guns are not ready; when they finally fire, it seems they don’t want to hit anything, hoping the situation will resolve itself. They are holding open their options, waiting to see which side is going to lose.

Wavering among government politicians:  The government is not set up to act with decisiveness, for it is a coalition of hold-overs from the czarist regime and a variety of parties of differing ideologies and militancy; of those who took part in the February revolution and those who resisted it. This is particularly true on the military side of the administration; the government is now calling on its old enemies to defend it. Meetings in the Winter Palace agree on little except resisting a second revolution, but even here politicians are split between those who demand a vigorous crackdown and those who want a softer policy of conciliation. It all depends on how much of a show of force they can muster, but this boils down to putting up a frontstage of optimism that reinforcements are on their way. They waver between optimism and pessimism. Discussions and arguments take place over the telephone, making demands to military headquarters, to citizens’ militias, to Cossack regiments, to the military schools.

Moments of optimism come from the confusion of communications, and indeed the confusion of events themselves. To the extent there is any chain of command, the government ministers are talking with high officials whose own authority chains are out of order. Some talk a good show; they are willing to put down the insurrection, if only they can get some coordinated support. Others become increasingly exasperated; I agree with your orders, Minister, but where are the troops to carry them out? Sometimes the revolutionaries can’t seem to get their act together either; with every lull and delay the optimism of the defenders goes up a notch. It is not a bandwagon-- yet. How long can the indecisiveness last?

 

Wavering among the revolutionaries:  Generically, their problems are similar, but quantitatively less severe. Their forces on the ground are a mixed bag; some ideological militants; some newly joined allies in the navy and army; old-line officers of dubious loyalty; many holding back to see what will happen. Politically, too, the left-wing assembly and the local soviets (councils) are coalitions, not just Bolsheviks but other factions and splits left over from 15 years of revolutionary politics. On present policy, the divisions are among those who want to press their advantage right now, and those who are cautious, worried, or hoping for a peaceful transfer of power. Lenin, Trotsky and their faction want to present the waverers with a fait accompli, and that means taking the Winter Palace before it is reinforced. Emotionally, they have a recent bandwagon in their favor, the successful take-over of the city the previous day.

But a bandwagon has to keep moving to new adherents and new successes; if it stalls, the mood starts flowing away. The militants are mobilized; they must be put into action against the final target. But realistically, there are logistical and organizational problems to work out. Plans to use their military supporters to surround the Winter Palace, to bring combined arms into action-- all these are too complicated for a newly improvised structure. And in any case, this is counting too much on organization from the top. Their biggest resource at the moment is the spontaneity of the self-propelling crowd. The Bolshevik network is capable of getting the most militant workers and sailors on the spot, though with enough lags and delays to give hope to the defenders. At this point a crowd surge develops. Intersecting with the mood inside the Winter Palace, the tipping point tips.

 

Top-down and bottom-up

Enthusiastic self-mobilizing crowds, and the strategies of political elites, play into each other. Politics in normal times is almost entirely the province of political elites. But when crowds repeatedly mobilize themselves with their own indigenous networks and organizations, they become social movements with momentum and tactics of their own. Such movements can change the career trajectories of politicians, on the whole more than vice versa. The world history of labor movements, or of racial/ethnic movements, gives ample evidence of this.

If we need recent examples of how energized crowds carry politicians along pathways with a vehemence they may not have anticipated, consider how Bernie Sanders’ campaign in 2016 ballooned from token opposition to a serious challenge to Hillary Clinton; Trump’s discovery that his reality-TV methods generated such crowd enthusiasm that he kept feeding off of rallies throughout his 4 years in office; the Black Lives Matter demonstrations in spring and summer 2020, creating a political bandwagon whose stronghold became the Democrat-controlled House of Representatives-- which in turn became the target for the counter-mobilization culminating in the January 6 assault. Trump’s emotional addiction to rallies took him down the slope of political psychosis, the delusion that the size of his crowds meant he couldn’t possibly have lost the popular vote-- a delusion shared by the rallies themselves.

There is always a danger, as a sociologist, of being emotionally too close to an event to see what is going on, what the patterns are and the relative weight of the various forces. Our ideological labels, Left and Right, don’t help. We have seen enough of Petersburg in 1917 to recognize the most general features of Washington in January 2021. But we have to abstract away from the particular names and issues, to get at the dynamics.  If the Presidency is on their side, can the attackers at the Capitol be a revolution, or a counter-revolution, or a coup?  Or is it Smolny Institute against the Winter Palace again? Better to invoke the imagery of two spinning gyroscopes, tottering or staying upright. From this perspective, attacker and defender are subject to the same dynamics, differing only quantitatively. Look at who wavers when and how much:

 

Wavering among official forces at the Capitol:

It needs to be appreciated that many different officials and organizations had a role in the defense of the Capitol, with no command center. Advance intelligence about possible attacks by militant groups and unorganized protestors came from the FBI, military intelligence agencies, and civilian organizations like the Anti-Defamation League. These differed widely on how seriously on-line rhetoric about violence should be taken. Advance estimates of the crowd size to be expected ranged from 2,000 to 80,000.

Forces that could be brought into action included: (a) the Metropolitan Police of the District of Columbia, reporting to the Mayor; (b) the Capitol Police, under a Police Chief, as well as a Sergeant-at-Arms for the House of Representatives, and another Sergeant-at-Arms for the Senate; these latter reporting to the Speaker and Majority Leader; (c) the Secret Service, armed plain-clothes officers protecting not only the President but all those in the chain of succession, notably the Vice President and Speaker of the House; (d) other federal officers, including FBI SWAT teams, Dept. of Homeland Security, and Bureau of Alcohol, Tobacco, Firearms and Explosives; (e) US military forces under the Secretary of Defense; the US Army specifically under the Secretary of the Army; (f) the National Guard forces of each state, which can be deployed under orders from each Governor, although coordinated with the Secretary of the Army; (g) the National Guard of the District of Columbia, which, the District not being a state, could be called out only by the President. Altogether these make up at least 15 quasi-autonomous officials and agencies (not counting the 50 state Governors and National Guards). The array gave plenty of room for communication and coordination problems, not to mention differences in policy and partisan splits-- not least with President Trump urging on the protestors and resisting mobilizing Federal forces. Chains of command were sometimes upheld, sometimes breached.

To sample these disagreements: Washington D.C. Mayor on December 31, 2020 (6 days before the Electoral College count) requested calling out the D.C. National Guard, but only to provide unarmed crowd management and traffic control; this was approved by the Acting Secretary of Defense on January 4, calling up 340 troops but no more than 115 at a time. There must have been splits in the Pentagon, since on Jan. 3 some officials offered the National Guard; but the Metropolitan Police Chief said later they had no intelligence that the Capitol would be invaded, and the Capitol Police Chief said it would be unnecessary. The latter had 2000 uniformed cops, but assigned only normal staffing levels (ordinarily there are four 40-hour shifts per week, so the number available would be about 500, minus administrative personnel). Accustomed to dealing with tourists and peaceful protests, they counted on a soft, friendly style to keep the crowd in hand. The Capitol Police Chief also said he didn’t like the impression it would give if armed troops were photographed around the Capitol; a sentiment echoed by some military officials.
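Spelled out, the staffing arithmetic behind that estimate runs roughly as follows-- an illustrative calculation, assuming the 2000 officers divide evenly across the four weekly shifts:

\[
\frac{2000 \ \text{officers}}{4 \ \text{weekly shifts}} \approx 500 \ \text{officers on duty at a time, before subtracting administrative personnel}
\]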

Once the attack began, disagreements persisted for a while over how severe the breach was. Around 1 p.m., when hundreds of rioters pushed aside barriers and climbed to the higher terraces outside the Capitol, the House Representative chairing the Committee in charge of security called the Capitol Police Chief but couldn’t get through; the House Sergeant-at-Arms assured her that the doors were locked and no one could get in. Shortly after, the Capitol Police Chief (who was not on site but at his headquarters) called the House and Senate Sergeants-at-Arms for emergency declarations from their respective chambers to call the National Guard; they replied they would “run it up the chain” of command. The Democrat-controlled House side got their approval about an hour later, after windows and doors were broken in and rioters entered the building. On the Republican-controlled Senate side, the Sergeant-at-Arms apparently never did notify the leadership. Rioters reached the Senate around 2.15, just after its doors were locked. At the same time, the House recessed briefly when Secret Service escorted the Speaker out; but resumed debate again at 2.25-- apparently thinking the disturbance was minor. They recessed for good at 2.30, as rioters noisily banged on the doors.

By this time the Capitol Police Chief in a conference call urgently requested National Guard “boots on the ground”. The conversation was described as chaotic, everyone asking questions at the same time. The General directing the Army Staff resisted, arguing “I don’t like the visual of the National Guard standing a police line with the Capitol in the background,” and that only the Secretary of the Army (who was in a different meeting) had authority to approve the request. Finally at 3 p.m. the Secretary of Defense authorized deploying the 1,100 troops of the D.C. National Guard, but restricted them from carrying ammunition or sharing equipment with police without prior approval. Since Trump resisted the order, Pence approved it on his own authority, breaking the chain of command.

In the event, it did not make much difference. Metro police sent 100 reinforcements within 10 minutes after the police line was pushed back at 1 p.m. The D.C. National Guard mobilization would take at least 2 hours for its members to assemble and get equipped at the D.C. Armory. In fact, 150 troops arrived at the Capitol at 5.40, just as the Capitol Police announced the building had been cleared of rioters. Meanwhile, between 2.30 and 2.50, calls from the D.C. Mayor to the Virginia State Police brought promises of reinforcements, the first of which began arriving in the city at 3.15; while a request for the Virginia National Guard was authorized by the Governor but not by the Defense Dept. About 3.40 the Maryland Governor ordered mobilization in anticipation of a request, which came from the General in charge of the Pentagon National Guard Bureau about 4 p.m. But Maryland National Guard forces were not expected until the next day. At 5 p.m. the New Jersey Governor announced he was sending state police at the request of D.C. officials; and in the evening the New York Governor said he would send 1,000 National Guard troops.

The invasion of the Capitol building itself lasted from about 2.10 to 5.40 p.m., the Senate having been invaded for only a few minutes around 2.30, and the House repelling an attack at 2.45 when one rioter was shot and killed by plain-clothes security. By 3 p.m., many people who entered the House side of the building were leaving. On the Senate side, clashes continued until after 4 p.m.

By this time the mostly unarmed Capitol Police were reinforced by ATF tactical teams, and by SWAT teams of the Metro Police in heavy gear. Other buildings in the Capitol complex, including the Senate Office Building, were cleared by FBI and Homeland Security forces in riot gear around 4.30 p.m. By 6.15, the Capitol Police, Metro Police, and DC National Guard had formed a perimeter around the Capitol, although several hundred rioters remained in the vicinity until around 8 p.m.

The promised reinforcements were mostly psychological in effect, building confidence among the victors. On the front line, the Capitol Police had put up a delaying resistance, taking about 60 casualties (15 seriously enough to be hospitalized), with one dead. The Metro Police had 56 injuries. The rioters apparently got off easier: 1 killed by gunfire, and 5 known to be hospitalized-- out of perhaps 300-500 who breached the Capitol, and the thousands (10,000?) who shouted support outside. Among these latter, 3 died of heart attacks or other medical effects of extreme excitement. The shooting was done by a Capitol Police lieutenant, which appears to have turned the tide. Heavily armored SWAT teams effectively mopped up die-hard resistance.

[sources: Associated Press; Wall Street Journal; Washington Post; Los Angeles Times; published and on-line photos and videos.]

 

Police lines retreat, violence, and crowd management

Police retreated in two phases on the West (main) front of the Capitol; another sequence at the East (rear) of the building involved a smaller crowd and fewer police. On the East side, a crowd started gathering around 12 noon. On the West side, a larger crowd gathered by 12.30. By 12.53, the crowd began to push back police from barricades of waist-high portable fencing. (My counts from photos indicate about 2500 people visible in the crowd-- with more further back and on the wings; against a single line of about 80 police behind the fencing, with somewhat fewer than that number spread out in the space behind them.) Over the next 10 minutes, the crowd overran three more rows of barricades, the officers retreating to the base of the Capitol steps. Photos and videos of this phase show what looks like a tug of war, 3-or-4 men on each side of a segment of fencing, which they push to tip over or hold upright. Occasionally someone on either side rushes forward to strike across the barrier with baton or stick. The cops are trying hard, pushing back vigorously.

Around 1.30, a large crowd arrives from listening to the Trump rally 14 blocks away. This increases the density of the crowd pushing the police up the steps to the Capitol terraces. But on the whole, there is an hour-long standoff, lasting from 1 to 2 p.m., until the break-in to the building itself.

Meanwhile on the East side, a smaller police line loses control of the last barrier at 2 p.m. Information is lacking on when this crowd got inside, but they must have added to the chaotic situation of intruders in the corridors and tunnels of the Capitol building complex. They were probably also the ones who entered other nearby buildings, including the Senate office building, and who carried on the breaking into and ransacking of offices inside the Capitol complex for several hours after the main assault crowd from the West front was dispersed.

Shortly after the police lines on the East side collapsed, about 2.10 p.m., police on the West front are pushed up the grand steps. The emotional momentum is with the crowd, who break through a side door and window at 2.12 and are inside. Within a few minutes they are on the second floor outside the Senate chamber. Videos show a lone cop rather coolly engaging a dozen intruders, gesturing at them, turning to climb a stairwell, looking back to make sure they are following; he has a pistol in his holster but never reaches for it. The intruders advance surprisingly slowly, hardly more than brisk walking pace; the cop lures them away from the doors of the Senate. Alerted security locked the Senate doors at 2.15, a minute before intruders reached the gallery outside the chamber. The Senate was evacuated by 2.30, before some attackers briefly got into the viewer’s gallery, and a few climbed down to sit in the presider’s chair and pose for photos.

Meanwhile, most of the crowd moved through the Rotunda into the House wing around 2.30 (the Representatives started evacuating after 2.20). As they pounded on the doors shouting to find Pelosi, a group of about a dozen followed a side corridor to reach a windowed door into the Speaker’s lobby, near a staircase used just before to complete the evacuation. Videos show them arguing with three police who rather calmly guard the door; they wear no helmets or riot gear, and pass the word they are being relieved by a heavily armored tactical squad. In the two minutes when the police withdraw to make room for their reinforcements, the mob pounds on the door, shouting and breaking the windows in the upper doors with a helmet, fists, and a stick. Meanwhile, photos taken from the inside of the House chamber itself show five plain-clothes officers in suits, behind an improvised barricade of furniture, aiming handguns at the main doors where the crowd is clamoring to get in. These are not the same as the officer in the lobby at the rear of the House, who shot and killed Air Force veteran Ashli Babbitt as she climbed through the broken door window at 2.44 p.m. It was the last peak of momentum of the attackers.

[Photo: Calm cop, gun holstered]

On the whole, there is little evidence of panic among the police; they put up a strong resistance at each barricade outside the building until pushed back by crowd pressure. Inside, photos and videos show the police largely calm. The greatest tension is in the faces and body postures of the police getting ready to fire if the House door is breached.

[Photo: Capitol police point guns at House door]

Other photos show the most intense emotions at moments when the Rotunda is crowded with both sides mixed together: police in riot gear--helmets with plastic visors-- rioters in MAGA hats, hockey helmets, stocking caps, bare-headed, a few flags visible and more than a few mobile phones taking pictures. My count gives about 150 persons pushed together at close quarters, approximately equal numbers of both sides. In the distance along the far wall, we can see about 50 cops lined up in riot gear; the impression is they are held in reserve, as the tide has turned and the rioters are being driven into retreat. There are more cops than rioters in the foreground.

[Photo: Melee in the rotunda]

How violent was it? Although news reports noted that rioters had guns and explosives, this seems to be based mainly on discoveries away from the Capitol: home-made pipe bombs at the Republican National Committee and Democratic National Committee headquarters. A street search found a parked vehicle with a handgun, assault rifle, ammunition, and homemade napalm bombs.* These reports raised alarm in the Capitol, and spread the belief that the rioters, including the one who was shot and killed in the House lobby, were an armed threat. Except for that shooting, the weaponry used on both sides was surprisingly low-level. The Capitol police had a considerable arsenal at their disposal, but initially the officers inside the building were in regular uniform; those at the barricades outside were in riot gear, with helmets, shields and batons. Within an hour after the breach, photos show forces inside mostly in riot gear. 

* This home-made arsenal is similar to those accumulated by school rampage shooters obsessed with a private cult of weaponry, few pieces of which they actually use. Collins, Clues to Mass Rampage Killers.

 

Some rioters wore a version of riot gear, helmets, military-style vests. These were prominent among the dozen or so who scaled the West front of the Capitol to reach the top terrace. This appears to have been showing off, since photos show the crowd was already up the side steps and behind the police lines. It may well be that the most heavily equipped rioters were police or military personnel (current or former), including members of ideological militias. In fact they seemed to believe they were taking part in a legitimate police mission of their own, carrying plastic handcuffs to arrest “traitors”. But their “weapons” were more in the nature of accoutrements; handcuffs are not offensive weapons, although strongly identified with cops; similarly with the two-way radios some carried; and with reports of “stun grenades”-- what SWAT teams call “flash-bangs”, used to confuse a hostage-taker-- which is to say, a device for avoiding lethal violence if possible.

The rioters’ main high-tech offensive weapon was “bear spray”-- high-intensity pepper spray used as protection against wild animals by outdoor campers and hikers. It is unclear how often or in what situations it was used.* What is most in evidence are flag poles (doubling as emblems), and sticks, chiefly used to break windows.

* The only photo I have seen among several hundred posted is of a young man in helmet and gas mask, outside at the base of the Capitol, who sprays a brown liquid across an empty space in the crowd while running with his head turned the other way. This was probably around 12.50 p.m. when the crowd first surged against police lines. There are several photos of police spraying a clear liquid at protestors, in these external scenes.

 

One of the most violent incidents of which we have a description took place during the peak moment of conflict outside the West front, when the crowd found a relatively lightly guarded side door where they eventually broke in. Three cops were pulled out of the defensive line (to make room for the attackers), and shoved down the steps. Cut off from support and surrounded by a large crowd, they were beaten with “hockey sticks, crutches, flags, poles, and stolen police shields”-- on the whole, improvised weapons. In the sociology of violence, this is called a “forward panic,” where a group that has been in an intense confrontation suddenly finds the balance has broken, one side is suddenly at the mercy of the other, and an emotional surge of adrenaline takes over and results in a beating characterized by piling on and overkill. [Collins, Violent Conflict]  Unlike in most military and police-chase situations, here the victims escaped alive-- the difference being, no one had guns.

The most serious casualties caused by the attackers were from improvised weapons found on the spot: fire extinguishers. One incident happened, again at the flashpoint on the West front, after 2 p.m. as the police line was breached. [Wall St. Journal Jan. 15, 2021 A6] The attacker, retired from a Philadelphia-area fire department, threw a fire extinguisher at the police line, hitting three officers in the head (one of them not wearing a helmet). That officer was evaluated at a hospital and returned to duty. This was a separate incident from the one inside the Capitol where a police officer was killed. Apparently during a struggle in a crowded corridor, Officer Brian Sicknick was knocked down or hit from behind on the head by a fire extinguisher. Although details are lacking, this is in keeping with the typical pattern in deadly violence: no eye contact when the attack is made. The same is the case with Ashli Babbitt, who was unarmed; but the officer who shot her was at the climax of a tense situation-- the House Chamber about to be invaded, a noisy threat outside the door, then a sudden intrusion right in front of the gun, tensely held in both hands, pointed at the entry window. In the sociology of violence, close face-to-face confrontations are emotionally stressful on both sides, pumping adrenaline to the level where most participants are incompetent with their weapons, unable to fire accurately; perceptually, it becomes a blur. A minority of highly trained soldiers and police control their adrenaline enough to pull the trigger in such situations; an even smaller minority hit their target.

The most striking thing about the violence at the Capitol is that so little of it came from gunfire. Many hundreds of police on the scene had guns; except at the climax of the attack on the House, none were fired, and few were drawn or aimed. A rare photo of 5 captive rioters shows them lying prone on the floor, guarded by 3 cops with a baton but guns holstered. On the side of the protestors, 5 guns were seized, although it is unclear whether these were inside the Capitol-- if so, they were never used. Sociologically, this is neither amazing, nor is it to anyone’s credit or fault: it is the most typical pattern of armed confrontations. Whether by police, gangs, robbers, or military in combat, in the vast majority of confrontations with guns, they are not used.

Victory or defeat, advancing or retreating, is far more a matter of emotion and psychology than of physical violence itself. The pattern holds at the Capitol, as it does in 1917 and 1792.

 

Fraternization

Fraternization between protestors and regime forces has played a major part in every successful revolution. In Russia in 1917, agitation by Bolshevik sympathizers inside the army and navy prepared the way by bringing them over to their side; and it was these militants and the most convinced sailors who made the attack on the Winter Palace. And in the early hours of the attack, agitators inside created confusion and promoted the defection of most of the defending troops. There are numerous examples of this pattern. The downfall of the Soviet Union was consummated in August 1991 when tanks sent to take over the parliament building were surrounded by crowds, and Boris Yeltsin climbed on top of a tank to take command from its stunned and demoralized crew. In the most famous of the Arab Spring revolts in 2011, crowds in Cairo’s Tahrir Square chanted “the army and the people are one hand” as security forces first refused to expel the protestors, then changed sides to protect them against last-ditch attacks by Mubarak’s militia enforcers.

Armed forces swinging over in a tidal wave happens when two conditions hold: when rebellion appears right and just to a vast majority of people (maybe just those who are most visible in the capital city); and when it seems inevitable, making it dangerous to hold back. The first of these conditions existed to a degree at the Capitol; the second hardly at all.

The attackers certainly made efforts at solidarity with the police. Reportedly some rioters showed police badges or military IDs as if expecting to be allowed inside. A Capitol police officer said one rioter displayed a badge and said “we’re doing this for you.” Some intruders wore the “thin blue line” emblem of support for the police. Some videos showed police standing back and allowing rioters into the building; one officer was seen in a “selfie” with a rioter inside the building. Especially inside, where during the initial phases the police were not in riot gear, police tended to maintain normal demeanor and to talk quietly with the intruders. Afterwards, some Representatives accused the police of complicity, including giving rioters directions to specific offices, or giving them preliminary tours of the layout. Two Capitol police were suspended and ten or more were under investigation. One officer committed suicide.

The police were also criticized for making very few arrests (about 30 on the Capitol grounds, mostly outside), and for letting the hundreds of intruders get away once control was regained after 3 p.m. In fact, it appears the police were most concerned to clear the Capitol, and the most expeditious way to do it was to push or lead the intruders out the doors. Making arrests is like taking prisoners in a battle; it is the most honorific protocol, but prisoners take up manpower to guard them. Bear in mind that all this happened before reinforcements started arriving at the Capitol about 6 p.m. Most of the arrests that did happen were apparently outside in the evening, when a large number of police chased down the die-hards from the demonstration.

Most of this behavior was ambiguous. One gets the impression from watching videos made inside the building that the officers not in battle dress tried to maintain as much of an atmosphere of normalcy as possible. In the initial phase of entry, the intruders once inside walked rather tentatively, not rushing about in a frenzy but even staying inside velvet guide ropes set up for tourists. Photos in this phase generally showed thin numbers spread out in a lot of space; police presence in the halls and Rotunda was sparse or non-existent.

[Photo: Thin numbers in the East Wing]

Riot-equipped forces were concentrated outside, while tactical squads in riot gear, visible in later photos, had not yet mustered inside. Under these circumstances, it is not surprising the cops were not interested in putting up violent resistance. The exception, of course, was when the intruders reached their goal-- the legislative chambers themselves; above all at the doors of the House, the only place where guns were drawn, and used. And these were the places where the crowds grew most agitated, shouting threats and slogans and trying to smash their way in.

Current or former police officers and military personnel were prominent in the front lines pushing back the barricades, and among those who got inside. Later investigations concentrated on persons identified by photos and videos or their own on-line posts; about one-fifth of the hundred or so persons investigated were police or military. Most prominent of all was Ashli Babbitt, veteran of many deployments in Iraq, who was a security officer (i.e., military police) in the Air Force.

Two comments: first, it is typical in riots that the great majority of the crowd are onlookers and noise-making supporters; only about 10 percent or less of the persons seen in riot photos are actually doing something violent, engaging the other side. It may well be the case that those who carry the battle are specialists in violence, as Charles Tilly calls them, tough guys, athletes and weapons specialists on either side of the law. (One of those charged at the Capitol was an Olympic gold-medalist swimmer.)

Second: in the overall context of recent years and months, it is not surprising that some substantial portion of American police, as well as military, are disgruntled. Among veterans and active-duty military, the suicide rate has been at a peak; the psychological toll of fighting for almost 20 years in seemingly endless wars in the Middle East; a professional (non-draftee) force repeatedly deployed, isolated from the majority of the home population; wars where victories repeatedly proved temporary and reversible; and where news publicity concentrated more on atrocities against the enemy than on American accomplishments. Since a substantial portion of police are veterans (the job where their training is most relevant), there is a bond of sympathy between the two occupations.

The police themselves have experienced the historically strongest wave of criticism in the media and from liberal politicians. Starting in the 1990s when amateur video of violent police arrests became publicized, protest has accelerated with the proliferation of mobile-phone cameras, CCTV, and near-instantaneous propagation through the Internet. Police shootings and violent arrests have resulted in a series of protest demonstrations nationwide periodically dominating the news cycle since Ferguson, Missouri in 2014, Baltimore in 2015, and others. The most intense protests were those starting in late May 2020, in the midst of dissension over the COVID shut-down; these were the most widespread and long-lasting ever, extending into September and beyond in hot spots such as Seattle and Portland. More than in any previous protests, most news media supported these Black Lives Matter protests and related actions; publicizing and endorsing their calls to defund the police; blaming local police for racism; blaming violence on Federal intervention by the Trump administration; downplaying arson and attacks on police stations, courthouses, and government buildings. Many police felt they were being unfairly blamed for the actions of a few, with little understanding for doing a tough job in a period of sharply rising homicide in minority neighbourhoods.

In the context of an election campaign, both parties rallied to the issue: Democrat politicians on the whole endorsed BLM demands for wholesale revision not only of policing but of the historical legacy of slavery and racism. A wave of tearing down Civil War statues of Confederates expanded into renaming and expunging almost anyone in US history who could be implicated in slave-holding, words or deeds detrimental to Native Americans, or European settlement of North America in general. These included Benjamin Franklin, Thomas Jefferson, Andrew Jackson, Abraham Lincoln, Ulysses Grant, and Teddy Roosevelt. In June 2020, in the midst of the protests over the death of George Floyd, the Democrat-controlled House of Representatives voted to change the District of Columbia into a state renamed Douglass Commonwealth, replacing Christopher Columbus with the abolitionist Frederick Douglass. Corporations were pressured into re-education programs at which employees were told to avow their guilt in being white.

Conflict moves by escalation and counter-escalation. Social movements on both sides mobilized from below; politicians attached themselves to the emotional momentum. An attack, both verbal and physical, on the police led to counter-mobilization. Some of it built upon existing right-wing militias and conspiracy-publicists, gaining recruits to the Proud Boys and others who took the defense of the police installations into their own hands. A strange coalition of  extremists and police was created, at least in goals and sympathies, which only became manifest in the assault on the Capitol.

This was the atmosphere in which Trump supporters, polarized against the BLM protests, the left-dominated media, and the congressional Democrats, acquired the emotional conviction that their country was being taken from them. The slogan of the stolen vote was a symbol of this larger feeling. Trump fed it with his rallies, ritualistic emotional-energy generators that swing belief into line with a surge of collective feeling. The Durkheimian collectivity always feels like we are Society, we are the People; it is not quantitative but embodied and totalistic. Riding this emotional wave, they swarmed the Capitol. The effort to fraternize with the Capitol police came out of this conviction.

But a Durkheimian political groundswell must be overwhelming; it reaches its nemesis when there is a counter-mobilization on the other side. Two wavering bodies, with their usual disorientation and lack of smooth coordination at moments of crisis, do not create the ingredient that sways the behavior of security forces at the hinge of events: the feeling that revolution is inevitable, better to join it than be left in the minority opposing it. The Capitol police, whatever twinges of sympathy or moments of soft demeanour they displayed, for the most part stayed firm.

 

Looting and ritual destruction

By ritual destruction I mean behavior that is seemingly purposeless to outsiders and opponents. But it is meaningful, or at least deeply impulsive, for those who do it: a collective, social emotion.

Looting is generally of this sort. It rarely takes anything of value. In riots, including those that take place in electrical black-outs, the early looters tend to be professional thieves, but the crowds that come out to look and see broken-in store fronts are often caught with goods that they have no use for; they just join in the collective mood, a holiday from moral restraints when everything seems available for free.  (This is also visible in photos taken during the looting phase of riots.)

In political protests and uprisings, looting does something else. Usually in the first phase of a riot, especially a neighbourhood riot, after the first confrontation with the police, there is a lull while the police withdraw from the outnumbering crowd to regroup and bring reinforcements. In this lull, the emotional mood will drain away unless there is something for the crowd to do. Looting is a way to keep the riot going-- sometimes along with arson, even if it means burning your own neighbourhood; the smoke and flames in the sky carry a visual message of how serious the situation is. And looting is made possible, and easy, because police are visibly absent. Without opposition, the atmosphere is like a holiday; and at least temporarily it is a victory over the absent enemy. Looting is emotionally easy; there is no face-to-face confrontation. It provides a kind of pseudo-victory over the symbols of the enemy.

This was the situation in the Capitol after about 3 p.m. The attackers had been driven back from their political targets. Heavily equipped and menacing-looking tactical police squads are now pushing back the crowd, chiefly in the dense areas of the Capitol around the Rotunda. But it is a building with several wings and multiple floors, numerous stairs, a labyrinth of offices. This is the period when rioters spread out, penetrating far-flung corners where the last would not be dislodged until after 5 p.m. This is when the looting and ritual destruction mostly took place.

A prime target was House Speaker Nancy Pelosi’s office. Looters flipped over tables, ripped photos off the walls, damaged her name plate on the door. One of her laptops was stolen, as were those in other offices. The office of the Senate Parliamentarian was ransacked, as were other offices. Some places had graffiti: “Murder the media” was one of them, at Press rooms with damaged recording and broadcast equipment. These we can interpret as specific political targets.

Broken doors and cracked or smashed windows were found throughout the building, leaving the floors littered with glass and debris. Some of this happened in the process of breaking into locked areas. But it continued in remote office spaces; presumably this was ritualistic destruction, just prolonging the attack-- precisely in places where guards were not present, while their main force was concentrated elsewhere.

Photos taken in the aftermath do not show a great deal of trash or destruction in the main corridors. Some of the furniture piled up was from improvised barricades by the defenders. Art works in the main galleries and display areas were not attacked-- presumably these had little meaning as enemy targets for the intruders. Some statues and portraits were covered with “corrosive gas agent residue”-- this would include tear gas and smoke bombs set off by the defenders, and (perhaps a small amount of) bear spray used by the attackers. In other words, this damage was an unintended by-product of the fighting that took place. Note too that these were “non-violent” weapons, designed to drive away opponents while avoiding lethal force.

If the looting and ritual destruction was intended to be a symbolic attack upon the Capitol, it succeeded in frightening and angering its officials. It was a ritualistic exercise on both sides-- which is to say, a war of emotions.

A far more destructive instance is the last comparison we will consider.

 

Paris, August 10, 1792

It was the day the French Revolution turned radical. Until then it had been a constitutional monarchy, the King ruling together with the Assembly. But tension had grown as the King vetoed punitive laws against nobles who fled the country and priests who refused to become civil servants. Tension grew worse as foreign troops threatened French territory to restore the old monarchy.

The royal palace had already been invaded 7 weeks before. On June 20, the third anniversary of the Tennis Court Oath in 1789, when reforming aristocrats had gone over to the National Assembly, a memorial demonstration of 10,000 to 20,000 surrounded the Tuileries. Carlyle summed up: “Immense procession, peaceable but dangerous, finds the Tuileries gates closed, and no access to his Majesty; squeezes, crushes, and is squeezed, crushed against the Tuileries gates and doors till they give way.” [p. xl]  The King held them off, declaring his loyalty to the constitution, even wearing a popular “liberty cap” (the emotional force of a MAGA cap), and drinking a toast with them. Finally the Mayor of Paris arrived and persuaded the demonstrators to leave. In the aftermath, a wave of sympathy for the King split the Assembly. But efforts to swing back to moderation stalled, and news from the front raised further alarm as the enemy advanced. In Paris, everyone expected another assault on the palace, this time for keeps.

Security was beefed up. Courtiers in the palace went around armed and prepared barriers. The National Guard-- an official militia-- were urged to defend the crown against the sansculotte mob, but their loyalty was questionable, and a force of Swiss Guard was relied upon. On the other side, contingents of volunteers poured into Paris on their way to reinforce the front. The coming assault was an open secret. The “patriots... were now openly talking of storming the Tuileries as the Bastille had been stormed, and establishing a Republic.” [Doyle, Oxford History of the French Revolution,  p. 187]

The organizational center of power was slipping away from the Assembly. The radical political clubs of Paris, the Jacobins and others, agitated in the neighbourhood sections to coordinate action in a revolutionary commune. Distrusting the National Guard drawn from the wealthier citizens, they called out the sansculottes (those without fashionable knee breeches), the small shopkeepers and artisans. In late July, panic over the invading Prussian and Austrian armies moved the Assembly to distribute arms to all citizens-- even though the arms could and would be used against the Assembly itself.

In the small hours of the night before August 10, the continuous ringing of the tocsin bell proclaimed an emergency. The central committee of the Paris sections declared an insurrection and ordered all forces to march on the Tuileries. “Arriving there at nine the next morning, they found that the King and his family had fled to the safety of the Assembly across the road.” Defending the palace were about 2000 National Guards, but these immediately defected to the Commune’s side, a crowd of about 20,000. Courtiers had put up a brave show before the attack, but now withdrew. This left the 900 Swiss Guards, professional mercenaries, who began the action by opening fire. Their initial volley did not deter the huge crowd, and the melting away of their allies no doubt eroded their confidence. After about an hour, “the Swiss began to retreat, pursued by mobs of bystanders without firearms who hacked them to pieces with knives, pikes, and hatchets, and tore their uniforms to pieces to make trophies... crowds rampaged through Paris destroying all symbols and images of royalty down to the very word “king” in street names.” [Doyle, 189]

Carlyle summarized contemporary accounts in his own rhetoric of the 1830s: “Till two in the afternoon the massacring, the breaking and the burning has not ended... How deluges of frantic Sansculottism roared through all passages of the Tuileries, ruthless in vengeance; how the valets were butchered, hewn down... how in the cellars wine-bottles were broken, wine-butts staved in and drunk; and upwards to the very garrets, all windows tumbled out their precious royal furnitures: and with gold mirrors, velvet curtains, down of ripped feather-beds, and dead bodies of men, the Tuileries was like no garden of the earth... bodies of Swiss lie piled there; naked, unremoved until the second day. Patriotism has torn their red coats into snips; and marches with them at the pike’s point.” [Thomas Carlyle, The French Revolution,  1837/ 2002: 499]

Paris was now in the super-dangerous situation of rival centers of power, the Assembly and the Commune. Both of them commanding armed forces; both internally split among mutually distrustful factions, fearful of what their rivals would do, and motivated to strike first out of fear of what would happen if they didn’t. But the initiative had passed to the Commune, and its radical political clubs; they had won the big victory, and demonstrated the awesome force of the mobilized crowds of Paris. Awesome because of its emotional pressure, its all-encompassing noise, its sheer size, and its ferociousness, now several times demonstrated, when opponents wavered and it had them at its mercy. Guillotines were being set up. In future months, the King and Queen would be executed, along with thousands of others, aristocrats, priests, and just plain political rivals, anyone who aroused suspicion of whatever faction was temporarily dominant. This would go on for two years, until Robespierre was executed and a reaction began to swing back towards unitary authority and eventually dictatorship.

During these two years there was a veritable mania of renaming. Forms of address, Monsieur and Madame, were forbidden; everyone was to be called Citizen. Churches were declared temples of Reason. The old Christian calendar was abolished, its A.D. (anno domini) and B.C. (before Christ) replaced by Year One, starting with the declaration of the Republic in September 1792 (oops, old-style!). While we’re at it, all the names of the months have to go too; for instance, the month of July is now called Thermidor. Symbolic politics glorifies the hopes and projects of the most radical intellectuals. These changes would remain in place until Napoleon brought back the church in 1801 and reinstated the old calendar in 1806.

 

Lessons learned?

What was unusual about the Capitol assault of January 6, 2021 was how quickly and easily it was defeated. Yes, it had factional splits and dispersed centers of command, wavering and dissenting about sending reinforcements; it had police retreating before an aggressive crowd; reluctance to shoot; some fraternization between attackers and guards; some ritualistic looting at the end. It had a background of long-standing and accumulating tension between two sides, counter-escalating social movements, politicians jumping on and off of bandwagons. But in historical comparison, it had no overwhelming consensus that the regime was toppling, much less that it ought to topple. The assault was defeated, in a momentum swing of about an hour, and with an historical minimum of serious casualties. That it could be put down so easily is a testament to American institutions. In a federal democracy, with powers shared and divided at many levels among executives, legislatures, and courts, there is no switch to be turned that controls everything. Decentralized democracies like the USA can have civil wars-- if geographical splits are severe enough and include the armed forces; but they cannot have coups at the top or revolutions in the Capitol.

 

REFERENCES

Leon Trotsky, History of the Russian Revolution. 1930.

John Reed, Ten Days that Shook the World. 1919.

Thomas Carlyle, The French Revolution. 1837.

William Doyle, Oxford History of the French Revolution. 2002.

Alexis de Tocqueville, Recollections: The French Revolution of 1848. 1850.

Randall Collins, Violence: A Micro-sociological Theory. 2008. [data sources on forward panic; firing and non-firing in violent confrontations; layers of participants in riots; looting]

Charles Tilly, The Politics of Collective Violence. 2003.

Anne Nassauer, Situational Breakdowns: Understanding Protest Violence and Other Surprising Outcomes. 2019.

Neil Ketchley, “The Army and the People are One Hand! Fraternisation and the 25th January Egyptian Revolution.” Comparative Studies in Society and History. 2014.

Randall Collins, Civil War Two. 2019. [thought experiment of what a replay of the Civil War of 1861-65 would be like with modern weapons]

William James' Whole-Body Psychology, With A Theory of Alzheimer's Disease

 

We have a truncated view of what William James is about. Bits and pieces are famous: The James-Lange theory that you feel afraid because you run away, not vice versa. A few famous lines: “the saddle-back of the present”; the world is “a blooming, buzzing confusion” (he meant, for an infant). Witty things he and Gertrude Stein said to each other when she was his student.

Pragmatist philosophy, of course: what is true is what works. This connects with his open-minded  interest in religious experiences, including mysticism, even including drugs. James is no religious conservative, defending tradition and dogma. He investigates without prejudgements what religion is like in all its varieties, and what you get out of it-- his pragmatism again.

All this obscures the fact that James was a psychologist, and he became famous for his comprehensive text-book of psychology in the 1890s, before his late-life switch to philosophy and pragmatism. And he was not just a psychologist, but a medical psychologist, grounding psychology in the physiology of the body. When James entered adulthood in the 1860s, psychology as a research field did not yet exist, and his degree was in medicine. Thus his psychology is not just of the mind or the brain, but of the entire body.

James combined existing medical research with the experimental psychology then being developed in German laboratories, and to an extent in England.  These 19th century experiments are now largely forgotten. They were launching an empirical study of the human mind, focusing on the senses by which persons experience the world, and on subjective processes such as time, idea-associations, and memory. In short, they were measuring the contents of human consciousness.  But this “introspectionist” psychology was what behaviorist psychology, taking off in America around 1915, would reject for the next 60 years. The behaviorists declared the mind was a “black box” that could not be opened scientifically; instead they devised experiments to study overt behavior, for convenience using rats, and to some extent dogs and pigeons. Only when cognitive psychology started making a come-back in the 1970s did James’ topics again become a central focus for research. 

What do we get out of James’ psychology of the 1890s?  A surprisingly modern view, and on the whole better expressed and more usefully packaged than much of the neurophysiological psychology of today. There is no big break in the kinds of things we know. Contemporary cognitive psychology follows in the wake of the 19th-century work James was drawing upon; sometimes rediscovering what his era had already seen, if in lesser detail for lack of today’s laboratory instrumentation.

 

Overview:

a. Three-part model: sensory input -- central channeling -- action output

b. All sensory experience is previously channeled

c. Experience speeds both recognition and misrecognition

d. All mental schemas are fuzzy

e. Conscious / unconscious is a continuum

f. Emotions are simultaneous with action, not prior

g. Habits facilitate action and free up conscious attention

h. To break a habit, put a different habit in its starting place

i. Will power is focusing attention at the beginning of a chain

j. A theory of Alzheimer’s

 

Three-part model: sensory input -- central channeling -- action output 

From a physiological perspective, everything that we call psychological involves a 3-part sequence. It starts with sensory input of some kind: sight, sound, senses in the body or on its surface. Input flows into a central process: what we usually call the brain, except that this should not be confined within narrow borders, since the central processing involves connections not just among neurons inside the skull but all the other connections throughout the body. I suggest calling it central channeling, on the metaphor of channels becoming deeper and more strongly marked the more times an impulse has flowed through them.

Finally there is output in the form of action. As James views it, every impulse coming in from the senses and through the central channels flows out again, in the action part of the organism. At first glance, this sounds overly behaviorist, as if the organism is inert until something sensory comes in, wakes up the brain, and then the body moves its muscles and does something. But James sees the output action much more broadly: “from a physiological point of view, a gesture, an expression of the brow, or an expulsion of breath are as much movements as an act of locomotion.” [426] It would also include sweating, blushing, eye movements, changes in body temperature and blood flow to different parts of the body, stomach acid, speeding heart rate, saying something to oneself under one’s breath-- a range of things that includes what we would call emotion signs and thinking. For James, an emotion is a form of physical behavior, and so is cognition.  

At this point, we don’t have to believe it. Take it as a theoretical generalization, which we can test in every single case of experience. Among other things, James is implying that thinking to oneself always has effects somewhere in the body besides in the brain itself; and also that any sensation that comes in not only goes into the brain system, but comes out somewhere in the body. As we shall see, much of this is in the realm of habitual channels and on the unconscious part of the continuum. The work of the psychologist is to trace it through the body. *

 

*  James was about 20 years older than Sigmund Freud, and his work was earlier. Leaving aside Freud’s famous theories about sexual and aggressive impulses from early childhood onwards, we can say that both of these medical-doctors-turned-psychologists hit on a similar formulation: what is psychological in the narrower sense is also operative throughout the body. Their theories are psycho-somatic.
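Returning to the channeling metaphor: it can be made concrete with a toy sketch in Python (my own illustration-- nothing of the sort appears in James). Each time an impulse flows from a cue to an action, that path deepens, making it more likely to be taken the next time the cue appears.

import random
from collections import defaultdict

class Channels:
    """Toy model of central channeling: paths from cues to actions
    deepen with use, and deeper paths are more likely to be taken."""

    def __init__(self):
        self.depth = defaultdict(lambda: 1.0)   # all paths start shallow

    def respond(self, cue, actions):
        # Choose an action in proportion to the depth of its channel,
        # then deepen the chosen path: habits groove themselves by use.
        weights = [self.depth[(cue, a)] for a in actions]
        choice = random.choices(actions, weights=weights)[0]
        self.depth[(cue, choice)] += 1.0
        return choice

c = Channels()
for _ in range(50):
    c.respond("see refrigerator", ["open it", "walk past"])
# After enough trials, one response usually dominates: a grooved habit.

The point of the sketch is only the feedback: whichever path fires gets deeper, so early accidents of experience harden into the channels through which later experience flows.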

 

All experience is previously channeled 

There is no such thing as pure experience, independent of filters or preconceptions. That is to say, the brain/processor is wired to see, hear, feel (etc.) certain kinds of things; and those wirings or channels change every time you perceive something. Learning a foreign language, it takes a while to recognize what those sounds are and how to parse them into syllables, words, and meanings. An English-speaker has to learn to hear the difference between vowel-sounds in French that are not significant in English. Someone with a slight knowledge of a language is prone to make what turn out to be ridiculous misinterpretations. Successfully learning a language is a gestalt-switch; what was previously quite literally a buzzing confusion resolves itself into comprehensible utterances. An infant is in the same position, since their “native” language is foreign at first, until it becomes grooved into the pathways of ear, brain, and voice muscles. The same thing happens later on if one learns to recognize tunes and harmonies in particular kinds of music; the first time you hear Stravinsky is a much different listening experience than when you become familiar with it; and the same was true historically when audiences had to learn to hear the sounds of Wagner, or for that matter Beethoven, Bach, or early polyphonic music.

Similarly with sight. Learning to read begins with coming to see certain shapes as letters distinctive from each other. Learning to read a foreign language is another gestalt-switch, as when a Westerner first starts to recognize particular Chinese characters. Sight pervasively structures our experience-- what we think of as “the world” around us is mainly what it looks like to us-- and this too had to be built up in early childhood. And throughout one’s life, as well: houses look different when you are thinking about buying one of them, or if you become interested in architecture. You see a different world each time you pay attention to it, although on autopilot it is only a familiar blur.

The same again with the physical, palpable world. The gestalt-switch is most apparent in learning to ride a bicycle rather than falling over or to swim rather than sinking in the water; these are a matter of attending to certain sensations in your muscles, your sense of balance, and where you put your attention (you become a skier when you stop focusing on your legs and focus on the spot where you are going). These actions, which are initially tricky and call for re-wiring the central channels from what they did before, show most dramatically that one’s physical sense of the world is a bundle of sensations in your body, the sensations of moving your muscles in a particular way, and its coordination with other senses like vision. A child learning to walk is also constructing a brain-channeled world that is walkable, and a body that fits in that world.  

The point is not just of philosophical or theoretical interest. All practical skills are of this sort; the difference between being good at something, passable, or inept is in the packaging of these bundles of experience. Social skills (not much of a concern in James’ psychology, but central for a micro-sociologist) are ways of shaping one’s sensation/central-channel/action pathways. These experiences account for how some persons are talkative or shy; aggressive and violent or victimized; socially connected or alienated (and in what kinds of situations).

 

Experience speeds both recognition and misrecognition

With greater experience, certain kinds of perceptions are easier and quicker. You see more at a quick glance; you can construct the unseen or blurred part from the part that your brain recognizes. My wife does not have very good eyesight, but she is an excellent driver, including at rather high speeds; she says that she recognizes what other drivers are going to do and steers accordingly. Many successful college quarterbacks fail when they reach the professional league; it can take several years of training to “see the field” in fractions of a second; and quarterbacks who manage to do this generally have much longer careers than other players who rely on muscle and quickness. It is a whole-body skill, not just in the vision and hand-eye-(legs) coordination, but in a perceptual gestalt that slows down time where other players see a blur.

James comments that the same sensory/ channeling process that makes for experienced recognition also is responsible for illusions. At a distance, an erect figure at the side of the road may look like a person but up close turns out to be a road sign. This is the brain filling out a sight on the basis of partial information. If you do speed-reading or glancing through messages, it is easy to mistake what is actually said (expecting bad news, it looks worse than it really is). Eye-witness testimony in crimes is often unreliable; usually the event is blurred because sudden, unexpected, or highly emotional; the victim can pick the wrong person out of a line-up while feeling convinced it was the perpetrator s/he faced. The feeling of certitude comes from forming a gestalt-- not at the time of the crime, but later when you fit one of the faces into your blurred memories.

This does not mean that eye-witness knowledge is always unreliable; if that were the case, no observer would ever learn anything accurately. The difference is in the total eye-brain-body configuration of whoever is doing the observation. Ethnographers train themselves to observe the details-- they are not merely caught up in the action, but focusing on their professional task of observing, committing key details to memory, and recording them in field notes. A blanket statement-- all personal observation is fallacious-- is inaccurate; we can specify what makes some observers more accurate than others. It is the difference between plunging a non-athlete into a pro football game, and the way the star quarterback sees the field.

James summarizes laboratory research on the question: how long is the present, the “now” as a moment in time? Philosophers are prone to argue that the present moment is so elusive that it is virtually non-existent, if not an actual illusion. Since Zeno, the argument has been made that each little bit of time can be subdivided, and so on in infinite regress. Buddhist philosophers in India argued thus as a proof for the non-existence of the world as it is humanly perceived. James, however, regards these as merely intellectual arguments; instead, look at experience when subjects are asked to compare visual images flashed at different speeds. A useful experiment is to look at something, then close your eyes, and count how long the image remains visible inside your eyelids. Especially when looking at objects which have bright and contrasting colors or light and shade, you see that the image slowly fades and becomes more blurred. From similar experiments, James estimates that the present moment lasts between 7 and 12 seconds. --Perception is not of an instant in time; it is perception of things that have a deep enough channel in your brain so that you can see them. There is no knife-blade of the present; better put, it is a “saddle-back of the present” as you ride the horse of your senses/brain/body along a course of experience.

This has an important relevance for social interaction. Garfinkel’s ethnomethodology of what he calls the practices of everyday reasoning includes the principle: what is communicated by other people is often ambiguous or meaningless; but we adopt an attitude of wait-and-see, expecting the meaning to emerge. Garfinkel’s famous breaching experiments were designed to show that even in situations deliberately contrived to be meaningless, subjects assumed there was a hidden meaning that would eventually emerge. A more mundane example is the experience of hearing someone say something which at first you misrecognize-- until a few seconds later the utterance sounds meaningful in retrospect.

The reason you can reinterpret what you heard, eventually getting the words connected with the right set of syllables, is because auditory memory has about the same time-present as visual memory; the words are reverberating in your brain-memory for up to around 7 seconds, and this makes it possible to re-hear their meaning. It is this saddle-back of present hearing that makes it possible for simultaneous translators at international conferences to translate what the speaker has just said a few seconds back, while also listening to what s/he is saying that will have to be translated next. And it is how all of us make our way through a perceptual world that is deeply ambiguous, at least in detail, all the way through. And that is because:

 

All mental schemas are fuzzy 

We think of the world around us as full of physical objects, which mostly remain stable across the hours and days of everyday life. But is this really so?  Moving around, we view things from many different angles and distances. Tables, chairs, trees, houses, faces and bodies-- the objects may be the same but we see them from thousands of different angles, in different lights and colorings. Out of all these different mental snapshots, which one is your image of how things look? In the channeling of neural circuits, a table is not just one ideal picture of a rectangle with legs at the corners; it is all the neural circuits that have been grooved by perceiving it in different perspectives.

All of them together is the mental object; it is a fuzzy composite, not a single clear image. They hang together because parts of them overlap. They have a central core, but also a lot of non-overlap.  Your mental images have fuzzy edges-- but this is a figure of speech, because images are fuzzy all the way through.  

As a practical matter, this causes no problem in navigating our familiar surroundings; especially since deeply grooved circuits tend to complete the gestalt with only partial information. For this reason, James comments, artists have to unlearn their normal, neurally lazy way of perceiving the world, and train themselves to look at how things actually appear in particular perspectives and lights.

Philosophers in the Platonic tradition have taken such multiplicity of experience as a reason to reject the senses and rely on pure, abstract images in the mind as the source of truth. But James points out there is no reason to believe such images exist in the brain. Experience of objects is stored in fuzzy composites. When you attempt to call to mind a particular image, it is generally fuzzy; if you close your eyes, the hypnagogic images that appear to be inside your eyelids are vague and flickering, prone to quickly shift into related shapes. In dreams, images are rarely sharp and clear, and dreamers do not stare fixedly at a sight but move through the dream-- which is why dreams morph into strange visual associations, as one would expect from attention flowing rather randomly among the channels of complicated neural circuits. To the extent that there are mental schemas, they are fuzzy complexes.

Philosophical Platonists argue that their Ideal types are the only way to account for truth and reason. James offers an alternative: there is no sharp divide between a correct image and incorrect ones. They are all approximations, all partial and incomplete. Relying on a small slice of experience and letting neural grooves complete the rest of the gestalt is nevertheless a practical way of getting around in the world; if it turns out to be hasty misrecognition and results in a bad mistake, the gestalt is usually reset by the shock. James’ emphasis on the fuzziness of concepts goes along with his pragmatism: truth is what you call it when the outcomes are the ones that you are aiming at.

This is also the way it works when we engage in reasoning. All of our calculating and decision-making takes place in some period of time, not in abstraction from time and place. Whether a decision turns out to be right or wrong is decided by how it turns out. John Dewey, James’ pragmatist follower, emphasized that we are always moving along trajectories of action; at any particular moment, we aim at some end, and choose means to get there. But as your chain of actions goes along, typically the end-target gets adjusted; means become ends and vice versa. This is true of scientific research as well as business, politics, and everyday life. In science, the initial research question often morphs into something else; if solving it is difficult, it becomes perceived as an ill-posed question. Most breakthroughs are shifts to new ways of conceptualizing what we are concerned with.  

It is a philosophical and intellectual prejudice that the world should be clear and exact. To assume that every assertion is either right or wrong, true or false, with no area of overlap, is unrealistic, and an impractical way to proceed. In actual experience, most problems we can pose ourselves fork three ways: probably yes, probably no, and undecidable. None of this is permanent; time and human projects move along; what is undecidable at one time may move into a clearer probability zone later. Being realistic implies we should expect new areas of undecidability will emerge as we go along.

 

Conscious/unconscious is a continuum 

There is no sharp dividing line between conscious and unconscious. What we ordinarily regard as consciousness is concentrating our attention and looking for particular kinds of things that feel significant at that moment. (Such focus is also intensified activity in particular neural circuits in the brain.) At the same time, a strong focus of attention also de-focuses other things. When we are not particularly focusing attention, things run off more or less automatically-- this is true of ordinary habitual actions like walking, or moving your fingers if you know how to type on a keyboard. There are also states of experience when one is not concerned to be attentive to anything, when you are feeling lazy, drowsy, relaxed; and this has a borderline area in which you fall asleep. As usual in James’ worldview, gradations of consciousness exist but they range along a fuzzy continuum, indeed along a fuzzy number of continuums (continua).

This has implications for his theory of emotions, as well as his theory of habits.  

 

Emotions are simultaneous with action, not prior

 “All consciousness is motor... Every impression which impinges on the incoming nerves produces some discharge down the outgoing ones, whether we be aware of it or not... We might say that every possible feeling produces a movement, and that the movement is a movement of the entire organism and of each and all its parts.” [372]  

James goes on to examine the most characteristic of these bodily movements, i.e. emotions. Here he expounds what is called the James-Lange theory. “My theory... is that the bodily changes follow directly the perception of the exciting fact, and our feeling of the same changes is the emotion.... Common sense says... we meet a bear, we are frightened and run... [But] this order of sequence is incorrect; the one mental state is not immediately induced by the other; bodily manifestations must be interposed between... We feel sorry because we cry, angry because we strike, afraid because we tremble.” [377]

James goes on to apply this model to milder emotions, and urges the reader to observe oneself. “When worried by a slight trouble, one may find that the focus of one’s bodily consciousness is the contraction, often quite inconsiderable, of the eyes and brows. When momentarily embarrassed, it is something in the pharynx that compels either a swallow, a clearing of the throat, or a slight cough...” [380]  Observing closely the sequence in time, we find “every one  of the bodily changes, whatever it be, is FELT, acutely or obscurely, the moment it occurs.” [379]  

The thought-content of the emotion comes after the bodily changes, not before. Genuine emotions overtake us via the body. A faked emotion usually does not come across as genuine, although we may mimic the more easily controlled voluntary muscles of face or posture, because it lacks the power of the involuntary changes-- like trying to imitate a sneeze, James comments.

Next comes: “... the vital point of my whole theory: If we fancy some strong emotion, and then try to abstract from our consciousness of it all the feelings of its bodily symptoms, we find we have nothing left behind, no ‘mind-stuff’ out of which the emotion can be constituted.” [380] 

James is not arguing against the existence of the mind or of consciousness. He is observing in detail the full-body process within which “mind” exists, and the up-and-down slopes of feeling and attention that constitute our consciousness.

 

Habits facilitate action and free up conscious attention 

A habit is a deeply grooved chain: an initial perception to set it off; the brain circuits; the physical action. These are chained together in a repeated circuit: in the habit of walking or running, you tip your bodily balance forward and move your legs so that one leg catches you before you fall; the sensation of the foot hitting the ground and the sense of where your balance point is are the perceptual inputs, moving the chain along to the next input point, and so on. Once you learn how to walk, this becomes unconscious; no attention has to be directed to these repeated connections, and this leaves you free to attend to other matters, such as changing direction, noticing other people, or looking around. Thus habitual actions at a more basic (and earlier-learned) level are the key to more complex and mindful actions. It is James’ continuum of unconscious/conscious again. As noted, here James converges with Freud; the chief difference being that Freud is concerned with dramatic unconscious action-impulses and their physiology (sex and aggression, along with various strong emotions), while James portrays unconscious perception/brain-channel/body-action as constitutive of everything that humans do (and probably other animals as well).

Habits enable you to get yourself going when you don’t feel like it. Putting off writing for one reason or another; putting off exercising when you don’t have the energy; procrastinating... The tactic is to start the first part of a routine and let the sequence pull you into its rhythm. Don’t start with the hardest; start with something easy or something you like-- your favorite stretch, leaving the crunches for later, when they will click in on their own. Overcome writer’s block by correcting your latest text; if you don’t know what comes next, recopy your notes or outline-- this gets you focusing on the sequence of topics (what point to make before what) as well as getting you started writing. Making small changes makes it easier to make bigger changes: a new idea, some wording you can use; soon you are drawn to the keyboard, finding that one line leads to another.

When you really feel lethargic, get up and do something easy and automatic. I find that walking around the garden with clipping shears in hand, or just picking off dead leaves, becomes pleasantly addictive; the more you do it, the more things you see to do. James’ sequence is set off: perceptual starting points, familiar channels, bodily actions, leaving you cued in to more starting points. Today the terminology would refer to them as “affordances,” the appeal that objects in your environment make for you to do something with them. Except that they are not affordances for everybody; it is your own distinctive habit sequences that give them their action-triggering qualities.

Professional techniques are habits at a higher skill level. What makes an artist or a musician successful are the techniques they have acquired: how to sketch the first lines that set the focus of what you are painting; how to expand a rhythmic motif and put forward-leaning tension into a chord sequence. They find their style when they acquire a fertile combination of techniques, which become engrained enough to run off automatically. Mozart, who started acquiring such techniques when he was three years old, eventually reached the point where starting with any little bit of tune would set him off creating something new. Because techniques have trajectories, once you launch in, they carry you along. This is the difference between a banal routine and a habit which is enjoyable, even fun: it has a direction, so that the little details it encompasses are meaningful, part of the project’s gestalt. High-level habits of this sort contain their own built-in motivation; they are the opposite of boredom.

 

To break a habit, put a different habit in its starting place

James discusses “bad habits” as any kind of cue-channel-action sequence that you end up wishing you wouldn’t do. But since habits are so deeply grooved in the nervous system, how can you overcome them? They jump in automatically from the starting point. At the end you may add -- I wish I didn’t do that-- but that only adds something further to the end of the sequence. James’ solution is to break a habit by having some other habit interfere with it. * 

 

* James also discusses instincts-- habitual sequences that are hard-wired into the nervous system. If these could never be overcome, James points out, humans could never have evolved to doing anything new, and history would not exist. The hard-wiring does not disappear; humans have a fight-or-flight arousal in the hypothalamus, but persons can add other channels that shape when and how the physiological response is tripped off. [388]

 

The substituting habit must start from the same cue, the perception that triggers the undesired habit. If over-eating starts with opening the refrigerator every time you walk past it, the solution is to chain something else to seeing the refrigerator, or the food inside it. One way to do this is to create a habit of focusing attention on the feeling in your stomach: do I feel hungry, or satiated? If the latter, let the chain of thought follow-- I don’t really feel like eating, the desire is just in my head-- and the action of not looking for food. Psychological experiments show that persons who eat too much do not feel hungry all the time, but instead react very impulsively to the sight of food. *  

 

* Anthropologists have pointed out that in some Polynesian cultures, an insult or a mishap that would otherwise make persons angry is headed off. Instead of expressing anger or taking action, the cultural response is immediately to think: has someone violated a taboo, or brought evil mana here? While thinking about this (an anthropologist told me such persons would get a perplexed look on their face), the anger response calms down. They have inserted a cue immediately after the first anger arousal, one that leads in a different direction.

 

Will power is focusing attention at the beginning of a chain 

At this point, James raises the question of the existence of will. But his discussion is not about free will as a metaphysical issue; it is a matter of seeing what we are referring to in practice. If there are acts of free will, we must be able to observe what they look like; where they are located in the flow of time and action. Where they are not located is after a habit has run itself off; repenting at the end, telling yourself not to do it again, are failures of will.

An act of will must come at the opening cue, acting quickly to re-route the following habit sequence. Here the saddle-back of the present helps out. Nothing is instantaneous; every psychological process takes some time, even if a short one. James estimates the opening is about a half-second: 

“Mental spontaneity... is limited to selecting amongst those [ideas] which the associative machinery introduces. If it can emphasize, reinforce, or protract for half a second either one of these, it can do all that the most eager advocate of free will need demand.” [286]

Modern research on tape-recorded conversation finds that humans can be distinctly aware of periods of 0.2 seconds-- 5 beats per second, like counting one-al-li-ga-tor, two-al-li-ga-tor, three-al-li-ga-tor. We can hear pauses as short as 0.1 second. Thus a half-second is ample time to reverse a course of action, even if it is a deeply channeled habit. The key is to focus one’s attention on that cue, instead of letting it slide by on a low level of the unconscious/conscious continuum. James refers to this as “the effects of interested attention and volition”. Your attention must be interested in what you are focusing on, if it is to turn the sequence into a volition. Will power is a high degree of conscious attention, exercised at moments where you have pre-prepared habit sequences among which you can switch.

Thus putting a new habit in place of a bad habit involves a series of moves, some of them far back in time: deciding something is a bad habit (that is usually easy); figuring out, by careful observation, what cue sets it off; formulating a new habit sequence that can be inserted; and finally, making the substitution in real time.

James is not only a pragmatist. He is the most practical of psychologists. 

 

A theory of Alzheimer’s

James did not discuss Alzheimer’s disease or adult dementia, but his psychology at many points is directly relevant. Alzheimer’s is above all a breakdown of memory, starting with short-term memory; eventually it can proceed to full-scale failure of the nervous system. Thus a William James theory of Alzheimer’s can be constructed from his analysis of memory, habit, and the sensation/channel/action sequence. And being a pragmatist, his theory tells a person what to do about it-- above all the person whose own memory is failing. It is a theory that enables rather than restrains people. 

This is what James has to say:

 

Aging and the speed of time 

“... a time filled with varied and interesting experiences seems short in passing, but long as we look back. On the other hand, a tract of time empty of experiences seems long in passing, in retrospect short... Many objects, events, changes, many subdivisions, immediately widen the view as we look back. Emptiness, monotony, familiarity, make it shrivel up.

“The same space of time seems shorter as we grow older-- that is, the days, the months, and the years do so; whether the hours do so is doubtful, and the minutes and seconds to all appearances appear about the same... In most men all the events of manhood’s years are of such familiar sorts that the individual impressions do not last. At the same time, more and more of the earlier events get forgotten, the result being that no greater multitude of distinct objects remain in the memory... 

“So much for the apparent shortening of tracts of time in retrospect. They shorten in passing whenever we are so fully occupied with their content as not to note the actual time itself. A day full of excitement, with no pause, is said to pass ‘ere we know it.’  On the contrary, a day full of waiting, of unsatisfied desire for change, will seem a small eternity... [Boredom] comes about whenever, from the relative emptiness of content of a tract of time, we grow attentive to the passage of time itself... The odiousness of the whole  experience comes from its insipidity; for stimulation is the indispensable requisite for pleasure in an experience.” [290-91]

 

Active memory circuits and dated memories 

James goes on to make a distinction between our feeling of past time as a present feeling,  and our reproductive memory, the recall of dated things:  “Since we saw a while ago that our maximum distinct perception of duration hardly covers more than a dozen seconds (while our maximum vague perception is probably not more than a minute or so), we must suppose that this amount of duration is pictured fairly steadily in each passing instant of consciousness by virtue of some fairly constant feature in the brain-process to which the consciousness is tied. This feature of the brain-process, whatever it may be, must be the cause of our perceiving the fact of time at all.

“The duration thus steadily perceived is hardly more than the ‘specious present’... Its content is in a constant flux, events dawning into its forward end as fast as they fade out of its rearward one, and each of them changing its time-coefficient from ‘not yet,’ or ‘not quite yet,’ to ‘just gone,’ or ‘gone,’ as it passes by. Meanwhile the specious present, the intuited duration, stands permanent, like the rainbow on the waterfall, with its own quality unchanged by the events that stream through it...”  

“Please observe, however, that the reproduction of an event,  after it has once completely dropped out of the rearward end of the specious present, is an entirely different psychic fact from its direct perception in the specious present as a thing immediately past... In the next chapter, we will turn to the analysis of what happens to reproductive memory, the recall of dated things.”  [292-93]

 

Ingredients for good memory 

“Memory thus being altogether conditioned on brain-paths, its excellence in a given individual will depend partly on the NUMBER and partly on the PERSISTENCE of these paths.” [299]

 

Innate retentiveness of neural connections 

James recognizes individual differences in memory that are physiologically based: “The persistence or permanence of the paths is a physiological property of the brain-tissue of the individual, whilst their number is altogether due to the facts of his mental experience. Let the quality of permanence in the paths be called their native tenacity, or physiological retentiveness. This tenacity differs enormously from infancy to old age, and from one person to another. Some minds are like wax under a seal-- no impression, however disconnected from others, is wiped out. Others, like a jelly, vibrate to every touch, but under usual conditions retain no permanent mark. The latter minds, before they can recollect a fact, must weave it into their permanent stores of knowledge. They have no desultory memory.

“Those persons, on the contrary, who retain names, dates and addresses, anecdotes, gossip, poetry, quotations, and all sorts of miscellaneous facts, without an effort, have desultory memory in a high degree, and certainly owe it to the unusual tenacity of their brain-substance for any path formed therein. No one probably was ever effective on a voluminous scale without a high degree of this physiological retentiveness. In the practical as in the theoretic life, the man whose acquisitions stick is the man who is always achieving and advancing, while his neighbors, spending most of their time in relearning what they once knew but have forgotten, simply hold their own.” [299-300]  

 

Adding and losing connections; aging

Even so, everyone ages: “But there comes a time of life for all of us when we can do no more than hold our own in the way of acquisitions, when the old paths fade as fast as the new ones form in our brain, and when we forget in a week quite as much as we can learn in the same space of time. This equilibrium may last many, many years. In extreme old age it is upset in the reverse direction, and forgetting prevails over acquisition, or rather there is no acquisition. Brain-paths are [now] so transient that in the course of a few minutes of conversation the same question is asked and its answer forgotten half a dozen times. Then the superior tenacity of the paths formed in childhood becomes manifest: the dotard will retrace the facts of his earlier years after he has lost all of those of later date.” --Without using the term “Alzheimer’s”, James describes some of its symptoms. 

 

More paths, more memory

“So much for the permanence of the paths. Now for their number. It is obvious that the more there are of such [neural paths in the brain, associated with a particular event], and the more  such possible cues or occasions for the recall of [that event] to the mind, the more frequently one will be reminded of it, the more avenues of approach to it one will possess. In mental terms, the more other facts a fact is associated with in the mind, the better possession of it our memory retains. Each of its associates becomes a hook to which it hangs, a means to fish it up by when sunk beneath the surface.” 

 

Thinking keeps the circuits flowing

“The ‘secret of a good memory’ is the secret of forming diverse and multiple associations with every fact we care to retain. But this forming of associations with a fact, what is it but thinking about the fact as much as possible? Briefly, then, of two men with the same outward experiences and the same amount of mere native tenacity, the one who THINKS over his experiences most, and weaves them into systematic relations with each other, will be the one with the best memory.

“Most men have a good memory for facts connected with their own pursuits. The college athlete who remains a dunce at his books will astonish you with his knowledge of men’s records in various feats and games, and will be a walking dictionary of sporting statistics. The reason is that he is constantly going over these things in his mind, and comparing and making series of them. They form for him not so many odd facts, but a concept system-- so they stick. So the merchant remembers prices, the politician other politicians’ speeches and votes, with a copiousness that amazes outsiders, but which the amount of thinking they bestow on these subjects easily explains.”

 

Intellectual projects bond numerous fact-memories 

“The great memory for facts which a Darwin and a Spencer reveal in their books is not incompatible with the possession on their part of a brain with only a middling degree of physiological retentiveness. Let a man early in his life set himself the task of verifying such a theory as that of evolution, and the facts will soon cluster and cling to him like grapes to their stem. Their relations to the theory will hold them fast; and the more of these the mind is able to discern, the greater the erudition will be.”  On the other hand: “Unutilizable facts may be unnoticed by him and forgotten as soon as heard. An ignorance almost as encyclopaedic as his erudition may co-exist... Those who have much to do with scholars will readily think of examples.” [301-302] -- James comes down after all on mental habits more than the genetics of brain capacity.

 

Rote learning is onerous and ephemeral  

“The reason why cramming is such a bad mode of study is now made clear. I mean by cramming that way of preparing for examinations by committing points to memory during a few hours or days of intense application immediately preceding the final ordeal, little or no work having been performed during the previous course of the term. Things learned thus in a few hours, on one occasion, for one purpose, cannot possibly have formed many associations with other things in the mind. Their brain-processes are led into by few paths, and are relatively little liable to be awakened again. Speedy oblivion is the almost inevitable fate of all that is committed to memory in this simple way. ... On the contrary, the same materials taken in gradually, day after day, recurring in different contexts... and repeatedly reflected upon, lie open to so many paths of approach, that they remain permanent possessions.

“One’s native retentiveness is unchangeable. It will now appear clear that all improvement of the memory lies in the line of ELABORATING THE ASSOCIATES of each of the several things to be remembered.” [302]

In other words, connecting facts with a theory, or with some larger purpose, is the best way to remember them, and that means understanding how they are used. It is the difference between trying to learn a foreign language by memorizing tables of verb forms, and hearing them in all sorts of real-life contexts. 

Now to put James to work on the problem of Alzheimer’s. It affects both short-term memory and long-term memory (what James refers to as reproductive memory). Both deteriorate over time, while short-term memory appears to deteriorate first, or at least it becomes seen as a problem earlier.

Typical short-term memory problems include: losing track of practical things you are trying to do; repeating yourself in conversation without being aware of it; losing things by being unable to remember where they are. Long-term memory problems range from forgetting names of acquaintances; not recognizing familiar people; losing memory of personal experiences; losing general knowledge you once had; losing the ability to understand or speak a language; at the extreme, neural degeneration destroys motor skills and bodily functions and can be a cause of death. What does James offer by way of explanation of these phenomena? Where does his analysis suggest amelioration, and for what ranges of severity?

 

Forgetting particular names 

Short-term memory has fewer neural connections than long-term memory, so we would expect it to deteriorate first. I will discuss the various kinds of memory failure starting with those that set in earliest, and these are mostly short-term memory. But one kind of long-term memory-forgetting affects people even in their 50s or earlier, forgetting particular names.

James says: “We recognize but do not remember it-- its associates form too confused a cloud... We then feel we have seen the object already, but when and where we cannot say, though we may seem to be on the brink of saying it... what happens when we seek to remember a name. It tingles, it trembles on the verge, but it does not come. Just such a tingling and trembling of unrecovered associates is the penumbra of recognition that may surround any experience and make it seem familiar, though we know not why.” [305-06] 

Because they are particular, names of people or places have fewer occasions for use than more generic words-- unless it is a place-name you are constantly writing or referring to (like your address), or a name of someone you constantly invoke. But if you have had hundreds or thousands of professional or social acquaintances in your life, it is not surprising that you should forget most of them after a while-- even though it is embarrassing to meet someone suddenly and be unable to connect a name to the face. It should be considered bad manners to accost an old acquaintance with a question like “You remember me, don’t you?” Today’s informal manners, which include introducing yourself by your first name only, don’t help much. On the whole, forgetting particular names is trivial and ought to be recognized as such. If you feel insulted because your name isn’t on the tip of someone’s tongue, it is probably because you aren’t as important as you think you are.

Having trouble bringing to mind the names of persons who are professionally important when you talk about topics in your field-- like a sociologist unable to recall the name of Weber or Bourdieu-- is a more significant sign of loss of neural connections. But the fact that one can usually ferret out the desired name by thinking of their books or controversies indicates most of the important connections are still in working order. This is a better search method than trying to remember what letters of the alphabet are in the missing name, since letter combinations are more arbitrary, and the search field is hard to narrow down by that route.  

 

Forgetting your intention

In the realm of short-term memory, among the earliest failures is forgetting what you set out to do. These are typically minor practical tasks: going downstairs to the kitchen to get a glass of water, but getting side-tracked along the way (picking up the newspaper and reading it; putting away dishes from the drain; etc.). The original intention gets lost. This is called being absent-minded or scattered, but it can also be a sign of memory loss. The explanation is in James’ model: habits are set off by initial cues. If you are in the habit of keeping the kitchen cleared up, the sight of dishes now dried in the drain or the dishwasher invokes the habit of picking them up and putting them away. They are “affordances”-- objects are occasions for the actions you can do with them. But it is their link to habits that creates problems for short-term memory. James suggests putting a different habit in the place of an undesired habit. In this case, what is undesired is not the habit itself (putting away dishes), but letting it interfere with your intention (getting a glass of water). A solution would be to keep a cue with you of what you set out to do-- such as carrying an empty glass with you downstairs, and paying attention to it even as you walk past the habit-triggering dishes and other side-tracks.

 

Can’t remember whether you did something or not

Another type of short-term memory problem has less to do with remembering what you set out to do than with remembering whether you have already done it. (Do I need to brush my teeth now, or did I just do it?) The action is a familiar habit; you haven’t lost the neural-and-muscle memory of how to do it; and the sight of the toothbrush by the sink cues it off. The problem is the opposite of getting side-tracked. This is not exclusively a problem of aging. Many people go back to check whether they locked the door behind them as they leave the house; it is a form of Freudian anxiety more than memory loss per se (something similar could be said about certain kinds of name-forgetting, like names of people you have bad feelings about).

What is relevant here is the distinction between remembering what (some thing or action) and remembering when (you encountered the thing or did the action). Very familiar things/actions are towards the unconscious end of the continuum anyway; you have been through them so many times it is impossible to remember every time. James wrote: “If we remembered everything, we should on most occasions be as ill off as if we remembered nothing. It would take as long for us to recall a space of time as it took the original time to elapse, and we should never get ahead with our thinking.” [306] It is useful to forget most of the time-dating so you can get on with your life.

But in short-term memory, remembering-when is important since it keeps you from repeating the same thing over and over. This is an aspect of consciousness over and above the content of the short-term memory itself: not just what it was, but that it is marked in a span of time. Remembering-when, for banal everyday activities with no particular demand on attention and no emotional significance, is perhaps the least-connected in neural circuits of any experience. On the whole, it is destined to be left behind in the rear-view mirror of memory rather quickly. How quickly? James says the specious present is about 12 seconds and “our maximum vague perception is probably not more than a minute or so”. When you can’t remember whether you just brushed your teeth or not, how long does it take before that thought-process sets in? I don’t know if anyone has tried to measure this; a James-based conjecture would be-- certainly not within 12 seconds. Would it be in the “vague perception” range between 12 seconds and about 60 seconds? Beyond a minute, you have no sense of immediacy at all, and it is in the realm of past memories, which supposedly get drained out when you next sleep and dream.

I raise the point mainly as a practical suggestion for persons who have the problem of remembering whether or not they did something recently. The strength of short-term memory is affected by how much focused attention is placed on it: James argued that will-power consists in adding a half-second of sharply focused attention to any cue/action sequence. If you set off a stop-watch every time you finish brushing your teeth, this might furnish information about where your zone of remembering-when failure is located in time. But aside from any contribution to psychology, it might add intensity to the remembering-when process-- it would be a practical antidote.

And as an antidote, it may well have a future history. Would the stop-watch routine start fading out as an antidote to the remembering-when failure? At some point, would you forget what the purpose of the stop-watch was? Would it extend the befuddlement-free zone for x minutes at first, then for some shorter interval, and so forth? This would be a measure of deterioration as well as a self-monitoring technique.
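To make the stop-watch proposal concrete, here is a minimal sketch in Python; the class and its methods are hypothetical illustrations of the self-monitoring routine, not an established instrument.

import time

class TaskLog:
    """Record when a routine task (say, tooth-brushing) was last done,
    and log the elapsed time whenever doubt about it sets in."""

    def __init__(self, task_name):
        self.task_name = task_name
        self.last_done = None    # timestamp of the most recent completion
        self.doubt_gaps = []     # elapsed seconds at each moment of doubt

    def mark_done(self):
        # Call at the instant the task is finished -- the half-second of
        # focused attention James recommends adding to a sequence.
        self.last_done = time.time()

    def moment_of_doubt(self):
        # Call when you catch yourself wondering whether you did it.
        if self.last_done is None:
            print(f"No record of {self.task_name} yet.")
            return
        gap = time.time() - self.last_done
        self.doubt_gaps.append(gap)
        print(f"{self.task_name}: last done {gap:.0f} seconds ago.")

    def failure_zone(self):
        # The shortest gap at which doubt appeared: a rough lower bound
        # on where the remembering-when failure sets in.
        return min(self.doubt_gaps) if self.doubt_gaps else None

Over weeks, the shortest gaps at which doubt appears would locate the failure zone conjectured above-- somewhere past the 12-second specious present, perhaps past the minute of vague perception-- and show whether it is shrinking.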

 

Repeating yourself in conversation

Similar analysis would apply to repeating yourself in conversation. Since this is a matter of degree, it is not clear whether forgetting whether you just brushed your teeth happens earlier than forgetting what you just said. But repeating in conversation is more socially disruptive and therefore more noticeable; forgetting in little personal tasks is more private, and not on the whole a serious practical problem unless one spends all one’s time brushing one’s teeth.

Before setting it down to Alzheimer’s, we should note there are circumstances in which people of all ages repeat themselves. In arguments, especially as they grow heated, antagonists tend to interrupt each other, trying to talk each other down-- struggling over controlling the speaking turn as well as the content of what is said. Recorded conversational analysis shows that as opponents talk at the same time, it becomes useless to make any connected argument, and the determined arguer repeats the words they most insist upon. Arguments that escalate to violence typically reach a phase of extended verbal repetition of insults. [data and references in Collins 2008] On a larger scale of contention, participants in protest demonstrations usually chant the same slogans over and over. Repetition here is both emotional and strategic.

Drunks also tend to become very repetitive as they get more intoxicated (which is why being around drunks is boring if you are not drunk yourself). But in a bleary way drunks epitomize the central quality of sociable talk (phatic talk), where the main concern is just to keep the conversation going. Conviviality visibly fails when there is nothing to say, or just when there are embarrassing pauses; hence banal small talk about the weather. If it is a party or reception where many people are present, often the same things are said over and over in different conversations. Thus it is not just people with Alzheimer’s who repeat themselves.

What is distinctive must be the lack of conscious awareness that one is being repetitive; together with repetition in a short period of time. Couples or friends who have had many sociable conversations will often bring out the same reminiscences, stories and jokes; usually there is a sense that we have heard this before, but it is enjoyable to tell it again, like singing a song. This is phatic repetition. In leisure chatting, it is not unusual for the same topics, even the same phrases, to repeat. Such talk is deeply grooved verbal habits; it is a chatting routine. Lack of conscious awareness of repeating oneself is not necessarily a sign of neural deterioration. The best criterion, I suggest, is the length of time between one repetition and the next.

Here again there is a lack of data with the necessary detail. My impression is that very repetitive old people do not repeat themselves within 12 seconds-- the stretch of the Jamesian present. My guess is that it is more in the realm of 5 minutes or so. For people not yet at that advanced stage, it can be half an hour or more, even a day. Measuring the gap between repetitions would be an indicator of the extent of deterioration.

On the practical side, if a person repeats something once, there is no way of knowing when to start the stop-watch-- of knowing what phrase is going to be repeated. But if they typically repeat multiple times, one could note the second instance and measure subsequent repetition gaps. Doing this yourself may raise self-consciousness about repetitions, and thus improve neural connections. This is not the same as the other person saying “You already said that”-- which does not seem to have any effect. The heightened consciousness would have to be forward-looking: not so much looking out to avoid repetitions, but just paying attention to the time-span itself.
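Given a timestamped record of a conversation, the repetition-gap measure could be computed along the following lines. The sketch is illustrative: it assumes (seconds, utterance) pairs and counts an utterance as a repeat only when it matches word for word, a deliberately crude rule.

def repetition_gaps(transcript):
    """Return the time gaps (in seconds) between successive
    occurrences of the same utterance."""
    last_seen = {}
    gaps = []
    for t, utterance in transcript:
        key = utterance.lower().strip()
        if key in last_seen:
            gaps.append(t - last_seen[key])
        last_seen[key] = t
    return gaps

# Example: the same story told at 0, 290, and 560 seconds into a visit.
transcript = [
    (0, "Did I tell you about the trip to Chicago?"),
    (120, "The weather has been lovely."),
    (290, "Did I tell you about the trip to Chicago?"),
    (560, "Did I tell you about the trip to Chicago?"),
]
print(repetition_gaps(transcript))   # [290, 270]: roughly the 5-minute range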

 

Loss of long-term memory

There is a certain amount of long-term memory loss in the normal course of life. As James says, you need to forget a lot of detail, especially about memory sequences, if you are to have your brain free to do something else. Much of forgetting particular names falls into this category. Losing long-term memory includes the following, roughly in the order in which they happen:  losing general knowledge; getting lost by not recognizing your familiar environment; not recognizing persons you know well or forgetting their names; losing language; losing muscle memory for motor skills.

On the whole, this is a process of losing neural connections: at first by lack of current activity and interest, further undermined by deterioration through aging. An example: my grandmother lived to be 103. Her husband of 53 years died when she was 75; a few years later her daughter (her only child) moved her out of the house she had lived in for 30 years, and away from where her relatives were located, to a distant city. My mother insisted that her mother stay out of the kitchen and just sit down. Since in my memory (having lived with my grandparents for considerable periods growing up) Granny spent most of her life in the kitchen, this was tantamount to taking away most of her interests, her routines, and her neural circuits. A few years later, mother moved Granny into a nursing home. By the time she was in her late 80s, when I would visit Granny, she did not recognize me. At times, she would call me by the name of her nephew in Germany. She stopped being able to speak English (which she had spoken since age 22). When she did speak, she was still able to speak German; when her nephew did come for a visit, they were able to converse. In her nursing home years, she spent all her time in bed, barely able to move. This continued for about 20 years until she died. Apparently she could still recognize her daughter, who visited her from time to time. When mother died of cancer, no one told Granny, assuming she could not understand what was said. Granny died a few weeks after mother no longer came to see her.

These are typical patterns. Brain-circuits stay alive to the extent they are activated; if not, they fade out.

The main counteractive (if not cure) to long-term memory loss is to do things frequently that increase neural connections. As James emphasizes, this means paying attention with a sense of interest; and this happens when doing things not as a flat routine but with a trajectory. In his view, persons like Darwin were not unusually intelligent, nor did they have especially retentive brains; it is having a project that connects things into a sought-for pattern that increases memory. His advice would be: if you have any intellectual interests, cultivate them even more as you get older. The same would be true for any interest that involves poring over a lot of information (sports statistics or the stock market, whatever). If one is concerned about losing memory of life events, a counter would be to collect old photos and materials and compose your autobiography. If that seems overly self-centered, you could make a project out of the biographies of other people.

In advanced stages of Alzheimer’s, people have difficulty following a conversation. Learning to focus on the non-verbal signs other speakers are giving off would be a project that provides interest and increases attention; probably it would reduce one’s sense of isolation, and it may make the others feel more comfortable as well. These are self-reinforcing spirals: isolation and passivity reduce neural connections and breed more of the same; active participation, on whatever level, has the opposite result.

 

Ending 

It is conventional to regard Alzheimer’s as a disease, and therefore something for which there is a specific medical cure. But it may be that there is no such thing as “Alzheimer’s disease” or “dementia”; these are merely labels for summaries of observed symptoms. No one gets antibodies to Alzheimer’s; there is no physiological equilibrium-maintaining mechanism like fever as the body’s response to invasive micro-organisms. All this suggests these medical labels are just names we put on the normal process of aging.

All physical things eventually decay; this is an empirical generalization that appears to have no exceptions. When the Buddha lay dying at the age of 80, around 460 B.C., he said that his body was falling apart like an old wooden cart. It has been observed that humans are like automobiles in the following respect: medical expenditures in the last year of life are typically about equal to all previous medical expenses; and repair costs of cars ramp up sharply just before they give out entirely. This implies that deterioration of the nervous system, like any other deterioration of a living body, is a process wider than life itself.

It may be that the extreme stages of Alzheimer’s are the way that very healthy people die; they live longer because they have not died from cancer, stroke, or heart disease. As cures for common illnesses have improved, there is an increasing residue of persons who live long enough to die of Alzheimer’s.

The medical metaphor of Alzheimer’s as a disease has the further disadvantage of promoting an externally imposed, disease-control policy. What is more relevant is to discover practical methods for living with a fading memory. No doubt everything deteriorates sooner or later, but how long it takes is to some extent under our personal control.



References

Randall Collins. 2008. Violence: A Micro-sociological Theory.

Carroll L. Estes. 2019. Aging A-Z: Concepts toward Emancipatory Gerontology.

Harold Garfinkel. 1967. Studies in Ethnomethodology.

John A. Goldsmith and Bernard Laks. 2019. Battle in the Mind Fields.

William James. 1894. Psychology: Briefer Course.

Sociology of Masks and Social Distancing

Throughout human history, people have generated almost all of their solidarity face-to-face, by physical co-presence. This has been disrupted by a world-wide natural experiment: making people stay home, avoid public gatherings, avoid interacting with strangers except when wearing masks and staying six feet apart.

Since the publication of Interaction Ritual Chains (Collins 2004), the question has been debated whether mediated forms of interaction, especially electronic communication in real time, can substitute effectively for face-to-face (F2F) interaction. On the whole, this literature has found that electronic media do not substitute for F2F interaction but instead supplement it. Studying cell-phone use, Ling (2008) found that persons tend to call the same people they normally interact with, and much of what they communicate is where they are and how they can meet. He also found there is some feeling of social solidarity-- personal belonging-- in talking over a mobile phone, but that it is a weaker feeling than F2F. This may explain why cell-phone users spend much more time telephoning than traditional land-line users did; in this respect they are similar to drug addicts who increase their dose as its effects decline.

Are humans infinitely malleable, entirely determined by social construction, so that we become acclimated to whatever is “the new normal” (perhaps with a measurable time-lag)?  Or is it that technologies become increasingly better at ferreting out which features of F2F interaction can be mimicked electronically? My review of currently available evidence is carried out with these questions in mind, along with a third possibility: that some features of F2F interaction are deeply engrained in the human genome, and that eliminating them leads to resistance and new forms of social conflict.

The Ingredients of Interaction Ritual (IR)

[1] Co-presence: people are physically near each other, where they can see, hear, and otherwise sense what each other is doing.

[2] Mutual focus of attention: they focus their attention on the same thing, and become aware that they are doing so.

[3] Shared mood or emotion: they feel the same emotion, whether excitement, joy, fear, sadness, anger, boredom or any other.

[4] Rhythmic entrainment: they get into the same rhythm, with voice or body.

Feedback processes take place among these ingredients. As people pay more attention to each other, they tend to converge on a shared emotion and intensify it; conversely shared emotion intensifies mutual focus. As these increase, rhythmic entrainment increases.
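For readers who think in code, the feedback loop can be pictured with a toy numerical sketch (my own illustration, not a model from the IR literature; the 0-to-1 scales and the gain coefficient are invented):

    # Toy sketch of the self-reinforcing feedback among ingredients [2]-[4].
    # Scales (0 to 1) and the gain coefficient are invented for illustration.
    def ir_feedback(focus, emotion, entrainment, rounds=5, gain=0.3):
        for _ in range(rounds):
            focus = min(1.0, focus + gain * emotion * (1 - focus))
            emotion = min(1.0, emotion + gain * focus * (1 - emotion))
            entrainment = min(1.0, entrainment + gain * focus * emotion * (1 - entrainment))
        return round(focus, 2), round(emotion, 2), round(entrainment, 2)

    print(ir_feedback(0.4, 0.3, 0.1))    # moderate start: the ingredients amplify one another
    print(ir_feedback(0.05, 0.02, 0.0))  # weak start: a failed ritual barely moves

The point of the sketch is only that the ingredients amplify one another once they pass a modest starting level; this is what separates successful from failed rituals.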

Successful interaction rituals (in contrast to failed rituals where these ingredients are missing or weak) have the following outcomes:

[5] Social solidarity. Individuals feel like members of a group, and recognize others as co-members.

[6] Emotional energy (EE). Individuals feel pumped up by a successful interaction ritual; persons with high EE are confident, proactive, and enthusiastic. Persons with low EE are the opposite: depressed, passive, alienated. Low EE is the result of failed interaction rituals.

[7] Collective symbols. Durkheim called these “sacred objects”, referring to the emblems, places, books, etc. that are the focus of religious worship; and he extended this to political symbols like flags. Collective symbols include all our ideals and strong beliefs.

[8] Moralities of right and wrong. For any group with successful rituals, the fundamental standard of morality is whether people respect its rituals and sacred objects. The worst offense is disrespect for its emblems; attacking its symbols creates moral outrage. This results in the most heated forms of social conflict, and rituals of public punishment for the enemy group and their symbols. We can see this process in the conflicts that arose during the coronavirus emergency, and in the public demonstrations that spread across the US in June 2020.

In sum, successful interaction rituals are the micro-process that generates almost everything that we refer to as “social order.”  If we  get rid of interaction rituals, or weaken them considerably, what would happen?

How important is physical co-presence?

Co-presence [1], in the scheme as developed by Durkheim and Goffman, is the point of departure. It is when people come together that the other ritual ingredients [2-4] can be brought into action. Can we say, though, that as media become more ubiquitous and mimic more aspects of F2F interaction, social connections become increasingly transferred to media connections while the bodily interactional basis fades away?

Co-presence is important because it facilitates mutual focus, shared emotion, and rhythmic entrainment. By seeing another person’s eyes and face, and the orientation of their body, you know what they are paying attention to. An exchange of glances communicates I-see-you-seeing-me, and also I-recognize-what-we-are-both-looking-at. Looking at the other person’s facial expressions and bodily gestures, as well as hearing their tone of voice and its loudness or softness, communicates what emotions are being felt. The James-Lange principle applies here: moving the muscles of one’s face, eyes, and body intensifies the felt emotion; and the emotion is triggered and intensified by closely monitoring the other person’s emotional expressions. Not only does running away with the rest of a crowd make you feel more afraid; shouting happily, or angrily, with others makes you happier or angrier. Rhythmic entrainment is most strongly felt when it is in all bodily channels: not only seeing and hearing, but the proprioceptive feelings in muscles, breathing, heart rate, and bodily chemicals that make an emotional mood a felt experience, not merely a detached cognition. These kinds of embodied experiences are the glue that creates moments of social solidarity.

What happens when people are prevented from bodily F2F encounters, or are restricted to a small number of sensory channels?


Masked social distancing in public

Here we have a partial restriction of the ingredients of IR: people are bodily co-present, but the F2F aspect is greatly reduced. Masks cover the mouth and lower face, making it harder to recognize emotions, as well as harder to hear what the other person is saying. Thus we would expect that shared emotion and mutual focus of attention would be harder to attain, IRs would weaken, and solidarity would decline.

Nevertheless, what I found in observing people on the streets was the opposite, at least for a period of time. Simmel’s theory of solidarity through conflict says that when a group is shocked by an enemy-- we can widen this to a natural disaster or other shared emergency-- solidarity goes up. I tested this immediately after the 9.11.2001 attacks [Collins 2004a], and found that it has a time-pattern. Using the display of American flags as an indicator: after the first few days of hushed uncertainty, people started putting up flags on windows and cars; this reached its maximum within two weeks. It stayed at a plateau for 3 months, a period during which there were also repeated displays of flags and ceremonies honoring police and firefighters killed in the attacks. After 3 months, articles started appearing discussing “can we take our flags down now?” Political controversy, which was almost entirely stifled during this period, started up again. By 6 months, the level of flag-display had declined by more than half, with a long diminishing tail thereafter.

In the US, public alarm over the coronavirus surged about March 16, when schools and gyms were shut down. By March 20, many states had ordered people to stay indoors. Wearing masks away from home became a requirement in the next two weeks, delayed because of shortage of supplies and controversies over effectiveness. Effective or not, wearing masks now became a social marker of joining the effort against the epidemic, along with keeping 6 feet away from other people. I anticipated that this period of solidarity would last no more than 3 months. Since the period after 9.11.01 had many public assemblies, often highly emotional, honoring the heroes of the attacks, whereas in 2020 public assemblies were prohibited as dangerous incubators of the epidemic, I expected the period of public solidarity would be shorter, probably 1 or 2 months.

For several years I was in the habit of walking or running for a half hour or more almost daily in my neighborhood or public parks, and thus have a baseline for normal street behavior. By early April (about 2 weeks after the lockdown began), I noted that the number of people out walking was up by a factor of two or three from the pre-epidemic period; people deprived of exercise had found something they could do. Soon almost all walkers were wearing masks, and when meeting others on the sidewalk, one or the other would step out into the street to maintain distance. When doing so, almost everyone waved or called out a friendly greeting. The main motivation would be that deliberately avoiding someone would be a mark of fear or an insult; so we countered it with a friendly wave or greeting. This is also Simmelian solidarity. It is clearly related to the onset of the shared emergency; in my walks in previous months and years, I would estimate the proportion of F2F encounters on the street where there was a greeting at less than 20% (chiefly among older people; noticeably absent among the young).

The time-pattern of decline in Simmelian solidarity was the following. By late April (one month after the lockdown), the number of people out walking had noticeably increased. The proportion of people greeting each other declined; this was particularly true in areas along the harbor or ocean-front (the beaches and parks being closed and patrolled by guards); perhaps this was the beginning of a tone of defiance. Younger adults in particular were ignoring social distancing, and friendly waves or greetings were absent (including towards each other).

I began to make systematic counts of how many people were wearing face masks, distancing, and greeting. My focus was on adults who were walking on sidewalks or streets (children at this point rarely wore masks). I did not count runners or bicyclists, since they almost never wore masks-- a constant pattern from this point onwards. This may be due partly to decreased lateral visibility, but especially to difficulty breathing when doing heavy exercise. I did not count gardeners or other outdoor workers or delivery persons: the latter usually wore masks (as they worked for bureaucratic organizations that demanded it); manual workers usually did not, nor did they practice social distancing among themselves. One can see here a social class divide in the observance of social distancing etiquette. For walkers, the height of symbolic solidarity (mask-wearing and greetings) was in April; during May the proportion wearing masks gradually declined, as did greetings when social distancing (very noticeable around May 22-23). For this period, a Gallup poll reported that 1/3 each said they always, sometimes, or never wore masks outdoors (New York Times June 3, 2020); given the desirability bias in surveys, the mask-compliant numbers are probably exaggerated.

A sharp break occurred in the first week of June, as Black Lives Matter protests and marches broke out. This was 10 weeks after the lockdown began. During the most militant period (the first 4-5 days), when many protest demonstrations were accompanied by burning, property destruction, or violence, photos indicate that few protestors wore masks, and participants massed close together. This happened despite official warnings that big assemblies, especially when shouting and chanting together, broadcast the virus. A rival source of Simmelian solidarity had been created, and it overrode the already-declining solidarity rituals of the social distancing etiquette. Most of the participants in the protests were young (as one can see in news photos); young people already were largely ignoring social distancing, and signs of solidarity among the young in ordinary public street behavior had been low. They were further IR-starved by the banning of sports and concert participation as audiences, or even as performers. The predominant participation of white youth in the protests (in most photos far outnumbering minority participants) was at least in part a response to the sudden opportunity to regain experiences of mass solidarity. Police violence and other grievances have been long-standing [https://sociological-eye.blogspot.com/2020/06/seven-reasons-why-police-are-disliked.html], but why have protests mushroomed now? The timing of these unprecedentedly widespread protests throughout the youth cohort is also connected to their intensified alienation in the social distancing regime, as I will document in the next section.

In subsequent weeks, as protests became smaller, photos show participants more often spread out, maintaining social distancing (also no big crowds) and at least half wearing masks. This is probably the effect of being more deliberately organized rather than spontaneous, with organizers and (mostly white middle-class) participants making a conscious effort to present a good appearance by following official coronavirus etiquette.

In California, parks and beaches were opened up again around June 10, along with reiterated regulations on masking and social distancing. My observations for pedestrians June 10-27:

Totals for public parks: 54 of 267 wore masks (20%); 3 greetings (6% of mask-wearers, 0% of unmasked).

For neighborhoods: 23 of 91 wore masks (25%); 15 greetings (43% of mask-wearers, 9% of unmasked).
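For those who want to check the arithmetic, a minimal sketch (the function name and rounding are mine; the counts are the ones just reported, and the park figures reproduce the percentages above):

    # Recompute the reported rates from the raw tallies.
    def rates(masked, total, greeted_masked, greeted_unmasked):
        unmasked = total - masked
        return (round(100 * masked / total),               # mask-wearing rate, %
                round(100 * greeted_masked / masked),      # greeting rate among masked, %
                round(100 * greeted_unmasked / unmasked))  # greeting rate among unmasked, %

    print(rates(54, 267, 3, 0))  # parks: (20, 6, 0)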

Those who continued to wear masks showed some solidarity (although declining over time) by greetings; this was more likely in residential neighborhoods (at least middle-class ones) than in public parks, where greetings had largely disappeared.

Occasional conflicts were observed, in the following pattern (mid-June): a middle-aged woman says to an unmasked woman approaching her closely: “Could you please stand back? Where is your mask?” Reply: “Don’t be rude!”  It appears that both sides felt collective morality was on their side: a formula for intense social conflict. News reports a month earlier noted an upsurge of confrontations between maskless shoppers and the retail store employees who told them to wear masks; violent incidents, however, were rare. (Wall Street Journal, May 18, 2020) We have no trend data on conflicts over masking, so we don’t know whether this was just a transitional pattern.

When everyone is wearing masks, it becomes more difficult to hear what people are saying; also, some of the cues that we use to fill in likely words are missing, because we cannot see the speaker’s mouth and facial gestures, nor can one use facial feedback from the listener to correct one’s articulation. Thus masked interactions even in ordinary utilitarian situations give rise to misunderstandings, raised voices usually associated with anger, and sometimes gestures of annoyance. I have observed this frequently in grocery stores. Anything that limits multi-modal interaction takes its toll, even in situations where solidarity mainly takes the form of routine civility.

Family Solidarity

On the positive side, it appears that at first solidarity increased, at least for some family members. Children of elementary school age and younger seemed happy, as they had more time with parents and attention from them. I observed a large increase in families bicycling together on neighborhood streets (seldom seen before the epidemic); since bicyclists rarely wear masks, and children at this time never did, one could see that their expressions were on the whole happy. It is unlikely that teenagers were similarly affected; I almost never saw them bicycling or walking with adults in neighborhoods or parks. Not surprisingly: teen culture is mostly concerned with being independent of adults, and being seen with parents is a status loss except on formal occasions (Milner 2016). Given that teens were prevented from gathering (I only occasionally saw teens out together, and hardly any male-female young couples other than parents), I would predict that data on alienation and anxiety among teenagers will show an increase for this period. Even though teens are the most media-connected and media-obsessed of all age groups, they are the ones least likely to find media a compensation for a further drop in F2F experience.

On the negative side, doctors report an increase in child-abuse cases, although official statistics show a decline (all attention being focused on COVID-19). [San Diego Union-Tribune June 5, 2020] A national child-abuse hotline reported a 20% increase in calls and a 440% increase in text messages over the prior year. [Wall Street Journal, May 19, 2020] The stay-at-home situation is favorable to some, perhaps most, families with adequate space and resources; where there is family tension, isolation increases abuse, as has long been established [Collins 2008: 137]. Psychiatrists interviewed generally regard remote video counseling as less effective than F2F, especially because of the difficulty in reading emotions and conveying empathy. [San Diego Union-Tribune May 18, 2020] A national survey carried out in May found that reports of clinical symptoms of depression had doubled (compared to a 2014 baseline) to 24% of the US population; depression was especially high among young adults and women, even though they were less vulnerable to COVID-19. [Washington Post, May 27, 2020] But embodied social interaction in the smartphone generation was already in decline, especially among teenage girls. [Wall Street Journal, August 17, 2019] By 2018, American teens were spending 6-to-9 hours daily on-line. Since 2007, time spent seeing friends or going out in public had fallen sharply, as had dating. In 2019, 36% of girls said they were extremely anxious every day.

We have no data on sexual behavior during this period. Likely the birth rate will spike 9 months after the onset of the epidemic. On the other hand, monthly marriage rates must surely drop, as will the frequency of sexual behavior among couples of all kinds; casual hookups as well as commercial sex likely will be found to drop drastically. (I have very occasionally seen an unmasked male/female couple necking in the park; formerly active gay pick-up areas look deserted.) Sexual activity had already declined in the Internet generation; in 2018, 23% of Americans age 18-29 had no sex in the previous year, doubling the percentage of sex-less lives in the pre-social-media 1990s [Wall Street Journal, May 18, 2019]. Looking for a bright side in the coronavirus shutdown, The Wall Street Journal (May 30, 2020) touted “Distancing Revives Courtship,” an interview-based story of how dating has gone on-line, returning to almost Victorian manners, at best watching each other on-line drinking a glass of wine (definitely no touching). If sex is a form of solidarity, it must surely decline among those who do not already have intimate live-in partners. The same would be true of ordinary fun involving any kind of physical activity together. Research may well find that social distancing makes little difference to upper-middle class professionals whose social gatherings consist entirely of conversation or playing cards, but more active persons would likely feel deprived. This is one reason why after bars re-opened in late June 2020, these suddenly crowded venues (photos showed an absence of social distancing and mask-wearing) became hotspots for coronavirus infections. In the tradeoff between lively sociability and risk of sickness, many choose the former.

Remote Schooling

By all accounts, this has not been very successful. Leaving aside issues such as the extent of the school population who lack internet access, and schools adopting no-grading policies, we find that on-line schooling has a negative effect on student motivation. Daily absence rates-- students who don’t log in-- run 30% or more; surveys find there is little interaction with teachers; 50% of students said they don’t feel motivated to complete on-line assignments. [Wall Street Journal, June 6, 2020] Teachers complain they can’t read the body language of students and can’t pick out cues for whom to engage with at what opportune moment. I have watched my 8-year-old grandson during on-line classes; these usually last less than half an hour, while the teacher goes over the assignment in a pleasant voice, talking to no one in particular. He spent the time playing with a slinky held beneath the level of the screen. Posts on Reddit by college students showed similar problems: students complained about noise from parents or siblings while they were trying to hear a lecture or take an exam. [San Diego Union-Tribune May 23, 2020] Some students said they liked not having to go to campus, since they did not need to find a place to hang around between classes; apparently these were students who did not live near campus, or who had jobs. One student said he liked being able to watch a lecture while doing his homework in bed; on-line viewing reduced the need to pay attention. But we have no baseline of how much students normally pay attention in class (usually they pretend to, but often their laptops are not being used for taking notes, as any teacher can observe by walking around the classroom). We cannot assume that F2F classrooms are automatically successful Interaction Rituals.

Some college students complained about the anti-cheating protocol during virtual exams, where they were required to keep their face and hands visible on the webcam at all times. Other Reddit posts said they felt isolated at home, missed their school friends, and were generally apathetic and unmotivated. This suggests a divide between students who are entirely utilitarian in their orientation, and those for whom school is a social experience. Hypothesis: grinds like on-line learning, party animals don’t; those who value networks, whether intellectual or career-oriented, also miss personal contact, even though for them it consists of more than fun.

Besides passive feelings of alienation and deprivation, some students actively took the opportunity to counter-attack. Some coordinated on-line pranks with fellow-students, such as simultaneously switching off their cameras so that the teacher suddenly finds herself alone, surrounded by blank rectangles. Others organized campaigns to destroy the ratings of apps such as Google Classroom. [Wall Street Journal, June 2, 2020] Others hacked into Zoom conference calls, playing loud pop music, shouting insults and obscenities, or inserting pornographic images on the screen. [Washington Post April 5, 2020; Associated Press April 8, 2020] Mass rebellions by students in classrooms against unpopular teachers are not unknown in the past; but they were rare. On-line hacking may be a mixture of pranks, fun, alienation, or hostility. The comparison shows that interactions in person result in more conformity, a Goffmanian front-stage show of respect for the situation, and thus at least a mild form of solidarity. This social pressure or entrainment disappears at a distance; violence, too, is difficult to carry out F2F, and much easier at a distance, above all when there is no reciprocal view of each other’s eyes. (See Collins 2008, especially pp. 381-87 on snipers, whose mode of killing hinges on seeing their targets through a telescopic lens while remaining unseen by them.) It is reciprocal eye contact that generates intersubjectivity and its constraints.

Working Remotely

There is disagreement over whether working remotely is effective. Some people prefer working from home. What they like about it: no commuting; fewer meetings, which they feel are a waste of time; and fewer distractions than in the workplace. Some dislike working at home; what they dislike: more distractions in the household; less team cohesion; and technical and communication difficulties. (Wall Street Journal, May 28, 2020: based on a survey of hiring managers) Similar points were made by the head of a state judicial unit, who emphasized that much additional time by management personnel was now spent on meetings, and on attempts to keep up morale by remote contact; meetings were often frustrating because considerable time was wasted trying to get the communications technology working for all participants. (repeated interviews during March-June 2020) She sometimes went to her office in order to use secure communications, and found it refreshing whenever she encountered a colleague in person. Efforts to re-open court business, with social distancing and masking precautions, were welcomed by part of the staff and opposed by others. The characteristics of one group or the other are unknown; a hypothesis is that those more committed to their career and professional identity want to return to their customary work setting, while those for whom work is more of a routine prefer to stay home.

Hollywood film professionals said they liked spending less time on planes flying around the country, and having fewer high-level meetings, which they considered more habitual than necessary. [Los Angeles Times May 3, 2020] One producer said: “I don’t think video conferencing is a substitute for being in a room with someone, but it is better than just talking on the phone. There are so many ways you communicate with your expression... when it’s delayed and small, you just lose all that. My feeling is it’s 50% as good as an in-person meeting.” [p.E6] In the actual work of making movies, most emphasized that it is a collective process, and some insisted that spontaneous adjustments on-set were the key site for creativity. They also reiterated the point that live audiences are the only way to reliably tell whether a film is coming across, and that larger audiences amplify both comedy and drama (i.e. via emotional contagion).

Some businesses have tried to compensate by having “virtual water-cooler” sessions several times a week, where any employee can log in and chat. It is unclear what proportion took part, how enthusiastically, or with what pattern over time. Some managers reported that company-wide “town-hall meetings” to reassure employees lost interest over time. [Wall Street Journal, June 6, 2020] DiMaggio et al. (2019), however, found that on-line “brainstorming events” for employees in a huge international company were consonant with some patterns of interaction rituals; this research was carried out in 2003-4, long before the epidemic. The degree of involvement and solidarity in town-hall meetings is a matter of scale; the court administrator reported that feedback about morale was positive after on-line sessions involving groups of around 10, but in larger groups it was hard to get a Q&A discussion going. This is similar to what any speaker can observe in ordinary lecture presentations and panel discussions; even with physical presence, most people are reluctant to “break the ice” after the speakers have been the sole center of attention; but once someone (usually a high-status person in the audience) sizes up the situation and says something, it turns out that many others find they also have comments to make. This is a process of micro-interactional attention, which is especially difficult to handle on remote media.

Many managers said that innovativeness was lost without serendipitous, unscheduled encounters among individuals. [Wall Street Journal, June 6, 2020] In a PricewaterhouseCoopers survey, half of employers reported a dip in productivity with on-line work. Longer trends, going back before the coronavirus epidemic, indicate that the promise of on-line work was not highly successful. During 2005-15, the era of the high-speed Internet, the percentage of persons in the US regularly working from home increased slowly; those working from home at least half-time reached a pre-epidemic peak of only 4%. [www.npr.org/sections/money/2020/04/28/846671375/why-remote-work-sucks] During this period several big corporations, initially enthusiastic, tried to shift to primarily on-line work but abandoned it after concluding it was less effective. In the market-dominating IT companies, the trend instead was to provide more break rooms, food, play and gym services to keep their workers happy on site. This was abruptly reversed in the coronavirus period.

Zoom fatigue

Popular video-conferencing tools such as Zoom attempt to reproduce F2F interaction by showing an array of participants’ faces on the screen, along with one’s own face for feedback in positioning the camera. Reports on how well it works in generating IR-type rhythm and solidarity are mixed. CEOs of high-tech companies tend to claim that it works well. Among rank-and-file participants, however, complaints are widespread enough that they acquired a slang term, ‘Zoom fatigue.’ [Wall Street Journal, May 28 and June 17, 2020] Achieving synchrony with others is hard to do with a screen full of faces, delayed real-time feedback, and a lack of full body language. Since there is a limit to how many individual faces can be shown, in larger meetings some persons are seen only occasionally, and leaders looking for responses often find they get none. Some of the ingredients of IR (not necessarily under that name) are now being recognized by communications specialists; these include fine-grained synchrony and eye movements. In ordinary F2F conversation, persons do not stare continuously at others’ eyes, but look and look away (Tom Scheff made this point to me in a personal communication during the 1980s; for detailed transcripts of multi-modal interaction see Scheff and Retzinger 1991). Thus seeing a row of faces staring directly at you is artificial or even disconcerting. Some readers responded with advice: cut off the video and go audio-only to reduce Zoom fatigue. Some found hidden benefits in Zoom conferencing: once the round of social greetings is over, turn off the video and your mic and do your own work while the boss goes through their agenda.

Continuously seeing one’s own face on the screen is another source of strain. Of course, as Goffman pointed out, everyone is concerned with the presentation of their self, in terms of status as well as appropriateness for the situation. But in ordinary life one does not have one’s image constantly in a mirror; and when interaction starts to flow, one loses self-consciousness and throws oneself into the activity, focusing more on others’ reactions than on oneself. Those who cannot do this find social interaction embarrassing and painful; but even for those who can, enforced viewing of one’s own image feels unnatural.

Prolonged video conferencing as a whole seems to have about the same effects as telephone conference calls. In my experience on the national board of a professional association, our mid-year meeting was canceled by a snowstorm, and a 2-day conference call was substituted. The next time I saw the board in person, I polled everyone as to whether they liked the conference call: 18 of 20 did not. Lack of shared emotion was apparent during the event; for example, when it was announced that we had received a large grant, there was no response. No wonder: applause and cheers are coordinated by looking at others, and it is embarrassing to be the only person applauding. [Clayman 1993] Work gets done remotely, after a fashion; it just lacks moments of shared enthusiasm.

Assemblies and Audiences

Participating in large audiences or collective-action groups is intrinsically appealing when it amplifies shared emotions around a mutual focus of attention. This is a main attraction of sports and other spectacles, concerts, and religious congregations; and it is what creates and sustains enthusiasm in political groups and social movements. Thus the ban on large participatory gatherings should be expected to reduce commitment. Especially vulnerable is the practice of singing together, because it spreads airborne germs more than any other form of social contact. We lack current data on these effects; but the prediction of Durkheimian theory is that religious commitment and belief will fall off as the group is prevented from assembling. How long will this take? Judging from patterns of religious conversion, my hypothesis is that beliefs fall off drastically if there is no participation for 1-to-2 years. When the epidemic finally ends, the level of church attendance will give an answer; during the epidemic, surveys of religious belief on a monthly basis should show a trend-- although allowing for desirability bias (which makes religious surveys overstate religious practice) [Hadaway et al. 1998].

Can technology substitute for collective practices like singing together in a congregation? Some Christian organizations have created virtual choirs, where individuals sing their parts alone and their recordings are compiled by sound engineers; the resulting performance is presented on-line, either showing a series of faces of individual singers, or several faces simultaneously on screen. [interview with international religious organization staff]  Such videos have been widely viewed, and convey the singers’ enthusiasm. It remains to be seen, over a period of time beyond the onset of the world epidemic, whether participation and commitment levels change.

Similar techniques have been attempted for performances of operas and orchestras. [Wall Street Journal, April 27, 2020] Achieving good sound quality is difficult, since this depends on minute timing and adjustments of volume. (Sound quality of amateur efforts by church congregations is admittedly poor.) Making music together works best when there is a strong beat and repeated musical motifs-- i.e. when there is pronounced rhythmic coordination, as in successful conversational IRs. More complex music is more difficult to produce by remote coordination. No doubt it will be possible to compare such recordings with conventionally produced ones over the coming year.

When sports events are played without live audiences, can crowd enthusiasm be supplied by canned cheers? There is, in fact, considerable experience over the years with TV broadcasts, including the long-standing practice of laugh tracks in comedy shows. Most listeners find these artificial; research is needed, however, comparing the sounds and laughs audiences make when they are at a live show or when watching it with a sound track. We also know that important games attract enthusiastic fans even when ticket prices are high-- and here TV viewers can actually hear the sound of a live crowd reacting to the action.

What extra ingredient is needed for group emotional contagion? A natural experiment occurred in March 2013, when a Tunisian soccer match banned fans because of political tensions. [Wall Street Journal, May 27, 2020] Fans were able to download an app that connected to loudspeakers in the stadium, producing recorded cheering that got louder as more people tapped on their smart phones more frequently. Fans could thus hear the effect of their own remote “cheering”, and presumably so could the players on the field (although there are no interviews about the players’ experiences). Audience enthusiasm was high, and much local publicity was given to the experiment. The key ingredient is feedback, from one individual fan to another; they were able to monitor how their own action fit into the dynamics of making collective sounds. This feeling of collective participation should be highest, not when sound is kept at a maximum, but when participants can perceive rising and falling levels in accordance with their own actions. This is what happens in real audiences, who can monitor each other in all perceptual channels (such as recognizing when the wave is going around the stadium and when it is fading out). If remote-communications technology is to generate the solidarity and energy of embodied gatherings, it is such details of the IR mechanism that must be reproduced.
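The mechanism can be made concrete with a minimal sketch (my own reconstruction under stated assumptions, not the actual Tunisian app): the loudspeaker volume rises with the aggregate tap rate in each time window and decays between windows, so fans hear a swell and fade that tracks their own collective action.

    # Toy model of feedback cheering: volume follows the crowd's tap rate.
    # The function name and coefficients are invented for illustration.
    def crowd_volume(taps_per_window, decay=0.7, gain=0.5):
        volume, levels = 0.0, []
        for taps in taps_per_window:
            volume = decay * volume + gain * taps  # carry-over plus fresh input
            levels.append(round(volume, 1))
        return levels

    # A burst of tapping after a goal produces a swell that then fades:
    print(crowd_volume([2, 5, 30, 80, 40, 10, 3, 1]))
    # -> [1.0, 3.2, 17.2, 52.1, 56.4, 44.5, 32.7, 23.4]

The design point is the one made above: what matters is not maximum loudness but that each participant can hear the level rise and fall with the crowd’s own actions.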

Summing Up

We can now provisionally answer the questions posed at the outset. The theory of interaction rituals does not become obsolete; we do not need to invent a new sociology and psychology for the IT era (at least not until robots start replacing human beings entirely, and even then the issue remains to what degree such autonomous robots would incorporate current human qualities). As far as human beings are concerned, political authorities and technological developments may force people to forego much embodied interaction. People are culturally quite malleable; but even if that means that, after a period of acclimation, we can get used to anything, it does not follow that we can do so without paying a price. If people are deprived of embodied interactions, it is a likely hypothesis that they will be more depressed and less energetic, feel less solidarity with other people, and become more anxious, distrustful, and perhaps hostile.


References


Clayman, Steven E. 1993. “Booing: the anatomy of a disaffiliative response.” American Sociological Review 58: 110-130.

Collins, Randall. 2004. Interaction Ritual Chains. Princeton Univ. Press.

Collins, Randall. 2004a. “Rituals of solidarity and security in the wake of terrorist attack.” Sociological Theory 22: 53-87.

Collins, Randall. 2008. Violence: A Micro-sociological Theory. Princeton Univ. Press.

DiMaggio, Paul, Clark Bernier, Charles Heckscher, and David Mimno. 2019. “Interaction Ritual Threads: Does IRC Theory Apply Online?” in Elliot B. Weininger, Annette Lareau, and Omar Lizardo (eds.), Ritual, Emotion and Violence: Studies in the Micro-sociology of Randall Collins. New York: Routledge.

Durkheim, Emile. 1912/1964. The Elementary Forms of Religious Life. New York: Free Press.

Goffman, Erving. 1967. Interaction Ritual. New York: Doubleday.

Hadaway, C. Kirk, Penny Marler, and Mark Chaves. 1998. “Overreporting Church Attendance in America.” American Sociological Review 63: 123-130.

Ling, Rich. 2008. New Tech, New Ties: How Mobile Communication is Reshaping Social Cohesion. Cambridge, MA: MIT Press.

Milner, Murray, Jr. 2016. Freaks, Geeks, and Cool Kids: Teenagers in an Era of Consumerism, Standardized Tests, and Social Media. New York: Routledge.

Scheff, Thomas J., and Suzanne Retzinger. 1991. Emotions and Violence: Shame and Rage in Destructive Conflicts. Lexington, MA: Lexington Books.

Seven Reasons Why Police Are Disliked

A theme of protest demonstrations since late May 2020 is that police violence persists despite previous episodes of public outrage and efforts at reform. The problem has not been solved, including by the protests themselves.

Police violence was prominent in triggering the uprisings of the 1960s. The two most destructive riots were both started by police arrests: Newark in June 1967 (26 dead) and Detroit in July 1967 (43 dead). In Newark, 5 days of riots began after a taxi driver was arrested; in Detroit, when police attempted to raid a popular after-hours club, patrons fought back by attacking police cars; backup was called, and eventually the National Guard; fighting with snipers, arson, and looting lasted 4 days. The pattern continued in the riots over the acquittal verdict in the Rodney King beating by the LAPD in 1992, and in a long series of highly publicized cases through the Ferguson, Missouri protests of 2014 and down to today.

There have been occasions when police have been adulated; notably in the public ceremonies so prominent in the months after the 9/11/2001 attacks, when police and firefighters were repeatedly honored for their sacrifices at the Twin Towers. On the other side of the ledger, there is a series of reasons why large portions of the public-- not just African-Americans-- dislike the police, and will join in protests against them.

[1] Police are used for collecting fines for municipal budgets. This has been a long-standing practice in speed traps, where heavy fines are levied on drivers, usually on highways outside of town; since locals know where the speed traps are, this falls mostly on strangers (similar to resting your budget on hotel taxes in popular tourist destinations). Cities where there is strong resistance to tax increases, or which have serious budget shortfalls, often explicitly adopt the policy of increasing fines for all sorts of infractions. It then becomes the police’s duty to seek out offenses, however trivial; they are expected to produce at high rates, sometimes with quotas set by police officials (Moskos 2008). This was a notorious practice in Ferguson, where the protests began after police shot a young man who defied an order about walking in the street.

In Philadelphia, Alice Goffman (2014) showed how the computerization of court records and police communications has intensified pressure on persons (mostly minorities in the ghetto) who have some kind of previous record. Offenses may range from drugs to violence to gang association; police stopping someone on the street immediately run a computer check from their car, above all for outstanding warrants. These often involve failure to appear for a court hearing, or failure to pay fines, since the penalties for everything include fines. It becomes a vicious cycle as fines mount up. The courts are overburdened, and this, combined with attempts to reduce over-sentencing to prison, results in most offenders being released but required to make future appearances and pay fines they can’t afford. Persons caught up in the system can no longer get a bank account, a legitimate job, or a driver’s license-- which generates further fines. Police, as the front-line enforcers of the system, are understandably unpopular. On their side, police also regard the criminal justice system as a revolving door.

[2] Police are used for enforcing unpopular regulations. A long history includes the prohibition of alcohol (now mostly passé except for prohibitions on young people) and prohibitions on marijuana (ditto). All of these promote counter-cultures of defiance. There have been many examples during the stay-at-home lockdowns of the coronavirus plague. Public parks have been closed, playing ball prohibited, beaches and/or their adjacent parking lots closed, children’s playgrounds roped off. In many instances, ordinary people find these prohibitions inconsistent or irrational-- areas closed even when people maintain their distance; young people have heard the statistics and know that their chances of surviving the coronavirus are above 99 percent. It appears that another counter-culture of defiance is building up today, likely to become exacerbated during the phase of opening up public activities under a regime of masking and social distancing. To a considerable degree, this coincides with conflict between age groups.

What many people regard as trivial offenses can escalate when officials enforce the rules. In San Diego, a black man walking his dog in a state park (actually the old Spanish settlement) was accosted by park rangers; when he refused to leave, they called police backup, who arrested the man; when exiting the police car downtown, he slipped his handcuffs, ran away, and was shot and killed. His mother said he was schizophrenic and did not understand the order to wear a face mask.  (San Diego Union-Tribune, May 6, 2020)  This is the archetype of many such events: one damn thing leads to another.

[2a] Police hypocrisy and cynicism. In both [1] and [2], police are required to carry out the dirty work of government. When this becomes the primary part of their job, it makes them cynical and hardened. They know that it doesn’t necessarily make sense to punish harmless violations, and that they are lying when they say their city-mandated increase in traffic stops is purely in the interest of public safety. In their own work lives, they are under a regime that demands hypocrisy; after a while, this unpleasant feeling turns into a bitter that’s-the-way-it-is. Like prison guards who have to play the role of the bad guy, they embrace the tough-cop image. (Striking descriptions of this are in Jennifer Hunt’s 2010 close-up ethnography of the NYPD.) Citizens who argue with cops about these things increase the tension; one police reaction is to become more aggressive. Taking videos of the police is felt as threatening them, and this can lead to attempts at retaliation.

[3] Police dislike defiance. Jonathan Rubinstein (1973), a sociologist who joined the Philadelphia police in order to study their everyday life (similar to Peter Moskos in the Baltimore PD 30 years later), found that their number-one priority is to be the person in control in all encounters with civilians. For the most part, a cop is out there alone, or with a single partner; they are almost always outnumbered by civilians. Particularly in areas where they know they are unpopular, they feel it is imperative not to let things get out of control. They want to be the one who starts and ends the encounter, who sets the speaking turns (the micro-sociology of conversation), who sets the rhythm of the interaction. Acts of defiance, whether micro-actions on the level of voice and gesture, or more blatant words and body movements, will cause a cop to increase their own aggressiveness in order to maintain dominance (Alpert and Dunham 2004). This is a reason why trivial encounters with the police can escalate to violence far beyond what seems called for by the original issue.

[3a] The inner-city black code of the street emphasizes defiance. Elijah Anderson’s ethnography of black street life (1999; also Krupnick and Winship 2015) points out that in dangerous areas, where the police are distrusted, most people adopt a stance of being hyper-vigilant about threats and disrespect, and portray themselves as ready to use violence. Anderson says this is mostly a Goffmanian frontstage, a pretense of being tough designed to avoid being victimized. When dealing with the police, this leads to another vicious circle. Black people, particularly on their home turf, are more defiant of police than whites are; often this is no more than a confrontational way of talking, but these are micro-interactions that arouse police aggressiveness. Anderson notes that one reason people in the ghetto are wary of calling police is that they themselves may end up being arrested, because of the tone of these micro-interactions. Donald Black (1980), who pioneered observer ride-alongs in police cars, found that police arrested black suspects more than whites, but this happened when black people were defiant, which was more often than whites were. Martín Sánchez-Jankowski (1991), in his gang ethnographies (including black, Hispanic, and white gangs), describes the culture of gang members as “defiant individualism.” The pervasiveness of the street code in black lower-class areas, even among the majority who are not sympathetic to a gang life-style, hardens mutual hostility between citizens and police.

[4] Police dislike property destruction. Anne Nassauer [2019], who studied protest demonstrations in the US and Germany by compiling videos of these events, was able to pinpoint the conditions that led to a turning point where violence broke out. One of the major conditions was when police could see protestors destroying property but were unable to do anything about it; this happened if they were under orders not to respond, or when they had relatively limited forces compared to the numbers of protestors. Normally police are concerned to prevent robbery and vandalism; it is one of their more favored duties, since they get to be the heroes protecting people. But now they are in a situation where they have to stand by and let it happen. This builds up their frustration. Although they may perceive that only a small part of the crowd is doing the destruction, they dislike the crowd for providing the opportunity to get away with it. Given further trigger events during the protest-- more on this in [5]-- police will take out their tension and anger on whoever is nearby in the crowd.

Property destruction in a mass demonstration puts police in a damned-if-you-do, damned-if-you-don’t dilemma. If they take action against looters and arsonists, they get accused of whatever violence they use and casualties they cause. If they stand by and let the destruction happen, they are accused of neglecting their duty and not caring. Eye-witnesses to such scenes are particularly likely to be outraged (see letters to the editor in recent days).

[5] Adrenaline overload and forward-panic attacks on unresisting targets. When tension builds up, humans experience rising heart rate, driven by adrenaline. At a high level, perception narrows in, time becomes distorted, and fine motor control is lost. Nassauer found that the level of tension is visible in videos: whether the police are in relaxed or tense postures, and similarly with the crowd. When tension builds up-- from escalating gestures of confrontation, unexpected movements by crowd or police units, police getting surrounded and cut off-- a trigger point sets both sides in action. Adrenaline is the fight-or-flight hormone; it produces generalized arousal of the large muscles of the body, but in what direction will it go? Police, like soldiers, are trained to respond to high adrenaline arousal by attacking. Most civilians, on the other hand, will run. But the one reaction feeds back on the other. The crowd suddenly running away is felt by the police as a release of their own tension into action.

In interviews (reported by Nassauer and others), police say they can see the crowd is divided between peaceful demonstrators and a small number of trouble-makers; but when the situation boils over, the crowd is infected by the violent ones. --This is how the police perceive it; what happens is that the panic of the crowd running away puts the police in an over-the-top rush of adrenaline in which their own perception is narrowed. When police rush forward, they become likely to strike those who have fallen down, or are screaming uncontrollably. The content of what people are saying is lost; all that is heard is the sounds and sights of out-of-control people. Since the police are trained to operate as a unit, officers who rush forward with their comrades tend to imitate what they do; if they are striking someone on the ground, it must be for good reason, and they will join in or protect them.

I have called this “forward panic” because it is like a panic flight, where the overwhelming emotion of the crowd increases individuals’ adrenaline level; but in this case, the adrenaline is driving them forward, towards easy targets who have their backs turned, running away or falling down.

Police who have been in shoot-outs generally report that their senses are blurred, they have tunnel-vision, can’t hear the sounds of their own guns, and don’t know how much time is passing (Artwohl and Christensen 1997). They also tend to fire wildly, with poor aim, and with an overkill of bullets as they empty their magazines. It is similar with those who deliver a large number of blows with their batons, or put their full weight on a captured suspect’s neck. It is the same in military massacres (with a higher level of casualties because of more weapons). There is the same time-sequence: a period during which tension has built up on both sides; a sudden tipping point when the tension is released; one side becomes incapable of resisting (because they are caught in a traffic jam, fallen in the mud, turning their backs, running away); the result is a hot rush, piling on, overkill.

In real-life situations, violence is usually incompetent-- in the sense that it often fails to hit its intended target, or hits the wrong target, or is disproportionate to what is necessary to prevail. Soldiers and police are much more accurate shooters on firing ranges than they are in the emotional conditions of real-life confrontation. The clichés of military and police officials refer to “surgical strikes” and proportionate response. But the military is all too aware of “collateral damage”, especially in counter-insurgency warfare, where violent enemies hide in the civilian population. This is a close analogy to confronting peaceful protests in which aggressive militants cover themselves.

[6] Police training for extreme situations. Police training tends to emphasize the worst-case scenarios. Knowing that firing in real-life situations is encumbered by high adrenaline, weapons instructors tell them to aim middle-mass-- the center of the body-- and not to try to shoot for extremities like arms or legs (the cowboy-movie feat of shooting a gun out of someone’s hand never happens). The result is that police shootings tend to be deadly. Emphasis also is on rapid reaction; in the worst-case scenario, the suspect is armed and dangerous, and you have to train your muscle memory to react as quickly as possible.

There is sometimes training in how to calm dangerous situations, but this tends to be overshadowed by the quick reaction scenario: your life or someone else’s life is in danger; train yourself to react automatically.

Another process that enhances the atmosphere of worst-case scenarios is police communications. When police call for backup, they tend to emphasize the danger of the situation. When the call is relayed more widely, the message propagates just as rumors do: the distinctive elements drop out as the message is repeated. A man on a highway overpass threatening suicide by jumping will get transformed into the cliché-- suicidal and threatening to take someone else with him-- and then into armed and dangerous. This is how individuals end up getting shot dozens of times by an aroused network of converging cop cars. The distortion may start when a civilian calls in about an ambiguous situation, which the police dispatcher (a civilian employee) transforms into a more conventional warning. This was the case in the famous incident in 2009 when a Harvard professor, a black man, arrived home and had difficulty getting his front door open, getting the taxi driver to help un-jam it. A well-meaning Harvard secretary passing on the street phoned to say a possible burglary might be taking place, but did not mention anyone’s race on the 911 recording and said: “I don’t know if they live here and they just had a hard time with their keys”. The dispatcher transformed this into a house-breaking by two black men; the cop who showed up was restrained at first, but reacted to the irate professor by arresting him.

Lesson: police training needs to be drastically reformed. Police dispatchers, as well as officers relaying calls from one police car to another, need to be instructed in how rumors form, and in procedures for avoiding inflammatory worst-case clichés.

[7] Racism among police. Some cops are racists. How many there are, and what kind of racists they are, needs better analysis. What kind? There is a difference between white supremacists of the pre-1960s type; stereotyping racists who think most black people are potential criminals; situational racists who react to black people in confrontational situations with fear and hostility; and casual racists who make jokes. These aren’t insoluble questions; if ethnographers followed people around in everyday life and observed what they talked about and how they behaved in different situations, we would have a good picture. And there still remains the further question: does one or another degree of racism explain when police violence happens?

My estimate is that racism among police is less important a factor than the social conflicts and situational stresses outlined in points [1-6]. To put it another way, if we got rid of racist attitudes, but left [1-6] in place, how much would police violence be reduced? Very little, I would predict.


What can be done? And how likely is it to have effects?

Let’s go through the list.

[1] Collecting fines for municipal budgets. Getting rid of this corrupt practice would be important for reducing hostility between police and citizens, especially since it is a version of color-blind racism insofar as it targets poor black areas. But how to get municipal officials to forego money that can be raised without taxpayer consent?

[2]  Enforcing unpopular regulations. A solution would be to legalize more prohibited substances. It does raise a problem of trade-offs, such as deaths from fentanyl. And there are other kinds of prohibitions being invented from time to time, as in the coronavirus period. Some conflict of this sort is going to be with us for a long time.

[2a] If police didn't have to do the dirty work of enforcing unpopular policies, they'd be a lot less cynical and hard-assed, and we'd get along better with each other. This depends on what we do about [1] and [2].

[3] The code of the street, ostentatious defiance. I think this is declining already, with the growth of a black middle class. On the whole, recent protest demonstrations are more civil than those of the late 1960s.

[4] Police anger at property destruction. This is a genuine dilemma; either way, bad feelings are created. If we had fewer riots -- if some of the other conditions get better-- this would be less of a problem. Caveat: racism and police violence are not the only things riots can be about; for example, the anti-globalization riots of the past decade in the US and Europe. We may well be headed towards increased class division in the future, among other things between the computerized elite (now riding out the coronavirus working from their nice homes) and the other two-thirds of the population whose jobs are steadily being replaced by computerized robots.

[5] Forward panic violence in policing demonstrations. There are ways that police (as well as everyone else) can learn techniques to monitor their adrenaline level, and to not rush into action until they have a clear perception of the situation and have reduced their heart rate by breathing exercises. This one is solvable. http://sociological-eye.blogspot.com/2016/10/cool-headed-cops-needed-heart-rate.html

This could go along in tandem with:

[6] Reforming police training. More than reforming police departments, we need full-scale investigation and reform of police academies. They need to get away from the emphasis on worst-case scenarios and the quick-trigger, muscle-memory approach to weapons training. As noted, civilian dispatchers as well as cops need better training about rumor propagation and its tendency to revert to stereotypes as messages pass along the chain.

[7] Police racism. If we have enough of these kinds of reforms, this will take care of itself.

As of now, most calls for reform reiterate long-standing demands for independent review boards and stronger penalties for police misconduct. Having a reform-oriented black police chief in Minneapolis did not solve the problem. It is doubtful that the top-down approach would solve it, as long as the everyday conditions of police work go unchanged.



References

Alexis Artwohl and Loren Christensen. 1997. Deadly Force Encounters.

Geoffrey Alpert and Roger Dunham. 2004. Understanding Police Use of Force.

Elijah Anderson. 1999. Code of the Street.

Donald Black. 1980. The Manners and Customs of the Police.

Donald Black. 1989. Sociological Justice.

Randall Collins. 2008. Violence: A Micro-sociological Theory.

Randall Collins. "Cool-headed Cops Needed: Heart Rate Monitors can Help." [posted 10.05.16]
http://sociological-eye.blogspot.com/2016/10/cool-headed-cops-needed-heart-rate.html

Alice Goffman. 2014. On the Run: Fugitive Life in an American City.

Jennifer Hunt. 2010. Seven Shots: An NYPD Raid on a Terrorist Cell and its Aftermath.

Dave Klinger. 2004. Into the Kill Zone: A Cop's Eye View of Deadly Force.

Joseph Krupnick and Christopher Winship. 2015. “Keeping up the front: how disadvantaged black youth avoid street violence in the inner city.” in Orlando Patterson (ed.), The Cultural Matrix.

Peter Moskos. 2008. Cop in the Hood.

Anne Nassauer. 2019. Situational Breakdowns: Understanding Protest Violence.

Jonathan Rubinstein. 1973. City Police.

Martín Sánchez Jankowski. 1991. Islands in the Street.

Doctors At War: The Psychological Cost

Mark de Rond's book Doctors at War (Cornell Univ. Press, 2017) is one of the most painful books you'll ever read. De Rond, an organizational ethnographer at Cambridge University, was embedded in a field hospital in Afghanistan, where a team of medical personnel from the U.K. and U.S. waited to operate on the wounded flown in by helicopter-- allied soldiers, captured enemies, and injured civilians alike. Like Conrad's Heart of Darkness, the horror is not so much in the gruesome physical scenes (although that is part of it), but more in the psychological costs of trying to do something about it. It is about feeling your failure in a terrible situation beyond your control; and how the things that members of the group do to cope with their feelings circle back to make things worse.

Each kind of patient delivered to this desolate outpost by a clattering helicopter creates its own kind of strains.

Wounded warriors: This is largely a war of home-made bombs on the insurgent side-- improvised explosive devices hidden under rubble at the side of the road or anywhere an allied patrol might go. This means wounds are often horrible, not bullets penetrating the body but limbs torn off, extensive burns, all kinds of fragments. Surgeons have to extract, patch, amputate and sew back up. It is not the kind of scene that one reads about from battlefield hospitals in the U.S. Civil War or the Napoleonic wars, where in the absence of sedatives there were anguished sounds of screaming, and doctors had to decide which patients to treat first. Now the wounded are brought in already sedated by battlefield medics. And triage is not really necessary: this being a counter-insurgency war-- low-intensity if endless-- the doctors are not overwhelmed by numbers but instead have a steady drip of casualties to be patched up and flown out to medical facilities far from the war zone.

No, the strain is in the minds and emotions of the doctors, nurses, and auxiliary personnel as the same kinds of cases repeat themselves, day after day, with their endless variations.

A surgeon “had been operating for forty-one consecutive days, the last seven of which he said had consisted mostly of chucking dead or dying limbs into bins. Homemade explosives left few options other than lopping off the dying bits and dropping them in one of several buttercup-yellow buckets destined for the incinerator.” [p.31]

“I wandered into a waft of freshly burned bacon, its source soon obvious: two badly burned Afghans occupied opposite tables, attended to by emergency staff. The first registered at 53% burns, the second at 48%, both readings the result of a standard calculation using the ‘rule of nine’: divide the body into multiples of 9, with the head, chest and abdomen accounting for 9% each if completely burned, the back and buttocks for 18%, 9% for each arm and 18% for each leg, 9% for the front, 9% for the back. Anything over 35% isn’t considered survivable in Afghanistan... so such patients are given palliative care from the word go. The first of the two died within the hour. The second would follow soon after but insisted on seeing an interpreter.... ‘He wants you to take him and his friend back to the valley where the helicopter found them.’ ‘His friend’s dead.’ ‘Yes he says he knows. He wants you to organize a car to take them both back.’ ‘Right. So where does he think we’re going to get a taxi from?’ ... ‘Tell ‘em we will see what we can do.’... The Afghan slowly moved his blackened hand over his left upper chest and looked grateful.” [117]

“A US marine had called earlier to report the discovery of two partial legs belonging to Billy, one of the troops in his charge, and would it be all right if he dropped them off at the hospital? He and his troops had been told that if limbs could be reattached within six hours of an explosion, they’d have a chance of surviving. The legs had been cold too long, Smitty told him, and were probably too badly damaged to be reattached in any event, but the marine was not to be dissuaded and made his appearance soon after.

‘I gather you’ve got something for me?’ Smitty said.

‘Billy’s legs,’ he said and handed Smitty a floppy carton box that once upon a time held US army rations.

‘You be sure to fix him up, won’t you?’

‘Leave it with us.’

‘Billy’s a quarterback, you know, when we get time to play. Has one hell of an arm.’

‘His arm’s fine.’

‘You look after him now.’

“As soon as the marine took off, Smitty got hold of Ginger, a scrub nurse on his first-ever tour.

‘What’s this?’

‘Legs. Used to belong to the guy in theater three.’

‘Well what the fuck am I supposed to do with them?’

‘Walk them over to the incinerator, that’s what.’

‘...’

‘Sure whoever gave you this is gone?’” [52-3]

Captive enemies: Doctors operate under the rules of war, which stipulate that wounded enemies are entitled to medical treatment. At the forward hospital, surgeons do their best, although they know-- and openly say to each other-- that when wounded enemies are fixed up and released into the custody of Afghan troops, they will probably be killed.

“By the time I returned to the hospital the next morning, late and weary for lack of sleep, the early morning casualties had already been dispatched to the ward or the morgue, the youngest of the still warm only ten. Matching sets of double and triple amputees underlined the war’s agonizing ambiguities: which is the crueler, to prop up Afghans with quick fixes and the sort of sophisticated analgesics not available locally for the handful of hours they’d spend in Bastion, or let them cash in on their convictions pronto and meet their Maker? Ingenuity, after all, can render death quick nowadays and pretty much pain-free. All had been Afghans this morning, peeled off the desert floor by a helicopter crew after 106 pounds of AGM-114 air-to-surface missile did precisely what it said on the tin. The absurdity of the situation was plain for all to see: one budget is used to save those a different budget tried to kill only moments ago.” [11]

De Rond accompanies the transfer of three Afghan army casualties to their own hospital:

“ ‘This guy is high on opium,’ my escort said, having wrestled back one of our oxygen canisters [from a driver]. ‘These things fetch a fair bit of money on the black market, so we want to hang onto them if at all possible.’ He crouched down next to the most serious of the three casualties. The man had already been relieved of his 60% oxygen supply and now was cut loose from his morphine drip and antibiotics...       

‘And the first thing these drivers do is look into the bags to see what drugs we’ve sent along. Anything morphine goes directly to the driver and never even gets to the patient. And so we leave them here to a slow and painful death. This guy here will die of pneumonia.’ ”  Doctors argue about what they should do. “ ‘If  you keep him here and treat him, he’ll ultimately die. If you take him to Kandahar, he will die too, but a little more quickly.’ ” [101]

Injured civilians: The situation with patching up civilians was much the same, with some additional twists.

0400, four a.m. “Two local women had arrived with bullet holes in their legs. Someone who identified himself as a brother stood idly by, insisting, as they did too, that they should be treated by a female attendant. Weegee, the attending emergency department coordinator, ignored the request, saying they have no such luxury in Afghan hospitals so why give them that option here?

“After a quiet day, at around 1900, nine casualties arrived within thirty minutes of each other, including five girls with gunshot wounds: two to the chest, the rest through the arms, legs, and belly. The girls had long eyelashes and olive complexions, their hands covered with henna tattoos. There wasn’t a tear in sight. The emergency and surgical teams were brilliant to watch. When the proverbial shit hit the fan, they salvaged what war destroyed, giddy for being productive. The curse in Bastion was never that of too much work but rather the insufficiency of it. Once the casualties had received emergency treatment and the surgeons had repaired for near beers in the Doctors’ Room, it turned out the girls might have been shot by our own helicopters in error. Their thirty-millimeter cannon rounds were designed to fragment upon impact such that anyone within ten meters of an exploding round risked serious injury, and tonight’s GSW’s looked far more like fragments, the docs said, than the usual bullets.” [125-6]

Friendly fire and collateral damage, as the jargon goes, are endemic in a counter-insurgency war where the guerrillas hide in the civilian population. The civilians in the middle get treated if allied medivacs bring them in. But there are no hospitals to release them to, and back in their villages, care is poor and many will probably not survive. But release them we must.

Sometimes the borderline between civilians and enemies disappears, as in green-on-blue attacks, where Taliban sympathizers among Afghan army troops turn their weapons on American soldiers-- or perhaps suddenly snap under their own pressure, as indeed some American troops have done.

De Rond observed doctors talking about such incidents with the medical staff.

One doctor “told of a British nurse who had arrived in the hospital with severe burns. She had befriended a young boy, plying him with candies, until one day he threw a plastic bucket at her, dousing her in petrol and setting her alight. The Taliban, he said, are not shy about using children to advance their interests, whether by forcing them to walk donkeys heavy with explosives toward the infidel or by leaving injured kids by the roadside as bait to attract a medivac helicopter.” [52] 

This, at any rate, is the conversational culture of the forward hospital. It does not stop them from treating everyone who comes in, to a high medical standard, in the brief time they are there. And this adds to the incongruities that make up the psychological dissonance of the place.

Isolation, boredom and surreal disconnect

In traditional wars, on the whole, the psychological pressure on doctors in battlefield hospitals was severe but not so complex. Of the three kinds of patients treated-- allied soldiers, enemy captives, injured civilians-- such doctors mainly dealt with the first. If they treated wounded enemies, those patients at any rate were not handed over to others who were going to kill them. In traditional battles with high casualties in a short period of time, the problem for doctors was being overwhelmed, and having to pick out those most likely to survive. This was not a problem in the Afghanistan field hospital, where there were plenty of medical staff to handle the daily influx of casualties. Their problem was that they practiced good medicine, then felt much of it went to waste. And unlike traditional battles, they didn't even have the consolation of winning a battle or the war.

And they were isolated and bored. Their base was a fort in a hot desert, dangerous to go outside the perimeter, and nowhere to go if they did go out. They were stuck with the same people, who worked, slept, ate together, and tried to amuse themselves in the down times between the hours when the emergency alarms sounded and the helicopters unloaded. It was a total institution, in the sociological sense of the term, but not one in the Goffmanian sense of a hierarchy where a staff guarded a lower class of inmates. The wounded were in a sense like inmates, except that they were so badly incapacitated that they remained passive-- at least de Rond never noted any acts of defiance. And the medical staff were idealists and committed professionals; they didn’t pull rank on each other, and their culture was one of “we’re all in this thing together”, a common task and a common malaise. They all had the same problem and they couldn’t get away from each other.

“Boredom hung in the air like a peasouper that wouldn’t lift except for the briefest of periods. In principle, this should have been good news-- after all, no one was getting hurt-- except that it left the docs with nothing meaningful to do. There was the occasional bit of exercise in a muggy gym to provide a temporary lift, or reading or daytime television, but little to take pride in, to feel productive about. And so they found themselves pining for work to come in, even if this invariably came at the expense of someone else getting hurt.

“But boredom extracts its pound of flesh in other ways, too. Left with little or nothing to do, [the doctors] have begun to criticize each other’s handling of patients and discharge decisions... Left to their own devices these docs became broody and aware of the relative futility of some of what they do here, particularly when it comes to providing emergency treatment for Afghans whose chances of recovery were badly compromised as soon as they were transferred to local hospitals, or so they think.... Periods of great intensity followed periods of boredom in which it was nevertheless impossible to relax.” [70-72]

They tried to keep up a semblance of normal life, celebrating the holidays as best they could: a Christmas party in Hawaiian shorts, tee-shirts and Santa Claus hats, although no one felt very jovial.

“ ‘Sometimes I try telling my family some of these things, but they don’t understand,’ Smitty said... He went on to tell me about a double amputee who had come in over Easter weekend. One of his legs had been attached by only a skin flap and came off during the usual logroll. The attending nurse, who’d been left standing with a leg in her arms, asked one of Smitty’s team to please take it away for disposal. As the lad made his way to the morgue, crossing the ambulance bay en route, he was met by Solesky and a nurse walking the other way, sporting bunny ears and carrying Easter eggs.” [85]

The early morning helicopter patrol brought in an American, but the tourniquets had come off as he was carried to the helicopter under fire, and he had already bled to death: “A glum band of brothers, the docs trundled back to their lair to feast on Apocalypse Now. A famous scene shows a swarm of American helicopters advancing like locusts on a Vietnamese settlement to the tones of Wagner’s ‘Ride of the Valkyries.’ It didn’t seem to strike any of those glued to the telly as ironic that less than a klick away their own Apaches were taking off on similar missions... It would quite literally have taken no more than stepping outside the Doctors’ Room and onto the wooden patio to fast-forward to a similar scene. Alas, the patio door was closed shut, and the telly on, and they around it in a half circle, ‘near beer’ and homemade cookies and ginger cake and chocolate to hand.

“ ‘My favorite line’s coming up,’ Southwark said excitedly. ‘Wait for it...ah, “I love the smell of napalm in the morning.” Absolutely first class that is.’ ”  [64-5]

Obviously they appreciated the irony of it all, but they had gone beyond that: gallows humor, but nobody was laughing, not even sardonically. The doctors wallowed in escapist Hollywood war films. M*A*S*H was another favorite, about a similar forward hospital in the Korean War, supplied by helicopters with wounded soldiers. Except that it, like all war films, did not show the medical gore these doctors faced every day. Their lives were not censored for the screen, and there was no rollicking good fun, even when they had time away. Why didn't they escape to something else, films that had nothing to do with war? They were obsessed, perhaps trying to distance themselves from their lives by viewing the Hollywood version. But it didn't help; it only cycled through the day.

“At the onset of sunset, just as Sloppy Joe called for volunteers to help him lug around the weekly pile of pizzas... [the beeping of pagers carried by the on-duty medicals] heralded the arrival of a Cat A [severely wounded]. The whiteboard listed it as a US marine who’d been hit by a rocket-propelled grenade... We made our way from reception into the black hot night to where the light pollution was, signposting the makeshift square with its crude KFC-Pizza Hut combo and games room. A short queue had already formed at the shipping container’s window. The scent they gave off was unmistakable, evoking a lazy day topped off with fast food and soda and feet-up television. Stacked up on one of the two ovens were twenty-one pizzas, hot to the touch, though there’s always a risk they’ll be stone cold by the time the casualty is dispatched with. To my surprise, this happened more quickly than I expected.

‘Casualty’s a hero,’ Joe said.

‘Right,’ I replied. ‘Gone to Camp Hero.’..

‘The guy is dead.’

“It was right about then and there that I became aware of a nauseating feeling ascending from my gut: a rotten-to-the-core sense of relief, less at a merciful end to years of pain and rehabilitation than at the prospect of hot pizza and companionship. The sense of shame I felt then I’ve not felt since. After all, what was a pizza compared to the life of a soldier? What the fuck was wrong with me?

“We sat down to watch Lock, Stock and Two Smoking Barrels.” [118-19]

The doctors were becoming querulous as their beepers sounded, calling them to surgery, only to return abruptly to the TV room when they found the new arrival was dead. “They sunk back into the spots they had vacated only moments before, to resume their involuntary stupor, only to be told that a fresh hail of casualties was on the way: a gunshot wound to the neck, a gunshot wound to the thigh and yet another unlucky victor in the roadside bomb lottery.” The most experienced surgeon griped to no one in particular that today he was supposed to be in charge but the other doctors were going ahead of him. No one paid attention.

“Southwark and Fernsby, in the meantime, were taking bets (to be paid off in pizza purchases) on whether the incoming amputee would turn out to be a single or double, left or right leg. ‘A pepperoni on the left,’ Southwark said. ‘I’d say a double. If it is you’re buying Friday,’ Fernsby replied.” [123]

This is beyond gallows humor, beyond cynicism. It is a way of passing the time, living in a surreal disconnect. They disconnect even from their cynicism. It is one more layer of psychological distress, piled up and revolved by the hour.

De Rond winds up: back in England, his battlefield tour over, he can’t get over the pain, and the guilt. The doctors he corresponded with say the same.

Meanwhile, back home

A medical sociologist who reviewed de Rond's book in an American journal was horrified. He denounced publication of the book, calling it a pornography of pain, voyeurism of medical horrors for its own sake. He saw the book as pointless: no hypotheses, no theory, no take-away. As a reader, I thought it the most unprofessional review I can recall. No doubt the reviewer missed the standard academic formalities: reviews of the literature, and writing in bland abstractions. Perhaps de Rond's writing about his own emotions in the field set the reviewer off ranting about his own emotions as a reviewer.

Before concluding that this closes the circle of absurdities mingled with (academic and military-medical) realities, it is well to remind ourselves that de Rond’s ethnography is about surreal experiences, but the report is not surreal. It is tell-it-like-it-is, you-are-there participant observation, focusing in on micro-sociological moments in the verbatim conversations of daily life and their bodily context.

It is about the psychological costs of working in an endlessly prolonged artificial situation, without adequate social support. Does it say anything to us about doctors and medical personnel in the COVID-19 epidemic?

Obviously the kind of medical treatment is quite different-- traumatic injuries for quick surgery, vs. prolonged treatment of agonized patients gasping to breathe. One similarity, in hospitals where there are many severe virus cases, may be the stresses of social isolation. Medical personnel constantly masked and keeping physical distance from each other may experience more isolation stress than at the battlefield hospital, where medical teams were constantly hanging around together. Medical personnel repeatedly exposed to the virus, some of whom become sick and die themselves, are presumably isolated from their families and friends. Of course they can make contact by phone and on-line, but this was true in Afghanistan as well: the social isolation there was intensified by the inability to explain to their families the emotions they were going through. In both cases, a kind of total institution may be created, cut off from the normal supports of social life. Witnessing the social isolation of the bereaved, who cannot be at the bedside nor take part in funeral rituals, must create a bleak atmosphere somewhat resembling the battlefield hospital.

Such stresses build up over time. Most people can handle extreme situations for a short period of time; there is a rallying-around burst of solidarity at the outset of any public crisis. In the immediate aftermath of the 9/11/01 attacks,* this period of public solidarity lasted 3 months-- but that was a period when mass ceremonies honoring firefighters and police took place at every public gathering. In the absence of this kind of ritual support, the uplifting period of shared dedication may be shorter under a regime of enforced social distancing. The field unit in Afghanistan had been operating for six years when de Rond studied it, and some surgeons had served ten or more tours of duty. If anything like this kind of endlessness comes out of the struggle with COVID-19, the experience of COVID-19 doctors may start to converge with that of doctors at war.


---------

* Randall Collins. 2004. “Rituals of Solidarity and Security in the Wake of Terrorist Attack.” Sociological Theory 22: 53-87.

For a more formal social-science presentation of the battlefield hospital study, see:

Mark De Rond and Jaco Lok. 2016. “Some Things Can Never Be Unseen: The Role of Context in Psychological Injury at War.” Academy of Management Journal 59: 1965-1993. 


Predicting World War III, Predicting Climate Change

Even the experts have a poor track record in predicting the future. Writers can be well-informed on the trends of their times and knowledgeable about the best theories of social and political change, and still get it wrong. I will single out C. Wright Mills, who wrote The Causes of World War Three in 1960.

Mills expected a devastating nuclear war, and his analysis of conditions leading in that direction was realistic. Sixty years later, as we reach 2020, we have to explain how he could get it wrong. Mills was the best sociologist of his time. He was the English translator of Max Weber on power politics, multi-dimensional social stratification, and the growth of bureaucracy in all spheres of modern life; and he put this sophistication into a portrait of the United States -- The Power Elite (1956) -- that still rings true for the mid-20th century and in many respects up through the present.

Did Mills lack the intellectual tools to predict stalemate and de-escalation of nuclear threat and other things that started happening not long after he died in 1962? Or is it inevitable that no one, no matter how sophisticated in the social science of their time, could predict the kinds of things that happened between 1960 and 2020? These are not rhetorical questions.

We can dismiss the argument that all social predictions are self-undermining, since people who become aware of them can take action to keep the prediction from coming true. Perhaps a few predictions are self-undermining, but many events have happened in spite of strenuous warnings in advance. The years leading up to the American Civil War of 1861-65 are just one of many examples; the wave of de-colonization in the decades after 1945 is another. For that matter, the coming of World War Two was widely foreseen; but no one was able to stop it. We should avoid all-or-nothing pronouncements that social predictions are either possible or impossible; ask instead, under what circumstances do we predict more accurately and less accurately? We should ask, too, under what conditions are we able to forestall predicted disasters-- or not? The issue is hardly a trivial one, as widespread anticipation of global climate change will not necessarily lead to people actually doing anything effective to stop it. Whether we will or not is not a question for the natural sciences, but for social science.

What thinkers at the turn of the 20th century expected

Before examining C. Wright Mills on nuclear war, let us take a look at several predictions made around 1890-1910. Edward Bellamy was a muck-raking journalist in the era of industrial squalor and labor struggles; his 1888 book, Looking Backward, is about a man who is knocked out in an accident in 1887 and wakes up in Boston in 2000. The ugly factories are gone; there are green parks everywhere. A socialist regime has arranged jobs and housing for everyone, with near-equal wages. Bellamy takes the program of radical socialists of his time and depicts it as accomplished, with a more-or-less Marxian crisis of capitalism as the turning point in the 20th century. These views were widely shared up through the 1940s. Schumpeter-- no advocate of socialism-- wrote in 1942 that the march of bureaucracy in government and giant corporations was killing off the entrepreneurs who supplied the stream of innovations that keep capitalism going. Schumpeter was as sophisticated a sociological economist as ever existed; how did he get the implications of his own theory wrong, when the half-century after his death showed how powerful his theory of economic growth still is?

Another example. The monumental historical enterprise of the turn of the century was The Cambridge Modern History, planned by Lord Acton, and enlisting the world's best historians to write detailed chapters on political, economic, and cultural events from the 1400s through the early 1900s. Fourteen volumes were published between 1902 and 1912. The editors, who must have been the best-informed persons in the world when they finished, summed up their survey in the volume The Latest Age (1910) through the objective eyes of their all-encompassing viewpoint. They perceived the chief concern of the time to be the social question-- social inequality, poverty, class conflict-- whether taking the form of social work among the poor, trade unions in varying degrees of militancy, progressive legislation or even socialist revolution. The chapter winds up: "The coming age will be occupied by the attempt to translate [these] ideals into practical politics." (p.15) This is quite a good anticipation of the years up through the 1950s, and several decades beyond on the world scene.

What the editor misses entirely is the possibility of World War I and its follow-ups; he notes the jockeying among European powers but dismisses it as nothing unusual. He comments approvingly on the trend towards international arbitration of disputes (foreshadowing the League of Nations and the UN). Furthermore, he has a theory of the causes of peace: the power of international finance has become all-pervasive, and "the interests of financiers are as a rule on the side of peace and tranquility... their means of persuasion can be employed against governments as well as against individuals... No Power, no person, is too great, no man too humble, to be reached by the pervasive and unseen pressure of financial interests and financial authority. This force, non-moral as it is, sordid as it may seem, is a growing factor in European politics, and, as a rule, it is exercised for the preservation of peace." [p.14-15] The realistic part of this was to foresee the ongoing rise of international finance capital-- which would grow even more powerful from the 1970s through the present. But something big is missing here; perhaps it is true that financiers as a whole prefer peace so that they can run their business wherever they want, but wars and social movements are a different order of causality and can override financiers or sweep them up in their enthusiasms.

The editor of The Cambridge Modern History admits he is taking a materialist view, and in this he belongs to the atmosphere of his time, shared by Bellamy as well as other followers of Marx. The most striking example of this outlook is H.G. Wells' The Time Machine (1895). It is considered one of the first works of science fiction, but Wells was a serious idea-novelist, famous for thirty years for his sweeping overview of human history, combining biological evolution and scientific invention with the social issues of his time. In the story, a London inventor creates a time-travel machine, and transports himself almost a million years into the future-- to be precise, the year 802,701 A.D. London is now inhabited by cute little people, who spend all their time playing, dancing, and making love. They don't do any work and everything is provided for them by machines. So far this is the scientific paradise of the future. But the time-traveler discovers that these people are deathly afraid of night-time. It turns out there are openings to mine-shafts in the ground, and there is another race of underground people-- hairy, muscular, dirty-- who do all the work; in fact their eyes no longer function in daylight from centuries of working in the dark. At night they come out looking for something to eat: the happy little people are their meat.

Wells takes the class struggle of his time and extrapolates it across enough generations that biological evolution has turned humans into two races: brutalized workers, and pampered upper classes. The depiction of the latter is not a bad projection; the European upper classes of the late 19th century were a leisure class of tea-parties, concerts and balls, dressing up lavishly and amusing themselves with love affairs. (Oscar Wilde's The Importance of Being Earnest, produced in the same year as Wells' novel, gives a fair idea of the atmosphere he is extrapolating; so does Proust.) The middle classes, too, were acquiring more leisure, and they too were organizing their lives increasingly around popular entertainments and sports. Wells, thinking like an evolutionist, conjectures that the privileged classes, with less and less work to do, lose their useless muscles and diminish to the size of elves; while the workers evolve into inhuman brutes. * It is a pessimistic view of the class struggle; the workers are doomed, but the pampered elite pays the price of being helpless consumers. Or we could see it as a satire warning of what will happen if social reformers don't succeed. In any case, it is one of the best examples of applying a strong theory (biological evolution) to a possible social trend.

* This is almost literally the theme of Eugene O’Neill’s play The Hairy Ape (1922), about a thrill-seeking young lady and a boiler-room worker on an ocean liner. 

The social reality that Wells was building upon need not be explained simply as biological evolution. Max Weber, writing 20 years later, saw the same trend towards a society obsessed with entertainment and sex, and theorized it by hitching it to bureaucratization as the master trend of modern history. Every sphere of life becomes rationalized and calculated-- the state, the military, election campaigns, the economy. Alienated by this iron cage of inescapable bean-counting mind-set, people retreat psychologically into their private lives, where they live for entertainment (highbrow or low) and for the meaning-giving pursuit of sexual love (Weber 1915). Weber could see this already in the hedonistic youth culture of Berlin and Vienna before WWI; the "roaring twenties" were the triumph of self-consciously avant-garde carousing in most wealthy countries (and the topic for writers like F. Scott Fitzgerald in the US, Aldous Huxley and Evelyn Waugh in England, Hermann Hesse and Christopher Isherwood in Germany). The novelty of the hedonistic counter-culture wore off (with subsequent revivals in the 1960s and later), but Weber's prediction is one of the most accurate we have on record; it holds throughout the 20th century, spreading worldwide (the Islamic countries fighting a rear-guard action against it), and shows no sign of abating in the 21st.

Where did these intelligent observers miss the boat?  They accurately perceived one of the powerful trends of their time: Bellamy, Wells, and the CMH editor focused on the intensification of class struggle, whether in terms of Marx or a peaceful reformist version. Schumpeter and Weber saw the master trend as bureaucratization, trumping even socialism. *

* In another right-on prediction, Weber in 1906, examining the revolutionary movements in Russia, wrote that if the far Left came to power, there would be a bureaucratization of society such as the world had never seen; "the dictatorship of the official and not the proletariat is on the march." [Gerth and Mills, From Max Weber, p.50]

The failures came from focusing on the overwhelming importance of one line of theory, and missing what lies outside it. Their theoretical toolbox was too small. We can appreciate this better after examining C. Wright Mills, who put Weber's full-strength theory to work on the situation of the 1950s. He had a wider vision than was available at the turn of the century, but what it lacked is pointed up by the things that made Mills' predictions go wrong.

When do people have the social power to decide on their future?

Mills’ book is not a polemic, but a thoughtful marshaling of the best sociology of the time. Yet it shows how the best intellectual tools, wielded with deliberately non-partisan objectivity, can still miss key future developments. 

Mills starts off, not by denouncing the nuclear arms race, but by raising the question of whether everything happens by fate, or whether there is an opening for intelligent decision-making. He treats the question like a sociologist looking for causes, instead of as an all-or-nothing philosophical discussion of free will. What has been traditionally called "Fate" has a sociological basis, since it is "the summary and unintended result of innumerable decisions of innumerable men" [p. 26; using the pre-feminist language of the time]. Mills is an early symbolic interactionist, viewing interaction among people as the basis of all the large patterns we can call social structures. But historical patterns of interaction have shifted drastically between traditional and modern times. "In those societies in which the means of power are rudimentary and decentralized, history is fate." No one is in a position to control what most other people do, so even if you see bad things coming, you are not in a position to do much about them. But decisions about social directions can be made when societies become centralized, as a result of the shift from feudal to industrial societies.

Although early capitalist industrialization happened slowly and from many local sources, by mid-20th century in the advanced countries centralization had taken place in every sphere: economic corporations coordinated by big finance; huge military forces backed by logistics and weapons-procurement based on the strength of the economy; huge national government agencies to tax, administer, and control. Mills' own research had shown the existence of a Power Elite, the intersection of these networks at the top through the circulation of corporate executives, military officers, scientists and government officials into each other's jobs. The prime example of the Power Elite was in the United States, which had built a centralizing structure to win World War II; an analogous structure had been created in the USSR, where the combination of revolutionary socialism and war-time mobilization had produced another Power Elite.

Summing up, Mills wrote: “’Men are free to make history’, and some men are now much freer to do so than others, for such freedom requires access to the means of decision” (p. 28--his terminology echoing both Marx and Weber).

Mills draws out two consequences. For the first time in history, extremely fateful decisions can be made; in this case, whether to destroy the planet in a nuclear war. But why would anyone want to do so? Decisions can be implicitly made out of non-decisions, just letting things take their course-- in this case, in the midst of a nuclear arms race. The second inference is that the individuals who make up the power elite share a common vision of the world, a common psychology by virtue of how they have made their careers; where you sit determines where you stand. Structurally, they have the power to change the course of history; but because of their social psychology, they are unlikely to use that power to head off catastrophe. In the 1950s and heading into the 1960s, the Power Elite saw the nuclear arms race between the US and the Soviets as inescapable.

Mills made the single best statement of when and how some people have agency to move history, and when they can do nothing more than flow with the social tides.  Nevertheless, his theory missed some crucial points, and these caused his main prediction to go wrong.

Before examining what he missed, let us look more closely at his analysis of the coming nuclear war.

Mills and the Causes of World War III   

The immediate cause of World War III is the arms race. Beginning with the race between the Western Allies and the Germans in WWII, it had produced aerial bombing, long-distance rockets, and the atomic bomb. It was taken up by the US and the USSR, soon to produce jet planes, the hydrogen bomb, space rockets, ICBMs, and nuclear-powered aircraft carriers and submarines. It became a pattern of mutual escalation, neither side willing to be caught lagging behind. By the early 1960s, the means of destruction reached the point where contamination of the atmosphere by radiation from a nuclear war would likely wipe out human life on earth. (This is exactly what was depicted in the 1964 film Dr. Strangelove.) Mills, like others, pointed out that although no one might want this to happen, continuous escalation of increasingly devastating weapons raised the risk that a war could break out by accident. An equipment malfunction, a misreading of a radar signal, a misperception of the other side's intentions, could set off an attack, given the hair-trigger readiness to react immediately before being destroyed.

These are only the immediate causes. Deeper causes are in the structure that gave rise to a Power Elite. The US had gotten out of the Great Depression of the 1930s by the huge government spending of WWII.  When prosperity returned in the 1950s, big industries like automobiles, aircraft, steel, chemicals and electronics were not just producing for consumers, but their biggest customers continued to be the military. Thus the arms race in all respects-- not just nuclear weapons but all the other forms of weaponry and logistics-- sustained a military-industrial complex.  It was built so centrally into the economy that most people’s jobs depended upon it, directly or indirectly. The US had become a “permanent war economy” even in peacetime.

This in turn created a widespread mind-set. Few people questioned the direction they were going. Government spending had become the main source of funding for scientific research; scientists took it for granted that their careers in the university depended on getting such funds, if they didn’t work directly for the government or for corporations producing military materiel. The scientific, economic, and political elites coalesced in keeping the arms race going.

Mills was quite aware that opponents of the arms race existed-- he was active among them. But as a sociologist, he was alert to the need to understand the social sources of opposition. Below the Power Elite, the US was a middle-class democracy. It included politicians in Congress as well as at the state and local level; professional associations of all sorts, entertainment celebrities, intellectuals, academics, and all the branches of cultural media. But their interests were narrow and local; they operated within the larger system of big organizations, and for the most part accepted them. Labor unions were still powerful, but were largely tied up with the interests of the big corporations, as long as they got a share of the proceeds. Mills called the middle class a "semi-organized stalemate" incapable of changing the military-industrial complex, and largely uninterested in doing so. Below the middle class was a powerless mass of consumers, more interested in sports and entertainment than anything else.

So much for the American side of the arms race. What about the Soviets?  Their structure, too, had been forged in WWII, and they continued to use it in top-down fashion to bring themselves into the ranks of advanced industrial countries in the 1950s. It too was based on a military-industrial complex; hence the mentality of maintaining it must be built into the world-view of the Soviet Power Elite as well. 

But here Mills makes a prediction that looks strange in retrospect, although it was based on a realistic view of the evidence at the time. He notes how quickly Russia industrialized, accomplishing in less than half a century what had taken 300 years during the rise of modern capitalism in the West. This was the result of forced industrialization, the coercive but centrally controlled Soviet policy of building modern heavy industry, which Mills judged as evidently superior to the Western model. The Soviets had been quick to put scientific expertise to work where it was needed, demonstrated by their ability to create a hydrogen bomb a few years after the US, and by launching the first Sputnik satellite into space in 1957, jumping ahead of the US. Mills quoted statistics: the USSR had been growing economically at a rate of 6%, while the US rate was 3%. Thus he predicted that the Soviets would "overtake the US economy in a decade or two" [p.80]-- and he thought this all the more likely because the state-led socialist economy would not be slowed down by the capitalist business cycle of periodic recessions. As China got its act together, it would learn from its predecessors and achieve an even faster growth rate: "what Russia has done industrially in 40 years, China may well do in 25." (Not a bad prediction in some respects, although the Chinese take-off did not start until the 1980s.)

The upshot of Mills’ comparison of the US and the USSR is that the Soviets were not as committed to the nuclear arms race as the Americans. From talking with Russians, he got the impression that they felt the future would be theirs; all they had to do was wait another 10 years or so, and their model would be proved superior. And this brings him back to the American Power Elite. If the US is more committed to the arms race than the Soviets, it is we who bear the most responsibility for the danger of nuclear war. Somehow, the peace movement has to get the attention of the Power Elite, to convince them to stop the arms race. But since the military-industrial complex is central to our economy, the odds of changing it are poor.

What did Mills’ analysis miss?

Obviously, there has been no nuclear war; in fact, no further nuclear bombs have been used since 1945 (although the future is still open).  In part this was due to something else not on Mills’ radar, the fall of the USSR and its satellite states in 1989-91-- 30 years after Mills wrote in 1960. He also expected the Soviets would overtake the US economically within 10 or 20 years; instead around 1975 it began to be visible that they were falling behind, and were in considerable economic strain by the time Gorbachev launched a reform movement in 1985.

Mills expected that no one but the Power Elite could do anything about the nuclear arms race, and he was not very optimistic that they would do so. Nevertheless, in 1963, soon after the Cuban missile crisis in which the US and USSR threatened each other, Kennedy and Khrushchev established a "hot line" link, to avoid going to war through misunderstanding. Again in the mid-1980s-- after a period when the US massively increased its nuclear forces in order to catch up with a perceived Soviet threat-- Reagan and Gorbachev negotiated a treaty limiting the numbers of nuclear weapons. Were these events within the scope of Mills' predictions? They do fit his argument that, with the centralization of decision-making in the two world powers, any effective moves towards peace would have to come from the top.

Mills had explicitly ruled out the likelihood of a movement from below challenging the arms race. But here he was proved wrong within a few years of his writing. In the early 1960s, there was a "ban the bomb" movement, most active in Britain, but with a small group of activists in the US. These were ineffective at the time. In 1965, a much bigger anti-war movement developed in the US, in opposition to the Vietnam War. It became militant, building on the demonstrations and sit-in tactics of the civil rights movement for racial integration, and even attempted to blockade the Pentagon in a massive march in 1967. This anti-war movement failed; it was unpopular in public opinion; it failed to get an anti-war candidate chosen at the Democratic convention in Chicago in 1968; it got such a candidate in 1972 (McGovern), who lost the election in a landslide to a pro-war President (Nixon). Even so, something was happening. The US pulled out of South Vietnam in 1973, allowing the country to go communist in 1975. But now the military was becoming wary; for several decades, military officers explicitly tried to avoid "another Vietnam". And although a majority of the public always initially backed whatever war the US got into-- the Gulf War in 1991, the invasion of Afghanistan following the 9/11/01 attacks, and the invasion of Iraq in 2003-- such wars eventually became unpopular if they went on for several years.

Mills did not think conditions existed for a successful peace movement. He was partly right-- no peace movement was able to dictate government policy. Nevertheless, anti-war sentiment generally grew in the years from 1965 to 2000, and has been intermittently influential since then. The conditions for a half-successful peace movement are part of what we need to explain.

Missing in Mills' theory: social movement theory

Social movement theory had barely developed in 1960. It focused on mass behavior as irrational, and on right-wing movements as motivated by status deprivation. Theories shifted as sociologists paid attention to the civil rights, anti-war, and feminist movements of the 1960s and 70s. The most relevant of the new theories pointed to resource mobilization as the key to movements' growth: movements develop where they have networks for recruiting activists and supporters, and coordination through social movement organizations (SMOs) with a full-time staff engaged in fund-raising, seeking favorable publicity in the news media, and enlisting lawyers and other professionals to protect demonstrators from arrest and to bring lawsuits in court. Not all movements developed all these resources. They varied in their use of violence, non-violent protest, civil disobedience, and legal action; movements that used all of these tactics tended to be most successful. SMOs are crucial in keeping a movement going during the long period of time-- often 10 years or more-- it takes to gain concessions; this multi-pronged offensive was best illustrated by the success of the civil rights movement.

Most of these resource mobilization processes were involved in the development of the anti-war movement during the Vietnam War. From his observations of the 1950s, Mills regarded universities as conformist and careerist. But in the 1960s, universities became the major resource base for new social movements. Here new SMOs were created, networks were recruited, and emotional enthusiasm built up. And this resource base was growing rapidly: university attendance sky-rocketed, from about 2 million students in 1950 to 9 million in 1970. Mobilization began with students at historically black colleges in the South, who organized the sit-in movement to desegregate public facilities; within a few years they were imitated by white students in the North. Resource mobilization builds on itself. Activists and tactics shifted from civil rights to anti-war protests; further spin-offs from these movements led to the second-wave feminist movement at the turn of the 1970s.

Resource mobilization-- not in Mills' theoretical repertoire-- explains how an anti-war movement could grow. But what explains its degree of success (or lack of it)? Theorizing the success of social movements remains an open question. But let us make some rough comparisons. The civil rights movement was successful in desegregating public institutions within about 20 years. The anti-war movement in its first 10 years failed to stop the Vietnam War; it was unsuccessful in the 1980s in stopping the massive nuclear build-up of the Reagan administration; an even bigger outcry against the invasion of Iraq in 2003 was also ineffective. At most we can say anti-war movements became bigger over a 40-year period, creating a segment of public opinion that government leaders had to worry about.

In sum, we have a movement that achieved most of its avowed goals in 20 years; and a movement with some modest success after 40 years. C. Wright Mills' frame of reference helps explain the difference. Civil rights was a local problem, below the concerns and interests of the Power Elite. It could be fought out in one city and town after another; its targets were at the level that Mills regarded as the realm of competing interests. Here grass-roots movements, acting locally with the help of sympathetic news coverage, could build a chain of victories. Stopping wars and nuclear weapons, however, pitted activists against the center of national power. The fact that anti-war concerns eventually became a modest influence on international policy shows that the Power Elite could be pushed, to some degree-- at any rate, more than Mills anticipated.

The main reason that nuclear war did not happen has to be attributed to large-scale factors, outside and beyond the control of each national elite.

Theoretical weakness: Geopolitics

Geopolitical theory gives the conditions for growth and decline in the military power of states; when wars break out; who wins and loses, and when stalemates occur. Geopolitical theory was still rudimentary in 1960, consisting mainly of the mutual escalation of arms races that Mills used; plus a balance-of-power theory, based on British policy in the 1700s and 1800s, which Mills saw was inapplicable to the two-sided world of post-WWII.

In the late 1970s, I put together a geopolitical theory, combining previous formulations, and based on examining changes of state borders around the world in the past 3000 years. There are 5 main principles:

[#1] States with more population and economic resources expand at the expense of smaller and poorer territories; and these advantages and disadvantages cumulate as the big get bigger. 

[#2] States at the edges of a densely settled zone tend to expand, while states in the middle tend to fragment and be swallowed up. 

[#3] As [#1] and [#2] operate over a period of time (30-50 years for each iteration), a geographical region simplifies into 2 big states (or empires/alliances) confronting each other.

[#4] Confrontation between two big states generates a turning point with 3 possible outcomes: victory of one side, or the other, resulting in a world-empire; or a costly stalemate, draining the power of both contenders and opening the way for new states to expand.

[#5] A big state also can decline from overextension: expanding so far from its home economic base that most of its resources are used up in logistics moving and supplying its forces. Eventually it loses wars on distant frontiers, even against weaker powers. Such defeats, combined with the economic burden of the military, create a crisis of legitimacy at home, fostering revolution and regime change.
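
Taken together, principles [#1]-[#3] describe an iterative process, and it can help to see the bare mechanics run forward. Below is a toy sketch in Python-- an illustration only, not a formal version of the theory; the growth rates, the 2:1 absorption threshold, and the starting resource figures are all invented for the example:

def step(states, edge_growth=1.10, interior_growth=1.05, ratio=2.0):
    # One generation: growth, then absorption of a weak neighbor.
    n = len(states)
    grown = [r * (edge_growth if i in (0, n - 1) else interior_growth)
             for i, r in enumerate(states)]      # [#2]: marchland states grow faster
    if len(grown) <= 2:                          # [#3]: stop at the two-power showdown
        return grown
    i = min(range(len(grown)), key=grown.__getitem__)     # the weakest state
    j = max((k for k in (i - 1, i + 1) if 0 <= k < len(grown)),
            key=grown.__getitem__)                        # its stronger neighbor
    if grown[j] >= ratio * grown[i]:             # [#1]: the big absorb the small
        grown[j] += grown[i]
        del grown[i]
    return grown

region = [8, 3, 2, 1, 1, 1, 1, 2, 3, 8]   # a strip of states, richer at the edges
for generation in range(30):
    region = step(region)
print(len(region), [round(r, 1) for r in region])

Run with these made-up numbers, the strip of ten states simplifies over the generations into two big rivals at opposite ends of the region-- the confrontation that [#4] then takes as its starting point.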

Mills was observing the situation after World Wars I and II, when the central states of Europe had lost twice fighting the big states to their east and west. Germany’s loss fits [#1] and [#2]. But the two most powerful states of the west, Britain and France, were militarily exhausted as well. The post-WWII power vacuum was filled by two peripheral states, the US and USSR, in a confrontation over world empire (whatever terminology one might have used for their drive for hegemony). This fits [#3]. Such showdown wars historically have been especially ferocious and destructive (in contrast to the polite, rule-following battle etiquette of limited, balance-of-power wars). The nuclear arms race in the US/Soviet showdown, threatening to destroy everything, fits the pattern.

Russia historically had been an expanding state from the 1400s through the 1800s, spreading from Moscow against relatively resource-poor population zones to its east and south: [#1] again. Defeats by the rising power of Japan at the far end of logistics lines in the Far East, and by the armies of the Central Powers in WWI, brought revolution in Russia. The Communist regime inherited Russia’s geopolitical position, with the advantage after WWII of having its immediate enemies to the west and east destroyed; it began to expand again, taking over eastern Europe and extending its global influence by sponsoring revolutionary regimes throughout the world. This was the situation as C. Wright Mills saw it in 1960. US involvement in the Vietnam war started after Mills died in 1962, but it would fit [#5]-- logistical overextension-- as the US found itself in a long, costly stalemate, fighting a guerrilla war on the other side of the world, supplied in the most expensive way, by air.

So far Mills’ prognosis looked correct, up through 1975, when South Vietnam fell to the communists. Then geopolitical conditions shifted. US withdrawal from Vietnam was a precursor to North Vietnam’s victory, but it ended American logistical overextension. [#5] was no longer a problem, and the US maintained this cautious posture until 2001. The Gulf War in 1991 was an exception, but President Bush called off the fighting in 4 days, and no costly occupation of Iraq was attempted. While the US was improving its geopolitical position, Russia was straining its own. The USSR kept up military expansion, invading Afghanistan in 1979 to prop up a weak communist government; the resulting 9-year war became Russia’s Vietnam-- a resource drain, and a crisis of legitimacy at home, leading to Gorbachev’s reform movement. Powers that enter the declining side of the geopolitical process tend to lose territorial control faster than they acquired it. The 1989 revolutions in Eastern Europe stripped the USSR of its post-WWII satellites; the 1991 revolution in the USSR itself broke apart hundreds of years of conquests stretching from Latvia to Kazakhstan.

The 45-year confrontation between the Soviet bloc and the US-dominated bloc came to an end in the pattern of [#4] and [#5]: prolonged two-sided confrontation and stalemate allowed new power-coalitions to grow on the periphery. These were the “non-aligned nations” or “Third World”; they would come to include the shift of the Middle East to its own belligerent ideology (Islamic nationalism against both Western capitalism and communism). It was in this ideological atmosphere that China pulled out of the Soviet orbit, and eventually launched its own nationalist version of state-controlled market/socialism.

Mills’ reasoning would have been accurate if nuclear war had come in the 1960s or 70s. It didn’t. Nuclear detente settled into a stalemate; and this allowed the world configuration to morph into a polycentric one by the late 1980s. By then the Communist campaign for world-domination (or world-liberation) no longer commanded many resources or much enthusiasm. It was in this new ideological climate of delegitimation that Communist regimes almost everywhere reformed themselves or collapsed.

We still haven’t explained why nuclear war didn’t break out in the years before the Cold War wound itself down. We have two further factors to consider.

Theoretical weakness: extrapolating economic growth rates

Comparing growth rates, Mills predicted the Soviets would overtake the US within 20 years, and thus would win the Cold War, since the resource-rich win. The string of predictions unravels because by the late 1970s the Soviet growth rate had fallen below US growth. It is always a mistake to assume that a statistical pattern from a particular period (in this case, the 1950s) will continue indefinitely, unless we have a well-established theory of what causes such numbers. *  Mills thought that the USSR’s 6% growth rate was caused by the advantages of a centrally planned socialist economy. In fact, it was the typical pattern of a take-off from the relatively low production of an undeveloped economy into a massive industrial economy. ** The same pattern was seen later in China, which began sustained growth in the 1980s and achieved growth rates of 10-15% in the 1990s and early 2000s, subsequently trending downwards (though still remaining considerably above the 3% ceiling typical of mature economies).

* The classic example of this fallacy was the prediction by demographers, based on population growth in the 1930s, that the US would level out at 140 million in the 1950s. Instead came the post-war baby boom-- on nobody’s theoretical radar-- with the result that US population passed 200 million by 1970, and doubled the predicted number by hitting 280 million in 2000.

** This is a matter of arithmetic. Starting from a small number, even a small absolute increase can be a large percentage. If GDP per capita is $100, adding $15 gives a growth rate of 15%; the same $15 added to a $1,000 base is only 1.5%. Sustaining high percentage rates becomes progressively harder as the base grows larger.
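
A minimal numerical sketch of both footnotes’ point, using invented figures rather than historical data: a constant absolute gain produces a falling percentage growth rate as the base grows, and extrapolating the take-off rate indefinitely overshoots badly.

```python
# Illustrative arithmetic only -- not historical GDP figures.
base = 100.0    # hypothetical GDP per capita at take-off
gain = 15.0     # the same absolute gain added every period

gdp = base
for year in range(1, 6):
    rate = gain / gdp * 100       # percentage rate on the current base
    gdp += gain
    print(f"year {year}: GDP {gdp:6.1f}, growth {rate:5.2f}%")
# growth falls from 15.00% in year 1 to 9.38% in year 5

# The extrapolation fallacy: projecting the first-year rate forever.
print(f"naive 15%-forever projection: {base * 1.15 ** 5:.1f} vs actual {gdp:.1f}")
```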

Making his analysis in 1960, Mills would have needed better tools for explaining economic growth, both in the Soviet bloc and in the US. Without attempting a sketch of such a theory here, it would have to include the weaknesses as well as the strengths of centrally-planned socialist economies; and correspondingly the mechanisms of economic growth in market capitalism-- especially the Schumpeterian theory of entrepreneurs driving technological innovation. (Why did the IT economy take off in the US, starting with the personal-computer explosion, while Soviet technological innovation remained narrowly in the realm of weapons technology?)

Theoretical weakness: substitutes for all-out war

Mills saw the nuclear arms race as the pathway to endless escalation. And this continued for another 20 years, with the proliferation of ICBMs, long-distance bombers constantly in the air, and submarine-launched missiles. Both sides acquired arsenals capable of destroying the other many times over. The situation came to be called Mutually Assured Destruction. The abbreviation MAD was mocked as, indeed, madness. Nevertheless, it turned out to be workable mutual deterrence. After the Cuban missile crisis, both sides were careful to avoid another nuclear confrontation. One could even say it formed a tacit mutual agreement-- conflict creating a social tie, in the manner that Simmel had theorized, with both sides focusing on coordinating with each other in at least this respect.

This did not mean their Great Power rivalry would become peaceful. Military conflict is not a simple binary-- either nuclear war, or world peace. Both sides continued to carry on their struggle for spheres of influence, by proxy wars. The USSR armed Cuban troops to fight for communist regimes in Africa. The US used CIA aid to oppose the Russian-supported regime in Afghanistan. Both sides sent military “advisors” to help their proxies; their limited involvement was something of a pretence, as advisors often took part in combat, especially with artillery, helicopters, and aerial strikes-- keeping a little distance from “boots on the ground”. But mutually accepting the pretence that these were only advisors was a tacit agreement too, to avoid all-out conflict by staying in a less visible role. This practice has continued, even after the end of the Cold War-- for example in the 8-plus years of conflict in Syria, where the US, Russia, Iran, and other outside powers have armed their own proxy forces.

From MAD to proxy war is obviously not an ideal path to peace. Nevertheless, it shows it is possible to pull back from joint suicide. Limited war through proxies is another illustration of geopolitical principles [#3] and [#4] and their corollary: a stalemated showdown turns back into a version of balance-of-power wars, where military aims and methods are scaled down. It does have the disadvantage that proxy wars can go on for a very long time, since the outside sponsors are not incurring many losses themselves; it is the local population who pays the price for having multiple forces fighting in their homeland.

It is a sobering lesson we have learned since the time of C. Wright Mills: the worst kind of escalation can be avoided, while lesser degrees of conflict can continue to pile up “limited” destruction.

And so the world came through, without nuclear war. Mills hoped this would happen, but his theoretical tool-kit was unable to anticipate the crucial processes:

-- Geopolitical stalemate opened the way for a more polycentric world, with more limited forms of warfare.

-- The expensive arms race bankrupted the Soviets first; Mills failed to envision this because he extrapolated short-term growth rates instead of recognizing the initial surge of high growth as a country first modernizes.

-- The US economy eventually outgrew the military-industrial complex. The loss of heavy industry to cheaper overseas producers began the trend; its place was taken by expansion of what could be called “consumer entertainment industries” based on electronics. The roots of this go back to the invention of the phonograph and radio; from the 1950s onwards these industries produced a series of innovations in devices for playing recorded music, film, and much else. Some of the electronics spun off from the military: after the advent of the personal computer (created by entrepreneurs in Silicon Valley, clustered around the big defense electronics companies), the Defense Department’s ARPANET became the Internet. GPS, first developed for military navigation, eventually was combined with smart phones into a large number of consumer and business applications. Not only did the US lead the new wave of technological innovation while the Soviet economy remained centered on heavy industry for its military; the American style of music, counter-culture rebellion, and entertainment also produced the “blue-jeans offensive” that made Soviet youth envious of American culture. The economy of cultural innovation, it turns out, is also a political weapon, insofar as it delegitimates enemies in their own eyes.

Lessons for predicting climate catastrophe

What can we take away from this episode that is relevant to the big question of our future?

First: the distinction between making a prediction and being able to do something about it. So far, almost all research on global warming, rising sea levels and other climate trends has come from the natural sciences. They tell us what is likely to happen in coming decades; but hardly anyone has seriously analyzed how likely we are to stop these processes, and what determines how people will respond. The prevailing assumption seems to be that if predictions are alarming enough, the world will do something about it. But as we have seen in the case of nuclear war, being alarmed is not a sufficient mechanism to predict what people will do. We need social science to ask the question, as objectively as C. Wright Mills did in his day: what social processes are leading towards impending disaster, and what social processes can stop it? And in this case, we need to be able to predict matters of degree: what social forces would be necessary to completely control climate change; what forces would lead to a half-way solution, or virtually no solution, and so on.

Many people besides C. Wright Mills saw we were heading for nuclear war, but they saw no way out of the arms race-- for a time, their advice was to build fall-out shelters. It was processes outside their control-- shifting geopolitical patterns; differing trajectories of economic growth; the shift to proxy wars -- that prevented world destruction. Can we say something similar about reactions to climate change during the remainder of the 21st century?

What kinds of things would we want to predict?

One factor that will affect people’s reactions will be the direction of world economies: I put this in the plural, because different economies would affect what their leaders and peoples would be willing to do to combat climate change. Would their economies support or resist measures to reduce greenhouse gases, or to shift their uses of energy? There would be different responses from the rich countries (the U.S., north-west Europe, Japan); from burgeoning economies (China; potentially India); from Russia, positioned on the melting Arctic; from other parts of the world. Would these be willing to give up automobiles or air travel? Would they all move into high-density housing and seal their dwellings against heat transfer? Would attempts to do these things bring economic prosperity or decline?

A second factor is political. On the whole, political elites in the era of global internationalism have advocated policies to curb climate change. But such elites may not always be in control; their popularity has fallen in the U.S., Brazil, Britain and elsewhere. Objectively, we need a theory that gives the conditions for internationalists winning or losing elections; and for predicting whether large numbers of people will resist giving up their cars (the gilets jaunes movement in France, for example). In short, we need a theory about political conflict over responses to global warming. In Europe, populist/nationalist movements have been energized by the influx of refugees-- partly as the result of wars in the Middle East and Africa; partly as poor people use refugee pathways to seek a more favourable economic location (ditto for the influx at the US border). Understanding the political dynamics of welcoming or resisting refugees will become an even bigger issue if global warming continues to the extent of displacing many millions of people living in areas threatened by rising sea levels.

Predicting whose positions will have political influence is similar to the question I raised about anti-war movements. Theory of social movements is grossly incomplete in some key respects. We know something about the conditions that allow social movements to mobilize, including organizational bases like universities; and more recently the attention-steering power of the Internet. We know less about what predicts the directions social movements will take. Liberal partisans were taken by surprise by the extent of support for Brexit, Trump, Bolsonaro, etc. Can we formulate, in a more generic sense, what kinds of movements we can expect during coming decades? It is unrealistic to assume that as global warming grows worse, there will be consensus on what to do about it. We need a better theory of social conflict: what determines the strength of different factions, and who wins what kind of power to take action?

This is the same question I raised about what determines how effective social movements are in achieving their goals. I suggested that movements whose action can take place locally achieve their goals faster than those that have to take on centralized systems of power. We need to rethink this further to cover movements for and against action on global warming.

The effects of global warming are not all-or-nothing, like a nuclear war destroying the earth. There is a long future trajectory of gradual change into the 21st century and possibly beyond. The most serious crises will be local rather than world-wide. It is unrealistic to assume that of course we will act as a world community to save whoever is imperiled. It could happen-- if a certain type of altruistic social movement became dominant everywhere. It could also happen that nationalist/populist movements would be in control-- in a few countries or many-- and they could be most concerned with protecting their borders against being swamped by refugees. Some conservative economists have argued that the costs of certain parts of the world becoming uninhabitable can be calculated, and weighed against the costs of changing our entire energy, transport, and living conditions. What theory do we have that can tell us how far each of these policies will prevail? (In countries with which kinds of social patterns will which policies win out?)

The Power Elite, in the sense that Mills described it 60 years ago, is no longer so dominant. He saw two centralized elites, one for the US, the other for the USSR; between the two of them lay the power to decide on nuclear war, one way or the other. His theory did not foresee additional conditions that would reduce centralized control in both places, as well as in the world as a whole. No one or two countries can dictate policies to control global warming. Within the U.S., the military-industrial complex no longer encompasses massive sectors of the economy like automobiles and heavy industry. The electronic/entertainment industries that replaced them at the heights of the economy are not locked into a revolving door of government and Pentagon officials. There is more of a structural split. This does not mean we can count on the political attitudes of the owners and employees of Amazon, Google, Facebook etc. staying internationalist and dedicated to fighting climate change; that is a short-run situation, which can change. More systematically: what effects do the IT industries have on mobilizing political movements, one way or another? Conservative movements mobilize well through social media, too (Trump, etc.). There is the additional possibility that IT will demobilize many people, creating an electronic addiction to fantasy entertainment that would make them put up with almost anything. Big questions to be tackled: what will mobilize and demobilize people on crucial issues, and how many in each segment? For that matter, how stably will these patterns hold? The future may well be a series of swings back and forth.

We need a theory of conflict between rival forces-- economic tendencies, local and international ways of organizing power, rival social movements. If movements on crucial issues are more or less evenly divided, the result is likely to be political gridlock; the default position becomes the status quo-- doing nothing, or doing little enough that the problems of climate change are not much affected. In the years leading up to 2020, this kind of social division and policy deadlock became widespread. That does not mean it will stay that way over the decades of the future. Can we theorize the conditions for today’s deadlocks, in such a way that we can see them as variable, and thus predict the conditions that would change deadlock in the future?

References

Edward Bellamy. 1888. Looking Backward.

CMH= Cambridge Modern History.  1910.  Vol. 12. The Latest Age. Cambridge Univ. Press.

Randall Collins. 1986.  “The Future Decline of the Russian Empire.” in Weberian Sociological Theory. Cambridge Univ. Press.

Hans H. Gerth and C. Wright Mills. 1946. From Max Weber: Essays in Sociology. Oxford Univ. Press.

David R. Gibson. 2012. Talk at the Brink: Deliberation and Decision during the Cuban Missile Crisis.  Princeton Univ. Press.

C. Wright Mills. 1956.  The Power Elite. Oxford Univ. Press.

C. Wright Mills. 1960.  The Causes of World War Three. NY: Ballantine Books.

Joseph Schumpeter. 1942. Capitalism, Socialism, and Democracy. NY: Harper.

Max Weber. 1906/1995. The Russian Revolutions. Cornell Univ. Press.

Max Weber. 1915/1946. “Religious Rejections of the World and their Directions.” In Gerth and Mills.

H.G. Wells. 1895. The Time Machine.

Downfall of NCAA Would Improve American Education

The 2019 California law allowing college athletes to retain agents and receive payment for use of their name and image opens the flood-gates to professionalization of college sports. So they say.

Most arguments against it are about its effects on college sports, especially the “minor sports” outside the big-time revenue-generating sports, football and basketball.

But I want to raise a more important point: the worst-case scenario, getting rid of all school sports, would be one of the biggest things we could do to improve American education.

The predominance of sports in American schools at all levels distinguishes US schools from most of the rest of the world. This sports-centeredness is a considerable part of the reason why American students score lower than almost everyone else in international comparisons of academic skills.

Most American students just don’t care that much about studying, and their peers put them down for it, in comparison to the athletes who are the center of the school prestige hierarchy. This is not the case in the rest of the world—because their schools aren’t focused on athletics at all.

But this is unimaginable—American schools without sports! Why would people support schools if there were no excitement and no mass spectacle about them? It’s not unimaginable at all. To see how it works, just look outside the boundaries of the USA.

School athletics at the center of attention devalues intellectual students

Anyone who has been to high school in the United States knows there is a prestige system, with the jocks and cheerleaders at the top, and the nerds at the bottom. The term “nerd” has no equivalent in foreign languages; here it means someone who is a grind, not part of the youth culture, not just deficient in athletics but by implication lacking in sexual appeal and klutzy at social skills. With the rise of the high-tech economy and dissident counter-culture movements this hierarchy has gotten somewhat blurred, but it still has a very strong hold on the way American schools operate. Successful school teams are the way a school—from high school up through university—gets a public image and public support; and they are the main school-spirit-building activity inside the school. Athletics are sacrosanct, as the center of most schools’ image, prestige, and money.

There are exceptions: tech schools like M.I.T. and Cal Tech opt out of that system entirely; some elite colleges (mostly Ivy League and nearby) have such strong reputations for sending their graduates to the top of the corporate and political worlds that they don’t need famous athletic teams. As I will argue later, these are seeds for an American academic system that would replace our current one, if the sports-centered school system self-destructs over the professionalization of athletics.

Murray Milner (University of Virginia sociologist) did a massive study of prestige hierarchies at high schools across the country. He went on to develop an explanation of why jocks and cheerleaders are at the top, and serious students near the bottom. Games by a school team are the one activity where everyone is assembled, focusing attention on a group of token individuals who represent the school. Games also have drama, plot tension, and emotion, thus fitting the ingredients for a successful interaction ritual. Predictably, they create feelings of solidarity and identity; and they give prestige to the individuals who are in the center of attention. Jocks are the school’s heroes (especially when they are winning). Cheerleaders are their number-one worshipers, high priestesses to the cult, sharing the stage or at least the edge of it. And they are chosen to represent the top of the sexual attractiveness hierarchy, hence centers of the partying-celebration part of school life—out of the purview of adult teachers, administrators, and parents.

In contrast, outstanding students perform mostly alone. They are not the center of an audience gathered to watch them show off their skills. There are no big interaction rituals focusing attention on them. Their achievement is for themselves; they do not represent the school body, certainly not in any way that involves contagious emotional excitement. The jocks-&-partying channeling of attention in schools devalues the intellectuals. When it comes to a contest between the two, the athletic-centered sphere always dominates, at least in the public places where the action is.  The social networks of intellectual students are backstage, even underground. *

* These are not the only identity-groups among students; there are also the theatre crowd, musicians, druggies and counter-culture types, thugs, working class kids and part-time job-holders looked down upon by the fun-and-consumption culture of middle-class kids. Thus nerds tend to be more in the lower-middle of the school prestige hierarchy than at the absolute bottom. See Milner [2004] for details.

Not surprisingly, the majority of American students who could go either way-- emulating athletic/partying stars or intellectual ones-- choose the former and downplay the latter. In the era of increasing competition over admission to higher-ranking colleges, most students in the middle prestige levels aim for a respectable level of academic performance (only the top jocks can afford to be largely oblivious to grades); but they don’t pour themselves into it. They are surrounded by counselors and easy-grading teachers who sympathize with the existing prestige system, and who make sure they don’t have to work too hard on academics. Of course, America is a large and ethnically diverse population, and there are subgroups in it—above all children of immigrants from China and other places with a stronger academic achievement focus—who push their children to study hard and to come out at the top of the grades and test scores. But it is the home that is giving these kids the impetus, not the school—its atmosphere mostly works in the other direction.

This is why the average scores of American students in international comparisons of skills in reading, math, and other subjects tend to be at the bottom, far below countries in East Asia and in Europe. It is not a matter of talent, and certainly not a deficiency in school facilities, but a problem of social motivation.

Countries where universities are intellectual and athletics is elsewhere

Compare school life in countries where academic achievement is publicized and celebrated: Britain, France, Germany, the Netherlands and elsewhere. One feature they have in common is a lack of competitive school athletics. This is not to say there are no students who are athletes in these countries. But athletic teams and practice places are in separate clubs, not representing the school. The pattern of separating sports and schools is strongest on the Continent, where universities frequently don’t even have a campus, let alone a stadium.

East Asian schools, especially in China and Korea, take the focus on academic competition to an extreme. In China, high schools receive a great deal of publicity for the test scores of their students—these are published in newspapers and trumpeted in school propaganda, along with the names of those admitted to top universities. Students exhort each other (and are exhorted by their teachers) to work hard on the kinds of problems set for exams; they brag or complain about their school’s standing. Within the school, those who perform at the top are its stars. *  The result is an atmosphere of intense focus on academics. Moreover, it is a highly social focus. Students work together in preparing for exams. They also make a show of being at school for longer hours than are officially required; they stay hours late for special study groups (in Japan, this takes the form of going to tutors or “cram-schools” after school hours, to study for university admissions exams). They voluntarily go to school on Saturdays. They also do a lot of studying at home, where their needs for quiet (and other indulgences) are carefully attended to by their parents. Such students spend many more hours in school during the year, and more hours at home studying, than American students, who are notably lax in these matters.

* A very good Korean student, whom I knew as a graduate student at an elite American university, told me that in Korea the mothers of other girls would encourage their daughters to be friends with her—she was their shining example. This is extremely unlikely to happen in the US, for all sorts of reasons in the teen culture. On daily life in Chinese schools, see Yi-lin Chiang [2018].

The social atmosphere—or let us say, the spotlight of attention—is completely different in the high-achieving school cultures of the world, and in American schools. Chinese students are much more focused on school and a supportive home; American kids, even when they have helicopter parents (or especially when), have studying as just one of many things they do—athletic practices, music lessons, entertainment, parties with other kids. Officially, parents and talking heads all say learning is important, but it just doesn’t rate that high in everyday life.

Actions speak louder than words: the US is one of the only countries in the world whose laws ban schools from publicizing any information about students’ grades (the Buckley Amendment, passed by the US Congress in 1974). Before the 1980s, like every other professor, I used to post the list of grades after an exam on my office door. This was convenient for students to come and see them; no one ever complained. The rationale for the law banning posting grades—first promoted by legislators and psychologists, not by students—was that it was harmful to students’ self-esteem (i.e., to the self-esteem of students who weren’t at the top). No one seemed to worry about negative effects of publicizing athletic performance: on the contrary, game reports tell who did well or badly in basketball, baseball, football etc. The unspoken message is clear: academic performance is something we do not give honor to; we treat it almost as something shameful, certainly something private. Athletic performance is the reverse.

Not only do top-scoring European and Asian schools not focus on athletics, but they treat their outstanding students the way we treat top athletes. In France, with its highly centralized national school system, students take competitive exams for entrance to the elite university-level schools in Paris (the so-called grandes écoles). The rankings are published in the newspapers. Individuals who scored highly are remembered for the rest of their lives. (Pierre Bourdieu, for instance, was famed for having ranked number one at the entrance to the École Normale Supérieure in 1950; Jean-Paul Sartre is famous for having flunked the agrégation-- the competitive examination for elite teaching posts-- in 1928, whereupon he spent another year cramming with his new girl-friend, Simone de Beauvoir, and the next year passed in first place [Cohen-Solal 1987].) In Britain, students are admitted to the universities with publicly announced prize honors (or not), and graduate with ranked degrees in their subject (Firsts, Seconds, etc.) that are widely discussed at the time and during their careers. The contrast is clear: academic performance is widely publicized and intensely focused upon in countries with high-achievement school systems; in American schools, it is not.

Sociological research on student life at American universities supports the point. Arum and Roksa [2011] found that American college students average only a few hours of studying per week. This does not hurt their grades much, as professors have adapted to their clientele by giving fewer and easier exams and papers. And what they learn does not sink in or last long; students retested a year later retained very little of what they once knew. All this has taken place during a period in history when the percentage of the youth cohort attending undergraduate colleges has risen to over 60%, with those attending graduate and professional schools rising proportionately. This is credential inflation, where the value of a high school diploma, undergraduate degree, or even an M.A. has declined (or, for science fields, even a PhD, which now is only preliminary to getting a post-doctoral fellowship). It is a race in which the finish line keeps receding into the distance as more students compete at each level. Universities do well (and professors get paid) as long as they have plenty of paying students (including those subsidized by student loans); it doesn’t matter how little the students learn as long as their grades fit the average that moves them along the pipeline. Grade inflation and lax standards are a way that schools adapt to a system with a great deal of competition to move from one level to another-- competition mostly over formalities rather than over what is actually learned.

Ethnographic studies of student life (by young-looking researchers living in the dormitories) show how the culture operates on the ground. Armstrong and Hamilton’s Paying for the Party [2013; see also Moffatt 1989; Sanday 2007] shows that kids from comfortable middle-class (or higher) families are happy to enter big state universities known for being party schools. These are places with big-time athletic programs, football and basketball teams playing in huge stadiums, surrounded by school rituals, partying, an active sex life; such students quickly learn how to coast through their classes. Students from working-class backgrounds have a harder time with it, balancing the partying with studying and part-time jobs, and they often drop out without a degree. Universities try to be attractive to students who can finance their own way, and to subsidized students too; outside of the most elite colleges, these universities get what name-recognition they have from their athletic teams and their appearance on TV. Having a team that is a contender generates the atmosphere of the college experience; party schools go in tandem with the big-time athletic schools. Universities pretend otherwise, but big-time athletics usually goes along with academic mediocrity; the sine qua non is sheer size of the student population, hence the size of budget that can be invested in big-time teams.

My own experiences teaching at various universities across the USA illustrate the attitude of college athletes. At one of the newer branches of a big state university (not big enough yet to have a football team), the baseball team was its chief claim to fame. The entire team used to enroll for one of my undergraduate classes in sociology (I think they liked it that I gave very clear outlines of what would be on the exams), but only one player would show up for classes, taking notes for the others. At an even bigger state university, the basketball team was nationally ranked, and one of the basketball players was in my class. One day I saw him in the hall and said: “Hey, Ricky. I haven’t seen you in class for a while.” He said: “Yeah, prof, I’m on injured reserve, and I figure as long as I can’t play, I don’t have to go to class.” This university also had a staff of assistant coaches in the athletic department, whose job was to call up professors and check whether an athlete in their classes was in danger of falling below the minimum grade (C-minus) that would make him ineligible to play. This was also a way of seeking accommodations for players, such as easier assignments or tutoring for exams. *

* In the defensive reaction by sports columnists to the new California law on players receiving royalties, it is blatantly argued that players already get lots of in-kind payment, including accommodations arranged with professors so that the academic work is as easy as possible.

The bottom line: the centrality of athletics in American education is a major source of devaluing academic standards and achievement. Eliminating school teams and focusing instead on publicizing academic achievement would be a pathway to catching up with European and East Asian levels of academic performance.

Are we ready to do this just on the merits of the case?  Certainly not. But fighting over the issue of openly professionalizing school athletics could be the way the system undermines itself.

Fake amateurism is a collusive wage-fixing agreement

In favor of maintaining the NCAA ban on any taint of professionalism, the biggest argument is not about what is good for education, but “we need the money.”

Legalizing payments to athletes for their name and image is just the opening wedge. Even this crack in the nationwide front raises fears that the best athletes would migrate to California and other states that adopt similar laws. As legislatures compete with each other over boosting their own school teams, further incentives to athletes would unravel the entire system of control. The NCAA has the weapon of banning schools that violate their rules from competing with schools still within their system; but in a showdown, it is likely the NCAA that would be undermined by schools pulling out.  Even in short-term perspective, colleges would lose advertising dollars they are now collecting.

Fundamentally, the NCAA is a collusive agreement to fix wages for college athletes. The argument of athletic departments is that athletes are already being well-compensated. They get free room and board, fancy dining halls, plane trips, easy class schedules, and the value of their college degree, all without student debt. All this assumes: (a) that athletes on scholarship go on to finish their degree; (b) that a degree from a non-elite university, in an easy major (not a lot of STEM student-jocks, nor pre-med or even business-school), will get them a good job; (c) that continually rising credential inflation won’t make these degrees worth less as the economy grows more dominated by computers and high-tech.

Big-time college sports are already professional. They are professionally managed, with high-paid coaches and administrators, and a huge, well-paid NCAA bureaucracy to keep money incentives out of the system so that all revenues—tickets, TV rights, name-branded clothing and gear—stay in the hands of the universities. It is a competitive world, dependent on a continual stream of recruiting the best athletic prospects from the high-school farm system. Accordingly, the corporate managers of this university-owned sports-entertainment trust attempt to keep their labor expenses as low as possible, by paying the athletes in kind. In effect, the players are like Roman gladiators, captives who fight in the arena for the benefit of the owners of the gladiatorial troupes—the main difference being that college jocks serve a maximum 4-year servitude, with a chance at getting selected by the professional leagues (which are another set of collectively-administered trusts, colluding with the NCAA to do their training for them).

Athletes on scholarships are not part of the regular student body. They live in their own dorms, eat in their own dining halls, take a limited range of classes—nothing hard or time-consuming—and exams under special conditions designed for their convenience. Real students on the same campus have little more contact with these semi-professional jocks than they have with professional athletes they view on television.

No more hypocrisy

One good thing: getting rid of the NCAA would relieve us of an endless stream of hypocrisy and scandal. Schools are repeatedly investigated, castigated, and penalized for infractions of NCAA rules over recruiting and rewarding players. Wins and titles are forfeited, coaches are fired. But the scandals keep happening, because scandals re-create the conditions that caused the behavior in the first place.

Scandals don’t change anything, because they are conservatizing. When rules or customs are violated, a public scandal stirs up moral indignation and reenergizes the old values. That is because a scandal is a kind of moralistic mob mentality, where everyone jumps in to enthusiastically denounce the culprit; not to join in the condemnation is to risk being attacked oneself. An archetype of how this works is the condemnation of Oscar Wilde in 1895 for homosexuality [Adut 2008]. His sexual preferences were an open secret in smart society of the time, but when the case became public, his supporters shunned him, and he died in disgrace soon after being released from prison. The scandal was conservatizing, re-energizing old prejudices against homosexuality, missing an opportunity to speak out and revise the old norm. The Oscar Wilde scandal didn’t stamp out homosexuality; it just kept it underground.

Similarly with NCAA-created scandals: the temporary wave of moralistic indignation (or pseudo-indignation by universities, journalists, and politicians chiming in) does nothing to change the underlying problem. For universities whose prestige and prosperity depend chiefly on their athletic teams, there is a continuous pressure to win, and that means recruiting the best players, year after year. Paying them outright is forbidden, so this leads to more and more devious ways of rewarding them. Alumni boosters used to give them fake off-season jobs and gifts like cars. Money is channeled to parents by helping them buy a house. Sports agents offer their services in lining up future deals in advertisements and professional prospects; then the NCAA cracks down on that too. Recruiters and assistant coaches try to attract players with a good time: scandals spread about trips featuring strip clubs and parties with ready sex. Other scandals are generated by eligibility rules that require players to keep up minimal grade levels; this leads to cheating on tests, or grades fabricated by sports-friendly administrators or faculty. The underlying problem is that these are not really “student athletes”, but quasi-professional athletes pretending to be students, and not committed to the student role very much at all. Nobody feels guilty about this kind of academic cheating because those involved regard the system as a hypocritical façade. The scandals are so much show; when you get caught, you have to play sorry and repentant, but everyone understands you have to do that, or the fury against you grows even worse.

The latest version of such scandals, breaking in 2019, consists of students who get admitted to colleges by pretending to be athletes in minor sports. These scandals are generated by rules that require a certain number of scholarships for women’s sports, plus the determination of colleges to portray their entire athletic program as amateur because it also gives scholarships in sports that don’t make money. But in the era when a large majority of all high-school graduates apply for college, and most of them apply to a half-dozen or more schools, the sheer volume of application materials that schools have to process leads to a maze of complications that can be exploited. American schools are under pressure not simply to rely on grade-point averages and test scores (since students from higher social classes and dominant ethnic groups do better on them), so all sorts of non-academic criteria are added—extracurricular activities, public service, and being active in sports. It is not too surprising, in this perspective, that go-betweens with contacts in athletic departments created channels for getting fake athletes admitted to colleges in minor sports—which up to now had been below the radar of NCAA investigators obsessed with the big-money sports. Ironic, isn’t it? We have athletes who pretend to be students, and now students who pretend to be athletes.

Will scandal and exposure bring this kind of thing to an end? Every time there is a scandal, official people repeat the same idealized statements about amateurism and a level playing field. But dire punishments and a wider list of things for NCAA investigators to keep track of have not prevented new scandals from happening. The pattern is clear. Scandals will keep on going into the future as long as the hypocrisy goes on about student-athletes who generate major revenue for schools. *

* As a former professor, I have to laugh whenever I watch a bowl game. These always feature advertisements by the participating universities, showing pretty pictures of campus, plus a spiel about student-athletes earning degrees and becoming medical doctors or some such. If the NCAA regime collapses, at least this kind of hypocrisy might disappear.

But what about funding non-revenue sports?

OK, big-time college sports are just another kind of oligopoly capitalism that happens to be run by legally non-profit organizations. But isn’t it all necessary, to fund all other college sports?

Profitable, big-time sports are the revenue stream that pays for all the other sports on campus: women’s soccer, lacrosse, wrestling, tennis, track and field, you name it. In many or most of these, the players really are students who also do athletics. They get treated under the same rules and bureaucratic procedures as the money-sports jocks, but generally they would be playing for fun, exercise, and local prestige even if they didn’t have athletic scholarships. All this would disappear, too, if the big-time sports were openly professionalized and college athletes cut into the revenue stream that has been going to athletic departments.

All right, assume that it does disappear. What would it actually look like? There is nothing to prevent students from playing various kinds of sports. There would be no more scholarships for hockey, soccer, golf etc. Money for travel to play against other school teams around the country would dry up; they would have to play closer to home. They might even have to go back to intramural play.

To see what this looks like, all we need to do is look at Britain. British universities do not have school teams competing against each other in the American style, but they have plenty of athletics. These students are quite literally amateurs, playing cricket for the love of the sport. There are no athletic scholarships and no recruiting rules that could be violated. There are plenty of athletic fields; this is a cheap expense and universities are willing to absorb it, as part of the university tradition. British students generally enjoy sports, but much less as spectators, since they actually get to play the games themselves.

Some top athletes trained in this system. Roger Bannister became the first runner to break the 4-minute mile in 1954 on a track at Oxford, when he was a medical student. Wait a minute, an American might say: since when do medical students get to compete in college sports? We have the rule that only undergraduates can compete, with a maximum of 4 years eligibility; this rule is a by-product of restricting athletics to those on scholarships. The one big inter-university sporting contest in England is the annual Boat Race, an eight-oared crew race between Oxford and Cambridge on the Thames. The teams are made up of students, many at graduate and professional school level-- scientists and medical doctors in training. Why not? These are the best rowers, and they represent the university as a whole; there are no bureaucratic rules as to who can take part, except that you must be a member of the university.

On the Continent, the decoupling of sport from education is even more extreme. French, German, and other universities have no sports teams at all; nor do they have stadiums or playing fields. *  (For the most part, this is also true of secondary schools.) Students who want to take part in sports join a sports club. This has nothing to do with any school; it is a place with facilities for swimming, soccer, rugby, gymnastics, etc., funded by dues, or sometimes by local communities. (Sweden got to be big in tennis by a program of building indoor courts.) In some countries, where sports are considered important for national prestige, the government has its own sports programs or schools (notably in the old Soviet bloc and in China), where the curriculum is purely sports. These are not countries without sports, but countries where sports are one sector and one social identity, and being a student is another, and no one confuses the two.

* German universities generally have buildings for a particular faculty or specialty in a particular part of the city, with other faculties in other places; they have their own cafeterias and gathering places, but these form identities by what you study, not around the university as a whole. Living in dormitories and going to school athletic events together—the American pattern—does not exist in most places.

The result is that intellectual and academic achievement is not clouded by a prestige system that focuses on sports and defocuses everything else. These are the countries that lead the world in student test scores. More importantly, they are places where intellectual work is respected, and intellectual standards are high. They win Nobel Prizes and produce heavily cited scientific papers at a rate, relative to their educated populations, higher than in the US; here we have more university students and professors than anywhere else, but our average intellectual productivity is modest. Since the early 20th century, when American universities began to dwarf those of other countries in sheer numbers, size, and money, the top intellectual talent from Europe and the rest of the world has migrated to American universities. This continues in the 21st century. To the extent that there is intellectual excellence in the U.S., it depends to a considerable degree on the presence of foreign students and faculty from more intellectually-oriented systems.

It doesn’t have to stay this way. The collapse of the NCAA regime and similar fusions of sports spectacles and educational institutions would put the US back into the situation of the rest of the world—at least in this respect, where we are lagging.

Would university prestige fall if we didn’t have famous college teams?

Yes, for some schools. Especially for the big state universities that don’t have top faculties and have low academic standards. (I don’t mean Michigan or Berkeley or UCLA, which have dominant research faculties; but I do mean most of the schools in the top-25 football or basketball weekly rankings.) These are places you would never hear of if they weren’t in the sports pages. And since attending team games-- along with the big party weekends-- is the one big collective event at most of these schools, the attractiveness of attending college there would decline for many students.

How severely such schools would be hit remains to be seen. Credential inflation has ramped up for the last half-century or more, and competition for college degrees may keep these schools afloat even with a more utilitarian atmosphere. The elite colleges and universities have long since gone beyond resting their prestige on their athletic teams. * Eliminating school athletics there would have no effect at all on their financial condition, or their prestige.

* Football was invented and publicized at schools like Yale, Penn, and Rutgers in the late 19th century, where it started as a pastime organized by students themselves; it then became a rallying-point for public prestige and school spirit as universities changed from being clubs of the local upper class into nationally competitive institutions. But the growth of scientific research, especially during and after WWII, gave an alternative source of prestige; and the wider struggle for admissions led to more emphasis on elite schools as a route into top corporate and government careers. For these universities, athletic fame (which they had held up through the 1920s) became superfluous. The University of Chicago was a big football power in the 1920s; but an academically-oriented president eliminated football, and the university went on to become the most intellectually-elite university in the country. It already had a lot of money from John D. Rockefeller, and it was famous for hiring away the best professors in the country. If most people have never heard of the University of Chicago, it is because they get their information from the sports news.

The open professionalization of college sports would bring some schools into a crisis. But this would feed into a trend already under way. Since the 2008 financial crisis, the public has begun to question the economic payoff of a college education instead of taking it for granted. There have been more discriminating looks at what kinds of schools pay off in career success and which ones don’t. Purely profit-oriented schools have poor results, and are already failing economically as there is a squeeze on government-supported student loans. Athletics-dependent universities may be the next to go. The elite colleges and technical schools continue to do well, above all because of their prestige networks into elite science, professional firms, corporations and politics. Such pipelines are by their very nature restricted; which is one reason why they are awash in applications. Even if the big state universities were to collapse, it would only return the US to a situation like Britain or Germany, where there are relatively few universities but all of them of high intellectual quality.

In a wider perspective, collapse of the NCAA system would be only one of several major crises already looming. The sustainability of credential inflation into the endless future is in question.  The threat of computerization and artificial intelligence displacing most middle-class jobs—and potentially most intellectual work—would be a crisis of how to find jobs, or at any rate sustenance, for the bulk of the population. In the context of these other crises, the disappearance of the athletic-centered university would be just another feature of the next epoch of historic change now on the horizon.

References

Adut, Ari. 2008. On Scandal. Cambridge Univ. Press.

Armstrong, Elizabeth A., and Laura Hamilton. 2013. Paying for the Party. Harvard Univ. Press.

Arum, Richard, and Josipa Roksa. 2011. Academically Adrift: Limited Learning on College Campuses. Chicago: University of Chicago Press. 

Chiang, Yi-lin. 2018. “When Things Don’t Go as Planned: Contingencies, Cultural Capital, and Parental Involvement for Elite University Admission in China.” Comparative Education Review 62.

Cohen-Solal, Anne. 1987. Sartre: A Life. NY: Random House.

Collins, Randall. 2019. The Credential Society: An Historical Sociology of Education and Stratification. NY: Columbia University Press.

Milner, Murray Jr. 2004.  Freaks, Geeks and Cool Kids: American Teenagers, Schools and the Culture of Consumption.  NY: Routledge.

Moffatt, Michael. 1989. Coming of Age in New Jersey. Rutgers Univ. Press.

Sanday, Peggy Reeves.  2007. Fraternity Gang Rape. New York University Press.

What Made the Greatest Coach in Any Sport?

John Wooden was probably the greatest coach in any sport. How did he do it? Better said, what conditions formed a person like him, and what made the record-setting teams he coached? Are his methods a formula for success in any realm, or any sport?

Wooden said winning is a byproduct of something else. That is probably true, but records of winning in competition give an easy and objective way to measure greatness as a coach. Wooden’s records in basketball at UCLA are the best in any team sport: 10 NCAA national championships (in a peak period of 12 years); an 88-game winning streak covering almost 3 years; a 38-game winning streak in NCAA tournaments against the highest competition. Comparisons with other coaches and other sports are complicated: winning streaks, frequency of championships, winning percentages, all need to be combined, taking into account differences in level of competition. Top coaches are those who win consistently, against whoever, and are at their best in big games against the strongest opponents. (See Appendix for Wooden’s record compared to other big winners.)
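
As a rough sketch of what “combining” these measures could look like, here is a toy index-- the weights, the competition-level adjustment, and the figures for a made-up “Coach X” are all invented for illustration; only Wooden’s numbers come from the text above:

```python
# Toy composite index for coaching greatness -- an invented illustration,
# not an established sports-analytics metric.
def greatness_index(titles, peak_years, win_streak, tourney_streak,
                    competition_level=1.0):
    """Combine the measures named in the text into one score.
    competition_level is a subjective 0-1 adjustment for strength of
    opposition; the weights below are arbitrary assumptions."""
    title_rate = titles / peak_years            # frequency of championships
    return competition_level * (50 * title_rate
                                + 0.5 * win_streak        # longest streak
                                + 1.0 * tourney_streak)   # streak vs. the best

# Wooden's figures as quoted above; "Coach X" is hypothetical.
print("Wooden :", round(greatness_index(10, 12, 88, 38, 1.0), 1))
print("Coach X:", round(greatness_index(5, 15, 30, 10, 0.9), 1))
```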


Wooden’s coaching methods

Bill Walton described team practices at UCLA as “non-stop action and absolutely electric, super-charged, on edge, crisp, and incredibly demanding, with Coach Wooden pacing up and down the sideline like a caged tiger, barking out instructions, positive reinforcement and appropriate maxims: ‘Be quick, but don’t hurry’... ‘Discipline yourself and others won’t need to.’

“He constantly moved us into and out of minutely drilled details, scrimmages, and patterns while exhorting us to ‘Move... quickly... hurry up!’ ... In fact, games seemed like they happened in a lower gear because of the pace at which we practiced. We’d run a play perfectly in scrimmage and Coach would say, ‘OK, fine. Now re-set. Do it again, faster.’ We’d do it again. Faster. And again. Faster. And again. I’d often think during UCLA games, ‘Why is this taking so long?’ because we had done everything that happened during a game thousands of times at a faster pace in practice.

“When four guys touched the ball in two seconds and the fifth guy hit a lay-up, man, what a feeling! When things really clicked, the joy of playing was reflected by the joy on [Wooden’s] face. He created an environment where you expected to be your best and outscore the opponent; where capturing a championship and going undefeated was part of the normal course of events.” [Wooden and Jamison, viii-ix]

Practice was more central than the game. After Wooden retired, when people would ask him what he missed, he said “I miss the practices.” [108]

Wooden’s formula was: don’t think about winning; strive for the best performance you can produce by maximal effort in practice.

This meant focusing on the tiniest details that would give you an advantage. When players first arrived at UCLA, weeks before practice began, Wooden would assemble them and demonstrate how to put on your socks so as to avoid wrinkles-- the aim being to prevent blisters. It was a group ritual: Wooden would demonstrate putting on his socks, then have every player demonstrate it to him. Same with how to tie the laces, and how to measure for exactly the right size of basketball shoes. [60] Wooden’s writing rarely goes into the details of more substantive skills and team drills, probably because he thought there was no need to give this away, and anyway it had to be done physically at the proper rhythm. *

* Wacquant [2004] makes a similar point about practicing in a boxing gym: everything could be done alone at home except sparring, but doing stomach-building sit-ups, skipping rope, punching the light and heavy bags was more motivating when it was done together in the gym in 3-minute rounds punctuated by the trainer ringing a bell.


Team above star. Team play comes first; stars who show off or “get too fancy” are distractions from team coordination and from maximal effort on everyone’s part. Wooden comments that Lew Alcindor [Kareem Abdul-Jabbar] could have set college scoring records (as he did in the pros), but he understood that it would have reduced team play. Disciplining newly arriving stars was crucial, and Wooden said that the coach’s best ally in this respect was the bench: benching a star during practice or even a game is the best way to get the point across, and the other players see it and play harder. [128]

Wooden’s team-above-star method ran directly against the main pathway by which young basketball players first make their reputation. Scott Brooks [2009], analyzing the teen basketball leagues in Philadelphia-- the breeding grounds for players recruited by colleges all over the country-- has shown that a young player could only make a reputation by scoring a lot of points, and this meant getting the ball as often as possible. It was a cumulative spiral, both up and down: if a player scored a lot, he could demand the ball more; if he couldn’t get the spiral going, he wouldn’t have the opportunity to show his skills. Wooden thus had to break the prevailing style of play, especially as basketball became desegregated in the 1960s and the sport jumped in popularity. Probably he had an advantage in that his earlier teams did not emphasize shooters, big men, or fancy dribblers, but speed and passing. His signature player, Bill Walton, was described (when he won a championship in the pro ranks) as the greatest passing quarterback in the history of the NBA.

Mistakes happen. This is inevitable. Wooden’s point is not to blame yourself, or blame others, but to analyze the situation, locate the mistake that is under your own control, and fix it. This means not getting emotional when bad things happen.

Controlling emotions was a key to Wooden’s methods. He did not believe in giving pep-talks, speeches to stir up emotion before big games. “If you need emotionalism to make you perform better, sooner or later you’ll be vulnerable, an emotional wreck, and unable to function to your level of ability.” Hatred and anger motivate only briefly. They aren’t lasting and won’t get you through the ups and downs of a game. “Mistakes occur when your thinking is tainted by excessive emotion... To perform near your level of competency your mind must be clear and free of excessive emotion.” [124-5] Top performance is being cool and professional. Micro-sociologically, this is high EE -- emotional energy as confidence and enthusiasm, the very words that Wooden uses. Not that it is emotionless-- Bill Walton speaks of the joyful feeling when a high-speed coordinated play involving the entire team clicks. This is a non-disruptive emotion, in the rhythm, not breaking it.

The coach’s job includes criticism, pointing out mistakes not as punishment but to correct them and get better results. “The only goal of criticism or discipline is improvement.” Above all, hard criticism in public is to be avoided, since it embarrasses and antagonizes players. Wooden’s strongest punishment was to take away the privilege of participating in a UCLA team practice. “If they weren’t working hard in practice I would say, ‘Well, fellows, let’s call it off for today. We’re just not with it.’ The vast majority of the time the players would immediately say, ‘Coach, give us another chance. We’ll get going.’” [118-19]

Playing under pressure against top competition, obviously, is the mark of a championship team. For Wooden, this was a by-product of long experience in his methods of practice. He also believed that the most difficult experiences would promote even higher effort-- not just as individuals or in bodily endurance, but in the team rhythm. The apex was perhaps the 1973 championship game, when UCLA stretched its undefeated streak to 75 games and won the NCAA for the 7th year in a row; Bill Walton made 21 of 22 field goal attempts.

Adversity of all kinds is to be taken as an opportunity. After the 1966-67 season, the NCAA banned the dunk shot, in part because of 7 ft. 2 in. Lew Alcindor. Wooden encouraged him to develop a new shot; it became his trademark skyhook, an unstoppable shot by a big man.

Wooden said that he scouted opponents less than other coaches. He wanted the focus to be on his own team, not on the other. What if they changed unexpectedly? He aimed at preparing for any eventuality, and believed that his team’s high-speed style was capable of attacking any defense. Tricks of psychological warfare weren’t worthwhile. Nevertheless Wooden had one that he always used: don’t be the first team to call a time-out. Let the other team admit they’re tired. His high-speed practices ensured his players would be in superior condition.

Wooden rated himself an average coach tactically; he said his advantage was in analyzing his players, and the fact that he enjoyed hard work. I would add: his hard work was producing intensely focused, high-speed, rhythmically entraining team practices: the definition of a successful Interaction Ritual. The hard work of an intense IR pays off in high EE. Wooden certainly had it; and his players got it too.

Why Wooden?

Sociologically, our aim is not to make a hero or a genius out of Wooden, but to analyze how someone like this would appear when and where he did. John Wooden was born in 1910 in a small town in Indiana. He grew up in the 1920s in a basketball-crazed state, where high school games would attract more spectators than the entire local population. As a boy, Wooden idolized the famed state champion team, and became a star basketball player in high school and college. He was not tall or muscular (5 ft. 10 inches), but quick and fast, and he worked hard conditioning himself to become even better in the areas where he had an advantage. His coach at Purdue, where he was a three-time All-American, said he was the best-conditioned athlete he had seen in any sport. He became famous for diving for loose balls on hardwood floors, hustling all the time and jumping back up like a rubber ball whenever he was knocked down by bigger players. In his final year at Purdue, he was named Player of the Year in a national poll.

Wooden’s technique as a coach was to make players as much like himself as possible.

He became accustomed early to team success. His high school team won the Indiana championship in 1927, and lost the 1928 final on a shot at the buzzer. At Purdue, his 1932 team was voted national champion (NCAA tournaments did not exist until 1939).

Nevertheless, top success as a coach took considerable time. When Wooden won his first NCAA championship at UCLA in 1964, he was 54 years old; his last came in 1975, when he was 65. After graduating from college in 1932, he played in a local professional league for 6 years, simultaneously teaching high school English and coaching the basketball team. Not counting 3 years in the army in WWII, Wooden coached high school for 11 years, winning state championships and compiling a record of 218-42 (winning 84%). At age 36, he became the basketball coach at Indiana State Teachers College (now Indiana State University), as well as Athletic Director, where he won the state championship both years he was there. In 1948 (now age 38) he was hired to coach UCLA, at the time a mediocre team. He turned the team around immediately and began a string of winning seasons, but it was 16 years before UCLA’s first NCAA championship. By 1961-2 and 62-3, his teams were winning the Pacific Coast Conference, and finished 4th nationally both years. Apparently they were on the brink of dominance.

Why did it take so long? Wooden taught the same methods throughout his years at UCLA, and declared that he put out the same amount of effort. The spiral of success and reputation was moving in his favor; Wooden was attracting better players nationally. A second factor was the opening at just this time of the era of black basketball players. In the 1964 championship game, UCLA beat Duke, even though Duke had two players near 7 ft. and Wooden had no one taller than 6 ft. 5. But Duke was still in the era of segregation, and UCLA’s integrated team of fast ball-handlers beat the big slow white guys. Soon after, Lew Alcindor arrived from New York City. (Freshmen were not allowed to play varsity, so Alcindor did not join the mix until the 1966-67 season.) Wooden had been a pioneer in supporting black players since his coaching days at Indiana State, where he refused a post-season invitation in 1947 because the then-ruling National Association of Intercollegiate Basketball maintained segregation to mollify its southern members. The next year, when Wooden’s team again won the Indiana championship, the NAIB reversed its ban on black players, and Wooden’s player Clarence Walker became the first black player to appear in a post-season tournament. Wooden’s team reached the final but lost-- the only championship game he would ever lose.

After 1965, everything was rolling in the positive spiral of feedback and social momentum, setting the stage for the unmatched championship streak.

Wooden says he decided to retire suddenly, after a Final Four victory in 1975 that put UCLA in the championship game for the 10th time. He recalled walking from the floor, thinking of the crowd of reporters he would have to face with the same, boring questions. He was 65; he had been playing and coaching at high intensity for 50 years. He had broken every record and then broken them repeatedly. Is there a turning-point from the peak of EE? His team had been on a plateau for the last several years. Or was it that Wooden finally decided that the victory-crazed fans and reporters never would understand what he was about?

He did not disappear from the public eye, but turned to publicizing his methods of success, as universally applicable, not just to sports but to business, professions, life in general.


Do the same methods succeed elsewhere?

Wooden’s methods (team not star, attention to details, high-energy practices, professionalism not emotion, maximal speed in rhythm) worked extremely well in basketball. But not all great basketball teams use them; in other sports, some champions do, some don’t. Further afield, do these principles succeed in other career fields, in business and politics, art and intellectual life?

Basketball and other team sports have a similar range of coaching styles. Some successful coaches were dictatorial, including Vince Lombardi in the NFL and Bobby Knight in college basketball-- both known for a win-at-any-cost attitude, and the latter for angry tirades against players and referees. The best winning record among college football coaches (88%) belonged to Knute Rockne at Notre Dame in the 1920s, and he was famed for his inspirational oratory at halftime. This was probably a carryover from the public oratory ubiquitous throughout the 19th century, at a time when college football became the big public entertainment. As time went on, the analytical approach promoted by Wooden tended to supplant or at least complement emotional oratory. But we lack good comparative studies of different coaching methods vis-à-vis their results in winning records.

Wooden’s unconcern with scouting opponents has certainly not carried the day. His rationale-- concentrating on what we do well, and not making the enemy the prime focus-- was a way of living inside the ritual zone, better yet the center of a whirlwind, that he created in intense practices. But scouting became a major part of team organization, above all once football coaches (led by pro coach Sid Gillman in the 1960s) began using newsreel film to analyze opponents. This has expanded into a huge industry, with TV broadcast tapes, every team’s own digital cameras, and a large staff to break out the patterns. Devising strategies has tended to supplant spontaneous application of well-drilled skills; the use of statistical analysis in baseball strategy in the 2010s is blamed for making games duller. Does Wooden’s caveat still apply?-- getting caught up in focusing on your enemy can backfire. Probably.

Focus on crucial details that make up athletic skill is unquestionably a key to success. Was Wooden overcontrolling in this respect? Not only did he have his players concentrate on details, he was obsessed with them. Wooden and his assistants spent more time each morning planning the afternoon practice than the 2 hours or less actually spent on the court. They made detailed lists, down to the minute for time to be spent on each drill, even specifying how many basketballs should be placed where for what drill. Wooden also kept detailed records after practice of how it went, so he could chart individual players' progress and focus on individualized drills as needed. [133] The value of all of this detail-compiling would have to be settled by comparisons; probably it was Wooden’s way of keeping himself running at full bore at all times, when he wasn’t in the buzz of directing practices.

In individual sports, the role of the coach declines, or at least becomes less prominent. The most successful athletes in areas like tennis and swimming are famed for their own self-discipline. Bill Tilden, who dominated men’s tennis during the 1920s (ranked number one player in the US 11 years in a row, 8 years as world champion), wrote about his own methods: the keys were to observe carefully, to anticipate the opponent’s moves and the flight of the ball, and to establish emotional domination. He would watch an opponent warming up, looking for his favorite and least-favorite shots; then he would attack his weaknesses; occasionally, when playing an especially strong opponent, he would attack his strengths to deflate his confidence. Tilden carefully observed the angle of the racket so that he could tell which way the ball was going even before it was hit; and he moved immediately to the spot, not where the ball would bounce, but where he could hit it with his own body in prepared position. The focus on detail fits Wooden’s methods, although Tilden was more focused on the opponent-- probably inevitably, in the back-and-forth rhythm of tennis. Tilden commented that luck plays less of a role in tennis than in any other sport, so skill wins out consistently. But it was skill, not brute force: he disparaged the hard-hitting serves of the younger players, which he called “the California game,” and was consistently able to beat them. All this resembles Wooden. So does his central maxim, concentrating one’s attention during play: “The man who keeps his mind fixed on his match at all times puts a tremendous pressure on his opponent.” [Tilden, 13] Winning against equally concentrated opponents is a battle of wills-- a battle of self-discipline, as Wooden would describe it.

Dan Chambliss [1989] developed a theory of winning and losing based on studying swimmers at different levels of competition, from local club up to the Olympic team. Winners are those who have perfected the details that go into swimming faster (exact hand angles, timing of turns on the wall)-- an ensemble of techniques that adds up little advantages into superior speed. Chambliss noted that winning swimmers do not practice longer hours or put in more effort than their also-ran competitors. It is not that they practice more; they practice better. And they enjoy their practice, since it gives them a sense of being in rhythm with oneself, a zone where effort is eclipsed by smoothness and purpose. Wooden would agree: feel yourself performing your best skills, and winning comes as a by-product.

Chambliss denied that the winning difference is just innate bodily strength or muscle quickness; the same swimmer could make a qualitative leap in competitive level, once he or she acquired the ensemble of skills and the mind-set to use them. * Opponents think the habitual winner has something special, which they can’t define in physical terms; hence they tend to put the winner into a higher realm of existence. They think that the top rank must be something alien to their own experience, and thereby put themselves in an inferiority zone of lesser confidence. But there is nothing mysterious about winning-- in sports or anything else, Chambliss argues; writer’s block, for example, is a version of psyching oneself out by comparing oneself with the company of unreachable great writers. Excellence is the ability to maintain a normal, habituated attitude to being in the inner circle, just performing one’s detailed techniques. Thus Chambliss’s title: “The Mundanity of Excellence.”

* This happened to Tilden. He had been interested in tennis since age 6, but for a long time was a klutzy player, despite hanging out at a tennis club near his home. By the time he was 26 -- following 20 years of sharpening his observations of what makes good tennis -- he had become unbeatable.


With team sports, we can compare the success of a coach like Wooden with that of his successors. After Wooden retired, UCLA remained a good team but no longer a dominant one. And this was so even though UCLA was coached by Wooden’s former assistants and players. Presumably they followed the same methods of practice and so forth. This would appear to undercut the importance of the methods per se; Wooden himself must have added an extra something to them-- just more calm? more professional balance, combined with the right amount of detached intensity? But there are other reasons why a coach’s winning methods do not guarantee unending success. For one thing, his assistants become coaches elsewhere, propagating the techniques and raising the level of competition. This happens to all successful coaches, in all sports. Bill Walsh at the San Francisco 49ers taught a dozen assistants who spread the West Coast Offense throughout the NFL; eventually assistants of former assistants made up a network covering almost the entire league. When a field of competitive method becomes saturated in this way, there are moves to break the mold by starting some new style of coaching.


Wooden’s methods outside sports

The closest analogy to team sports is war. William James said that we needed “a moral equivalent of war” -- i.e., a way to get the teamwork and group solidarity of wartime, without the casualties. When he wrote this, football was just beginning to become a popular college sport, and it created for its fans the tribal enthusiasm of war, with a better chance of joys and less devastating sorrows.

But the military art, for professionals, has many varieties of leadership. Taking the battlefield commander as the coach, do Wooden’s methods and maxims apply? One can find examples to fit, but we have no systematic comparisons. Some generals were famed for their oratory (Napoleon, Caesar); many made themselves deliberately into hero-figures with their own totemic emblems (Montgomery’s beret, Patton’s pearl-handled pistols, MacArthur’s corncob pipe, Eisenhower’s jacket). Some led from the front, whether as point man in the charge (Alexander the Great) or probing a high-speed blitzkrieg front (Rommel). Some were stern and dictatorial (Patton, Stonewall Jackson); some beloved by their troops (Robert E. Lee); some were just methodical and unflappable (Grant). There is no clear record of which styles did best; the question is confounded by historical changes in the size of armies and the technologies of war. Focusing on anticipating the enemy’s plans sometimes brings success, but can also lead to conservatism and loss of momentum, as Wooden warned (France at the Maginot Line). The tendency throughout the 20th century was to emphasize planning and logistics-- crucial for moving huge armies with many kinds of equipment; and in the 21st century huge staffs surround generals, mapping out the sequence of projected choice points as a battle unfolds. The less resource-rich side reacts by avoiding settled battle lines and relying on tactics of guerrilla warfare-- a style which tends to promote charismatic leaders, whose personal reputation is a key to recruiting followers. On the whole, Wooden’s attention to detail holds true in war; but the other elements of his leadership style show no clear pattern of success.

The weakness of the war/sports analogy is that games are fundamentally different from real life. Games are scheduled: we know in advance when and where they will take place. They have time limits, and referees to enforce rules and assess penalties. (In real war, the referees materialize after one side has decisively won, whereupon they hold war-crimes trials for the losers.) Above all, real life calls no time-outs; when military momentum flows and the more coherent war-organization has the other scattered and demoralized, wars are won by not letting the other side catch their breath. Schedules and time limits are exactly what are missing in war; it is sudden switches in time and place that are the art of military maneuver. War may resemble a scheduled game if there are fixed fronts, but this is the formula for unheroic carnage: war by attrition, won by the side with the bigger battalions and the industrial capacity to wear the other down. Evenly matched opponents in war resemble finalists in a championship tournament, but the result is not a good game but a disgusting one.

At the core of Wooden’s methods is the importance of practices. And this fits sports in general, where there is far more practice time than game time. But this is another key feature of sports that does not fit most of real life. Yes, peace-time militaries spend much of their time drilling: marching around (a ritual irrelevant to modern battle), practicing firing weapons, war-games. But soldiers have found no substitute for the emotional atmosphere of war; and trained troops only acquire battle skills after a period in combat (if they do at all). It is true that set-piece battles are planned, in the Iraq wars as in past centuries; but this is generally a luxury of the resource-powerful side, who can wait until they are ready to launch an attack. On the macro-strategic level, planning is mostly done with table-top models (or today, computer simulations), since moving the full number of troops and weapons around a modern battlefield is far too large an exercise to be practical, and the experience is mainly worthwhile for the higher officers. Nevertheless they may find something left out of the plan-- the occupation of Iraq in 2003 was waylaid by unexpected mass looting of government installations, and by the fading away, not the surrender, of the defeated army with their weapons. Wooden would consider this an instance of over-relying on plans.

Business also lacks practices. Techniques exist for how to manage, and workers sometimes undergo training for a job; but business lacks the rhythm of sports-- intense practices with the coach pacing the sidelines, leading up to a short period of the game itself. Job-training programs are mostly failures: too remote from the cutting edge of work, too flat an atmosphere to have the skill-honing and motivational effects Wooden prized. Contemporary business corporations like to flaunt morale-building spaces and amenities for their workers; but these are chiefly a vacation from work, not the intense preparation of UCLA basketball. In politics-- another area for which sports analogies are sometimes touted-- a certain type of practicing for “the event” is now popular. Chiefly this takes the form of privately coaching a political candidate in how to make speeches (complete with hand gestures and facial expressions) before the public and the press. But as the 2016 presidential campaign illustrates, politicians who need coaching tend to come across as artificial, and this hurts their impressiveness vis-à-vis candidates who seem more natural. Aside from this frontstage manipulation, one does not hear of practicing for the main work of politicians in office, which consists above all in negotiating coalitions. Despite superficial analogies (“Just win, baby!”), politics is not at all like sports.

Business is another area where analogies to winning in sports and in war are popular. Business schools and journalism are full of advice and theories of leadership. The preferred leadership styles change. In the era of the high-tech giants, charismatic leaders are in demand. Their charisma is orchestrated in product launches and other appeals to consumers-- a style set by Steve Jobs and emulated by Bezos, Ballmer, Musk, and others. The hope is that their charisma carries over to their cutting-edge products, or vice versa. But out-front charisma is not the only successful style. French sociologist Michel Villette shows that most of the great entrepreneurial fortunes made since the 1950s were created by ruthless competitors, stealing techniques and turf from oblivious or momentarily troubled rivals, and brazening out hardball lawsuits. Another style, far from supporting and uplifting one’s employees, is to set them against each other and cut the work force whenever their jobs can be absorbed by others, or by computers. A good deal of the rhetoric today about effectively managed corporations emphasizes good human relations, but this may be temporary in a time of expansion and relative labor shortage. One can only conclude that Wooden’s principles are not generally followed in the business realm, except for some of his more platitudinous maxims.

Bottom line

How does Wooden’s list hold up?

High-energy practices, building up maximal speed and coordinated rhythm. This is chiefly specific to sports.

Team not star. On the other side is the popularity of charismatic leaders, in war, business, and politics. If they are too charismatic, they easily slide over into authoritarian despots. Team not star is good advice, but deviating from it is hard to avoid. Outsiders (fans, investors, customers) tend to simplify everything down to an emblem. Audiences create the star.

Focus on perfecting your own action skills, not on scouting out the enemy. Even in sports, the shift has been entirely in the other direction. This is equally the case in business and the military.

Attention to the details that make up skill. It is this “mundanity of excellence,” as well as “total concentration on the match,” that divides insiders from outsiders. Even charismatic leadership is a skill, and it involves not only learning oratory (and not necessarily that) but being a careful observer of one’s audiences. Emotional domination in conflictual situations is based especially upon careful observation of one’s opponent. This does not mean scouting out the opposition in advance so that you follow a planned strategy, but quick, on-the-spot observation of emotional cues, finding the openings when domination can be achieved by setting the rhythm for the other to follow.

Cool professionalism rather than emotionalism. This is probably always valuable. How does it measure up in success compared with charismatic or cut-throat styles? A good sociologist could find this out.


Appendix: Highest-winning coaches and teams

The highest winning percentage for NFL coaches is .759 (John Madden with the Oakland Raiders 1969-78). But Madden won only 1 championship in 9 years (11%). Best for both winning rate and championships was .730 for Vince Lombardi (Green Bay Packers, 1959-69), with 5 championships in 10 years (50%).

Highest for college football coaches was .888 for Knute Rockne (Notre Dame, 1918-30); next was Frank Leahy at .864 (Notre Dame 1939-53). There was no national championship game or tournament in those eras.

Highest for NBA coaches is .705 for Phil Jackson, winning 11 championships (Chicago Bulls 1991-3, 1996-8, L.A. Lakers 2000-2, 2009-10). Billy Cunningham is next at .698 (Philadelphia 76ers), but won only 1 championship. Red Auerbach had a .662 winning rate, and won 9 championships with the Boston Celtics, including 8 in a row 1959-66. No one else has more than 5 NBA championships.

For college coaches, John Wooden’s lifetime winning percentage (.804) is 5th on the list. Of those above him, the top is .833; these were coaches in minor programs, except Adolph Rupp (Kentucky) at .822 with 4 NCAA championships. Wooden won the most championships (10), followed by Mike Krzyzewski (Duke) at 5 (winning percentage .765).

Top coaches in women’s college basketball exceed the men’s. Geno Auriemma has a .884 lifetime winning percentage at the University of Connecticut, where he won 11 national championships between 1995 and 2016, including 4 in a row. Pat Summitt at the University of Tennessee won .841 of her games, and 8 championships between 1987 and 2008, including 3 in a row. Summitt’s style differed strongly from Wooden’s: she yelled at players, gave them icy stares for poor play, and was generally considered one of the toughest coaches anywhere.

In Major League Baseball, the top lifetime winning average was .615, by Joe McCarthy (Chicago Cubs 1926-30, New York Yankees 1931-46, Boston Red Sox 1948-50). He won 7 World Series, all with the Yankees, including 4 in a row. Tied for World Series wins was Casey Stengel with 7 (all with the Yankees during 1949-60, including a 5-year victory string 1949-53; he previously managed the Brooklyn Dodgers and the Boston Braves, and subsequently the New York Mets). Stengel’s lifetime percentage was far down the list at .508-- showing that the quality of players makes a difference too. McCarthy was regarded as a passive, hands-off manager, letting his stars do their thing. (No John Woodens here.)

Winning streaks differ considerably by sport. Tops are 111 straight games won by Auriemma at UConn 2014-17 in women’s basketball; and 88 by Wooden at UCLA in men’s basketball. In the NBA, the longest streak was 33 games by the Lakers in 1971-2.

The longest winning streak in college football was 47 games, by Oklahoma in 1953-57, coached by Bud Wilkinson (winning percentage of .826, from 1947-63). In the NFL, the longest streak was 21 games, by the New England Patriots in 2003-4. This seems to be around the natural ceiling; 8 NFL teams have had streaks of 18 or 19 games. The quality of the league is inversely related to the longest winning streak: longest in high school football (151 games by De La Salle H.S. in Concord, CA), 47 in college, 21 in the NFL.

In baseball, the longest winning streak is 22 games (2017 Cleveland Indians). In the National Hockey League, the longest streak is 17.

Putting together team streaks with lifetime coaching winning percentage and championships per year coached, these coaches stand out:

Geno Auriemma, UConn women’s basketball, .884, 111 game streak (top), 11 championships in 23 years (48%)

John Wooden .804, 88 game streak (top), 10 championships in 29 years (33%)

Phil Jackson, Bulls and Lakers, .705 (top for NBA), 33 game streak (top), 11 championships in 20 years (55%).

Red Auerbach, Celtics, .662, 9 championships in 17 years (53%)

Pat Summitt, UTenn women, .841, 8 championships in 38 years (21%).

Vince Lombardi, Green Bay Packers, .730, 5 championships in 10 years (50%)

Bud Wilkinson, Oklahoma football, .826, 47 game streak (top)

Bill Belichick, Patriots, .681, 21 game streak (top), won 5 Super Bowls in 28 years (18%)

Paul Brown, Cleveland Browns and Cincinnati Bengals 1946-75, .672, won 7 championships in 29 years (24%)

Joe McCarthy, Cubs/Yankees/Red Sox, .615 (top for baseball), 7 championships in 23 years (30%)


There are others who were high in one area but not in others:

George Halas, Chicago Bears 1920-67, .667; won 6 championships over 47 years, for an average of 13%.

George Allen 1966-77, .712 (3rd highest in NFL history), but zero championships in 11 years.


There is no valid way of mechanically combining winning percentage, championships per year, and unbeaten streaks to arrive at a mathematically ideal ranking of the most successful coaches. A composite index could easily be devised, but the weights would be subjective and arbitrary (as in college rankings, or corporate rating systems). Some coaches are better at one thing or another. It is what it is.
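To make the arbitrariness concrete, here is a minimal sketch in Python of what such a composite index would look like. The figures are copied from the lists above; streaks the text does not report are entered as 0, as is Wilkinson’s championship rate, which is not listed. The weights are invented placeholders of my own, not anything the comparison justifies.

# A sketch of the kind of composite index described above -- illustrative
# only. Stats come from the lists in this appendix; unreported streaks
# and Wilkinson's championship rate are entered as 0 (an assumption).
# The weights are arbitrary: change them and the ranking changes.

coaches = {
    # name: (lifetime winning pct, championships per year, longest streak)
    "Auriemma":  (.884, .48, 111),
    "Wooden":    (.804, .33, 88),
    "Jackson":   (.705, .55, 33),
    "Auerbach":  (.662, .53, 0),
    "Summitt":   (.841, .21, 0),
    "Lombardi":  (.730, .50, 0),
    "Wilkinson": (.826, .00, 47),
    "Belichick": (.681, .18, 21),
}

def min_max(column):
    """Scale a column of numbers to the 0-1 range."""
    lo, hi = min(column), max(column)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in column]

def rank(data, weights=(0.4, 0.4, 0.2)):
    """Weighted sum of the min-max-normalized stats, sorted best-first."""
    names = list(data)
    columns = [min_max(col) for col in zip(*data.values())]
    scores = {name: sum(w * col[i] for w, col in zip(weights, columns))
              for i, name in enumerate(names)}
    return sorted(scores.items(), key=lambda kv: -kv[1])

for name, score in rank(coaches):
    print(f"{name:10s} {score:.3f}")

With the weights shown, Auriemma comes out on top; shift the weight heavily onto championships per year and Jackson does. The output is only as meaningful as the weights-- which is to say, not very.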

What should be done now is to classify coaches at all levels of success by their coaching styles-- the only valid test of whether any particular method makes a difference.


References


John Wooden and Steve Jamison. 1997. Wooden: A Lifetime of Observations and Reflections On and Off the Court.

Scott N. Brooks. 2009. Black Men Can't Shoot. Univ. of Chicago Press.

Loic Wacquant. 2004. Body and Soul: Notes of an Apprentice Boxer. Oxford Univ. Press.

Daniel F. Chambliss. 1989. "The Mundanity of Excellence." Sociological Theory 7: 70-86.

Randall Collins and Maren McConnell. 2016. Napoleon Never Slept: How Great Leaders Leverage Social Energy. Maren Ink (eBook).

William T. Tilden. 1950. How to Play Better Tennis: A Complete Guide to Technique and Tactics.

Allen M. Hornblum. 2017. American Colossus: Big Bill Tilden and the Creation of Modern Tennis.

Michel Villette and Catherine Vuillermot. 2009. From Predators to Icons: Exposing the Myth of the Business Hero. Cornell University Press.

Wikipedia articles: John Wooden; other coaches.

On-line records of coach and team winning percentages, championships, and streaks in basketball, football, and baseball.

MARILYN MONROE’S NETWORKS PULLED HER APART

Marilyn Monroe had a famous career: famously good, famously bad, pretty much simultaneously. Once launched, everything she did made her famous; and everything she did caused her grief.

Why? Look at it from the point of view of her networks.

[1] Hollywood film industry. She grew up on the periphery of Hollywood, and from an early age her ambition was to be a star. She went along with the casting-couch system, and as a result got looked down upon as just a studio whore. But she kept coming back, from other angles...

[2] Glamour photographers. This network provided her early livelihood, and caused the first big scandal that propelled her to the center of attention. Photographers were her comfort zone. They kept her in the public eye (for better or worse, including the second scandal that broke up her celebrity marriage). And photographers and their spouses were her strongest friends, the fallback whenever everything else went bust.

[3] A celebrity among celebrities. She hung around with big names like Frank Sinatra and Joe DiMaggio, her second (but first famous) husband. The result was a home vs. career conflict, and even worse, a spotlight contest that she was bound to win, and lose a husband.

[4] Theatre intellectuals. These became allies in her battle versus Hollywood studio scorn, low pay, and stereotyped roles. She got in tight with the New York elite of acting coaches and directors, and married the most famous playwright of the day. But from now on, her acting coaches would be in tension with whatever film directors she worked with.

[5] The star/politician nexus. Already during third husband-to-be Arthur Miller’s fight with the House Un-American Activities Committee, Marilyn was becoming connected with the liberal intellectuals. With the coming of Camelot, the media-beloved Kennedy White House was glamorized by its overlap with the Hollywood “rat pack” of Sinatra, Kennedy in-laws, and other party animals. Marilyn is linked sexually with JFK and his brother Robert, until it becomes a little too openly scandalous and she is dropped. Later, Joe DiMaggio would blame Sinatra and the rat pack for the drugs and drinking that led to her death.

[6] Her psychiatrists. By this time, she is dependent on psychiatrists, if not to sort things out, at least to give her drugs and a semblance of allies. One of them betrays her—worried over suicide—by having her locked up in a mental hospital. Who gets her out? Her most heavyweight lover, Joe D. Not long after, her alcohol-and-drugs diet kills her anyway.

Her networks offset each other, providing a succession of reliefs, which turn into new strains.   [1] clashes with [2]; [1-2] clashes with [3]; [1-2-3] clashes with [4] and with [5].  [6] claims to deal with the clashes but just extends the damage.

Her networks canceled each other out—as support networks. But their overall effect was to make her as big a star as could be: the center of maximal attention whatever she did. Whatever you can say about Marilyn, there was no dead air.

What was Marilyn really like?

In a way, this is not a very sociological question. Erving Goffman said that everyone has a frontstage self (or more than one), plus a backstage part of your life where you put on your clothes, your make-up, and your way of dealing with the people you’re going to meet. But he also denied that the backstage is the real self, since it is shaped by what you do on the frontstage part; it isn’t any more spontaneous or “real”, just an alternation between preparation, social performance, and down-time. Marilyn had a complicated personality, which means her total self was a sum of how she dealt with all her networks; and since her networks were energizing her, pulling her this way and that, she was the sum of multiple attractions and their strains.

There was, however, a constant core to pretty much everything she did. She was always very ambitious and determined. She was not a weak person; that was a role she played: wispy-voiced, naive, little-girlish. She seemed passive and clueless, but she always stole the scene, whether on-screen or off.

From her early childhood, she wanted to be a movie star. Her mother worked as a film negative cutter at a company that processed films for all the studios. Her mother gave her up to foster parents within a few months of her birth in 1926, but visited the little girl from time to time and took her to the movies and to see the sights of Hollywood. When Marilyn was 6, her mother bought a small house in Hollywood, which she shared with her daughter and a family of actors. This lasted less than a year, until her mother had another breakdown and was committed to a mental hospital. Marilyn continued living with the actor housemates; then her mother’s friend Grace took over, along with other friends and relatives in the Los Angeles neighborhoods near Hollywood. (A fairly accurate picture of this Hollywood-fringe lifestyle is in the first part of Nathanael West’s 1939 novel, The Day of the Locust.)

There was virtually nothing else. Her mother, a flapper-type of the 1920s, had lovers, and Marilyn was probably an illegitimate child. Marilyn was effectively an orphan, shunted around from one foster parent to another (then as now, foster parents often took in a number of children). She lived in an orphanage from age 9 to 11; then with another foster family—in all, a total of 10 different families. She married as soon as she could after her 16th birthday, to avoid being sent back to the orphan asylum when her foster family moved out of state. Her choice of husband was just a convenience, a boy who lived next door. Since this was 1942 and WWII had broken out, he shipped out to the Pacific while Marilyn lived with his parents and worked in a defense factory. There was no sentiment in the marriage; Marilyn said they had nothing to say to each other and it was boring. When he came back in 1946, he objected to Marilyn’s new-found career as a photographer’s model, so they divorced.

In 2010, some of Marilyn’s notebooks were found among the effects of one of her acting coaches. These contained two main themes: her ambition, with self-reminders to work hard and master the craft of acting; and feelings of being alone, always alone. Since these notes came from the period after she was already a star, these were evidently life-long preoccupations-- if this is how she felt when her networks were dense and active, how must she have felt when she was cast adrift, bouncing back and forth between ephemeral families and institutions, bit parts and photo gigs? Still, her ambition was her salvation; it was her energy-center, giving her a purpose and a trajectory. One cannot say she was a person of low emotional energy. Her ambition was the thread that kept her going.

What was she like backstage (in Goffman’s sense, not just in the movie world)? Our best glimpse into that side of her life is an account by Truman Capote of an afternoon he spent with her in April 1955. They are at a funeral parlor in New York, at a memorial for a grand old lady of the theatre who had been something of a mentor to Marilyn. As usual, Marilyn is very late. When she arrives in the entry hall, she explains she couldn’t decide what to wear—was it proper to wear eyelashes and lipstick? She had to wash it all off. What she decided on was a black scarf to hide her hair, a long shapeless black gown, and black stockings, combined with erotic high heels and owlish sunglasses. She is gnawing at her fingernails, as she often did.

Marilyn: “I’m so jumpy. Where’s the john? If I could just pop in there for a minute--”

Capote:  “And pop a pill? No! Shhh. [...They’ve] started the eulogy.”

They sit in the last row through the speeches. After it’s over, Marilyn refuses to leave.

Marilyn: “I don’t want to have to talk to anybody. I never know what to say.”

Capote: “Then you sit here, and I’ll wait outside. I’ve got to have a cigarette.”

Marilyn: “You can’t leave me alone! My God! Smoke here.”

Capote: “Here? In the chapel?”

Marilyn: “Why not? What do you want to smoke? A reefer?”

Capote: “Very funny. Come on, let’s go.”

Marilyn:  “Please. There’s a lot of shutterbugs downstairs. And I certainly don’t want them taking my picture looking like this.” ...  “Actually, I could’ve worn makeup. I see all these other people were wearing makeup.”

Capote: “I am. Gobs.”

Marilyn: “Seriously, though. It’s my hair. I need color. And I didn’t have time to get any. It was so unexpected. Miss Collier dying and all. See?”  She displays, under her scarf, a dark line at her hair part.

Capote: “Poor innocent me. And all this time I thought you were a bona-fide blonde.”

Marilyn: “I am. But nobody’s that natural. And incidentally, fuck you.”

They sit and talk. Marilyn goes on to say that Miss Collier’s companion is going to live with Katharine Hepburn. “Lucky Phyllis... I’d change places with her pronto. Miss Hepburn is a terrific lady, no shit. I wish she was my friend. So I could call her up sometimes and... well, I don’t know, just call her up.”

The conversation goes on. Marilyn: “Did I ever tell you about the time I saw Errol Flynn whip out his prick and play the piano with it? Oh well, it was a hundred years ago, I’d just got into modeling, and I went to this half-ass party, and Errol Flynn, so pleased with himself, he was there and he took out his prick and played the piano with it. Thumped the keys. He played You are My Sunshine. Christ! Everybody says Milton Berle has the biggest schlong in Hollywood. But who cares? Look, don’t you have any money?”

Capote: “Maybe about fifty bucks.”

Marilyn: “Well, that ought to buy us some bubbly.”

They go to a crummy bar on Second Avenue. Marilyn: “This is fun. Kind of like being on location-- if you like location, which I certainly don’t. Niagara.  That stinker. Yuk.”

Capote: “So let’s hear about your secret lover.”

Marilyn giggles while Capote keeps silent.

Marilyn: “You know so many women. Who’s the most attractive woman you know?”

Capote: “No contest. Barbara Paley. Hands down.” (wife of the owner of CBS television network)

Marilyn frowns: “Is that the one they call ‘Babe’? She sure doesn’t look like any babe to me. I’ve seen her in Vogue and all. She’s so elegant. Lovely. Just looking at her pictures makes me feel like pig-slop.”

Capote: “She might be amused to hear that. She’s very jealous of you.”

Marilyn: “Jealous of me? There you go again, laughing.”

Capote explains that a gossip columnist wrote about a rumor that Marilyn was having an affair with William S. Paley, and his wife believes it.

They trade sex stories. Capote tells of a homosexual fling he had with Errol Flynn. Marilyn: “It’s not as if you told me anything new. I’ve always known Errol zigzagged. I have a masseur, he’s practically my sister, and he was Tyrone Power’s masseur, and he told me all about the things Errol and Ty Power were doing.... So let’s hear your best experience. Along those lines.”

Capote: “The best? The most memorable? Suppose you answer the question first.”

Marilyn: “And I  drive hard bargains! Ha! (Swallowing champagne)  Joe’s not bad. He can hit home runs. If that’s all it takes, we’d still be married. I still love him, though. He’s genuine.”

Capote: “Husbands don’t count. Not in this game.”

Marilyn (nibbling her nail, really thinking): “Well, I met a man, he’s related to Gary Cooper somehow. A stockbroker, and nothing much to look at-- sixty-five, and he wears those very thick glasses. Thick as jellyfish. I can’t say what it was, but--”

Capote:  “You can stop right there. I’ve heard all about him from other girls... He’s Rocky Cooper’s stepfather. He’s supposed to be sensational.”

Marilyn: “He is. Okay, smart-ass. Your turn.”

[Capote continues his memoir:] “While I paid the check, she left for the powder room, and I wished I had a book to read: her visits to powder rooms sometimes lasted as long as an elephant’s pregnancy. Idly, as the time ticked by, I wondered if she was popping uppers or downers. Downers, no doubt... After twenty minutes passed, I decided to investigate. Maybe she’s popped a lethal dose, or even cut her wrists. I found the ladies’ room, and knocked on the door. She said, ‘Come in.’ Inside, she was confronting a dimly-lit mirror. I said, ‘What are you doing?’ She said, ‘Looking at Her.’  In fact, she was coloring her lips with ruby lipstick. Also, she had removed her somber head-scarf and combed out her glossy fine-as-cotton-candy hair.”

Marilyn is in a good mood now. She wants to take a taxi to the Staten Island ferry and feed the seagulls. [Capote 1975.]

Truman Capote was part of the celebrities network. By 1948 he had made a big splash in the New York literary scene as a novelist-- an enfant terrible of boyish good looks, flaunting homosexuality long before it was fashionable. He made literature out of whatever he observed, and specialized in backstage gossip about other celebrities, as well as hangers-on, wannabes, and small-town transients like himself. His conversation with Marilyn is a good specimen of the way he talked. As we can see, they are comfortable together.

Marilyn and Truman Capote dancing, April 1955—the same month as this conversation.

The celebrity world is usually depicted as a superficial place, where prestige attracts prestige, famous people basking in each other’s limelight and thus multiplying their prestige by being seen together. This is true, but it misses another dimension: celebrities—if they have friends—usually make friends with other celebrities, because they share the same viewpoint on the rest of their lives. They have the same problem of being instantly recognizable, so that they cannot have an ordinary conversation with most people. (The Beatles used to refer to their encounters with fans as being “Beatle-ized” when people gush with amazement at seeing them.) Sociologically, what makes for spontaneous friendships is the feeling of sharing the same backstage, us in a private enclave against the world.

Marilyn, even at the height of her fame in 1955, still has a certain amount of that star-struck attitude about others. She wishes she could be friends with Katharine Hepburn, and feels inferior to the elegant Barbara Paley—a common denominator here is that these are both women of the hereditary upper class, while Marilyn made her way up from the working class. Privately, Marilyn is crude, cynical, and on the whole disgusted with Hollywood, although she also revels in the insider knowledge she has about everyone’s sex lives (not least from her own experience). She would like to get out, but it is her career mainstay; and she senses there is part of the New York world that will never accept her, even if her intellectual pals are willing to patronize her as long as she stays eager and humble.

Three years later, Capote published his most famous novel, Breakfast at Tiffany’s. In 1960, he tried to get her cast in the female lead for the film version, but the studios considered Marilyn too much trouble, and Audrey Hepburn got the part. The central character is a “treats girl”—a sexy young woman who lets herself be picked up in expensive bars by men on expense accounts, lives on asking them for $50 bills to “tip the maid in the powder room”—and usually cuts out to avoid further sexual complications. Holly Golightly could have been modeled on Marilyn: a ditsy but good-hearted waif, who has a deserted husband from a small town, acts as a go-between for a Mafia boss in prison, and befriends a preppy young writer living in her apartment house who resembles a younger Truman Capote. You have to wonder how Marilyn would have liked playing this role, and whether her friendship with Capote could have survived it. Her marriage with Arthur Miller would break up when she started acting the script of The Misfits that Miller wrote for her-- depicting a flighty, screwed-up personality based on herself. So this is what you think of me?


[1. Part 1] Hollywood Studios

Hollywood is first of all the meat market, where a crowd of aspiring young actors vie for the attention of a small number of studio chiefs and whoever else can help them get their break. Since the 1920s it was also the sex scene, known for risqué parties and goings-on (Rudolph Valentino, Louise Brooks, Errol Flynn), slightly veiled behind a publicity apparatus that made everything look like peaches and cream. Marilyn had no inhibitions about playing it for what it was. She had affairs with studio executives and talent agents, including the agent who arranged her first part in an important film, The Asphalt Jungle (shot in 1949, when she was 23 years old), and got her a seven-year contract with 20th Century Fox late in 1950. But Marilyn had been in and out of the studios ever since she was 20; there she was mostly regarded as too lightweight to be an actress, and so eager that it took little to get her cooperation. She was willing to serve as eye-candy at Hollywood parties as long as she was invited, and often this meant going upstairs with whoever was an important guest. Combined with the passive naive-beauty roles she was given, Marilyn came to be looked down upon as the studio whore, an attitude that would dog her throughout much of her career. Marilyn built an extensive network inside Hollywood, but for the first half-dozen years it was a network circulating the wrong kind of reputation.

She got a couple of short-term contracts in 1946-7 and again in 1948 (age 20-22), resulting in a few bit parts in minor films. She was eager to work and threw herself into gym workouts, dance lessons, and acting lessons. She even paid to continue lessons after her contract ran out-- which also kept her on set and in the networks. (Her reputation for being hard to work with on the set would come later, as she became successful.) Though she was in and out, contract-wise, she gradually built up a few film credits, showing that she could wear beautiful costumes, stand out in a chorus line, sing and dance. Groucho Marx got her a part in a comedy; Cary Grant played opposite her in Monkey Business, a farce about a middle-aged chemist who takes a drug that turns him into a teen-ager. Her comedy roles were always the dumb blonde, varied by film noir roles as a gangster moll and mentally ill characters, like the freaked-out baby sitter in Don’t Bother to Knock and the lead in Niagara (released in 1952 and 1953). On the whole, Hollywood was an ordeal from her late teens until age 26, and most of what success (and livelihood) she got came not from films but from photography.

[2] Glamour photography

Marilyn got her start while working in a defense factory, when she was approached by a military photographer looking for “Rosie the Riveter” type inspirational pictures. It was her entry to a network that included not only photographers, but modeling agencies and their customers: magazines, advertisements, calendars, pin-ups, and studio publicity. In early 1945, Marilyn was able to quit her factory job, and by the next year she had appeared on the cover of over 30 magazines—not yet the big ones, but respectable ones like Pageant and Family Circle, as well as U.S. Camera and sex-tease mags. (The San Fernando Valley, across the Hollywood hills, was then as later a national center for pornography, but Marilyn stayed on the respectable side of the line-- which paid better, in any case, since conventional magazines had bigger circulation.) Her reputation for bathing-suit shots spread, and she was picked up as an artist’s model for the well-known pin-up artists Earl Moran and Earl MacPherson. It was during one of her hard times—laid off from the studios and needing money, in 1949—that she posed for the nude photos that would later make her famous.

It was an unusual photo angle, shot from the top of a ladder looking down on her lying on a bright red curtain, and it became the best-selling calendar photo of its time. Color photography was just emerging as a viable printing process, most photographs previously having been black-and-white. Marilyn would repeatedly feature in the technological breakthroughs in all the visual media. The nude photos came back to haunt her in March 1952, when gossip columnists spread the story that she had posed in the nude 3 years before. But 1952 was Marilyn’s break-out year. The previous fall she had been on the cover of Collier’s (one of the big national photo-news magazines), and soon after made the covers of Look and Life. Niagara was about to be shot and would be on screens the next year with Marilyn getting top billing. The studio executives worried about the nude calendar, but Marilyn handled the rush of reporters with aplomb: “It’s no big deal. You can get a copy of it anywhere.” And asked if she had nothing on during the photo, she replied in her little-girl voice, “I had the radio on.” Set up for scandal, she stole the scene. That’s one definition of emotional domination of the situation, however meek and passive her demeanor.

Marilyn had become too big in the photo world for the studio bosses to cut her out any more.

It was her photo career that produced the transition to the iconic Marilyn Monroe. Norma Jean Mortenson, as a young photo model, was a brunette with curly hair. She changed her name to Marilyn Monroe during a screen test. Meanwhile her photos show her curls straightening out to wavy, her brown hair shading into red, then reddish-blonde (red-heads were considered hot stuff in the 1940s), and by 1950 to the now-classic platinum blonde. Her agent had her hairline raised (to eliminate the widow’s peak seen in her early looks), and according to rumors, possibly also paid for a minor nose-job. Her photos show the addition of a small beauty-mark on her left cheek from 1950 onwards. This was the look of the 1953 photo that Andy Warhol would use for his multi-colored Marilyn silkscreen in 1962, just after she died, sealing her icon status in another medium. Marilyn created her own image, but the photographers, agents, and artists had a hand in it too.

Marilyn's hair: 1946, 1947, 1950

[1. Part 2] Hollywood

Two big technical developments were happening in the film business just as Marilyn became a star. One was Technicolor. Color films had occasionally been made since the late 1930s—The Wizard of Oz was one, starting out black-and-white in Kansas and then switching to color for the Land of Oz—but until the early 1950s most films were black-and-white. Technicolor as it appeared in the late 1940s was garish, bright but unnatural-looking. Natural-looking color was achieved in the 50s, and Niagara publicity trumpeted it as the combination of two of the world’s great spectacles, Niagara Falls and Marilyn Monroe. The scenic aspect of outdoor films, which was never very good in black-and-white, was now a big selling point for the movies. They needed it, because these were the years television had taken off; movie attendance had peaked in 1946 and now had declined over 60 per cent. But TV was black-and-white and didn’t get very good color until the late 1960s, so Hollywood exploited color films as hard as it could. *

* Black-and-white continued to be used until the end of the 50s for serious films. On the Waterfront--Elia Kazan’s 1954 drama of labor corruption, with Marlon Brando’s famous “I coulda been a contender” scene--was turned down by 20th Century Fox because Kazan didn’t want it made in color.

The other gimmick that Hollywood had over TV was big, wide-screen spectacles. There were initial technical problems. The early version was called Cinerama; it required special theatres with a triple-wide screen, each with a separate film projector. This was too cumbersome and expensive, but by 1953 it was replaced by Cinemascope, which required only one projector and one film instead of three. The first big Cinemascope block-buster appeared in 1953, a Biblical epic, The Robe, starring Richard Burton. The second was Marilyn Monroe’s film, How to Marry a Millionaire, also in 1953. It wasn’t a great film and had a silly plot, but it was packed with stars—Marilyn along with her two predecessors, Lauren Bacall and Betty Grable—and it paid back its huge production costs many times over within its first month. A much wittier film was Marilyn’s earlier film of the same year, Gentlemen Prefer Blondes, co-starring Jane Russell, whom she also upstaged; it also made a lot of money. So 20th Century Fox immediately piled into producing yet another big Cinemascope film, River of No Return, a frontier action-adventure pairing Monroe with Robert Mitchum. She later called it “a grade-Z cowboy movie in which the acting finished second to the scenery and the Cinemascope process.” The appeal of Cinemascope soon wore out, and 20th Century Fox almost bankrupted itself over the next 10 years, especially with the over-long four-hour production Cleopatra (finally released in 1963) starring Elizabeth Taylor. During these years of trouble, Marilyn Monroe films were the chief money-makers for the studio.

By 1955, Marilyn was bigger than everybody and ready to rebel. She was still getting the modest salary negotiated in her 1950 contract; she wanted commensurate pay and better roles than the dumb blonde stereotype. The studio, still under-estimating her, refused. She walked out. This was news: Hollywood contractual disputes were usually settled behind closed doors. How could someone with such a weak personality do this sort of thing?

[3] The Celebrity Network

In 1956, sociologist C. Wright Mills published The Power Elite, a portrait of the upper reaches of stratification in the United States. His main argument was that the country had morphed into a pyramid ruled by three overlapping groups: the executives of the big corporations; the top officers who shuttled between the interchangeable branches of the military-industrial complex; and the cabinet officials who served no matter which party held the presidency, and who came from the same Ivy League schools and the same Wall Street firms. (Sound familiar?) He also pointed out that the old-fashioned Upper Class, the hereditary rich families of the Social Register in New York, Boston, and Philadelphia, still existed (one of their daughters married John F. Kennedy), but that they no longer really counted as sources of national power, or even of prestige. They were no longer in the public eye the way they had been when the Titanic sank (when the headlines listed which members of “Society” were on the ship). What had displaced them was a group called Celebrities.

Celebrities were anyone who was famous, which meant anyone who had their picture taken a lot and was in the news just by being visible. Celebrities could be athletes, singers, movie stars, famous writers (Hemingway; Tennessee Williams), band leaders, people who broke flying records (Charles Lindbergh, Howard Hughes, Amelia Earhart). What created Celebrities, as a group phenomenon, was the rise of the mass media. Above all, these were the newspapers and magazines, which went through an era of tremendous popularity (and profitability) from the 1920s through the 50s. Photos were a big part of this; it was only around 1920 that cameras became portable enough for photographers (later called paparazzi) to swarm all over places where celebrities might be seen, and that newsprint publications could afford to sprinkle their pages with photos. Celebrities were wanted because of an insatiable need for things to fill papers with; celebrity stories had legs, whether there was any breaking news or not. In the 1930s, glossy black-and-white photos in magazines became economically feasible. Hence the world of celebrities. Hollywood was a favorite photo/news/gossip site. A broader swathe of famous persons could be found in the restaurants and night clubs of New York, where almost anyone who was anyone could be seen, and gossip columnists could write about who they were seen with.

Marilyn may not have been very aware of the world of Celebrities when she was young and completely Hollywood-struck. But she soon found out; in fact, she became a celebrity before she became a star. By around 1950, she wasn’t just trading sex for entrée into Hollywood parties; she was having affairs with the stars, including Marlon Brando, Yul Brynner, and big-name director Elia Kazan. In her breakout year, 1953, she became connected with the biggest name of all—Joe DiMaggio. Just recently retired from the New York Yankees, DiMaggio was the biggest star on the most famous team in the most popular American sport. (His teams had gone to the World Series 10 out of 13 years; fans and sports-writers used to debate about who was the greatest of all time, DiMaggio or his predecessor, Babe Ruth.) In January 1954, Marilyn and Joe were married.

They honeymooned in Japan. Marilyn took time out to go to Korea, where the Korean War had ground to a stalemate, to entertain tens of thousands of American troops. Singing outdoors in a spaghetti-strap gown in the February cold, she was received with wild enthusiasm. “You never heard such cheers!” she told DiMaggio, upon returning. “Yes I have,” he said. He had; but that was then, and this was now. Their marriage immediately started coming apart.

Further strains appeared. DiMaggio was from an old-fashioned Italian family. He didn’t want his wife to work (it was a mark of not being able to support your own family); he wanted her to stay home and cook for him and his buddies. She tried, a bit, but she had a career and movies to make. In September 1954, they are in New York City. Marilyn is shooting The Seven Year Itch. Director Billy Wilder has concocted a scene where she stands over a subway grate while the air from the train rushes up and blows her skirts above her waist. It is a hot summer night, and Marilyn is enjoying it—the rush of air, showing off her great legs, the several thousand men and dozens of photographers gathered to watch. It goes on for several hours. Joe DiMaggio is there watching, with the wife of Marilyn’s personal photographer and manager, Milton Greene. Joe gets angrier and angrier every time her dress blows up to reveal her panties and the crowd cheers. He walks off in disgust. The next month they are divorced.

Marilyn with director Billy Wilder, planning the skirt-blowing scene (Sept. 1954)

Clash of life-styles? Yes. But also, Marilyn had upstaged him completely. And she always would.

[4] Theatre Intellectuals

The theatre world—which mostly meant New York City—had always overlapped with Hollywood. In the 1910s, before Hollywood, films were mostly made in or around New York, and Broadway producers were at the fore among those who created Hollywood in the 1920s. Burlesque stars like Mae West and dancers like Fred Astaire moved on to films; famous plays were often made into movies; and stars of the “legitimate theatre” continued to circulate between the stage and the movies up through the 1950s and even later. But already in the 20s, there were film stars who never did theatre; and these became more prominent over time. They were two different kinds of media, and the difference expanded as films became more outdoors, more action-oriented, and more colorful and spectacular.

In moving from Hollywood to the New York theatre world, Marilyn was moving in a conservative direction. It was also a claim for prestige. The theatre world tended to look down on films as a second-rate medium; and intellectuals in general regarded films as low-brow entertainment. True, famous writers like Scott Fitzgerald and William Faulkner spent time as Hollywood script writers, but this was just a way of raising their incomes. Some Hollywood studio chiefs—notably Darryl Zanuck, the head of 20th Century Fox, and Marilyn Monroe’s chief detractor—tried to raise the status of films by making “serious” movies; but they largely had to give this up in the 1950s when competition from TV moved them in the direction of colorful spectaculars. One can see the pattern in the fact that there were no film schools and no “film critics” until James Agee in the 1940s and 50s started trying to review films in the same spirit as reviewing plays. There was little sense of what was a film “classic” until the 1960s and later. *

* Though looking for outstanding developments in the film art, Agee completely missed the significance of film noir, the main innovation of his own day; he thought that films ought to deal with the social developments of modern times--making patriotic movies about World War II, for instance. Not surprisingly, his film reviews were mostly negative.

Marilyn already had network ties with the theatre intellectuals from the early 50s. (After all, there was Brando, Elia Kazan; and she’d acted alongside Bette Davis in All About Eve, which is precisely about an aging theatre queen and her ambitious understudy.) After divorcing Joe DiMaggio and breaking her Hollywood contract, in 1955 she moved to New York, where she was taken in by the famous art-photographer Milton Greene and his wife. Greene did a series of sensitive photos of Marilyn (not as film star or sex kitten but moody, swan-like, etc.). He also floated the idea of forming an independent company, Marilyn Monroe Productions, with themselves as partners.

Meanwhile Marilyn starts over again, “from the bottom” (sort of), by joining other would-be actors at the Actors Studio run by Lee Strasberg. He is a proponent of method acting: getting into your own emotions, feeling yourself in the role. Marilyn is met with skepticism by the other actors, but Strasberg and his wife Paula find she has potential. For the rest of her career, Paula would be Marilyn’s personal acting coach, at her side on the set of every movie she made.

In January 1956, 20th Century Fox caved in. She got a new contract, with options to choose her own films and directors. Marilyn Monroe Productions also had the right to make one independent film a year. She and Milton Greene made this a priority. Their first film would be in England, directed by (and co-starring) Sir Laurence Olivier. Olivier was probably the most prestigious theatre/film cross-over in the world, famed for his Shakespeare and for films of classic novels like Wuthering Heights. The film had a not-so-promising title, The Prince and the Showgirl; but in fact it was a first-rate comedy by the playwright Terence Rattigan, the foremost follower of the style of George Bernard Shaw, with its witty dialogue and surprising plot reversals. This should be the perfect launch to Marilyn’s new phase as a serious actress.

What could go wrong? For one thing, something that at the time seemed very much to be going right. Marilyn falls in love with Arthur Miller. He was at the top of the theatre world; his 1949 play, Death of a Salesman, would be for decades the most widely performed play ever written by an American playwright. It does nothing to hurt his public image that he is in a fight with HUAC over its effort to compel him to testify against former Communist party members and sympathizers from the 1930s. This is a fight that had convulsed Hollywood, too, although Hollywood came down on the side of Communist-busting and a number of writers had been blacklisted. Marilyn had never been involved in politics, but now that her fiancé is called before the Committee (and its cameras) in Washington, she is right there beside him. When the politicians threaten to prevent him from traveling to England for the Olivier film, Marilyn’s admirers exert pressure on the other side, and he gets his passport. Marilyn and Arthur are photographed at their wedding at his home in rural Connecticut, where she is blissfully happy just to be married to such a wonderful man.

In England, Marilyn and Arthur were greeted and photographed with Laurence Olivier and his wife, Vivien Leigh (who had played Scarlett O'Hara in Gone with the Wind), but cordiality soon ended. Now accustomed to method-style direction, Marilyn asks Olivier how he wants her to play her part. “Just be sexy,” he tells her. She is insulted and upset. They fight throughout the filming, Arthur putting in advice and Paula Strasberg conferring with Marilyn before every shot. The Prince and the Showgirl is a financial flop and leaves a bad taste in everyone’s mouth. (In fact it is very watchable today, even though it doesn’t feel quite like the same Marilyn Monroe.)

It is the beginning of a series of bad relationships with directors. She is consistently late on the set. She cancels and calls in sick. She forgets her lines, or botches them repeatedly. She argues with directors and retreats into conferences with Paula. During the shooting of Some Like It Hot in 1958, her co-star Tony Curtis famously said that “Kissing Marilyn Monroe is like kissing Hitler”—in exasperation at the endless re-takes. And this was to be the only really successful box-office hit that she made after re-inventing herself as a serious theatrical actor.

Marilyn had won the right to choose her own directors, but it didn’t improve matters. She argued with top directors, Broadway and Hollywood legends alike. Her first effort at a serious drama, Bus Stop (1956), was a contemporary, real-life version of a cowboy movie, in which Marilyn is a cafe singer kidnapped by an enamored rodeo cowboy. She played opposite 59-year-old Clark Gable in The Misfits, another real-life Western about aging cowboys trying to make some money rounding up wild horses. Arthur Miller had written the script especially for her, but his habit as a professional writer was to turn real people into material for drama, and it shocked her as a portrait of herself. Whether cast realistically or in film fantasies, she always ended up being the dumb and/or neurotic blonde beauty. Arthur left the set and she began another affair. Shooting dragged out, her films always behind schedule and over budget.

We can see the deterioration in photos of Marilyn in this period. Earlier, there had been candid photos of her biting her nails with tension, but now her face looks bland and washed-out. She carried a flask of gin on the set and drank between takes, a dangerous combination with the pills she took to wake up in the morning and the sleeping pills she took at night.

Marilyn biting her nails (1952)

Between takes of Some Like It Hot (1958)

Arguing with George Cukor, famous comedy director, on the set of Let’s Make Love (1960)

[5] Camelot

In November 1960, John F. Kennedy was elected President, promising to bring a youthful new approach to the White House. He brought youthful good looks and an even younger, beautiful wife, and generated an enthusiasm that made him the most popular President of the 20th century (with favorability ratings consistently around 70%). The Kennedy family were no strangers to Hollywood. The patriarch, Joseph P. Kennedy, had bought and reorganized studios in the 1920s, ruthlessly taking over a movie theatre chain, and carrying on a long affair with film star Gloria Swanson while financing her films. (Yes, the one who played the aging star in Sunset Boulevard, 1950.) JFK reportedly had numerous affairs, both before and after his marriage to Jacqueline Bouvier in 1953, including several film stars before he took up with Marilyn; that affair began attracting attention from his political enemies in early 1962. But reporters in those days gave popular politicians space for their private lives (they avoided photographing FDR in his wheelchair, and kept quiet about General Eisenhower’s affair with his driver). Kennedy got along well with the press, who showed the glitz of the Kennedy White House but not its backstage.

Marilyn had already had an affair with Peter Lawford, a Hollywood actor married to JFK's younger sister. Now she was socializing with the Rat Pack, as we see in a photo with other stars at a Las Vegas event—a fake look of enthusiasm on her mouth clashing with the sadness in the rest of her face.

Marilyn with Elizabeth Taylor and Dean Martin, Las Vegas (June 1961)

In May 1962, in the midst of yet another contentious on-again-off-again film project for 20th Century Fox, Marilyn takes off to fly to New York for JFK’s birthday celebration at Madison Square Garden. It is her last famous photo scandal. Having kept the crowd waiting for almost an hour, she appears in a clinging, flesh-colored gown and sings “Happy Birthday, Mr. President” in her wispy voice. Kennedy, sitting in the front row of the huge audience, makes no gesture of response. Immediately afterward, his brother Bobby tells him the affair is getting too public and warns him to break it off. He does, that very night. Marilyn is shut out. She can’t even get through to Bobby by phone any more.

Back in Hollywood, she is suspended by the studio. A month later, she is reinstated with a new contract and a higher salary, and called back to resume filming. Three days later she is dead: an overdose of barbiturates, combined with whatever other drugs she was taking during the day.

[6] Marilyn’s Psychiatrists

Marilyn had been seeing psychiatrists ever since her sojourn in New York in 1955. Psychoanalysis was very much in vogue during the 1940s and 50s, and her coaches at the Actors Studio encouraged her to explore her emotional depths. She had at least four psychiatrists. The second of them, in 1957, was Anna Freud, the daughter of Sigmund Freud. Such psychoanalysis was not expected to cure anything, but was just part of a life-long process of knowing oneself. At any rate, there was no indication psychiatry did Marilyn any good; her problems got worse during the years of treatment.

Her psychiatrist from 1957 to 1961 was Marianne Kris. These were the years of her fights with directors, her breakdowns on the set, her estrangement and divorce from Arthur Miller, her heavy drug use and drinking. The drugs were abetted by her doctors, including the psychiatrists themselves; like celebrity doctors then and since, they were impressed with having famous patients, and multiple doctors added up to unlimited prescriptions. By 1960, Marilyn had two psychiatrists, Dr. Kris in New York and Dr. Ralph Greenson in Los Angeles. In February 1961, Dr. Kris decided that Marilyn was on the verge of suicide, and had her admitted to a psychiatric hospital in New York. Marilyn went along with it at first, until she found herself locked in a padded cell, under constant surveillance, and cut off from communication with the outside. She began to resist, to no avail. She refused to take part in therapeutic activities with the other patients (supervised handicrafts and the like), declaring: “When I start becoming one of them, I’ll know I really am crazy.”

At almost exactly the same time, Erving Goffman published Asylums: Essays on the Social Situation of Mental Patients and Other Inmates (1961). In the mid-1950s, Goffman had gotten himself inside a mental hospital, incognito, to observe what it was like to be locked up, whether you were crazy or not. He concluded that the structure of the mental hospital itself was making people worse, not better. It was a “total institution,” where one’s entire life was under surveillance by staff who held all power over you--guards in a prison, sergeants in a boot camp, orderlies and psychiatrists in an asylum. Inmates held a degraded status, with no way to escape from their social position, except by giving in to the staff’s definition of oneself as a spoiled self. You had to give up your self in order for them to make you better (or at least declare you were better so you could get out). Goffman argued that the bizarre things patients did in the mental hospital (like pissing on the floor or refusing to keep their clothes on) were a last gasp of autonomy, a way of showing they still had at least this much of a personal self by rebelling in trivial ways. Goffman called this “the underlife of a total institution,” one version of which is the “convict code” in prison.* This was the pressure that Marilyn faced.

* Within a few years after Goffman published this book, mental hospitals began to be closed down.


Marilyn was finally able to smuggle a message out to Joe DiMaggio. Why Joe? He still loved her, she knew. And Joe D was a big name in New York, an old-fashioned hero type who wasn’t going to let a bunch of bureaucrats stop him. Surrounded by an army of reporters and photographers, Joe got Marilyn out.

Photos tell the unspoken story. She and Joe are seen together for a while. Uncharacteristically, Marilyn covers her face from the cameras. Joe looks stony-faced. She sits beside him on the beach with a wan expression. He rescued her, but he couldn’t save her.

Marilyn with Joe DiMaggio (1962)

The old networks were still there, still pulling her apart, and the networks were now inside her. Her new psychiatrist, Ralph Greenson, violates professional norms by trying to befriend his patient; he and his wife invite her into their home. Marilyn moves into an apartment a few minutes away. But she is back to the drugs and the drinking, the daily uppers and downers; the back-and-forth with the studio; the collapse of her dream to be something more than a Hollywood star. No one can befriend her in her personal backstage, suspended between all the frontstages. She dies alone.


Did Marilyn Monroe Have Charisma?

Let's see how she fits the check-list of different kinds of charisma.

Frontstage charisma. Obviously, Marilyn was not the kind of person who makes speeches and leads crowds by swaying their emotions and beliefs. But no one was better at capturing the center of public attention. In this respect she was like Cleopatra, the master of spectacles, who left Mark Antony sitting alone on his podium while the crowd flocked to see her. This makes us broaden our theory of how charisma operates. Charisma doesn’t have to be peremptory, the tone that says I’m telling you this! It is all the more effective when it is irresistible. In public, almost everyone liked Marilyn and was charmed by her, men and women alike. In part, precisely because she was not an authority; she never told people what to do. Even as a sexual figure, she was never the femme fatale, the malicious vamp, the money-grabbing whore. She was most natural in front of a crowd: if you like to look at me, I’ll blow you a kiss.

Marilyn with her public (1954)

Backstage charisma. This is the realm of face-to-face relationships: the capacity for emotional domination that is so striking in the way Jesus talked with people, always seizing control of the conversation with an unexpected shift. Marilyn was not at all like this. But when people pressed her (like reporters), she usually came up with a stopper, a tag line that gave everyone pause or made people laugh.

Success-reputation charisma. The classic definition of charisma is the general or politician who always wins: Alexander the Great, Julius Caesar, Napoleon. This was not Marilyn. But--if her aim was to be a star, to be the center of attention, she never failed. (After her career launch, of course; this launch-point is a key feature of any “charismatic” life.) Unlike some (perhaps most) charismatic success-leaders, she never lost her position, never slipped into being merely once-charismatic. Perhaps because she killed herself at the right time; she did not hang on too long. Even her death was big news; her legend was just beginning.

Fame as pseudo-charisma. Just continuing to be a famous name, with the passage of time, can get one the retrospective label of being charismatic. I have argued this is a mistake, a confusing use of the term. Queen Elizabeth I, of Elizabethan fame, is an example of a person who was not charismatic on any of the three main dimensions. But historic fame can accompany real charisma. So far—60 years after her death—Marilyn Monroe checks that box too.

Of course, over the flow of history, 60 years is not a long time. Can we theorize what makes some names resonant over the centuries? Yes... but that is another book.

References

James Agee. 2005 [originally 1941-50]. Film Writing and Selected Journalism. Library of America.

Lois Banner. 2012. Marilyn: The Passion and the Paradox. Bloomsbury.

Truman Capote. 1980. Music for Chameleons. Random House.

Randall Collins. Nov. 1, 2016. "Does Charisma Win Presidential Elections?"

Bernard Comment (ed.) 2010. Marilyn Monroe, Fragments: Poems, Intimate Notes, Letters. Farrar, Straus and Giroux.

Robert Dallek. 2003. An Unfinished Life: John F. Kennedy, 1917-1963. Boston: Little, Brown.

Erving Goffman. 1961. Asylums: Essays on the Social Situation of Mental Patients and Other Inmates. Doubleday.

Erving Goffman. 1959. The Presentation of Self in Everyday Life. Doubleday.

James Haspiel. 1995. Young Marilyn: Becoming the Legend. Hyperion Books.

James Kotsilibas-Davis. 1994. Milton’s Marilyn: The Photographs of Milton H. Greene. Munich: Schirmer/Mosel.

Life Magazine. 2009. Remembering Marilyn. Time-Life Books.

C. Wright Mills. 1956. The Power Elite. Oxford University Press.

Carl Rollyson. 2014. Marilyn Monroe Day by Day: A Timeline of People, Places, and Events. Rowman and Littlefield.

Robert Sklar. 1993. Film: An International History of the Medium. Harry N. Abrams Publishers.

Donald Spoto. 2001. Marilyn Monroe: The Biography. Cooper Square Press.

Stupid Popular Sayings


“For a man with a hammer, everything looks like a nail.”

Whoever said this or repeats this must never have held a hammer in his or her hand. It might be true of a two-year-old. But any carpenter, or any adult who has a hammer, uses it for particular purposes and then puts the hammer away. You drive in a nail to hang a picture. That’s it. No one but a homicidal maniac in a frenzy (and actually, not too many of those) goes around hitting everything in sight with a hammer.

How did this stupid cliché become popular? Probably because it was used in a political argument, like the anti-nuclear argument that having nuclear weapons is dangerous because “to a man with a hammer, everything looks like a nail.” Even Harry Truman, the only man who ever used that particular hammer, had a special motivation. Some historians say he felt inferior in the shadow of FDR, and wanted to show off to Stalin and Churchill that he was powerful too. That doesn’t make Truman an exemplar; and 70-plus years of nuclear restraint since then (thank God) shows that people aren’t as stupid with dangerous things as this saying implies.

“Beauty is truth, truth beauty,--that is all
Ye know on earth, and all ye need to know.”

This one is from Keats, Ode on a Grecian Urn. It sounds wonderful, until you think about it. It makes four statements, all of which are untrue.

“Truth is beauty.” No; a lot of truth isn’t beautiful; some things that are true are very ugly. Occasionally something true is also beautiful, but not often enough to support the equation.

“Beauty is truth.” Oh yeah? On the whole beauty is fantasy; it’s an ideal; it’s something our best artists and thinkers create. Even Plato, the philosopher closest to the sentiment Keats expresses, did not regard truth and beauty as part of the ordinary world of experience, but as belonging to an ideal world far transcending it. A much keener observer of the world of art, André Malraux, wrote in his globe-spanning The Voices of Silence that artists never just copy the natural world; they select elements from it. What they see in their mind’s eye is what they have learned from previous artists, and when they create something new it is not part of our ordinary world but another world for us to experience (by going to art museums).

“That is all ye know on earth.” Sure, if we don’t count as knowledge practical skills for getting around in our everyday worlds, or science, mathematics, history, law, medicine, sociology, whatever. “And all ye need to know.” Ditto.

Keats’s poem, Ode on a Grecian Urn, is self-refuting. It is beautiful, but it isn’t true.

“No one is free, as long as anyone is in chains.”

Try telling that to someone who is in prison. Most of them would prefer to be on the outside.

The word “free” has no meaning if it doesn’t contrast with something that isn’t free. Orlando Patterson’s book on the history of slavery, Slavery and Social Death, makes the point that the existence of slavery gave impetus to the word freedom, which became part of political vocabulary precisely in the places where slavery was widespread (such as ancient Rome). It is echoed in the British patriotic anthem: “Rule, Britannia! Britannia, rule the waves! Britons never, never, never shall be slaves!” And thus the image of slavery is equated with not being the one who rules.

Of course, “No one is free, as long as etc...” is a piece of rhetoric. It is a claim for sympathy and altruism for the plight of others. But no social movement for freeing some particular group from something equated with slavery has so far been without its own blind spots. Like most political slogans, it pretends to be more universal than it really is. One thing we can probably be sure of: no matter how altruistic we think we are today, there will be people in the future who will accuse us of some inequity we haven’t yet thought of.