MONA LISA IS NO MYSTERY FOR MICRO-SOCIOLOGY

The Mona Lisa is considered the world’s most famous painting, chiefly because of its mysterious smile. What is so mysterious about it? Art critics have projected endless interpretations onto it. There is a more objective way to analyze the Mona Lisa smile: the social psychology (or micro-sociology) of facial expressions.

As the psychologist Paul Ekman has found by analyzing photographs of faces from all over the world, emotions are shown in three zones of the face: the mouth and lower face; the eyes; and the forehead. Our folk knowledge about emotions concerns only the mouth: the smiley face with lips curled up, the frowning face with lips turned down. These intuitions also make fake expressions possible. The mouth is the easiest part of the face to control. You can easily turn up the corners of your mouth, and this is what we do on social occasions where the expected thing is happiness or geniality. Arlie Hochschild, in The Managed Heart, calls this emotion work. In the contemporary fashion of political campaigning, politicians are required to be professional producers of fake smiles.

The muscles around the eyes and eyelids are much more difficult to control, and along with the forehead these are usually outside one’s conscious awareness. So a fake smile—or any other fake emotional expression—is easy for viewers to catch, because we are unconsciously attuned to the entire emotional signal all over the face. One reason we like photos of small children is that they haven’t yet learned how to fake emotional expressions.

If we examine the Mona Lisa face, zone by zone, the reason for its mysteriousness becomes clear: there are different emotions expressed in different facial zones.

Her mouth, as everyone has noticed, has a slight smile.

Her eyes are a little sad.

Her forehead is blank and unexpressive.

We will see further peculiarities as we examine each in detail.

Mouth and lower face.

Smiles come in different degrees. As Ekman shows, stronger smiles—stronger happiness—pull the corners of the mouth further back (from the front of the face). Corners of the mouth may tilt up but they don’t have to; very strong smiles, which pull the mouth open and expose the teeth, often have the line of the upper lip more or less horizontal. What makes the smiley mouth is more the rounded-bow shape of the lower lip, and especially the wrinkle (naso-labial fold) that runs from the corners of the nose diagonally down to just beyond the corners of the lips. In very strong smiles, these triangle-looking folds become deeper, and are matched by a flipped-over triangle of skin folds from the chin to the outer corners of the lips, giving the lower face a diamond-shaped look.

Compare the Mona Lisa. This is a pretty pallid smile. Yes, she does turn up the lip corners a bit, but this is more of a conventional sign than what we see in a real smile. More importantly, there are no naso-labial folds running downward from her nose, nor any mirroring triangle up from the chin. Real smiles raise the cheeks (as we will see in a moment, this affects the eyes in a smile), but Mona Lisa hardly has any cheek features at all.

Eyes and eyelids.

Smiles, especially stronger smiles, make wrinkles below the eyes, more or less horizontal, slightly curved across the bottom of the eye socket (the more the cheeks are raised, the deeper the wrinkles). This has the effect of narrowing the slit of the eyes, as the lower eyelid is raised. The quality of this narrowing is a tell-tale detail, since narrowed eyes can also happen in other emotions; in happiness, the lower eyelid may look puffed-out but not tense. (By contrast, angry eyes have very hard-clenched muscles around them; fearful eyes are wide-open and staring; sad eyes we are coming to.) For the happy face, all these muscle movements cause crow’s-feet wrinkles to spread out from the corners of the eyes.

Mona Lisa’s eyes? The lower lids do look a little puffy, but there are no wrinkles below them; her cheeks, if anything, are flaccid. And no crow’s-feet.

Sad eyes.

  Sad eyes are passive. The lower eyelid is weak, and there is no horizontal wrinkle below it, since the cheek is not pushing up. Whereas in a smile the upper eyelid is open, so the eyes brightly look out, the sad upper eyelid droops a bit. Even more noticeable is the brow, which tends to collapse and sag downwards; this makes the skin of the upper eye socket droop almost like a veil slanting over the outer corner of the eyes. This is particularly noticeable in the picture of the Middle-Eastern woman below right; next to it is a photo of a woman at her lover’s funeral. The photo on upper left is a composite, with sad eyes at the top, and neutral lower face.

Mona Lisa’s eyes.

 

They are not brightly exposed and wide-open as in the happiness photos above, where the upper eyelid is generally as narrow as can be. Mona Lisa’s upper eyelids are partly closed, as are her lower lids; and the skin at the outer edges of her eye sockets droops a bit. These are sad eyes, although only mildly so.

Mona Lisa is a combination of sad eyes and a slight smile, but the way she is painted makes her even more mysterious. As already noted, she lacks the naso-labial folds and chin folds characteristic of happy smiles. Leonardo da Vinci did very little with the cheeks, but concentrated a great deal on the corners of the lips and eyes. This was his famous sfumato technique—a smoky look producing deliberate ambiguity. This also has the effect of obscuring just the places where important clues to genuine smiles are found; there are no crow’s-feet around her eyes, but then there are no expressive wrinkles in this painted skin anywhere.

Was this the actual expression Lisa Gherardini, La Gioconda, had on her face when Leonardo painted her? Probably not. Leonardo worked over all his paintings a long time; the Mona Lisa took him four years, and was still unfinished in his estimation. He kept experimenting with the portrait, quite likely upon just these features. The idea that Leonardo was trying to portray a specially mysterious lady was a favorite with romanticist 19th-century art critics, as was the very unlikely idea that he was having an affair with her (he was apparently a homosexual, once charged with sodomy, and was never known to have a relationship with a woman). He was an artist in an era when artists were rivals for the super-star status of their time, and technical innovations made for fame. What we are viewing is less a real emotional expression at a moment in time than a virtuoso experiment at the frontier of what could be pictured.

No eyebrows.

Another reason the Mona Lisa seems strange to us is that she has no eyebrows. For many emotions, the brows are important points of expression: somewhat subtly in sadness, as we have seen; in happiness, mainly by contrast with other emotions—unmoved eyebrows are generally part of the happy face, unless it is really over the top.

For anger, the position of the eyebrows is the strongest clue—the vertical lines between them as the facial muscles clench make even a stripped-bare cartoon emblem of anger.

So eyebrow-less Mona Lisa gives us fewer clues than usual to emotions; all we see are the bare ridges of her upper eye sockets through the haze of Leonardo’s sfumato, making even the sad expression less clear to us. There was nothing intentional about this; in the late 15th century shaved eyebrows were a fashion for European ladies, as we see from the Fouquet madonna (painted 1452) and the Piero della Francesca portrait (1465; the Mona Lisa was painted 1503-6).

This may be one reason why the Mona Lisa was not particularly well known in its day, nor was it considered mysterious, nor was there much comment on her smile. Leonardo da Vinci was famous but less so than his contemporaries Michelangelo and Raphael, and his most celebrated painting was The Last Supper. The Mona Lisa was a minor work until the 1850s-60s in France, and the 1870s in England, when it became the object of gushy writings by ultra-aesthete art critics, led by Théophile Gautier and Walter Pater. (The history of how this happened is told by Donald Sassoon, 2001.) Mona Lisa and her smile became mysterious, in fact the mysterious Feminine, an Eternal Spirit with all the Capital Letters. And not just the benevolent Earth Mother but a Cleopatra-Jezebel-Salomé temptress. This sounds like fantasies of mid-Victorian males—perhaps understandable in an era when women wore bustles and men hardly ever saw much more than their faces. As Sassoon notes, women were always much less taken with Mona Lisa than were men.

Is there any truth in the interpretation that Mona Lisa was a subtly flirtatious sexpot? Again we can call on some objective evidence: how erotic emotion is expressed on the face.

Sexual turn-on, at least for female faces, has a standard look (as can be seen in thousands of examples on the web): eyes closed or nearly so, mouth fallen open. The woman’s face is otherwise slack, with none of the fold lines of other emotions; it may be a kind of happiness, but the expressions are quite distinct.

Marilyn Monroe made the eyes-half-closed expression virtually her trademark.  The sex idol of a less explicit era than today was also a great actress in her line.

Mona Lisa? If there is any sex in her face, only a repressed Victorian could see it.

So this is micro-sociology?

The purpose of micro-sociology is not to be an art critic. I only make this venture because so many popular interpretations of the Mona Lisa blunder into social psychology. But reading the expressions on photos is good training for other pursuits. Paul Ekman holds that knowledge of the facial and bodily expressions of emotions is a practical skill in everyday life, and he gives some applications in his book Telling Lies. And it is not just a matter of looking for deceptions. We would be better at dealing with other people if we paid more attention to reading their emotional expressions—not to call them on it, but so that we can see better what they are feeling. Persons in abusive relationships—especially the abuser—could use training in recognizing how their own emotional expressions affect their victims; such sensitivity could head off violent escalations.

Facial expressions, like all emotions, are not just individual psychology but micro-sociology, because these are signs people send to each other. The age we live in, when images from real-life situations are readily available in photos and videos, has opened a new research tool. I have used it (in Violence: A Micro-sociological Theory) to show that at the moment of face-to-face violence, expressions of anger on the part of the attacker turn into tension and fear; and this discovery leads to a new theory of what makes violence happen, or not.  On the positive side, micro-interactions that build mutual attunement among persons’ emotions are the key to group solidarity, and their lack is what produces indifference or antipathy. And we can read the emotions—a lot more plainly than the smile on Mona Lisa’s face.


References

Ekman, Paul. 1992. Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage. New York: Norton.

Ekman, Paul, and Wallace V. Friesen. 1984. Unmasking the Face. Prentice-Hall.

Hochschild, Arlie. 1983. The Managed Heart. University of California Press.

Sassoon, Donald. 2001. Mona Lisa: The History of the World’s Most Famous Painting. London: HarperCollins.

FOUR REQUISITES FOR SUCCESS OR FAILURE IN ANYTHING

Everything in the human world has four aspects. They are easy to remember by putting them in four boxes:

The ECONOMIC box: This is a short-hand for the ways in which things are material, practical, or economic.

The POLITICAL box: Again a short-hand, for everything that involves power and conflict.

The SOCIAL box: the ways that people interact with each other, especially their emotions, rituals, and networks.

The CULTURAL box: people’s ideas and ideals about what they are doing.

Everything human has these four requisites.

If one or more is missing, the thing will fail.

Success requires all four in the right amounts. What are the right amounts? We shall see.

 

Analyze anything: some examples

The four boxes can be applied to anything.

Using the four-box scheme is like playing a game of tic-tac-toe or Sudoku.

But it is completely serious: a way to analyze anything that people are concerned about, from high politics to low entertainment.

To show what you can do with it, consider the kinds of kids that everyone who has gone to an American high school knows about.

Politics has the same sub-divisions:

Now to put the 4 REQS to work. Filling in the four boxes for any activity shows what it needs for success and where its dangers of failing lie.

Success and failure in medicine

The CULTURE box:

The ideals of medicine are to provide health and cure sickness. Medical professionals swear to uphold ideals of service and altruism. The culture of medicine includes doctrines about what causes illness and the scientific methods for dealing with it. This is the textbook definition of medicine.

How large does the ideology of medicine loom in the experience of a patient, or of a medical doctor, nurse, or hospital employee? How important is it for whether medicine succeeds or fails?

The ECONOMICS OF MEDICINE box:

The material and practical aspect of medicine starts from the fact that human bodies are handled by medical workers. A hospital is a lot like a factory. As in an assembly line, patients are interviewed, tested for vital signs, have specimens drawn and sent to labs, are seen by various specialists, have medicines hooked up or ingested, and are subjected to various body-intrusive procedures. In the meantime they are wheeled around from one place to another, moved into rooms when available, parked in hallways, and sometimes fed and cleaned. The more bodies there are moving through the medical factory, the more the fate of any one patient is affected by the sheer quantity of things in the assembly line.

Even the minor experience of how long you wait between the time you show up for your appointment and the time you actually see a doctor is determined by how many patients are scheduled and how long each one takes. The doctor may in fact be quite personable and try to treat each patient as an individual; but this has the effect that patients later in the queue end up waiting even longer. The large-scale bureaucratic side of the organization runs against the ideology of caring, so that many patients’ experience of a hospital visit is about as people-friendly as a Kafka novel.

Where patients undergo more extensive hospital procedures, a body rolling around on a gurney is not very different from an automobile part in a car factory, except that in factories the supply parts can’t complain about where they are stored; and hospitals can’t use the just-in-time delivery systems that factories use for off-site storage.

In short, the human experience of being in a hospital comes from being treated like a part in a not very efficient factory assembly line-- inefficient because humans are more unpredictable, especially in how long they will take to respond to treatments. The result is a lot of unaccountable waiting around.

Notice the contrast: the ideology of scientific/altruistic medicine describes it at its best; one of the things it omits is what the experience of being one of the bodies moved around in a hospital is actually like.

There is also the economics of medicine in the narrower sense: the costs of medical care, billing and insurance systems, doctors’ payments, administrative and staff costs, the hospital plant.

Many of these run in vicious feedback loops. Insurance companies’ efforts to keep costs down lead to an accelerating back-and-forth between hospital staffs and insurers disputing payments, with a good deal of fanciful accounting on both sides. It exemplifies the sociological process of escalation and counter-escalation of conflict. The result of this administrative warfare is that both sides expand their billing staffs, making administrative costs rise endlessly.

Another vicious feedback loop comes from the scientific culture of medicine: scientific discoveries lead to new and improved treatments, especially with the expensive diagnostic equipment of recent decades (CAT scans, MRIs, etc.); the standard of treatment constantly rises and new levels of expense become normalised, putting more pressure on hospital administrators to add equipment and simultaneously to find creative ways to pass the cost along to someone else.

Here again the ideology of medicine runs against the economics of medicine. In the perfect world of economists, patients would be informed consumers who could compare prices and the values they get from various treatments and make their own decisions. In the real world of medical practice, patients are rarely informed of such things; typically the hospital or clinic takes the patient’s arrival at the door as an agreement to pay for whatever treatment the professionals decide to give, at whatever price they want to charge. The ideology of caring for patients does not extend to caring for them financially, nor paying attention to what medical costs can do to their lives.

The POLITICS OF MEDICINE divides into an external and an internal aspect. External politics involves government policies and debates, and political movements for and against particular ways of legislating about medicine. The ideological stridency reached by such debates today is obvious. Since the politics box includes any kind of conflict, it also includes lawsuits over medical malpractice, damages, and religious and cultural claims: all of which add to the economic and organizational burdens of medical professionals.

The internal politics of medicine is more local; it consists of alliances and power struggles over who runs a hospital; relations between outside doctors or privately owned clinics and the hospitals they staff; and the financial politics of hospital chains, take-overs, and the usual maneuvers of the corporate world. Here the link between the pure ideology of medicine as altruistic service and the realities of medical politics becomes so remote that the two have virtually nothing to do with each other.

Finally, the SOCIAL RELATIONS OF MEDICINE:

How do people interact with each other? Patients and staff may try to keep up a pleasant, humane relationship; but the bureaucratic factory setting of medical organization makes it likely that most interactions are faked. Talking with a doctor or nurse is the Goffmanian front-stage, since the organizational and economic realities that the patient is caught up in are rarely even acknowledged. Since patients have so little power in the system, they try to put up a hopeful front, fearing that protest will only leave them more neglected in the bureaucratic queue. The ideology of the helpful, altruistic medical staff and the grateful patient is constantly strained. Most of their interactions would be considered mediocre Interaction Rituals, producing little real solidarity.

From the point of view of the bureaucratic organization, it doesn’t matter what the patients feel, since they are just the raw material running through the machinery.

Hospitals and clinics have developed a long-standing culture of keeping patients superficially quiescent; it used to be called “bedside manner,” although now it includes advertising campaigns and manipulating the decor of waiting rooms. Strictly as an organization (i.e. the economics box), medicine doesn’t depend on solidarity with patients.

Lack of solidarity is more of a threat to relationships within the staff. The biggest problem tends to be the behavior of the most powerful professionals, the medical doctors. As a strong profession, they are well-networked among themselves; they can control each other’s careers through referrals, partnerships, and word-of-mouth reputation. These advantages are useful for economic interests as well, whether steering patients to expensive procedures offered by private groups of practitioners, or manipulating billing practices. Observational studies of hospitals show doctors who chase gurneys down the hall, briefly ask the patient how they are doing, then bill it as a full-scale consultation for the insurance coverage.

These kinds of practices undermine solidarity in the hospital work force as a whole.

How then do we rate medical success or failure?

The four boxes have quite different criteria. From the economic angle, success of a hospital or a medical practice is how much money it makes; failure would be medical bankruptcy. From the political angle, success would be a favourable political environment; failure would be a political swing that crushes the existing medical elite. Most of reality is in the middle ground of seemingly endless political contention. From the social angle, the criterion would be patient satisfaction; empirically this seems to be in the mediocre range.

Finally, there is the lofty ideal of health and altruistic service. The altruistic side of this seems badly compromised; what about health? The problem here is that it is a moving standard. Some diseases have declined; focus on other diseases has risen in their place. Objectively there is now more scrutiny of medical error (not unrelated to lawsuits over medical malpractice). Medicine as a whole has been successful in keeping people alive longer; it also keeps people under medical treatment longer, not necessarily making them healthier but giving them more years in which they aren’t healthy.

It has often been cited that an individual runs up more in medical costs in the last six months before dying than in all the rest of their life. It is the same pattern with automobile repairs: an old car becomes progressively more expensive to maintain, until the owner finally decides to get rid of it. These are material realities; the political, social, and ideological aspects get piled on top and obscure the reality.

Can’t the success of medicine be measured objectively, by rating systems? Certainly one sees billboards in every city across America touting how highly rated a particular local hospital is.

Compared to what? and by what standard?

The naive way to read a rating system is just to accept the numbers.

The more intelligent way-- which takes more work-- is to look at how the rating was done. By opinion polling among doctors or hospital administrators? This relies on their gossip network. By objective measures? OK, which ones? Do they measure how satisfied patients are, how favorable their medical outcomes were, how serious their conditions were? The most common objective measures are of the extremes of failure-- mortality, infection rates, and complications from medical procedures. This is still only a small part of the picture.

The overarching problem is that there are four dimensions to the medical system, and they are all unavoidable. Setting up a rating system for success or failure is itself a matter of politics, making choices over what to pay attention to and what to ignore. We are a long way from a reliable rating system that tells you which hospitals give you the best treatment at the best price, with the most pleasant human interactions.

Looking at the total picture for all four boxes, it appears that medical systems rarely fail completely, but the different components undermine each other so much that they rarely work at a high rate of success. Marshall Meyer and Lynn Zucker referred to these kinds of organizations as “permanently failing organizations.”

How can they go on failing, instead of going out of business and being replaced by more efficient organizations, as economic theory on its most abstract level would imagine? In part because medicine is in such high demand; even permanently failing organizations are better than none at all.

The best practical advice that sociology can offer is to pull back from the macro level where the four boxes clash, and focus on the social interaction box. Here are two important findings by medical sociologists such as Charles Bosk: First, the strongest predictor of medical failure is whether the patient feels the doctor doesn’t like him or her. In other words: a genuinely successful interaction ritual between doctor and patient is the best way to ensure the treatment will be successful. If there are bad vibes, find another doctor.

Second, medical error is much lower in Japanese hospitals than in American ones. Why? Because in Japan it is customary for a close relative to be present in the patient’s room at all times. Someone who cares personally can monitor whether staff are attentive, and accidents and oversights are avoided.

Hospitals are like factories, and even the most altruistic medical personnel are worn down by the sheer amount of things they have to do, with rotating shifts and a constantly changing cast of characters. The bureaucracy of the hospital can’t be changed; but it can be counteracted, by adding people to the situation who have a personal concern for the individual patient.

Success or failure: having a party

The 4 REQS can be applied to anything. On a lighter note, what does it take to give a successful party?

The ideal is for a bunch of people to assemble, put all their cares aside, and have a good time. This is the CULTURE box, taken full strength since a party is supposed to be a happy time-out from everything else. Nevertheless the other three boxes have to be taken care of or the party will fail.

The ECONOMICS of a party is its material and practical side, as every party-giver well knows. Where to have the party; getting your house or venue fixed up; the food, the drinks, the music or entertainment if any, etc. That is not to say there is much correlation between how much money and effort is put into the party and how enjoyable it is.

There is little research on this comparison, but there are plenty of instances where very expensive parties fall flat. One kind of bring-down is where the hosts are too obsessed with the material side; another is where the guests are too self-conscious about it and spend their time comparing how lavish things are (or criticizing where they are not) rather than enjoying themselves.

The POLITICS of a party is where it overlaps with conflicts and alliances. Putting a collection of people who don’t like each other in front of a spread of food and drinks will not necessarily produce a happy occasion. That is why traditional hostesses (as in the British upper classes) elaborately strategized their dinner parties, deciding not only whom to invite but whom to seat next to whom. The shift towards greater casualness and informality since the latter part of the 20th century has probably not raised the level of success of social occasions, because this kind of deliberate concern for whether people will hit it off with each other has largely disappeared. David Grazian’s research on nightlife shows that most of the solidarity is confined to little groups of companions who go out together and make a game out of making any contact at all, however ephemeral, with the opposite sex.

Stressing the ECONOMICS box can’t by itself make a party successful, although too little attention to the material inputs will make it fail.

The POLITICS box works the opposite way: paying a lot of attention to the right political mix contributes strongly to a party’s success. Invitations which are too automatic run a high risk of failure. One familiar version is the extended-family holiday gathering where the different relatives may not actually like each other; such gatherings can lead more to conflict than to collective effervescence.

A techno-solution has become widespread in modern times: instead of talking, people who have little to say to each other can all sit and watch TV. Similarly in night clubs extremely loud music not only sets the atmosphere but is a substitute for conversation.

There is a real historical break here, since before around 1950, parties and other festive gatherings did not rely much on conversation. There were traditional ways of getting people participating together: One was dancing in groups. The last remnant, line-dancing, goes back to the dance forms prevalent before the mid-1800s, when men and women maneuvered ceremoniously around the floor in set formations. Then couples dancing separated people into duos, and introduced a new element of political status and conflict over who danced with whom and who was left out. Another participation technique at traditional parties was playing games; the livelier ones had a lot of physical action, such as hurrying for chairs that diminished in number when the music stopped. There were also pretend-games like costume parties; in the 1700s and earlier, the mark of participating in a festive mood was wearing masks, underscoring the event’s time-out from ordinary reality.

Not all party games had this level of collective excitement.

Playing card games has been popular since the 1700s. It provides a certain amount of shared attention, but it reduces the collective effervescence the more seriously it is treated, as with upper-middle-class people playing bridge after dinner, or masculine gatherings playing poker. The obsessions and conflicts that go along with gambling can turn the fun-party occasion into a fantasy version of the POLITICS box.

Finally, the SOCIAL box. This is the home-ground of a successful party, a state of joyful collective effervescence, shared by (pretty much) everyone present. The key ingredients, as in any interaction ritual, are getting everyone focused on the same thing (something they are all doing together at the party), building up a shared mood (energy, exuberance, excitement), so that it bubbles over into a shared rhythm. Individuals at a good party get each other increasingly into the mood.

The other three boxes-- the ideal of having a party; the material provisions that are consumed; the politics of how people get along with each other-- all these succeed, or fail, because of how they affect the collective effervescence. None of the other boxes will guarantee it; material inputs like alcohol or other psychotropic substances can affect the energy level, but drunken people can be boring, sad, or contentious rather than happy.

There is a formula for a truly successful collective effervescence. New Year’s celebration in Las Vegas is an example: people don’t try to say anything significant; they just blow horns, throw streamers, and hug people they don’t know. This works where everyone knows the tradition and throws themselves into it. It contravenes most of the customs of ordinary life. Everyday life is not like a party because everyday micro-politics runs counter to what is necessary for widely shared collective effervescence. That is one reason why successful parties are a time-out from everyday life. They need special conditions, which can’t be present all the time. If you insist on making your life one endless party, there are sure to be times when the party isn’t a very good one.

Try it yourself

You can analyze anything with the four-requisites model. Religion, education, or family; sports or literature; sex in any of its varieties; going on vacation. You name it. What is its ideal of success? What does it need to succeed, and what happens in the other three boxes that makes it fail, or keeps it in a state of conflict? Fill in the boxes.

How much of each requisite is needed? E.g., business start-ups

From the examples given we can see that different kinds of things have different balances among the requisites. Any activity needs a minimum in all four boxes, but beyond that which boxes require the most emphasis depends on what arena you are playing in. It also depends on timing. For some enterprises, the early period needs a different mixture of inputs than later periods.

As a sketch, let us consider a business in three phases: the early period of start-up; the established period, when it is a full-blown player; and the late, mature phase, when the rest of the world has caught up with it.

CULTURE box: the business’s product, identity, brand, skills and knowledge, and reputation.

ECONOMICS box: its plant, equipment, offices, markets, finances, and organizational structure.

POLITICAL box: on the external side, the state with its political and legal environment, whether supportive or threatening; on the internal side, the alliances and conflicts that make up its power structure.

SOCIAL box: both external and internal networks, and how well they are working on the personal level.

External networks connect the business to supply chains, customers, and the recruitment of employees. Because the people you do business with are also potential rivals, and everyone could jump ship in either direction, whether these networks work successfully or not depends on emotional flows ranging from mutual enthusiasm to domination to distrust. The same goes for internal relationships, among fellow employees and in the hierarchy of control.

All this depends on how successful interaction rituals are.

Which boxes are most important at which phase of the business’s life-time?

Early start-up stage: The most important factor is in the CULTURE box, since the new business has to establish its identity and name reputation. Economic resources are going to be needed, but if the owners don’t already have a lot of money, the key here lies in the SOCIAL box.

Economic resources are first built up, not from economic performance, but from leveraging social networks; above all, that happens by propagating emotions, so that other people feel a wave of enthusiasm about the new venture.

Compared to this social outburst, the economic aspect isn’t that important at the beginning. The POLITICAL box isn’t crucial at the outset either, as long as the start-up stays out of conflicts, since it isn’t big enough to handle them yet.

Established stage: Your reputation, economic position, organization, and supply chains are all established. You know where you fit in the field of rivals and competitors, and they know it too. All the boxes are active. The CULTURE box gets less attention, and routine sets in on the SOCIAL side, especially in the internal organization. The ECONOMIC box tends to get the most attention. Successful businesses may develop trouble at this stage-- this is what happened to Apple in the early 1980s, after it had mushroomed into a major corporation and taken on managers who made it more similar to the rest of the field, eventually bringing about big internal conflicts that led to Steve Jobs’ departure. Failures in the internal POLITICAL box brought them down.

Over-mature stage: Now the rest of the field has caught up with what you are doing right. Rival firms are all encroaching on each other’s market niches; global competition over cheaper supply chains is intense. The most important box becomes the POLITICAL one, including the financial world as a political realm where coalitions are made and unmade. Pressure comes from financial markets and the maneuvers of powerful financiers in raids, buyouts, and campaigns over shareholder value, alternately forcing spin-offs or acquisitions.

The successful organization at this stage becomes more concerned with external politics than anything else; even organizations which are highly successful in the other three boxes can disappear because of the POLITICAL box.

One-sided theories

Most theories in the social sciences are one-sided, placing all the emphasis on one box.

Since all the boxes are important, this will usually yield some insight. But it leaves the theory with blind spots.

Marxian theory-- once known as “historical materialism”-- places the prime mover in the economics box. Marxists recognize other boxes exist but regard them as outcomes or screens for economic interests.

In this view, ideology is a set of false beliefs, covering up for the dominant economic interests; ideas themselves are never autonomous, since they are produced by whoever controls the means of mental production (churches, schools, the media, etc.). Politics is an arena where classes struggle for control of the state and the legal system to favor their own interests. All this has a good deal of reality, and materialists have discovered some important causal links. But Marxian theory is weak, especially in the SOCIAL box. Key processes such as mobilizing political movements, fighting wars, and the success or failure of revolutions cannot be explained in a purely Marxian framework; they need theories about interaction rituals, emotions, and networks.

Economics as a discipline today has the same location as Marxism. (A rival form, institutional economics, argues that what happens in markets is shaped by the political and legal environment, and hence would be located in two boxes.) Rational choice theory in political science, sociology, and psychology attempts more abstractly to reduce everything to the dynamics of the economics box.

Here again its big flaw is obliviousness to emotional processes, to the influence of ideas, and to networks that do not resemble competitive markets.

Structuralist anthropology, and a related movement of the late 20th century, cultural studies, claim that the prime mover is the CULTURE box.

This claim gains some respectability from theories in cognitive science that schemas and categories are fundamental in structuring both brains and computers. For structuralists, the culture/cognitive map lays down the blueprint on which societies and social institutions are patterned. Giving primacy to the ideology box has two weaknesses. One is ignoring the importance of emotions-- an error that cognitive psychologists have begun to rectify, since emotions are key markers of which cognitions get paid attention to. The other is a theoretical dilemma between trapping oneself in a static universe where culture always repeats itself, and recognizing cultural change but being unable to explain it except as a mysterious “rupture,” as theorists like Foucault called it.

To explain changes in culture, the other boxes are needed.

Durkheimian sociology solves these problems by locating primacy in the SOCIAL box. And it spells out the mechanism by which social solidarity, energy, and action are generated (and, conversely, how solidarity, emotion, and action fail). Interaction ritual (IR) theory reverses the priority between the SOCIAL and the CULTURAL boxes: it is where successful interaction rituals are carried out that the ideas people focus on and talk about become sacred objects, thus making them dominant ideas. (Here Durkheim outflanks Marx.) Why ideologies change is no mystery from this point of view; when the carriers of ideas stop having successful IRs, those ideas fade away.

Durkheimian theory is one of the big pieces for solving the whole puzzle, but it can’t stand alone. To carry out successful IRs, material conditions are needed; so it is subject to inputs from the ECONOMICS box, both in the form of the material resources Marxists are good at analyzing, and the market processes seen by conventional economists. In the past, Durkheimian theorists have tended to downplay conflict, and to regard the POLITICAL box as little more than a place where the norms and ideals of society are enacted. We need all four boxes.

There are other important but one-sided theories in the SOCIAL INTERACTION box. Freud and his followers were especially imperialistic, applying the theoretical dynamics of early family life to remote fields like art and politics. To his credit, Goffman said that he was dealing with only one part of the puzzle.

The nearest to recognizing the pervasiveness of multiple causality was Max Weber. In his theory of stratification, he argued against Marx that there are not only economic classes, but divisions by cultural life-style groups (status groups), and by power groups or parties fighting over control of the field of state power. Weberians have elaborated this into a 3-dimensional scheme, in which everything has an economic, social/cultural, and political aspect. Weber merges the social and ideological boxes, since he argues (especially in the history of religions) that every kind of ideal has a social group that is its carrier.

The most important new development of Weber’s 3-dimensional theory comes from Michael Mann, who elaborates it into four dimensions in The Sources of Social Power. Mann does this by splitting the POLITICAL box into political power (the internal dynamics and penetration of the state) and military power.

Mann thus analyzes world history as a series of shifts in the four sources of power: Ideological, Economic, Political, and Military. (The Social Interaction box gets downplayed.) In Mann's theory, a major revolution must include changes in at least three of these.

 

Origin of the Four-Requisites model

Sociologists who know the history of our field will recognize that what I am saying is not original, but was stated by Talcott Parsons.

Since Parsons was my undergraduate teacher at Harvard in the early 1960s, there is no mystery about where I have gotten the four-requisites model.

I have made two changes, one minor and one major. Parsons had a much more abstract way of labeling the four boxes (he called them Adaptation, Goal-attainment, Integration, and Latent pattern maintenance-- hence Parsonian students used to refer to them as the AGIL scheme); and he referred to the four boxes as “pattern variables.”

It is a lot easier to see what we are talking about if we call them ECONOMIC, POLITICAL, SOCIAL INTERACTION, and CULTURE boxes.

The major change is getting away from functionalism. Parsons regarded society as like a biological organism, in which all the parts are like organs that function harmoniously together to keep the organism healthy. Functionalists have trouble dealing with conflict, since there is no analogy in the physiological world. And their theoretical bias is to see everything as contributing to the success of the social organism. I have changed the model to four requisites for a social unit to succeed, without assuming that the requisites will be met. As we have seen in examining medicine, parties, and businesses, they often fail. And they are full of internal dilemmas, so that one box works against the success of another.

The key is to treat everything as a variable: how much and what kinds of material/economic resources, political alliances and conflicts, networks and emotional solidarity, and ideas are there? Our aim is to make the theory explain quantitative differences rather than merely checking off a set of conceptual boxes. As I have suggested, different kinds of social projects have different emphases among the four requisites; and these emphases can change over a project’s life-history.

One-sided theories are popular. They have the practical advantage of making our cognitive world more manageable; and they appeal to feelings of membership in some ideological movement striving to dominate the intellectual world. Their disadvantage is that one-sided theories always fail through their blind spots.

The four-requisites model is a convenient way of dealing with the multi-causal processes that make up the real world. Combining the best theories in each of the four boxes is our most realistic way of explaining what will make anything succeed or fail.

 



REFERENCES:

Among the huge literature on medical sociology, see:

Adam Reich. 2014. Selling Our Souls: The Commodification of Hospital Care in the United States.

Daniel Chambliss. 1996. Beyond Caring: Hospitals, Nurses, and the Social Organization of Ethics.

Charles Bosk. 2003. Forgive and Remember: Managing Medical Failure.

Marshall Meyer and Lynn Zucker. 1989. Permanently Failing Organizations.

parties:

David Grazian. 2008. On the Make: The Hustle of Urban Nightlife.

Cas Wouters. 2007. Informalization: Manners and Emotions since 1890.

David Riesman. 1960. “The Disappearing Host.” Human Organization 19: 17-27.

business:

For an analysis in terms of networks and Interaction Ritual theory, see


Randall Collins and Maren McConnell. 2015. Napoleon Never Slept: How Great Leaders Leverage Social Energy. Published as an e-book at http://maren.ink

THE NETWORKS OF LAWRENCE OF ARABIA

Lawrence of Arabia is probably the most famous name to come out of the First World War. It was a long, grinding, muddy war in the trenches that ended more with exhaustion than victory, leaving nobody covered with glory. T. E. Lawrence was the exception, the lone individual who made a difference, an Englishman riding a camel out of the golden desert sands of the Middle East. Everywhere else, the generals are hard to remember, and the politicians ended up with reputations of blame rather than accomplishment. Other than Lawrence of Arabia, the only name of a WWI hero that is remembered is the Red Baron-- the top German flying ace. He wasn’t one of the good guys, but he was the heavyweight champion everyone else tried to beat. And like Lawrence, he was away from the dirty trenches, flying solo in the open sky, dog-fighting at a few thousand feet where everyone could watch his exploits from the ground.

Lawrence is remembered for organizing the Arab revolt in the desert that drove the Turks out of Palestine and Syria, bringing down the Ottoman Empire and putting in its place the Middle East that we know today: the arbitrary partitions that became Iraq, Kuwait, Saudi Arabia, Jordan, Syria, and Israel. Anyone who has seen the Academy Award-winning film Lawrence of Arabia (seven Oscars in 1962) will know that Lawrence was full of good intentions for the Arabs, but was frustrated by the diplomats, especially the dirty deals between the French and the British. Although Lawrence did his best, the politicians as always messed things up, and the result was the endless series of illegitimate regimes whose resentments and infighting have lasted down to today. Peter O’Toole, the tall handsome actor who plays Lawrence, drives off sadly in a car (leaving his camel behind) after his last victory at Damascus, while Alec Guinness, who plays King Faisal (who in real life became the first ruler of Iraq), folds his hands and smiles cynically about these Western people who lack the simple honour of the desert.

We need to keep reminding ourselves that movies aren’t reality, and that just because you see it on the screen doesn’t mean that is the way it happened. Movies pick out a few exemplary scenes, chosen for their dramatic qualities, and fold years into a few hours. Add the film ethic of show-don’t-tell, and the result is that what we see on the screen sticks in our memory, but what gets lost is the tangled web of motives and the thousands of players that determined what went on. For the reality, there is no substitute for reading long books.

So how did we get to the towering Peter O’Toole image from the original T. E. Lawrence?

The real Lawrence, as of 1916 when he went off on his mission into the desert, was not only barely five feet six inches tall, but was just one of the British officers who could speak Arabic, went out on missions, rode camels, wore desert robes, and led guerrillas behind enemy lines. How did he get to be the famous one?

The problem is universal. There are many more capable people than the small number who get into the narrow spot-light of fame; and that is true in the intellectual world, in Hollywood, and in most other things. Most big enterprises take teamwork, with dozens of prime movers and thousands who contribute; no single hero accomplishes anything without all those other people. The spot-light on some necessarily puts many others in the shadows. So how does a particular individual get the chance to be the one in the spot-light? The career of T. E. Lawrence tells how.

Myths: Lawrence as isolate and rebel

The film image of Lawrence gives the impression that he was a loner. He didn’t like people, and the British military establishment didn’t like him. He is the true existentialist hero, who answers to himself alone. Lawrence tells the visiting American journalist that he likes the desert because it is clean-- while most of the world isn’t. And Lawrence feels uneasy about the dirty politics he has to get involved with; he feels uneasy about all sorts of things: whether he is coming to enjoy killing, whether he is homosexual and likes being flagellated (homosexuality barely peeping out of the closet in 1962). Lawrence is just plain uneasy because he is the last honest man in a world full of people who aren’t.

All of this is not exactly false; and the way he behaved in the 1920s after he became famous, up until his mysterious death in 1935, certainly shows he was a complicated person. But the impression that he was a loner, that he went off and did things by himself and against all authority, is extremely misleading. Lawrence was an agent of British policy. He was very familiar with political factions inside the army and the government, and he strongly agreed with some policies and opposed others. Lawrence was quick to devise plans for achieving goals that high-ranking people were glad to hear. He kept getting his chances because he was the bringer of good news in a war that was full of disasters, and he offered practical ways to carry out policies that sincere British imperialists also believed were right-- and cheap at that, since they could use native Arab troops without putting British boots on the ground. Lawrence was known for speaking his mind, but the way he spoke to key people went with the flow, not against the grain.

Throughout his life, Lawrence had extremely good networks. He started out as a protégé of the most important British archeologists, and excavating with them is how he became fluent in Arabic. He quickly moved into the center of British intelligence-gathering for the Middle Eastern Theatre, and soon had the ear not only of the local High Commissioner and the military Commander-in-Chief, but of top cabinet officials in London, the Foreign Office, and the Secretary of War. He became a confidant of Winston Churchill. It was not a case of who-you-know rather than what-you-know; that stupid cliché misses the key point that you have to know how to talk to important people, and that means having something important to say. Lawrence built his networks by leveraging the importance of what he could say to them. And vice versa.

Lawrence avoids Emotional Energy-draining scenes

Charismatic persons, as I have shown elsewhere,* are highly energetic. They are dynamos at getting things done, and they get other people energized around them. But they are also good at picking their spots. Charismatic leaders don’t waste their time and energy on encounters that lead nowhere and only cost them emotional energy (EE).

Jesus, the most charismatic of all, told his disciples “shake the dust from your shoes” and leave a village behind once you see that they aren’t going to receive you.

*Napoleon Never Slept: How Great Leaders Leverage Social Energy. http://maren.ink

From quite early in his career, Lawrence avoided energy-draining social scenes. As a student at Oxford, he saw no point in trying to get into the aristocratic circles with their luncheons and drinking parties, or even dining in college. The posh social life depicted in Evelyn Waugh’s Brideshead Revisited fitted neither Lawrence’s personality nor his middle-class background. He knew where he wasn’t wanted. That doesn’t mean he was simply a grind or a timid person. He liked excavating Roman ruins in the countryside and bicycling in foreign countries. He would carry a pistol on the streets of Oxford in solitary wanderings late at night, firing it off in the underground sewers to alarm passersby above, and outside friends’ rooms to announce his arrival. There was a long-standing tradition of drunken carousers climbing into their colleges over the roofs after the gates were locked; Lawrence was not one of these, as he lived at home, but he had his own way of raising a little hell by breaking rules. Unlike many a college toff, Lawrence never got caught and was never reprimanded by the college authorities.

Stationed in Cairo during the war, Lawrence stayed away from the stilted social life of the British community. Cairo was the headquarters of the High Commissioner, the center of the British Empire in the Middle East. The round of formal dinners and receptions presided over by the wives of high officials continued unabated after 1914. Lawrence had invitations, too, as his reputation grew and his intelligence work made him friends among fellow Arabists. But he turned down opportunities when his friends entertained the so-called smart set. The pecking order of titles and social precedence would be condescending to him at best, and the rigid protocol and bright chatter in platitudes and subtle put-downs would only bring down his EE. Later in his life, after his return to England in the 1920s as a famous man, he attended such events sometimes but had nothing but scorn for vapid sociability. On one such occasion, an aristocratic lady seated next to him at dinner said, after a series of conversational sallies, “I’m afraid I don’t interest you very much.” Lawrence replied: “You don’t interest me at all.”

Formality for its own sake Lawrence avoided. It gave a taste of social membership and rank, but he was determined not to play that game. He disliked the rituals of dressing for dinner and other polite occasions, with their panoply of white-tie, black-tie, sashes and decorations, and he disliked the army protocol of saluting, marching, and donning the prescribed uniform for the different events of military routine. Regular army “spit and polish” referred to the amount of time soldiers were required to spend on things like polishing their boots with their own spit in preparation for inspections. Lawrence would have none of it. Regular army officers were offended by his sloppy appearance and neglect of military ceremony.

It seems ironic that he made his fame as a soldier, and a British officer. In fact, he became an officer by coming in through a side door. He never underwent officer training, much less graduated from any of the famous military academies. His training consisted of weekend exercises at Oxford with the student Signal Corps, something like an advanced version of Boy Scouts. But he was an outdoorsman, and even more to the point, a Middle-Eastern explorer, and his Arabic skills got him into the Intelligence Section at Cairo, first as a civilian, then with an army rank as lieutenant. When he was sent to advise Faisal in the desert, with every success he got a more impressive title, and ended as Colonel Lawrence by the time his Arab levies entered Damascus.

Military rituals and formalities of self-presentation-- saluting and being saluted to demonstrate respect for rank, holding one’s posture rigidly for hours, officers shouting peremptory orders and expecting prompt submission-- were for Lawrence both superfluous and energy-killing. As he learned from experience, they were the opposite of effective in motivating Arab warriors in the desert. But even before then, Lawrence thought military formalities were useless. Certainly for his own career they were. He became a competent combat soldier, but he learned it by first-hand observation, a self-directed apprenticeship rather than basic training in a Western-style army, where formalities were primary. Every drill sergeant repeats the tradition that automatic obedience to orders is the essence of being a soldier, and marching in step and being shouted at by NCOs is the way to learn it. For Lawrence, war was about the realities of dealing with the enemy and motivating one’s own side; formalities got in the way.

For Lawrence, military formalities were like aristocratic ladies’ receptions: a lot of showing off of rank, while deadening one’s perceptions and lowering one’s energy. One reason he became a charismatic leader was that he avoided energy-draining situations as much as possible. What remained was to find stimulating encounters that pumped up his energy.

He already was beginning to find them, among the intellectual leaders at Oxford, and among his fellow Arab experts in Cairo.

From Oxford outsider to archeological insider

Lawrence came from an economically comfortable middle-class family, but they were far from wealthy.

One advantage was that they lived in Oxford, and all the brothers won Oxford scholarships; they could not have afforded to attend the University otherwise. Lawrence did not go to a “public school” (i.e. the private boarding schools where the English elite acquired their networks), and instead attended Oxford city high school. In other words, Lawrence was just the kind of day-boy that aristocratic students wouldn’t bother to notice. But he did have a head start on his career. Already as a teen-ager he was an amateur archeologist, digging up pottery fragments and other artifacts from the ancient Roman period of Britain. Lawrence would take these to the Ashmolean Museum at the University, and became known to the curators. By the time he was an undergraduate, he was accompanying famous archeologists on digs in the Middle East. When he graduated in 1910 he was granted funds to carry out his own excavations.

The period before WWI, and again in the 1920s, was a Golden Age of archeology.

Research teams from universities in England, France, Germany and the United States competed to dig up remains of the ancient Biblical civilizations, and made sensational finds like Pharaohs’ untouched tombs. Like rival Great Powers, archeologists divided up sites from Egypt to Mesopotamia. Lawrence had a good four years in the field, eventually heading his own expedition on the upper Euphrates River at the border of what is now Syria and Iraq. (The same territory became the stronghold of the Islamic State militants in 2014, a little more than 100 years later.) Lawrence encountered French and German archeologists, consuls and railroad-builders, the whole face of contemporary imperialism. It was good for his self-esteem and his emotional energy.

Foreign archeologists and other important visitors traveled under official permission from the Ottoman Empire, which was severely in debt to the Western powers. Lawrence, like others, got an escort of Turkish soldiers to guard against robbers and local troubles. He carried a pistol and showed off.

Lawrence also found that he could get along well with the natives. He was in daily contact, hiring and firing, giving orders for the grunt work of digging and excavating. He became fluent in colloquial Arabic, learning from the ground up rather than in school. He had found a place where he could be a leader.

Learning to go semi-native

Lawrence became expert in Arabic manners. He observed the differences among urban townsmen (whom he didn’t like), rural peasants, and the nomadic Bedouin of the desert. When the war broke out, Lawrence as an intelligence officer had great success interrogating prisoners. He didn’t threaten them, but guessed where they came from by their dialect, and chatted about local personalities and gossip. This quickly earned their trust, and he heard all sorts of information from the point of view of low-level soldiers in the Turkish army. Lawrence got to be good at small talk with the natives, just the kind of sociable chit-chat that he avoided with his British compatriots. The difference was that chatting with the natives had a purpose-- it brought information, and it gave him an important status both among the people he talked to and among his colleagues in Intelligence. Chatting at polite English dinners just underlined his own marginal position.

Among the Arabs, chatting was energy-gaining; in English society, it was an energy-drainer.

What Lawrence was doing was going semi-native. No one ever mistook him for a native, except for unperceptive European outsiders. His accent and his complexion would give him away immediately. But being able to deal with Arabs of all ranks on a daily basis gave him a special status as a go-between, the advantages of which were recognized on both sides. Above all, he acquired the manners for it. Lawrence avoided the style of the arrogant colonial official shouting orders at the natives. He once commented about such an officer that any self-respecting servant would murder him. (Later, that officer was.) By the time he was leading Arab troops in the desert, visiting British officers noticed that Lawrence preferred to spend his spare time with the Arabs. Riding with Arab soldiers in the desert, Lawrence would spend endless hours as they did, repeating family genealogies, gossiping about old feuds, reciting Arab poems and songs.

Lawrence was not the only European to go semi-native. It was fairly common for officers in the hot Middle East to don at least part Arab dress, sometimes full robes, but often the head covering against the sun. A British officer in the Gallipoli campaign had extricated himself and his troops from being overrun in the trenches by calling out commands to attacking Turkish troops in their own language, successfully pretending to be a Turkish officer. A German consul at a diplomatic post in Iran acquired the reputation of “a German Lawrence” by recruiting an army of tribesmen to fight the British.

In short, not all European officers were arrogant colonialists cut off in their aloof superiority and their cocoon of upper-class manners. Lawrence worked with officers like Colonel Stewart Newcombe, who accompanied him into the desert to meet Faisal, and who later commanded his own guerrilla forces behind enemy lines.

British officers in Arab garb, 1917

The Arabist circle at Cairo GHQ

Lawrence was acquiring networks. When war broke out in 1914, he was soon recruited by his archeologist connections into intelligence work. There was already a circle of scholars and diplomats, skilled in Arabic language and affairs, attached to the headquarters of the High Commissioner in Egypt. Lawrence, 26 years old, was low in rank but well-positioned to be noticed for his skills as an Arabist.

The Arab Bureau became his support group and an important part of his identity.

They shared the view that the Arabs’ perspective must be taken into account. The Ottoman Empire was multi-ethnic, and the Young Turk reformers then in charge had a tricky ideological problem. On the one hand, they were trying to reform Turkey into a modern, European-style power, including a military alliance with Germany.

On the other hand, they posed as defenders of the Islamic world from Christian Europe, painting the English as imperialists. The Turks attempted to leverage the fact that the holy cities of Mecca and Medina were part of their territory, and maneuvered to have their war against England declared a jihad. To counter this, the Arab Bureau favored recruiting Arab tribes to rise against their Turkish overlords, the British supplying them with arms and support. On the ideological front, the Islamic message had to be countered by stirring up Arab national identity.

The trick was to offer some Arab leader a kingdom, under benevolent English tutelage: in short, to get them to opt for the liberal British Empire against the oppressive Turkish one.

Lawrence did not create the idea of an independence movement for the Arabs. He picked it up from his colleagues at the Arab Bureau, and did everything he could to further the plan. His own skills at getting along with the Arabs meshed with the grand strategy of his team.

The career accelerator: advantages of staff expert over line authority

Although Lawrence was an inexperienced civilian with a temporary rank in the Army, his connections through the Intelligence Section and the Arab Bureau led directly to the top. His boss in Intelligence, Clayton, became the Chief of Staff to Wingate, the Army chief confronting Turkish forces threatening Egypt from Palestine. His own Oxford professor Hogarth became head of the Arab Bureau. The Minister of War, Kitchener, was an army hero, famed for his victories in the Sudan, who had made Egypt his base before being promoted to London. The Turkish war was a side-show to the Western front, but the war in France was a costly stalemate, with little hope for a decisive victory. If a breakthrough was going to happen, it might well come through the weaker flank, Germany’s Turkish ally. Winston Churchill thought so, and as First Lord of the Admiralty had pushed the Gallipoli campaign to take Istanbul from behind. It proved another costly failure. Still, something might be started by an Arab revolt that would roll up the Turkish empire and shift the balance in Europe. At any rate, higher-ups were primed to listen and give support.

By early 1916, with everything going wrong in France and the Gallipoli campaign a disaster, Lawrence was given an important mission. Troops had mutinied in the Turkish army in Iraq. The British had sent an army to support them, but it advanced too far inland and was cut off.

The Turks counter-attacked and now there was a danger that the British force itself would be lost. From the Cairo point of view, the problem was made more intricate by inter-agency rivalry.

The British government in India-- which had its own semi-autonomous standing and its own Minister in the Cabinet-- regarded Iraq as part of their expanding sphere of influence; and most of the 10,000 soldiers surrounded there were from the Indian army, led by British officers. The Cairo and India offices did not trust each other, but now India was looking to Cairo to bail them out. Lawrence was sent with two other officers to investigate the situation and see what could be done. Lawrence sent confidential messages to his chief that the India staff in Iraq were incompetent and that the force could not be extricated before supplies and ammunition ran out. Indian army officers tried to evade blame for the disaster, which was being compared to the surrender at Yorktown that concluded the American Revolution.

Lawrence, as an outsider, was given authority to negotiate whatever terms could be reached with the Turkish commander. This was a strange situation: a young lieutenant sent on an intelligence-gathering mission from the British Middle Eastern GHQ was put in charge of negotiating the surrender terms of an Anglo-Indian army under the Government of India.

But Lawrence was a linguist and the agent-on-the-ground, while the India Office was content to let someone else take the disagreeable duty off their hands. The situation at the battlefield was hopeless, and Lawrence was unable to get more than assurances from the Turks that the British prisoners of war would receive decent treatment. He had been given dirty work to do, but his superiors knew where the blame lay. On his return to Cairo, he was promoted to captain, with a reputation as a clever agent who could make good decisions in the field, however eccentric he might be.

It was an advantage that Lawrence was a staff officer. He had no command over anything.

Had he held a line command, he would have been locked into a chain of command, controlling a small number of troops below him while carrying out orders from a series of officers above. But as a staff officer, he was attached to a collegial group of intelligence experts and strategists, where his ideas could go directly to the top. A military officer holds two different statuses: one is rank (Lawrence’s, until his recent promotion, was lieutenant), the other the position of command.

Lawrence had none of the latter, but that also meant he was not tied down to a specific position in the hierarchy. His working network trumped his rank, and made it an unimportant formality.

Already in the previous year (September 1915) Lawrence’s ideas had reached the top levels. With Gallipoli a disaster, Lawrence and his Intelligence Section boss Clayton worked out a plan to hit the Turks in a more vulnerable place: a naval attack to seize the port of Alexandretta in northern Syria. This would take advantage of Britain’s naval superiority and could be linked to a national uprising of Arabs against Turkey. The plan was approved by Kitchener and the top generals and admirals, and was favorably received by the War Cabinet. But the French Commander-in-Chief angrily rejected it: we are pouring out our blood against the Germans, and now you English want to take Syria, the land that should be France’s reward for her sacrifice! French-English rivalries over their respective empires, as well as their respective battlefronts, would continuously strain the Arab Bureau’s plans. For Lawrence and his colleagues, it was always a multi-sided struggle, and the Turks were not the only enemy.

Go-between opportunities: native revolts and indirect rule

Lawrence’s opportunity to act as go-between was ideal for increasing his freedom of action. We have already seen how distrust between the India and Cairo branches of the British Empire put Lawrence in the position to negotiate the end of the Iraq campaign with the enemy. Another opportunity was built into the British structure of indirect rule.

The technique was to find a figurehead ruler who would keep up native traditions while being directed behind the scenes by a British advisor controlling the military, treasury, and administration.

Lawrence in Arabia was sent to set up just such an arrangement. If he improvised and exceeded his authority, he would not be the first.

Much of the Empire had been created by British agents in far-away places who took the initiative, made ad hoc alliances, and led native troops in conquests that the British government would accept as a fait accompli; Clive in India during the 1740s and 1750s was the pattern for many others.

The power of negotiating agents was highest in multi-sided situations with many players, and especially where alliances were volatile, and fortunes of the players rose or fell depending on whether the coalition they joined did well or badly. This was the situation of the Ottoman Empire. But native revolts were inherently ambiguous; a local leader might just as well be playing for a better title, or for his tribe, his family, or just plain money. The plan of the Cairo Arabists was to detach the Arabs from the Turks and ally them with the British Empire.

But all sides could play that game; just who would come out on top was still to be decided.

In Persia when the war broke out, a German consul with good language skills, Wilhelm Wassmuss, on his own initiative recruited 3000 native tribesmen to revolt against the Persian puppet government, leading them in guerrilla warfare and wreaking havoc with the British sphere of influence.

In Arabia, all eyes were on Hussain, Sharif of Mecca, who refused to call a jihad against the British and took the holy city into revolt against the Turks.

But that was hardly the end of it. The Germans believed Hussain could be bribed back into loyalty. Hussain was in the favorable bargaining position of getting offers from all sides, and could sit back and weigh them while the bidding mounted. Sit there he did, satisfied to wait and see what developed, frustrating the British, who hoped he would raise an army to drive the Turks out of the entire Arab-speaking crescent.

On top of everything, there were the French. Since the British seemed to be accomplishing nothing, and the French didn’t trust them when it came to empire-building, they decided to steer their own Arab revolt with a pro-French figurehead. The French already had an enclave in Lebanon, and sent forces down the Red Sea to Jeddah, the port nearest to Mecca.

The French leader, Colonel Cadi, was even ahead of Lawrence at this point, wearing Arab robes and carrying a gold dagger, although he also annoyed the British by raising the French flag over Jeddah. He offered Hussain arms and money, and offered to bring in more troops to beef up Hussain’s forces (and keep their loyalty to the French). The Arabist faction in Cairo had to act. They sent a mission to Jeddah, including their best field agent, Lawrence.

Lawrence chooses the network bridge and shapes the Arab Revolt

Sociological theory of networks says that the best position to be in is where networks are separated, and you get to be the only bridge between them. Two different networks cut off from each other are distinct pools of information. If you can make the unique connection from your own network to the other, you can use information that no one else has. You are a step ahead of the competition; you can get the job, make the investment, publish the big news story, put together the invention and announce the discovery first. Ron Burt calls this the theory of structural holes; his research on business careers shows that the advantage goes to the person who becomes the bridge across the hole.
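
Burt’s idea can be made concrete with a toy computation. The sketch below is mine, not the essay’s: it assumes Python with the networkx library, and the node names are purely illustrative stand-ins for the two circles Lawrence straddled. It builds two tight clusters joined only through a single bridge node, shows that the bridge scores far higher on betweenness centrality than anyone else, and shows that removing the bridge leaves two disconnected pools of information.

# A minimal sketch of a "structural hole": two clusters with no direct ties,
# plus one node that bridges them. Names and library choice are assumptions.
import networkx as nx

G = nx.Graph()
# Cluster A: a headquarters staff network (hypothetical names)
G.add_edges_from([("Clayton", "Hogarth"), ("Hogarth", "Storrs"), ("Storrs", "Clayton")])
# Cluster B: a desert network (hypothetical names)
G.add_edges_from([("Faisal", "Auda"), ("Auda", "Nasir"), ("Nasir", "Faisal")])
# The bridge: the only ties connecting the two otherwise-separate clusters
G.add_edges_from([("Lawrence", "Clayton"), ("Lawrence", "Faisal")])

# Betweenness centrality counts how often a node lies on shortest paths between
# others; the unique bridge across the hole scores far higher than anyone else.
for name, score in sorted(nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{name:10s} {score:.2f}")

# Removing the bridge splits the network into two disconnected components.
G.remove_node("Lawrence")
print(nx.number_connected_components(G))  # prints 2

On this toy graph the bridge’s betweenness dwarfs everyone else’s, which is the formal counterpart of the advantage described above.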

But in the volatile situation of multiple possible alliances that Lawrence found himself in, it wasn’t just a matter of establishing a bridge to the other network.

In this fluid situation, it wasn’t clear who was the key person to contact on the other side. Most people thought it was Hussain. But when Lawrence arrived in Jeddah, he quickly concluded that Hussain was the wrong person to lead a revolution. * Hussain’s son Abdullah was in Jeddah to meet the British emissaries. But Lawrence sized him up too: Abdullah was too timid and wouldn’t make a move without his father; Lawrence observed that he repeatedly held up negotiations to call his father. Lawrence heard there was another son (Hussain had plenty of wives and children), camped with his forces in the desert. Lawrence got permission to go inland to visit this son, Faisal, and soon decided he was the man.

* Lawrence was right. Even after the Ottomans were defeated, Hussain did not end up as ruler of Arabia. A rival tribe led by Ibn Saud, which had been hanging in the background all the time, stepped in and took over the new state, now called Saudi Arabia.

Faisal was impressively fierce looking, a warrior, with the prestige and ambition to lead the revolt the British were looking for. His main problem was his father. Lawrence’s job was to insinuate that a connection with the British would be better than relying on Hussain.

Faisal may not have been convinced; like other Arab leaders, he thought that the British might lose (they were doing poorly in the World War up to this point), and there had been feelers from the other side. Lawrence’s task was to buck him up, to build a strong tie between themselves personally that would carry them along together in the joint enterprise. Of course there was a lot in it for Faisal; he had the promise of being set up as King of all the Arab-speaking people, from Arabia around to Iraq. But he had to have confidence in the British that it would really happen. And that meant having confidence in Lawrence, who was the point of contact.

Lawrence was building a bridge, all right, but it was more than just seeing where there was a hole in the network and making a connection across it. He had to choose who to connect with; and he had to make the connection strong enough so that it worked.

It wasn’t just a conduit of information but an alliance for joint action. Advantageous network ties are sometimes referred to as “weak ties,” because it is easier to get new information from someone you don’t know well, someone in a different social circle than your immediate friends who all know the same things. But Lawrence had to build the connection with Faisal into something that was emotionally strong. This is often referred to as “trust” or “social capital,” but the terms are too pallid. What Lawrence had to do was generate emotional energy: to energize his new contact, Faisal, with feelings of confidence, aggressiveness, initiative, to pick up the ball and run with it. And the mechanism of emotional energy, as I have explained elsewhere, is the art of energizing other people while simultaneously energizing yourself.

Lawrence building up Faisal was also building up himself. He couldn’t do one without the other. His networking skills put him on the path to becoming Lawrence of Arabia.

Once Lawrence became Faisal’s advisor, the process repeated itself. He didn’t rest on a static network.

Faisal had to become the leader of a movement, the symbolic point around which the Arabs would rally. Concretely, this meant recruiting tribes to join his army. Lawrence himself became the recruiting agent. Now, besides being a network link between Cairo and Faisal, Lawrence became the network link between Faisal and one tribe after another. The tribes were wary, waiting to see how things would fall out. Lawrence had to convince them. He did this in the name of Faisal. But he was the one who improvised, concocted schemes, found military targets they could handle, promised them spoils. He made promises for the future. To build confidence in the uprising, Lawrence had to invent a good deal as he went along.

And this was the way Lawrence operated with his British superiors as well. The further he got into the desert, and the more tribes he assembled, the more balls he had to keep in the air. What Lawrence and the Arab Revolt were doing was always a matter of propaganda and myth. This was not a personal trait of Lawrence’s, although his detractors later said he was a mendacious personality.

That wasn’t the way he came across early in his career, as an archeologist and as an intelligence expert at Cairo, where his reports were regarded as the most reliable information. It was the structural position as network bridge, out in the blowing sands of Arab politics, that made him blow with the winds. Better said: that made the winds appear to blow the way Lawrence told it. The bridge who builds networks out of shifting alliances has to become a whirlwind of emotional energy. Lawrence was on his way to becoming a charismatic leader.

Flows of network resources-- to the Arabs: money, weapons, information, impressiveness

What did Lawrence have to offer? First of all, money. His government knew that Arab loyalty wouldn’t be cheap, and they were ready to provide what was needed. Since the Arabs did not trust paper money, Lawrence carried gold coins from the British treasury in Cairo.

On campaign, he rode with gold in bags of a thousand pounds sterling. As his success in recruiting Bedouin tribes grew, his subsidy from the Foreign Office rose to 200,000 pounds per month-- about $10 million today. [Fromkin 223]

The money translated into the weapons and accoutrements of war. Lawrence could deliver thousands of camels in full harness, a sign of great wealth and power in the desert. Guns and ammunition were also provided; as the war progressed, machine guns, artillery and armored vehicles also arrived, with British military crews to operate them.

The British Empire was the wealthiest state in the world at the time; it could afford the expense.

Between 1914 and 1918 Britain spent as much on the war as all the other Allies combined. It was their pattern to use money rather than their own troops, where possible.

As Lawrence’s ad hoc army moved north towards the Turkish strongholds, he had complete authority to distribute gold to whichever tribes he chose. Ceremonially it was Faisal’s army, but it was Lawrence who built up network connections and kept them operative with his monthly deliveries of gold. Network theorists take note: what was passing through this bridge was not primarily information, but money. The most effective networks provide a flow of material payoffs, where the paymaster keeps his partners on the hook because they rely on him repeatedly.

The same principle operates in high finance.

True enough, Lawrence also had information to provide. The Arabs were amazed at the details Lawrence could tell them about the disposition of the Turkish army.

Lawrence was relying on the British intelligence service back in Cairo, with its far-flung agents, its electronic communications, and its success in breaking Turkish codes. But he didn’t explain this; his own support network was all the more impressive because invisible. In fact, Lawrence’s information was of little practical use to the Arab tribes, except as he organized them, paid them, armed them, and led them to fight. In that sense, his information was more theatrical display than a real exchange of advantages.

Remarkably, although Lawrence was carrying huge sums of gold coins in the desert, he was never robbed. This shows the respect in which he was held, even by tribes outside the alliance. His reputation preceded him, and when he arrived, his charisma did the rest.

Lawrence wore white robes, with a gold dagger and gold headpiece, given to him by Faisal. It was the costume of a sharif, although of course the Arabs recognized he was a European and not a Muslim religious leader. Faisal did not like Lawrence to appear among them wearing his British army khaki; since Lawrence acted as his deputy, Faisal provided him with the outward signs by which Arabs would immediately recognize him as a man of wealth and power. When Lawrence reported back to Cairo, however, he generally resumed his army uniform.

The film shows a famous scene when Lawrence arrives from the desert with news of his military triumph, shocking British officers by entering headquarters in his desert robes. But on the whole Lawrence played both ends of his network in the locally appropriate way; one could see immediately by his outfit which role he was playing.

Lawrence (left) reports in Cairo, March 1918

Flows of network resources -- to the British High Command: good news in bad times; cheap victories; support for the Arabist faction against French imperialism

Lawrence was not shy about approaching the highest British authorities with his reports of success among the Arabs. As soon as he reemerged from the desert in November 1916, after singling out Faisal as the leader of the Arab revolt, Lawrence went immediately to visit all the top British officials in theatre. Without specific orders, he went to Khartoum in the Sudan to confer with the pro-consul, then back to Cairo to inform the commanding general that an Arab army could be raised. He crafted his message to what they wanted to hear. No British troops would be needed; it wasn’t even desirable to send them into the Muslim holy land. All it would take was money, some weapons, and above all Lawrence’s connections in the desert. Almost immediately he was sent back as liaison to Faisal, carrying everything he asked for.

The time was auspicious for an enterprise like this. War on the Turkish front had been an expensive disaster; 250,000 troops lost at Gallipoli. War on the Western Front was even worse; a million casualties in the bloody stalemates at Verdun and the Somme during 1916 had convinced many top leaders that the war could not be won, that a peace would have to be negotiated. The British cabinet was in crisis; the Prime Minister was about to be thrown out. The Germans were winning in the East, and the Russians were soon in revolution and withdrawing from their Western alliance. Through this dark period-- from the point of view of British war aims-- Lawrence’s successes in the desert were the one bright spot.

In reality, for many months his successes were hazy and exaggerated. Not until June 1917 when his Arab forces took the port of Aqaba did Lawrence have something palpable to show. But he was his own best promoter, and for the British, the sole source of information about what was going on with this Arab army forming in the desert.

And of course Lawrence’s home base of supporters was cheered and energized. It was their program he was carrying out. The Arabists knew what the French were demanding in the Middle East; knew that a secret protocol had been signed in January 1916 between ministers, the Sykes-Picot agreement to divide up the Ottoman lands. Like Lawrence, the Arabists in Cairo GHQ were playing a double game. In Cairo, they pushed to make the promises to the French null and void. In the desert, Lawrence had to convince Faisal and the Arabs that the agreement with the French was nothing but Turkish propaganda: that the British really were going to carry through what Lawrence was promising them: an Arab kingdom of their own. Double game though it might be, Lawrence had a firm hold. His own networks, in Cairo and in the desert, believed in him; and he told them what to believe.

Lawrence’s interactional style

Lawrence made an unusual kind of charismatic leader. To his British colleagues, he was quiet, efficient, and to the point. His informality and lack of military manners marked him as eccentric, but his reports and advice were always welcome. He never threw his weight around: how could he? He was a relatively low-ranking officer. Everything depended on his off-the-books success as go-between.

Charisma in the desert: quiet, undomineering, steering the indecisive

With the Arabs, Lawrence adopted another style. He was a uniquely important person, the sole conduit to British gold, weapons, and promises of future rule. But although Lawrence was always in the center, he played it low-key. In Faisal’s presence, Lawrence treated him as the revered leader, giving him all expected deference and flattery. Faisal was actually a rather poor military strategist, and politically he was still wavering as to whether to ally with the British or let the Turks and Germans buy him out. Lawrence knew Faisal, like the rest of the Arabs, would only become enthusiastic for the British war effort when the bandwagon was growing and victory looked inevitable.

Lawrence’s first task was to strengthen Faisal’s prestige. Lawrence never disagreed with Faisal, never pointed out weaknesses in his ill-considered plans. An observer noted that Lawrence in conference with Faisal always spoke softly, “carefully choosing his words and then lapsing into long silences” [James 183]. No need to stand on ego; everyone knew who he was, and his magnificent clothes marked him out as someone they would have to listen to sooner or later. He made himself indispensable, Faisal’s halo shining ever brighter as Lawrence expanded the war-coalition in his name.

Away from Faisal, with the tribal leaders and with his own soldiers in the desert, Lawrence followed much the same style. He never gave orders; in a memo to his British colleagues, he told them that the European mind-set of a drill sergeant would backfire.

The very fact that Faisal had no skills at military tactics left a vacuum for Lawrence to step into. But he stepped quietly and indirectly. Meetings were free-flowing discussions. Arab tribes were rather egalitarian, inchoate democracies in the sense that it was hard for anyone to give orders; the chief got flattery and deference but rarely obedience. Lawrence would patiently let them talk as they launched divergent plans, flared with momentary enthusiasms and denials. In the end, when everyone had had their say and indecision remained floating in the air like smoke, Lawrence would make his suggestion for action and the meeting would end. Usually they rode with him.

It was the charisma of action more than the charisma of authority.

Like other charismatic leaders, Lawrence was a good micro-observer of individuals. He carefully studied the Arab leaders and soldiers, discerning which way they were tending. A master of timing, he sensed the moment when they would move.

Lawrence’s mastery of indirect control was put to the test in his final battles, when the Turkish front was collapsing in the north, and his Arab soldiers were capturing large numbers of prisoners. Flushed with victory over an emotionally dominated enemy, the Arabs often plundered and killed their prisoners. At times, Lawrence himself was able to put a stop to it. On one occasion, he prevented a massacre by calling the warriors to debate over what to do with 200 prisoners. Another British officer accompanying Lawrence made a speech, in Parliamentary style, that the Arabs thought was hilarious. The meeting broke up in good humor, the passion for killing having passed.

In violence, as in most situations of exerting power, emotional momentum is of the essence. Lawrence interrupted the timing and broke the emotional tone. Again, it was quiet charisma. Quiet, but not mysterious for a micro-sociologist. Charisma is mastery of the micro-interactional details.

Network speed: Lawrence as modernist

To many people, Lawrence was a romanticist, harking back to the past. He was anti-bureaucratic and disliked cities and crowds. He seemed like a wandering knight escaping the modern world in the desert. But Lawrence was ultra-modern in one respect: he liked modern technology, and especially the technology of speed.

Assigned to Faisal in the desert, Lawrence took a wireless apparatus with him, and a crew to operate it. He could communicate directly with headquarters, above all to guarantee the smooth flow of money and weapons. And he controlled the communications link; when he was off with his troops on camels in the desert, he alone could decide when to call in. Similarly with airplanes. As his Arab army grew larger and engaged the Turks more directly in Palestine and Trans-Jordan, Lawrence recognized the value of air strikes to hit fortified Turkish positions, and to give a psychological lift to his troops. Airplanes could land at improvised airstrips in the desert, bringing him ammunition and money. Lawrence made friends with the pilots, got them to carry out impromptu raids for him, and used planes to ferry him in and out of the desert. Lawrence could be an isolate, but only when he wanted to be. As his influence and reputation grew, he frequently made flying visits to Cairo. He worked his networks actively for maximal resources and support.

Lawrence as anti-modernist modernist?

It wasn’t such an unusual combination in the 1920s, when literary and political alienation from modernity became a prominent theme, indeed a hallmark of “the lost generation” after the war. Lawrence just had it a little earlier.

Aircraft were still quite new, and WWI greatly expanded their prominence. Lone pilots were heroes, both as fighters and as explorers. This was part of the attraction for Lawrence, but above all they gave him network speed.

Similarly with motor vehicles. Camels had their advantages, especially their ability to cover hundreds of miles without roads,

go several days without water, and of course without motor fuel. Where camels were the speedy way to move, Lawrence used them. But he also added automobiles and armored cars to his repertoire. When he entered Damascus triumphantly in October 1918, he was wearing his Arab robes, but riding in an armored car.

Lawrence’s career shows two crucial ingredients of becoming a charismatic leader. The first was the micro-interactional techniques that made him impressive to the people he dealt with and enabled him to recruit and expand his networks. The second was network speed: he rose above all potential rivals by finding the crucial bridge-position in the networks and exploiting it to the full. As he grew more powerful, he moved faster and faster, keeping connected with all the different parts of his far-flung networks: Arab politicians like Faisal, the multifarious tribal warriors that made up his army, the British army that supported him; his connections with the High Command in Cairo and increasingly on the battlefields of the Middle East; his connections with the Arab Bureau and through them to top politicians in London. At the height of his career, Lawrence became a demon of network speed. He was visible everywhere: here and then gone, reappearing unexpectedly. How fast the network operated was up to him.

The facade of Arab guerrilla war

The truth of the matter is that Lawrence’s Arab army was not very important. The main action in the Middle Eastern Theatre was a regular-style war near the coast, where the British army had 150,000 men guarding the Suez Canal against a Turkish army threatening Egypt. In 1916-17, Lawrence had a few hundred Arab warriors intermittently raiding the Turkish railroad connection down into the Arabian peninsula. These raids occupied the attention of a few thousand Turkish troops, but in fact the railroad was never broken. Turkish railroad troops were quick to repair the line, and they had plenty of materials stockpiled from pre-war plans to build more railway lines. Nevertheless British GHQ were happy with Lawrence’s periodic reports, and assured the War Office they were getting good returns on all the gold they were pouring into Arabia.

Although it was a military side-show, it was becoming a political snowball. Lawrence had seized his informal role as Faisal’s free-lance recruiting officer and was setting off a gathering avalanche of emotional energy, energizing the desert tribes and energizing himself at the center of it. Lawrence’s Arab raiders largely confined themselves to destroying trains and railroads. Lawrence himself carried the dynamite and set off the fuses. The desert tribes regarded these explosions as a great show, and enthusiastically rushed to the scene. Lawrence himself commented that whatever its military effect, “the noise of dynamite explosions we find everywhere the most effective propaganda measure possible.” [James 212]

The Arab troops were not effective in conventional warfare. Their style of fighting was that of tribal forces everywhere, ambushes and raids upon unsuspecting enemies. Faced with determined resistance, their traditional tactic was to retreat, using the mobility of their horses or camels to get away. Lawrence quickly understood this. Desert warriors would “attack like fiends,” shouting and firing in the air, especially when they spied booty like a derailed railroad car. [James 180] When the emotional momentum shifted, they would fade away just as quickly. The Turks had a disciplined modern army, accustomed to holding ranks and taking orders, and the Arab raiders were no match for them when it came to sustained firepower. Lawrence soon acquired the Arabs’ attitude about taking casualties; even a few men killed in a raid was considered too high a price, and a battle of attrition was out of the question.

Lawrence eventually saw that he needed propaganda victories more than anything else.

He began to shift his recruiting campaign among the desert tribes further and further north. Raiding the railroad to Medina, 500 miles down the Arabian peninsula, was becoming repetitious, and too far from the grand objective, which was to liberate the entire Arab-speaking crescent in Palestine and Syria.

The plan of the Arab Bureau had been to foment an Arab revolt behind enemy lines, but this never happened; local populations were too cautious, awaiting military events before they changed overt allegiance. Lawrence decided to push his recruitment campaign as Faisal’s agent northward out of Arabia.

The target became Aqaba. On today’s map, it is the bottom-most outpost of Israel, at the head of a narrow gulf forming the eastern side of the triangle of the Sinai desert-- the western side of the triangle being the Red Sea, with the Suez canal at the top. In 1917, there was no state of Israel, just a large British army east of Suez, facing off against a large Turkish army in Palestine. 

GHQ agreed that taking Aqaba would give the British an alternative line of advance, a back door into Palestine, Trans-Jordan, and Syria. But a naval assault would be costly. The Turks had big guns covering the water approaches. Troops could be landed on the beaches to take the guns; but this looked like a repeat of the Gallipoli campaign to take out the guns on the straits of the Dardanelles, which had ended in a disaster of trench warfare. While the planners wavered, Lawrence took matters into his own hands. Leading a small column of 36 men, he recruited among tribes in the northern desert, with his usual gold and his growing reputation. A 14-day circuitous journey through remote deserts brought his little army into Aqaba from the land side, where the Turks had no defenses, never expecting anyone would attack from that direction.

The Arab army took 600 prisoners and Lawrence immediately set off across the Sinai by camel to bring the news to Cairo. Four days later the British navy was in Aqaba with supplies and weapons.

It would become Lawrence’s new base of operations-- and not incidentally, for the flow of gold that he would use to recruit a far larger army, as many as 4000 tribesmen, for the advance into Syria.

Lawrence at maximal freedom of action

Lawrence’s arrival in Cairo in July 1917 with news of the conquest of Aqaba created a sensation. The Arabs were advancing out of Arabia, and now it was “Lawrence’s Arabs.” Full of his own emotional energy, Lawrence presented a new plan to the C-in-C of British forces in Egypt, General Allenby.

The regular army would advance along the coast; the Arab army would operate inland, distracting the Turks; the two armies would converge on the major objectives: Jerusalem, and then Damascus. Allenby agreed.

In reality, it always remained unclear just what the Arab army contributed. The size of its forces fluctuated from week to week, depending on local fortunes and Lawrence’s on-going recruitment. Nominally the chain of command was from Faisal, but Lawrence as liaison to Faisal had all the initiative. Lawrence was placed directly under Allenby’s command, but everything depended on when Lawrence would show up from the desert and what he would report.

Now that Lawrence was operating in closer conjunction with the main British army, the character of his own army began to change. It became a pseudo-Arab army, in part high-tech weapons and troops to operate them, in part camel warriors from the desert. Through the port at Aqaba came a stream of equipment, British officers, even regular army troops. “Lawrence’s Arab Army” acquired supporting forces in signals, supply, transport, armored cars, mobile artillery. Lawrence’s raiders were not just hitting railroads and isolated Turkish outposts, but confronting well-armed garrisons. It was not the kind of warfare the Arabs were good at; and the brunt of the serious fighting was carried out by the non-Arab forces and their heavy weapons.

Lawrence, although not a trained military officer, learned on the fly; soon he was a reasonably competent battlefield commander, who knew the limits of his Arab troops, managed forces held in reserve, and called in artillery support and RAF air strikes. Even so it was touch and go. The Arab army made slow progress in the latter half of 1917 and into 1918 along the inland flank of the Palestine front, attacking Turkish bases in what is now Jordan.

There were more British officers with Lawrence now, and they saw the weaknesses of the Arabs, calling them “fickle and feckless,” [James 290] and noting their inability to fight disciplined Turkish troops. At best, it was becoming a war of attrition against the Turks, a war where regular army forces were carrying most of the load.

Nevertheless, even as the character of the war was becoming less romantic, Lawrence’s legend was growing. Access through Aqaba and by plane allowed a considerable number of British officers and even civilians to visit him in the desert.

One of his friends, an aristocratic Member of Parliament, rode 300 miles with him on camels. The officers assigned to desert duty came to adopt Lawrence’s ways, dispensing with army regulations, growing beards and dressing in make-shift uniforms or even in Arab robes. They were charmed by Lawrence’s non-directive, egalitarian style and the aura of success that swirled around him as he disappeared and reappeared. In reality, there were many military failures on remote battle sites, but “a few famous successes made up for many unspectacular failures.” [James 290]

The British field staff with the Arab Army nicknamed themselves “Hedgehog” (from a complicated military acronym) and acquired the camaraderie of an exciting adventure. Like the retinue of a charismatic leader, those who had personally been around Lawrence became disciples propagating his legend.

Ordinary British enlisted men (what the Brits call “other ranks”) called him a “wizard” and were astounded by his informality with them.

Among the Arabs, Lawrence always made a dramatic appearance. He would ride up with 20 bodyguards, mounted on the best thoroughbred camels and splendid in coats of many colours, his approach greeted by excited shouts. It was the gold, of course, and the growing tide of victories; but more than that, Lawrence rode among them in an aura of charisma. Stories about him were circulating as more and more tribes joined in: his reputation for courage, his exploits behind enemy lines, the exciting things that were always happening around him.

Lawrence with Arab troops, 1917

It was during this period that an enterprising American newsman, Lowell Thomas, flew in to interview him. Thomas’s film would make Lawrence a transatlantic hero.

Lawrence’s emotional energy struggles and his quest for dangerous adventures

Lawrence’s time was becoming increasingly taken up with administration, as de facto commander of Faisal’s army with a large and crucial contingent of modern British forces. He often traveled by car or lorry rather than by camel, for greater speed and to keep up with the far-flung claims on his attention. He reported to headquarters by plane and boat. Nevertheless, at this very time, Lawrence became even more adventurous, going off on missions on his own.

Although he could have stayed back in his role as commander-- given his rank and responsibilities, should have stayed back-- Lawrence led train attacks in person. He still set dynamite fuses himself, was grazed by bullets, and on occasion was knocked unconscious. He reconnoitered and raided with small groups far behind enemy lines, around the expected line of advance towards Damascus. Alone except for his Arab servant boy, disguised in robes borrowed from gypsy prostitutes, Lawrence followed a group of prostitutes into Amman (now capital of Jordan) to look around; stopped by Turkish soldiers, he was barely able to escape.

On the way back, his servant was badly wounded by a Turkish patrol, and Lawrence finished him off with a pistol so that he wouldn’t fall into Turkish hands.

What was going on? First of all, how was he able to do it? Lawrence was in the extremely unusual position of being able to free-lance anywhere he wanted. He still had no official position or command responsibilities; it was all in his informal network, and he could go anywhere in it at any time. And he had all the resources he needed to move anywhere. He could travel by camel, with his magnificent escort, or by himself in disguise. He had a reputation for popping up anywhere, and he lived up to it. He could travel by car, order a plane, or hitch a ride with a pilot who happened to land nearby. At the British end, this was what they were used to. His visits were always welcome, upbeat; although he played his role more quietly there (and switched back into his khaki uniform), he had an aura with the British too, of military advances out beyond the horizon towards their common goal. Then he was off again.

Second question: why did he risk himself so much? Just at the time when he was becoming more successful, when most careers settle into greater responsibility and organizational routine, Lawrence was becoming reckless.

One reason was that in fact things were not going well everywhere in his war zone.

During the period from his triumph at Aqaba in July 1917, until the great offensive launched by Allenby to break through the Turkish lines in September 1918, results with the Arab Army in the desert were spotty. This was covered up by his aura, but Lawrence himself, as a careful observer, certainly knew that his Arab troops often failed against the Turks, especially when he wasn’t there to lead them personally. So he took advantage of his enhanced mobility and moved rapidly from one place to another, always initiating something, always generating some action.

Why would he push the envelope, disappearing for weeks at a time, making huge journeys in the desert, scouting out Turkish strongholds as if he were a low-level native lookout?

A clue is in conversations he had with a British companion on one of his desert rides.

“... as he told me last night, each time he starts out on these stunts, he simply hates it for two or three days until movement, action and the glory of scenery and nature catch hold of him and make him well again.” [James 198]

His emotional energy was not always high; it fluctuated. The down times came when he had to think about the political web he was in: the strain of keeping up his enthusiasm with Arab leaders like Faisal, hiding his doubts about what the outcome of the war eventually would be, hiding his doubts about the equivocal role he was playing in it. As the end came more clearly into view, the strain grew stronger.

Lawrence always had an escape: action. Out at the forward edge, his Arab followers pumped him up with charisma.

It was his emotional-energy magnet.

The down times came in the moments of transition, when he had to move from his British connections back to his Arab network. As he related, there would be a bad two or three days, feeling the strain of his double life, then the flow of being the cutting edge of action got him energized again.

Lawrence became an action junkie, hooked on danger. It was his way of avoiding the fate of successful leaders, of being trapped upstairs in the formality and the hypocrisy of power.

It fed his personal charisma even more.

The height of ambition, the height of ambiguity

Lawrence by now was acting contrary to official British policy, and misrepresenting that policy to Faisal and the Arabs. Why didn’t the British rein him in? Because the policy that embarrassed the British with their Arab allies was their agreement to divide up the Middle East with the French. Lawrence as liaison to Faisal had to keep assuring him that the Arabs would get the independent kingdom promised them.

Presumably Lawrence knew better, but the only way he could keep operating with the Arabs was to deny that an agreement with the French existed. One might call this the dirty world of foreign agents and secret deals; the British needed an agent whom they could keep at arm’s length. The British probably knew that Lawrence was out of their control, but this was in their best interest. Whatever Lawrence said or promised could be denied; just as, out in the desert, whatever the British diplomats had said could be denied. The arm’s-length structure was needed by both links in the chain.

Whoever plays the bridge between far-flung-- and dynamic-- networks has vast freedom of action; but also, if there are strong feelings of loyalty, much psychological strain.

The regular British army along the coast advanced in slow phases. In December 1917, Allenby pushed back the Turks in southern Palestine and took Jerusalem.

In September 1918, a long-awaited offensive routed the Turks and sent them retreating in disorder across the northern hills and into Damascus. The Arab Army’s part of the plan was to cut off Turkish railroad links, and trap the Turkish army in a bottleneck. Lawrence’s troops accomplished their part well enough, although the deciding factor was the massive artillery and aerial bombing Allenby had assembled.

The Turks fell back in disarray, just the kind of target the Arabs were good at attacking, and there was a great deal of looting and massacring of wounded and retreating troops.

Damascus, according to diplomatic agreement, was slated for the French. They had a small battlefield contingent, and a colonial base in Lebanon, on the coast west of Damascus. Nevertheless, Lawrence sensed an opportunity for an Arab coup. He sent for Faisal to hurry to the front. As Allenby’s liaison, Lawrence was in a position to know exactly what was happening. He had hoped the Arabs would get to Damascus first, and get the credit for liberating it; and this would be the prelude to setting up Faisal as King. But Australian troops from the British command got to Damascus first; finding the city empty of enemy forces, they continued on through, chasing the fleeing Turks.

Next morning, Lawrence showed up at the Australian division headquarters and heard that Damascus was undefended. He immediately got an armored car and had himself driven into Damascus. At the town hall, there was pandemonium as rival factions argued over who was the legitimate local government now that the Ottomans had gone. Using all his charisma, backed up by armed force, Lawrence threw his support behind a local supporter of Faisal’s father. When British and French forces arrived, Lawrence presented them with a fait accompli: a governor in favor of the Arab Bureau’s plan, whom he represented as having been elected by the will of the citizens. For a moment at least, the plan had succeeded.

Game’s up

Next day Allenby arrived and official reality set in. The diplomatic agreement still held. Faisal would not get what he had been promised. As a symbolic token, Arab troops could lead the parade into Damascus, but the Arab governor would be under French command. Lawrence as liaison to Faisal would henceforth report to the French. Lawrence immediately asked for leave to go back to England. The request was granted, and his war was over.

His network bridge was broken.

Reputational networks and the travails of celebrity

Although Lawrence was on the losing side of the diplomatic struggle, his reputation was made. If fame was what he was seeking, he had it. His superiors in Egypt and in the Army never held anything against him, and lauded his performance (which implies that they applauded his role as ambiguous go-between). Back in England, the British elite treated Lawrence as a man to know. His pro-Arab and anti-French stance had much sympathy at home, but what could be done? Lawrence attended the Versailles peace conference, continuing to act as Faisal’s advocate and joining in his entourage. To no avail. Lawrence was not the only sophisticated participant at the Versailles treaty conference (others included Max Weber and John Maynard Keynes) who thought its results disastrous. To get an idea of the tone of the conference, consider that the French Prime Minister, Clemenceau, proposed to fight a duel with the British PM, Lloyd George, over the Arab/Syria issue. [Fromkin 289] The Arabs lost again. Lawrence was photographed again, wearing his Arab robes in Versailles.

In 1921 Lawrence was at another disastrous treaty conference, the Middle Eastern settlement made in Cairo, which drew the boundaries of the modern Middle East that have been objects of contention ever since. Lawrence now attended as a confidant of Winston Churchill. They had their picture taken in front of the Great Pyramid, just two camels away from each other, along with Gertrude Bell, another friend of Lawrence from the Arabist circle. Lawrence is back in civilian clothes, disguised in the black suit of a minor civil servant.

From left: Churchill, Gertrude Bell, Lawrence, 1921

From the time he had arrived back in England in late 1918, Lawrence was a popular media hero.

The American newsman Lowell Thomas had been sent to Europe to stir up enthusiasm when the US entered the war in April 1917.

Finding nothing encouraging on the Western front, he went on to the Middle East and heard about Lawrence’s exploits. In early 1918, Thomas filmed interviews with Lawrence in his Arab robes.

Movie theatres showing full-length features were just coming into popularity; newsreels were being invented. Films of Lawrence were shown in the US and Britain in spring 1918.

Next year, Thomas launched a two-hour spectacular in a New York theatre, including film of the Palestine campaign accompanied by a symphony orchestra (it was the time of silent movies). Thomas himself gave the narration, playing up his discovery of Lawrence in the desert. It was the launch of his own career as well; Lowell Thomas went on to become the first of a new breed of media impresarios, forerunner of the TV anchors and interview hosts from that time until today.

Thomas took his show to London, where it ran for six months in 1919-1920.

Lawrence, Lowell Thomas,  1918

All this was just prior to the frenzy for all things Arab, reaching its height with Valentino’s 1921 film, The Sheik. For years during the 1920s, American college boys at dances referred to themselves as “sheiks.” 

Rudolph Valentino, The Sheik, 1921

Modern-style publicity was creating a new phenomenon, the celebrity: not merely someone in public life, or the old-fashioned nobility taking deference as a matter of course. The celebrity attracted the attention of crowds and fans, not because s/he was doing anything, but because of the self-reinforcing effects of media attention.

Lawrence was one of the first celebrities in the modern sense; and he quickly found he didn’t like it. Fame and recognition among the Arabs in the desert was one thing; there he wasn’t a passive recipient of curiosity, but a leader of action. The Arabs who shouted when he approached, surrounded by his bodyguard on camels, energized him. But being recognized on the street, asked for autographs, and invited to dinner parties didn’t energize him; he was just a passive object for others’ curiosity.

He began adopting disguises, seeking shelter in country hide-outs, using assumed names.

Being a recluse wasn't what he wanted; what he wanted was success on his own terms. He had always had literary ambitions, and now he had an epic topic to write about. His personal memoir of the desert campaign, Seven Pillars of Wisdom, was privately circulated in 1922, and published in a large edition in 1927. It is a beautifully written book, capturing the sight and feel of the desert and the personalities of the people. It tells Lawrence’s adventures with self-deprecating modesty, and concludes on the ironic note of the prize of Arab freedom taken away from the Arabs at the end. There is no bragging and no rhetoric, but Lawrence is always at the center. What is omitted is crucial for the actual pattern of success: there is no mention of the gold Lawrence used to buy loyalties in the desert, and little mention of the high-tech weapons Lawrence increasingly relied upon. The narrative is about his movements with his Arab army, so that an uninformed reader would scarcely know that Allenby’s regular army carried most of the fighting and broke open the way to Damascus.

It was another network triumph for Lawrence as his manuscript circulated among the literary elite. He became friends with its aging patriarch, George Bernard Shaw, whose name Lawrence used as one of his pseudonyms, T.E. Shaw. To gather material for another book, as well as to escape public attention, Lawrence enlisted in the RAF in 1922 under an assumed name. In effect, he was seeking further adventures in a foreign land; but now it was in the underclass of ordinary British soldiers, who almost never came into intimate contact with the officer class in which Lawrence moved. The book drawing on his experiences, called The Mint, is an account of the rough, authoritarian military training camp. Lawrence himself thought it was a better book than Seven Pillars of Wisdom, but it was never popular. Because it was virtually the first book to record the obscene language of ordinary working men, it was regarded as offensive and never published in his lifetime. His adventure in the social-class underground could not match the level of his adventure as network bridge and charismatic leader in Arabia.

Of course. The moving structures that supported his charisma were not replaced.

Not surprisingly, in the years after the war Lawrence continued his quest for the latest technologies of speed. He became enamoured of high-speed motor boats, which he tested for the navy. He joined the RAF to see the world of planes from the mechanic’s point of view. He liked fast motorcycles. He was riding one of them in 1935 when he was killed in an accident. He was 46 years old, recently discharged from the RAF, his action network behind him.

Charisma without speech-making

We generally think of charismatic leaders as great speech-makers: Lincoln, Martin Luther King, Churchill, and even on the dark side of the force, Adolf Hitler.

For most of them, what is best-remembered are the speeches they made.

But if the key to charisma is generating high emotional energy in masses of people and rallying them around oneself, Lawrence shows there is another way to do it.

A charismatic leader energizes other people, and is thereby energized in turn. Lawrence did this by talking quietly, observing silently, never giving orders, biding his time and then making suggestions that others accepted. Of course there were other reasons why he was in a position to get attention even with his quiet style: his unique network bridge, where both ends depended on him alone to give them something they really wanted; his success in delivering things: gold, hope for future plans, a growing coalition, victory. His network made Lawrence.

But also vice versa.

One lesson of Lawrence’s career is that networks are most powerful when they are dynamic. Static networks don’t make careers; they certainly don’t generate charisma. Networks grow and contract; and the attracting force that unites them best is emotional energy. Lawrence had the micro-interactional style to generate EE, and thus to grow his networks with enthusiasm. He always had the sense to avoid networks where he lost EE.

Perhaps we should say, he had that sense most of the time, until the moment he left Damascus in political defeat.

After that, he kept looking for new networks, but the flashier ones did little to energize him further; and the more adventurous ones he tried to substitute just brought him down.

His life was like an experiment demonstrating the power of networks, high and low.

 

How charismatic leaders build their careers in war, politics, or business:

Randall Collins and Maren McConnell. 2015. Napoleon Never Slept: How Great Leaders Leverage Social Energy. Published as an e-book at Maren.ink and Amazon.


 

References

T. E. Lawrence. 1926. Seven Pillars of Wisdom.

T. E. Lawrence. (posthumously published 1973) The Mint.

David Fromkin. 1989. A Peace to End All Peace: The Fall of the Ottoman Empire and the Creation of the Modern Middle East.

Lawrence James. 2008. The Golden Warrior: The Life and Legend of Lawrence of Arabia.

Max Boot. 2013. Invisible Armies: A History of Guerrilla Warfare from Ancient Times to the Present.

Bruce Kuklick. 1996. Puritans in Babylon: The Ancient Near East and American Intellectuals, 1880-1930.

[on the golden age of archeological exploration]

 

on advantages in networks:

Ronald Burt. 1992. Structural Holes: The Social Structure of Competition.

John Levi Martin. 2009. Social Structures.

Randall Collins and Mauro Guillén. 2012. “Mutual halo effects in cultural production networks.” Theory and Society 41.

WHY DOES SEXUAL REPRESSION EXIST?

Freud’s classic argument is that sex is a strong human drive, active from earliest childhood, but it becomes repressed by an internal mechanism. You repress yourself, eliminating consciousness of desire, driving it from your thoughts as well as your behavior. But it comes out anyway, in dreams, in symptoms, in displacements. For Freud, the world is pervasively sexualized, but in symbolic form, via transformations of sexual drive onto targets seemingly far removed from its original erotic objects.

Freud wrote at a turning point now 100 years behind us. He psychoanalyzed his patients at the end of the Victorian era, making his discoveries between the 1890s (“the gay 90s” in the original sense of gay, which at the time meant heterosexual pleasures) and World War I. Sex was coming out of the closet-- better said, out of the corset-- and Vienna was the leading center of the action. It was the first period of the modern sexual revolution. Official prudery was being challenged; the heavy layers of clothing were starting to come off, and people were not only starting to talk about sex (and to paint it) but to act on it more overtly.

Egon Schiele, 1917

It is ironic that Freud should have formulated a theory of sexual repression at just this time. In fact his patients were caught in the gap: repressed persons in a world where a heightened sense of sexuality was rising around them.

Freud’s first patient cured, Anna O.
Since then there have been a series of sexual revolutions. The “roaring twenties”-- the “Jazz Age,” in whose original slang jazz was a verb meaning to have sex (“he tried to jazz me,” a young woman says in a Faulkner novel). The sixties counterculture, famous for sexual communes-- in fact not very many, and all of them short-lived; the counterculture had more long-lasting effects in the shift to cohabiting without getting married, a shock wave around 1968-71 over what used to be called “shacking up” or “living in sin,” which then quickly became accepted almost everywhere. Cohabitation was soon followed by acceptance of what used to be called “illegitimacy,” which soon changed to “out-of-wedlock childbirth,” and now is completely normalized.

The outburst of pornographic magazines and films in the 1970s, going mainstream and eventually becoming the early cutting edge of video and the Internet. The homosexual liberation movement that achieved public legitimacy in the scandal of the AIDS epidemic of the 1980s, and transformed proper terminology into the elaborations of LGBT. Battles still take place, most recently over gay marriage. The configuration has repeatedly replayed since Freud’s time-- a struggle between one form or another of sexual repression and victorious movements of sexual liberation.

Can we say sexual repression still exists?

That is, exists in the large part of society which is liberated, outside of a few backward enclaves who have lost every battle and appear likely to lose the rest of them?

What Would Totally Liberated Sex Look Like?

Let us define complete sexual liberation as a condition where anyone can say or do anything sexual that they want.

Would there be any limitations?

Consider our own sexually liberated times. Can you go up to any person at all and say, I’d like to have sex with you?

Circumstances where one cannot say this are limitations on sexual speech. As for sexual action, consider its least intrusive form, touching. Can you touch any person you feel attracted to?

The quick answer to both questions is: no. There are very definite limitations on the circumstances in which sexual speech and action are allowed.

The phrase “between consenting adults” is widely used. But this implies a lot more sexual liberty than actually exists, even among the most liberated. “Consenting adults” applies more to action than to words; and many forms of sexual speech are strongly sanctioned-- so that even approaching the topic is socially prohibited in most situations. This doesn’t mean talking philosophically about consent.

What is prohibited is requesting personal, particular consent.

The title, “Why Does Sexual Repression Exist?” is not a rhetorical question. It is not a way of saying “Isn’t it absurd for us not to express our sexual desires any time we want to?”

It is a real question, a sociological question that asks what causes people to limit expression of sexual desire and sexual behavior.

It is a very answerable question. Sexual behavior and talk have varied a great deal historically, across societies and within any particular one. There is ample evidence showing what determines which kinds of sex are and are not allowed.

There may be a tendency to think that the answers are obvious, at least for our own enlightened times. Obviously, certain categories of persons have to be protected; certain situations are just not appropriate.

Why do we think this? It is more revealing to distance ourselves from our contemporary point of view. Not very far in the past, people assumed different standards. It is safe to predict that in the future, people will look back at us-- including the most liberated-- with scorn and moral condemnation, just as we look backwards at our own predecessors. The power of comparative sociology is to rise above our historical self-centeredness, and to show what makes people feel this is right and that is wrong about sex.

Does “Consenting Adults” explain the contemporary sexual standard?

Social rules are embedded in tacit understandings as to when a rule is to be invoked. Freud would have called this unconscious; Durkheim called it pre-contractual solidarity. There are many persons and many situations where one cannot ask for consent, or even bring up the topic.

By way of generating some sense of the social conditions, ask yourself: how many people can you ask to have sex with you? Think of the variations of how to say it to particular persons: politely, indirectly, blatantly, using slang, using obscenity: “Excuse me ma’am (or sir), would you like to fuck?” What would happen if you said this? In some situations today one would be accused of using inappropriate language, in others, of sexual harassment. In the era before WWII, you could get your face slapped.

I have appealed to your imagination of real-life occasions because in the vast number of situations where someone has a sexual interest in someone else, it does not get expressed at all. David Grazian’s book, On the Make: The Hustle of Urban Nightlife, shows what happens when young adults are out in the scenes where they are most explicitly looking for sex. Although both the boys and the girls* talk about what they are aiming for and what happened, they do this among themselves before they go out and after they come back. When they are on the front lines in the night club, virtually no one ever says anything like, let’s hook up. They are playing a game of pickup, but at a very distant level. Most of the excitement is in the tease and innuendo, and sexual scores are so rare that the boys end up bragging about getting a girl’s phone number, and the girls laugh about giving out fake numbers. Like most ethnographies, Grazian’s cuts through the ideal and shows the social realities of how the atmosphere of sexual excitement is constructed, like putting on a performance in a theatre.

* “girls” is how they refer to themselves. [See also Armstrong and Hamilton.]

Why doesn’t this scene, about as blatantly sexual as they come, have more real sex or at least more sexual talk? This means asking about the sociological processes that repress sex. We will come to the list of causes shortly; here they have little to do with the kind of sexual repression which concerned Freud.

Compare the few places where expressing one’s sexual desire face-to-face with its target is actually done. One is in front of fraternity houses, on heightened occasions like party-night afternoons, or at the beginning of term when new student cohorts arrive. There can be a lot of raucous hooting at passing women, commenting both positively and negatively on their sexual desirability. Notice two things: This is sexual expression, but without a serious aim to get consent from any particular woman; in fact, the impersonal and collective nature of the hooting makes this impossible. Secondly, even the frat boys’ tactic of strength-in-numbers-and-anonymity does not necessarily shield them from the negative reaction they would likely get if one of them said the same things to an individual woman.

As social movements and administrative organization have mobilized, fraternities hooting at women are sometimes sanctioned or even closed down (as happened, for instance, at San Diego State University in 2014). The sociological pattern holds: expressing sexual desire is limited even in the most liberated society; it can be gotten away with more if it is ostensibly not really serious, or is carried out at a safe distance. Today, the most blatant sexual talk is telephone sex [Flowers 1998]. Sexual expression also gives rise to counter-movements. The sociological pattern is not sexual expression alone, but sexual conflict.

Inside the fraternity house, the situation is not too dissimilar from Grazian’s description of downtown nightclubs. [Sanday, Fraternity Gang Rape; Armstrong and Hamilton, Paying for the Party] There is a lot of bragging talk about sex within the male-bonded group; but only a small sexual elite actually gets very much action. At parties, disguised by loud music, semi-darkness, and plentiful alcohol, the reality is that most of the frat boys are on the sidelines watching their sexual heroes. So, are these heroes the ones who boldly ask for sex? Even here, the conversations are more tacit and oblique than blatant; the most successful approach is by sheer body language, dancing in high sync, laughing together. The less formal and coherent the talk, the more likely it is to build the mutual mood that may lead to sex. Uninhibited extroversion is favored by the scene, not the rational-legal language of discussion and consent.

The prevailing social pattern when talking person-to-person about possible sex is that explicit sexual desire is never directly expressed, until the situation has evolved non-verbally to the proper point; any violation of this tacit rule gets a negative reaction.

The main exception is in commercial sex work, talk between prostitutes and prospective clients (Elizabeth Bernstein, Temporarily Yours). But even here, the initial steps of negotiation are surprisingly round-about. Sex work illustrates a pattern found more generally in social stratification: low-class street prostitutes are most blatant in verbally offering sex; high-class prostitutes or “escorts” play out the girlfriend experience (GFE in advertisements), minimizing explicit negotiations in order to set a non-commercial atmosphere.

The most explicit sexual talk is in scenes dominated by males, not only because they control violence but because they are at the center of the carousing, “where the action is.”

This is the pattern in unpoliced lower-class black inner-city ghettos, where groups of dominant males-- both teenage gang members and some adult men-- fling sexual banter at girls and attractive women, and humiliate those who object. [Jody Miller, Getting Played]

This fits the sociological pattern of more blatant sexual talk lower in the class hierarchy. But it isn't the whole explanation, since upper-middle class fraternities resemble lower-class gangs, except that they have the money for their own club house, and do not need to engage in street crime for income. Whether in tribal societies or enclaves in modern ones, a hyper-sexualized pattern of blatantly exploiting women occurs where the power center is a "men's house" that is also the ceremonial center of the community.

In the gender-integrated middle and upper classes, much sexual talk is tabooed even when it has nothing to do with consent.

Circumstances are rare in which persons can say directly to another: “How big is your penis?” or “You’ve got really great tits.” In the talk regime of the liberal late 20th/early 21st century, such talk would be considered over the top, if not labeled “politically incorrect” or “sexist,” or would actually result in formal charges.

Socially constructed age limits

The other part of “consenting adults” is the age limitation. We take this for granted, as customs generally are taken. But it is palpably constructed by social regimes, as we can easily see by comparing laws and customs in different historical periods. In this area, instead of a post-Freudian trend of increasing sexual liberation, sexual repressiveness has historically increased.

The strongest contrast is sex in tribal societies. One of the most detailed accounts is Malinowski’s ethnography of the Trobriand Islands north of Australia. The Trobrianders have almost the opposite of the official American position (that adults are sexual and children are sexless, unless adults impose sex upon them). In this tribe, childhood is the time for unrestrained sex, whereas adults are expected to settle down and devote themselves to work.

Both girls and boys flaunt their sexual activity; both sexes go off on group sex-seeking expeditions; both show off marks of love-bites and scratches on the skin as proud tokens of sexual passion. This happens almost exclusively among what modern Americans legally define as children.

Among the Sambia tribe of Papua New Guinea, there is a homosexual version (Herdt, Guardians of the Flutes). The normal life-course pattern is for an adolescent boy to become the sexual partner of a young man, the older initiating the younger into sex simultaneously with the mysteries of the warrior men’s house. It is not lifetime homosexuality but a stage of age-graded promotion. The boy is shown the sacred flutes and also taught to suck the man’s penis and swallow the sperm, religiously interpreted as giving manhood. As William Graham Sumner said, the social mores can make anything legitimate.

A similar pattern existed in ancient Greece. Young men of the upper classes, in the long wait for marriage to upper-class women (who were snapped up very young by older men), had love affairs with boys of their same social class-- preferably adolescents before the beard and pubic hair had grown. These were genuinely passionate love affairs that would be recognizable today, except that young males rather than females set the ideal of bodily beauty. The ideal was eventually transferred to the female body, once Greek and Roman women became more emancipated. [Dover, Greek Homosexuality; Keuls, Reign of the Phallus]

This is a striking reversal of modern homosexuality, which is legitimate among adults but harshly penalized across age lines. For the ancient Greeks, homosexual relations among adult men were considered ludicrous; and anal sex-- the predominant form of modern male homosexual practice [Laumann et al. 1994]-- was regarded only as a humiliating punishment.

What can we get out of these comparisons, other than that societies change and can decree anything right or wrong?

Sex between adults and “children”-- generally defined by the cutting point of age 18-- is now labeled child sexual abuse. It is the most stigmatized of contemporary crimes. The rationalized argument is that the child has no power of consent, and that any adult must automatically be considered as taking advantage of them. This is a legal judgment, not a sociological one.

Where sociological evidence does exist, it is for the pattern that children who have had sex with adults-- victims of child abuse-- have many more life problems. [Finkelhor 1986] They have more drug use and alcohol excess; more unstable marriages and sexual partnerships; they are more likely to become victims of spousal abuse and violence, and to have more trouble with education and jobs. The data analyses do not always control very well for confounding factors, such as lower social class and broken family structure; but on the whole, one can make out a sociological case that child sexual abuse is a very bad thing in its consequences.

Social shame causes the damage

There is one major problem: what is the causal mechanism? One might assume that a child who has sex with an adult feels traumatized; but this is not always the case. Sometimes the adult uses force, but in many cases the adult is a parent or close relative, often when the opposite-sex spouse is absent, and the child gets an early sense of intimacy and initiation into adult sexual privileges.

The mechanism that causes the trauma, most typically, is shame. Shame produced by the reaction of the larger society, shame transmitted to the child by having to keep the sexual relationship secret. Shame and humiliation when the case comes to official notice; even bureaucratic policies to keep such cases secret (as far as the child’s identity is concerned) have the effect of segregating the child in an atmosphere where the secrecy is itself a mark of shame. This is shown in studies of juvenile facilities where children in such cases are segregated; and where a culture of precocious sexuality is further enhanced, since the one thing these kids have over others is more sexual experience, and they share it among their peers.

Social labeling theory has been applied to explaining mental illness, retardation, and numerous other things. The theory has not always held up when controls are applied to the data. But in the case of the life-long effects of being labeled a victim of child abuse, the labeling process is by far the strongest explanation of the debilitating consequences.

As the social psychologist/family therapists Thomas Scheff and Suzanne Retzinger have shown, shame is the master motive of social control. Even tiny episodes of shame from broken attunement in a conversation bring hurt reactions; and if the shame is not overtly expressed and resolved, but hidden away (by embarrassment, by shame about being ashamed), it comes out in long-term destructive rage, against self and others. In my own theory of successful and unsuccessful Interaction Rituals [Collins 2004], disattunement and its concomitant shame lead to difficult social relationships, to loss of emotional energy, and to a cycle of depression, passiveness, and interactional failure.

Via the shame mechanism, it is possible to explain why many sexual relationships between adults and children result in very negative life consequences for the children as they grow up.

Many sexual relationships, not all of them. We know that because of societies like tribal New Guinea and ancient Greece, where adult-child sex was honorable and celebrated, not regarded as shameful at all. In those societies, it had no negative consequences.

The purpose of this discussion is not to make sexual policy; but here is a point where sociological theory suggests what is being done wrong, and what could be done to solve it. The negative consequences of adult/child sex could be eliminated if society stopped treating it as shameful.

Who Can Touch Who When?

The formula “between consenting adults” has similar limitations in explaining the tacit social norms about touching another person.

Consider the range of touches that exist in our society, whether commonplace, restricted, or forbidden:

-- shaking hands

-- patting on the shoulder (usually clothed)

-- kisses of all varieties: air kisses, cheek kisses, gentleman-kissing-lady’s-hand, kissing the Pope’s ring, lip kisses, tongue kisses, tongue-to-genital kisses

Notice, apropos of consenting adults, hardly anyone ever asks, “Can I kiss you?” (although social consent is explicit in the traditional wedding ceremony, with its climax “You may now kiss”). When persons kiss, and what kind of kiss it is, is a tacit, unspoken part of a particular kind of social relationship. If it is the wrong social relationship, or the wrong kiss, there are repercussions. This is a sociological rule for all forms of touching.

Similarly with hugs. The style has palpably changed in American society, with a big shift in the 1970s towards much more hugging-- not necessarily spontaneous, because it has become so strongly expected in particular situations. Take a look at the polite hugs which are now de rigueur in social gatherings of the higher classes-- hugs around the shoulders, leaning forward, avoiding full body contact. In the 1940s, an enthusiastic hug consisted in grasping the other person’s arms with both hands, above the elbows-- more enthusiasm shown by more body contact, within the limitations of the time. The ritual of sports celebrations (victories; home-runs crossing the plate) has shifted from merely verbal, to hand-shaking, to the now-required full-body pile-on.

It is notable that body contact among American men is more extensive the more violent it is; swinging high-fives, forearm smashes, chest bumps, pile-ons are more favored than gentle contact, probably because the violence sends the message that it isn’t sexual.

Historical comparison helps explain how the meanings of body contact vary. In traditional societies, such as in the Arab world, it was common for groups of men in public to walk along holding hands or linking arms. Similarly, women in traditional societies linked arms in public.

It was an explicit display of group tie-signs. It had nothing sexual about it; it expressed the politics of the situation when kin-groups and other close solidarities were all-important. As modern societies have become more individualized, tie-signs such as hand-holding or linking arms have narrowed in meaning, explicitly confined to sexual ties. It is the same with the decay of old kissing rituals, like that of the French official who kisses the recipient on both cheeks after pinning on a medal.

In our sexually liberated age, many bodily gestures are restricted, because the default setting is to take them as sexual.

Four causal mechanisms that control sex

[1] Sexual property regimes

[2] Sexual markets

[3] Sexual domination and counter-mobilization

[4] Sexual distraction and sexual ugliness

These mechanisms, in one degree or another, have existed in every society.

What varies is the strength of the ingredients that go into each mechanism.

[1] Sexual property regimes

Sexual property is present wherever there is jealousy. It is analogous to property over a thing, or more exactly, property over behavior-- like a professional athlete signing a contract that requires certain kinds of performance on the field and prohibits other behavior in the off season.

Sexual property is the right to touch someone else’s body sexually. Like other forms of legal property, it takes many forms: sometimes the rules are elaborate and restrictive, sometimes not; sometimes it is a permanent, life-time contract, sometimes breakable (e.g. by divorce), sometimes very short-term indeed (e.g. a half hour deal with a prostitute or an overnight with an escort).

The forms of sexual property have changed historically. But despite movements of sexual liberation, it has not gone away.

The gay liberation movement has coincided with a great deal of private fighting over sexual jealousies-- more commonly among male homosexuals than females [Blumstein and Schwartz 1983].

Legitimating a particular form of sex does not mean turning it into open access.

What determines the forms sexual property has taken? Most important are changes in the political power of the family.

The most blatant and restrictive forms of sexual property existed in patrimonial households-- roughly speaking, the medieval pattern where big households with their own warriors were the backbone of the state.

Important households were tied together by marriage politics. Women were treated as tokens to exchange with other important families, so they had no choice in their own sexuality. Any incursion into the sexual property of the household was regarded as a combination of rape and treason, with both parties punishable, sometimes by spectacularly violent death. This is the background for Romeo-and-Juliet romances, and for real-life versions in places like Saudi Arabia, Kurdistan, and Pakistan. A royal princess can be assassinated, and brothers can stone a sister to death for sex or mere flirtation with an outsider to the clan. [Cooney 2014]

The era of the patrimonial household upheld a double standard, technically unilateral sexual property: males controlled females as sexual property to be used for political alliance-making, but not vice versa. The big historical changes in sexual property go in either direction from this medieval pattern-- backwards towards tribal societies, and forward to the modern state.

Tribal societies like those described by Malinowski and Margaret Mead, and those that greeted sailors in Polynesia during the 19th century, seemed like sexual paradises to people from the modern West. The reason was that their politics were extremely rudimentary. Where there were no strong military coalitions, and nothing like a warrior class living in castles or big households, marriage alliances were not very important. In very simple societies without class differences in wealth, divorces were extremely easy. Sex was not politicized and therefore was left up to individual discretion. Jealousies were personal and not backed up by group forces. Sexual property was ephemeral.

Coming forward historically from the Romeo-and-Juliet world toward our own is the rise of the bureaucratic state. Governments acquired their own armies and tax-collecting machinery. Households became more private and their sexual affairs depoliticized. This set the stage for the shift to the private marriage market.

[2] Sexual markets

A market exists whenever there are numbers of actors who want something and have to find someone else to trade with to get it. Markets can range from many competitors to virtually none; the more competition, the more each individual must be concerned about the “price” for what they are buying or offering. This structure exists whether its participants recognize it consciously or not. The price can be in money, but it can be in other things too-- sexual attractiveness, subservience, social status, even love. In fact a bundle of all these things has become the preferred way that people find sexual partners in the modern era of the private sexual marketplace.

The ideal of marrying for love came into existence in European societies around the turn of the 1800s. It was called the “Romantic” era because so many writers made a theme out of love affairs defying social convention and expressing the individual’s wild, uncontrolled passions. The literary ideal reflected a real change. Parents gradually stopped controlling their children’s choice of partners. In one respect this felt like an era of freedom, but it also meant that young people were thrown into a marriage market they had to negotiate for themselves.

A market is freedom but it is also constraint. The freedom is to make choices. The constraint of a market is that you do not necessarily get what you want, at the price you would like to offer. The romantic image is that love happens like magic, a meeting of two persons with perfectly matched desires; it scorns social differences and mere material things like inheritance and money. In reality, the love ideal came along with the market of who can offer what. Many persons may desire a very beautiful, sexually arousing partner, but s/he may not find you sufficiently attractive in return. Other things get thrown into the mix: today not so much inheritance, but a good job and earning capacity.

Material things become part of romancing, in the form of treating, paying for dinners and entertainment, gifts, not to mention the degree of attractiveness one can muster by one’s clothing and grooming.

Viviana Zelizer has shown there is no clear gulf between purely sentimental considerations and material offerings; even if the latter are ignored in the ideology of love and sexual passion, they exist in a semi-conscious underground of bargaining, an almost Freudian repression of the sexual market itself from polite consciousness.

We say we don’t care about somebody’s social background, and assert that all that matters is whether we really like each other. We can take this attitude with a fair degree of success because in fact what we like about another person is their cultural tastes and their social personality, and liking consists of fitting together people who find their manners match. It is not surprising to sociologists that the prevailing pattern is homophily-- personal ties with someone similar to oneself on as many dimensions as possible. And this applies to ties of all degrees of permanence: from long-term marriage down to passing affairs. In fact, the closer the homophily, the longer the relationship is likely to last.

Affairs across big social gaps do happen, but they are also more likely to break up. The shift during the last century from divorce-proof lifetime marriage, to serial monogamy, to cohabitation without getting married, to hookups, has not affected the dynamics of sexual markets. In none of these long-term or short-term relationships is it irrelevant who are the competitors, and competition always affects what one needs to offer in order to find a partner.

Sexual repression inside a sexual market

The idea of a sexual market makes it sound like everything is very blatant, but on the whole modern sexual markets repress the overt expression of sexual desires. The more people who are actively out there on the market looking for partners, whether for the evening or for a lifetime, the more likely it is that any particular person will encounter rejections. Experienced individuals in such markets-- those who often go to nightclubs, or to parties, mixers, conferences, dances, dating services, you name it-- generally get to know their own value from the way they are treated by others. Very attractive individuals become very picky-- in part because they can afford to be, in part because they are overwhelmed by advances, most of which they scorn.

A very beautiful young woman of my acquaintance complains that she is constantly being stared at by strangers, whom she regards as completely boring.

Of course: by her standards, she can do much better. Interview data show the same thing [Gardner, Passing By]. This is a main reason why persons at the top of the sexual market tend to pair off with each other.

Homophily is everywhere. Systematic observations of persons who are together on streets and in public places (my own research) show that pairs and small groups tend to be similar on every dimension, including clothing style, physical size, and attractiveness-- i.e. they have sorted themselves by cultural capital and social class, but also by their positions in sexual markets.

Women tend to be friends with women of similar attractiveness, because they have similar backstage issues.

An open sexual market represses overt expression of sexual desire for several reasons. One reason why people very rarely say something like “I’d like to have sex with you” is that most of the time they will be rejected. The target of the advance may not be a prude at all, but simply someone higher in the sexual market. And rejection is not only a downer in its own right; it also tends to publicize one’s own level of sexual unattractiveness. Paradoxically, the more open the sexual market, the more individual-level psychological pressure exists to avoid exposing one’s own sexual desires. The expression of desire risks a negative judgment about one’s market position.

Thus the persons who are most open in expressing their sexuality tend to be among those who are most sexually attractive.

The expression of sexual desire itself becomes stratified in a time of sexual openness.

sexual ranking at IMF meeting

[3] Sexual domination and counter-mobilization

Sex is a potential site for conflict and domination. Some feminist theorists have asserted that sex is always a form of domination, or at least heterosexuality always is. In social science, “always” is a dangerous term, since variations spread across the spectrum, and it is more useful to look for the causal conditions rather than an alleged constant. In some arenas (such as prisons), homosexual sex is more frequently the target of coercive practices. [O'Donnell 2004]

Since the aim of this article is causal explanation rather than protest and policy, let us ask the question: what settings produce the most sexual domination, both coercive and indirect? And what conditions mobilize social action against sexual domination?

Indirect sexual domination is implicit in some kinds of sexual property, especially in the patrimonial household politics already discussed. In those settings, sexual violence mostly comes out when the informal controls are challenged.* Modern sexual markets have probably increased the historical incidence of some kinds of sexual coercion, since date rape could hardly exist in societies where there was no dating, and fraternity party rapes could not exist before the era of co-ed schooling.

* This doesn’t inevitably happen. Cooney [2014] shows that the weaker the clan’s political control and the more the family lives in modern urban conditions, the more likely they are to let the culprits off from their tribal code. When they can keep the sexual defection of a daughter or son secret, it is often indulged; but when it comes out in the ethnic community, the family may be goaded to act violently to protect their reputation.

There are at least five distinct causal pathways of rape (date rape; serial stranger rape; carousing-zone rape; political rape; rape in the course of another crime).

I will put aside the topic of the causes of rape for fuller treatment in another post.

Here I will concentrate on two arenas where opportunities for sexual domination have changed, and where counter-movements have mobilized against them. My analysis focuses on the theme we have been pursuing, what causes sexual repression.

The two arenas are age restrictions on sexual contact, and restrictions at work.

We have already seen that age limits on sexuality have grown historically. They were virtually non-existent in most tribal societies. In patrimonial household politics, child sexuality was promoted when political marriages were arranged at a young age. The category of childhood is a modern construction, at least in the sense of a social category backed up by law. Of course medieval people recognized that children were sometimes too small for adult activities, but there were no rigid dividing lines; what children did was determined by their particular capacities and the political maneuvers that took place around them. For centuries in Japan, children were put on the throne so that they could be manipulated by regents, often from the family of the child-Emperor’s wife; and child sexuality was encouraged precisely because political influentials wanted an heir from their line.

What created the sharp dividing lines that separate childhood from adulthood, legally as well as moralistically, was the rise of modern bureaucracy. The power of the household was reduced by the bureaucratic state. The state began to impose requirements for children to be educated in a bureaucratic school system; labor laws were created, under a variety of influences including both labor and humanitarian movements, which restricted or prohibited employment under particular ages. States have increasingly penetrated households; at first (starting in Europe in the 1700s and 1800s) this was done to enroll the population for military conscription, and for taxation; approaching our own times, for the purposes of social welfare, public health, equal opportunity, prevention of child abuse, and a growing list of causes.

Bureaucratization means setting out formal rules and keeping records. The rules are designed to disregard individual circumstances and lump everyone into abstract and easily measurable categories. The growth of mass education has placed increasing emphasis on age-appropriate activities, as laws have mandated education for lengthening stretches of everyone’s lifetime.

Schools in medieval times, and up through the 19th century (as in rural schools in America), generally lumped together children of very different ages; they all learned in the same classroom, with the abler ones moving through faster at their own pace. (For instance, Sir Francis Bacon went to Cambridge University from age 12 to 14, tagging along with his older brother; he entered law school at age 15, but soon went off on an informal apprenticeship as secretary to an ambassador. This cursory formal education did not prevent Bacon from becoming the most learned man in early 17th century England.) By the early 20th century, schools were moving students through rigidly according to age-graded classes; skipping grades was allowed as an exceptional policy, but both formal and informal pressures were against it.

It was in this context that laws controlling the sexual behavior of children-- now a strictly age-graded category, with no concern for individual variation-- became formalized. Children are now defined by their age, not by their capabilities. Social movements have been mobilized since the mid-19th century to protect children, as seen through the eyes and social values of the reformers. Some of these movements were notorious for imposing the values of puritanical Protestants upon immigrant families in American cities; others have dropped the religious themes, and put themselves forward in the name of humanitarian, scientific, or medical ideals. Because resources for mobilizing social movements have continuously expanded in the 20th and 21st centuries, movements to control the lives of age-defined persons (“children”) have become increasingly influential.

There is no natural, culture-free reason why persons above the age of 18 should be regarded as sexual predators against those below 18. Since boyfriend-girlfriend relationships are typically between males a year or two older than females, there comes a life-passage when what was acceptable at least informally in these age-segregated enclaves becomes illegal. Increasing pressure on courts to impose uniform penalties has combined with the successful efforts of social movements to punish all sexual offenders not only with prison but by labeling and segregating them for the rest of their lives. The results include instances where the sexual activities of boyfriends with girlfriends end up on the public roster of sex offenders, indistinguishable from those of the most violent rapist. Young female teachers in their 20s who have affairs with teenage boys (probably the most sexually mature ones) are treated as if they were raping little children. The spread of surveillance cameras, whose videos are routinely monitored by bureaucratic authorities and handed over to prosecutors bent on increasing their conviction rates, is one more feature of today’s impersonal organization intruding on private lives to enforce laws that are oblivious to individual differences.

Genuinely humane persons might recognize that the category of statutory rape should be replaced by more flexible consideration of circumstances.

But it is characteristic of a bureaucratic society that once rules are written into laws and standard organizational practices, unintended consequences merely become normal. Peeling back such laws and procedures is more difficult than the flurries of scandal and melodrama that first enacted them.

From the high ground of sociological analysis, we can summarize: the combination of modern age-graded bureaucracy and the ease of mobilizing social movements is a new source of sexual repression, rolling back waves of post-Freudian liberalization.

The other arena of new sexual controls is work.

Gender integration of women into formerly male occupations increased opportunities for sexual contact. The result has been two kinds of controversies. One is that men can take advantage of women working with them, either by superior force or by rank. Counter-movements have mobilized, and rules to prevent such victimization have grown, both within organizations and under government legal pressure. Since sexual advances are also made in an indirect manner, rules to control sexual domination have expanded to a wide variety of activities under the category of “sexual harassment.” One result has been that the loosening of sexual talk that happened from the 1930s through the 1980s has been reversed. Whether this is good or bad from the point of view of men and women in the world of work is no doubt a mixed question. One conclusion is clear: post-Freudian sexual liberation-- although still strong in popular culture and in the high arts-- has been turned back to a Neo-Victorian standard of official prudishness.

[4] Sexual distraction and sexual ugliness

This is a topic rarely discussed. It is more universal than our current movements for and against particular kinds of sexuality. Even if sexual domination were eliminated, this issue would remain.

Sexual arousal can be overwhelming, obsessive, shutting everything else out. This leads to practical norms to limit sexual arousal.

Why is there a taboo against sex in public? Even in the most liberated arenas and sexual scenes of modern society, it is rare for people to actually engage in sexual intercourse, or other sexual acts, in public. Anthropologists and sociologists [Ford and Beach 1951; Reiss 1986] have noted that with all the variety of sexual regimes around the world, there is one constant: sexual intercourse almost always takes place in privacy.

The exceptions help pin down the sociological rule. Even in the most liberated circles, there are restrictions. Swingers groups (AKA wife-swapping), popular in the 60s and 70s, developed a rule: couples only, no unaccompanied singles [Gilmartin 1978].

The exchange had to be complete; everyone had to take part. Swinging was breaking the rule of monogamous sexual property; but it had to be equal-- both man and woman got the same license as their partner. Another rule: no meeting illicitly on the outside. What happens in swinging, stays in swinging!

If they were all going to have sex together, it was going to be in one place: group privacy, no public allowed, no side-involvements.

Studies of communes in the 1960s and 70s (Zablocki 1980; Martin and Fuller 2004) found that the longevity of the commune was inversely related to its sexual openness. Communes that strictly banned sex (especially religious communes) or communes composed of married or monogamous cohabiting partners lasted longest. Communes that had a policy of free love-- anyone can have sex with anyone, no questions asked-- were the most volatile. Why? In part, because their idealistic rule overlooked the sexual market and sexual attractiveness.

On one side, the men vied to have sex with the best-looking women, hence squeezed each other out. * On the other side, women sex stars were overwhelmed, and played their favorites.

And there was the snake in the garden, social rank: a charismatic commune leader hogged most of the sex; and swingers groups among businessmen tended to fall into the pattern of the younger men with good-looking wives swapping with older men and fading wives, a trade-off of rank for sex, or sex for promotion.

*The same was observed in the bathhouse scene of gay sex in the 1980s and 90s, when the overt rule was anything goes, but in fact bathhouse participants queued up in order of personal attractiveness to get the most attractive men.

Although there is a fantasy ideal of orgiastic sex, it is structurally difficult, if not impossible. Orgies are depicted on ancient Greek drinking-bowls; but what we know about these scenes is that the group of upper-class men hired professional prostitutes for the orgy. [Keuls 1985]

Even with this commercial dominance, the border seemed to be enforced: everyone present took part in the orgy, closed to the world outside.

To repeat the question: why the taboo against public sex? The answer is that sexual arousal is distracting, and it is contagious. There are rapes on record where men wandering around come upon a couple making love on a deserted beach, and intrude themselves into the sex. Gang rapes often get started in the same way, without plan, sheer arousal-driven piling on.

The answer is a sociological transmutation of Freud. Sex is too strong a drive for people to let it go untrammeled-- which is to say, to let it go outside of privacy that limits it to just two people, or in rare circumstances, a larger but equally circumscribed group. This continues to be the pattern of Vegas-style all-girl junkets for sexual adventure. Pictures posted on Internet sites typically show a group of women of which one is having sex with a well-built man while the others watch; the partying atmosphere is displayed in their fancy clothes and their drinking. It remains a private group enclave, where everyone present is a potential sexual participant.

Sexual distraction helps explain the proliferation of sexually-inhibiting rules in the contemporary workplace. In addition to the threats of sexual dominance, there is also the possibility that sexual arousal may take over and pull people from their work. It is hard to gauge realistically the strength of this threat, given that most organizations are not working at full capacity, and ideal efficiency is always hard to estimate.

A hint is that sexual relationships at work are tolerated when they do not upset the organizational hierarchy or blur its chain of command.

On the Eastern front of World War II, it was common for Soviet commanders to take “combat wives”-- secretaries or telephone operators pressed into service in the manpower shortage, who became the sexual property of the highest-ranking officer for the duration. There was little push-back against the system. There are indications that similar things happened on the Western front, at least among the Americans (such as the C-in-C Eisenhower having an affair with his chauffeur). *

But organizational sex only functioned when women did not upset the hierarchy. As women started making careers of their own, even vying for CEO in their own right, the stories that circulated in the 1970s of fast-track young women having affairs with CEOs gave way to the current standard of sexual restraint.

* It is striking that the three most famous Presidents of the mid-20th century-- FDR, Ike, and JFK-- all had illicit affairs, well-known to insiders and journalists, but no scandals were launched against them. Ari Adut (On Scandal) notes that in a more puritanical culture of polite discussion (“all the news that is fit to print,” in the New York Times’ now-outdated slogan) scandals don’t happen because it is improper to talk about them in public. Bill Clinton’s blow-job affair with a White House intern happened in the late 1990s when the public culture of sex was at its most blatant. Sex scandals have become part of the normal political repertoire for bringing down politicians and government officials.

A notorious example of what happens when sexual partying gets into organizational duties is the Abu Ghraib scandal. [Mestrovic 2006] The American guards carried out their torturing of prisoners with forced nudity and sexual humiliation, and in an emotional tone of joking and laughter. The presence of young women guards in the gender-integrated US Army-- one of whom got pregnant by a leader of the revels-- was a major ingredient in the partying atmosphere. Politicians supporting the guards argued it was nothing more than the fun of a fraternity initiation.

It was so much fun that they couldn't help sending out the photos that implicated them.

Bottom line: sexual arousal upsets organizational hierarchies. The solution has been to keep it rigidly under control.

The hypothesis this gives rise to is the opposite of post-Freudian liberation: the more gender equality in the future, the more Neo-Victorian repression in the realm of work and politics.

Sexual ugliness

There is another dimension of how sex disrupts everyday life. Since almost all depictions of sex are titillating, this one runs against the ideological grain: sex is often ugly.

Freud himself said that the sight of the genitals is not beautiful, although it is exciting. This is confirmed by photographic evidence. There are exceptions, but these help tell the sociological story.

The history of pornographic magazines in the 20th century provides evidence on how sexuality is depicted in styles varying from idealized to ugly. The first successful magazines (Playboy, founded 1953; Penthouse, founded 1965) projected an upper-class image, a fantasy of sexual luxury. Playboy reached a peak monthly circulation in 1972, at 7 million copies-- for a time it had the second biggest circulation of any magazine of any kind except TV Guide. Penthouse peaked in 1984 at 5 million.

Put this in the context of the so-called Pubic Wars: Playboy had pioneered in showing beautiful nudes and semi-nudes, including stars like Marilyn Monroe and Jayne Mansfield, featuring bare breasts and pin-up leg shots. Under competition from Penthouse, by the early 1970s Playboy was showing similar women, in luxurious lingerie and decorator interiors, with a hint of pubic hair. Peak circulation in 1972 came early in this process of genital strip-tease. By 1973-75, Penthouse was showing the same kind of luxurious bedroom scenes with women’s legs starting to come apart, revealing the interior of the crotch-- through a drawn-out sequence of disguise through shadows, fingers, and semi-revealing panties. Penthouse soft-porn photography was famous for its heavy use of flowers, sometimes to set the atmosphere, sometimes to lend cover or suggest shape to the genitals. Playboy followed suit for a while at a discreet distance. But as Penthouse in the late 1970s and early 80s printed increasingly clear pictures of genitals with outer labia parted and then inner labia aroused, Playboy began to pull back to the older pubic-tease standard of its greatest success. [Wikipedia articles; Venusobservations.blogspot.co.uk/pubic-wars]

Although Penthouse followed the pathway of increasingly edgy photos, this was not the formula for greatest market success. By the 1990s, Penthouse had lost much of its circulation, as well as virtually all of its mainstream advertisers. Like most sex magazines of the time, its advertising revenue shrank to telephone sex services. It continued to push the edge as a glossier version of hard-core smaller-circulation magazines, now showing vaginal penetration as well as oral sex on both male and female genitals. Penthouse with its money could still present explicit porn with better-looking models, high-quality photographers, and luxury settings.

In contrast, Playboy in the 90s held to its stronger market niche of extremely beautiful, clean-cut models in slightly provocative nude poses. This was the luxury-sex market, on the conventional edge of respectability, where Playboy could get the most beautiful models by offering fees as high as $10,000 in the late 1970s (equivalent to $45,000 today) for the monthly centerfold. Competing magazines like Gallery in the 1990s offered $2,500 for the monthly winner of amateur nude photo contests featured in the magazine, and $25,000 for the yearly winner. But it all went downhill. By 2003, Penthouse went bankrupt, then re-emerged with a modest circulation of 300,000. Playboy too was down, but held on at a respectable 3 million circulation as of 2006. The 40-year sequence is an experiment showing the greater attractiveness of idealized sex over blatant sex.

Another rendition of this history just says the porn magazine business was destroyed by sex on the Internet. This is a factor, but it doesn’t alter the point that blatant sex doesn’t sell so well. Playboy, the most conservative sex magazine, survived in reasonable shape, and was joined by new magazines like Maxim, playing for the niche of idealizing a respectably sexy life-style of successful men. Hugh Hefner’s celebrity-laden Hollywood partying style was the biggest attraction, not the amount of explicit sexual display. The most blatant sex magazines were already declining before the Internet became dominant. A further peculiarity of Internet porn is that most of it is posted for free, by amateurs showing themselves off to each other. This resembles a private enclave of sex cultists, like swingers in a previous generation.

Compare now the sex mags that deliberately aimed at a non-elite, real-life, working-class view of sex. Hustler, founded in 1974, rocketed to a circulation of 3 million by the end of the 70s. It never rose above third place, but it did open a market for more blatant sexual display: what publisher Larry Flynt bragged about as “showing pink,” i.e. fully lighted photos of open labia and vaginas. It should be noted, though, that Hustler during its early high-circulation years stayed closer to the Playboy/Penthouse style of luxurious settings, often with quite beautiful models. It parted company most blatantly in its cartoon features: rather juvenile satire of the scatological kind, toilet-bowl humor in pictures, with scuzzy-looking derelict characters. This was in sharp contrast to Playboy’s cartoons, which tended to feature stereotypical old roué millionaires with willing bimbos and trophy brides. The social class ambience is explainable by the trajectories of the publishers: Hefner started at the sophisticated literary men’s magazine Esquire, Flynt as promoter of a string of roadside strip clubs.

As Hustler got more blatant, more working-class in appearance, and lost its idealized settings for pornographic displays, it lost ground in the market faster than its rivals. By the early 2000s, it was hanging on below 500,000 circulation, and offered $1,500 for the monthly amateur photo winner.

Even cheaper-style presentations of sex were on the market by the late 1980s and 90s, pioneering photos of actual sexual intercourse (rather than the genital-hiding couples featured in classic soft-porn Penthouse, which resembled body-double sex scenes in Hollywood movies). A useful comparison is Lips, which imitated a popular feature in Hustler and Gallery: amateur nude photo contests, with cash prizes for the winners. Lips appeared to print more or less all comers, pushing the edge by concentrating on close-up photos of female genitals in their opened and aroused state. There are no luxurious backgrounds, in fact usually no backgrounds at all (although Hustler and Gallery amateur photos show that most of them are taken in cheaply furnished working-class homes or rural outdoors). Such magazines have relatively limited circulation, sold primarily in non-corporate, independent liquor stores and mom-and-pop markets-- lower class all the way around. This is sheer un-idealized genital sex, and one effect is to show how often genitals are rather ugly.

By what standard can one make such a judgment? Beautiful depictions of human bodies are very symmetrical, with clear simple geometry: long graceful curves, proportions that have been calculated as a “golden mean,” inflections of curves that gracefully change direction and convey a geometry of three-dimensional solids. Art instruction books tell how to draw a beautiful woman by staying very close to form, especially drawing the face with as few lines as possible, highlighting the curves of jaw, cheekbones, lips, and eyes. Superfluous and complicated lines (not only wrinkles, but contorted body lines) are avoided. Obviously these are not the standards of abstract and expressionist art, but they are the standard of success in erotic art from the pin-up era through the peak of Playboy/Penthouse/Gallery market sales.
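(A note on the “golden mean” invoked above: this is presumably the classical golden ratio, in which the whole stands to its larger part as the larger part stands to the smaller. In symbols,

\[
\frac{a+b}{a} = \frac{a}{b} = \varphi = \frac{1+\sqrt{5}}{2} \approx 1.618 ,
\]

a proportion often cited in art instruction as a rule of thumb for pleasing divisions of a figure; the text does not specify which version of the rule is meant.)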

As one can see in close-up depictions of genitals, and especially labia in an aroused state, they do not often fit the criteria of graceful curves. Aroused genitals are often asymmetrical, full of bulges and pockets; colors when engorged with blood range through purple, brown and grey. This is not universally true, but classically beautiful genitals are probably rare, judging from the array of commercial porn. The 1970s era of the Pubic Wars, when photos at most showed inner labia peeking through the dark hair of outer labia, was piquant, but close-ups of hairy crotches themselves are generally more of a jumble than an aesthetic pattern. This is so even in photos of women chosen for their overall beauty.

In collections of amateur photos of ordinary working-class women, frequently what can be seen of the rest of the woman’s body is flabby, wrinkled, bony, or sometimes marked with skin eruptions. (This last is completely excluded in magazine porn with professional models, who are selected for their good skin, indeed as the sine qua non of every kind of modeling.) Surprisingly often the fingers and nails shown in crotch shots are unmanicured, even cracked, bandaged, or dirty. One conclusion is that the people submitting these photos (usually the husband or sexual partner of the model) find this an object of desire that outweighs any aesthetic considerations. True enough; these photos come from the lower end of the sexual marketplace, but individuals match up by desire as best they can at that level too. This underscores the point that successful porn depicts a fantasy of the upper end of the sexual marketplace, where a fantasy of wealth matches a fantasy of perfectly sexy bodies.

And even under those conditions, female genitals are not on the whole highly aesthetic.

The same is true of male genitals. Not to say that male genitals are incapable of being idealized, like Michelangelo’s statue of David; porn photos sometimes show classically beautiful male bodies and even penises with classic proportions. The bigger-circulation sex magazines started showing male genitals relatively late, in the 1990s, when female genitals had been shown for about 20 years. Why this is the case has not been sociologically explained. Even the most blatant of the sex mags, Hustler, instructed amateur photographers that it would not accept photos of erections, although occasionally it printed frontal photos of “well-hung studs” in the amateur section. When photos of erections and intercourse started appearing in the 1990s, it became apparent that an erect penis is often bulging with veins, trailing off into loose skin and sometimes distended testicles; altogether quite far from the aesthetic criteria of a few smoothly inflected curves. (Depictions of erect penises in ancient pottery and statuary, including ritual door-marking herms, clearly idealized penises to an aesthetic standard, since modern porn photos rarely look so pure.)

Bottom line: comparison of un-idealized, naturalistic photos of both male and female genitals indicates that genitals per se are not the most beautiful part of the body; they rarely fit the criteria of symmetry and graceful geometry that many people display in their legs, hips, breasts, arms and faces. To underline the point: genitals are not usually attractive aesthetically, but of course they can be a powerful center of attraction as the target for sexual action.

Comparing the techniques of idealized and un-idealized pornography shows that erotic attractiveness is constructed through the total effect of the body in its setting. Professional photographers at the high point of soft porn popularity in the 1970s showed genitals in the midst of photos posed and manipulated for maximal aesthetic effect and social prestige in the non-genital features of the photo. Luxurious upper-class and fantasy settings. Women with curvy legs, firmly rounded breasts; long beautiful hair, stylishly coiffed; beautiful faces with high cheekbones and full cupid’s bow lips.

Body postures carefully posed to get at the best angles to display the curve of a thigh or a calf, the hang of a breast; awkward poses eliminated in the pile of rushes. In the midst of the picture, increasingly the peep-hole widening on the crotch. But since close-ups of pubic hair and crotch hair are not in themselves aesthetic, photographers position them as accent marks in the total picture; something like beauty-marks, actually skin blemishes that set off the rest of the face. Of course the genitals are the object of erotic interest, in the fantasy of consummating with oral touch or real penetration, but the photo is frozen in the visual moment. Dark pubic hair was especially dramatic for the total aesthetic project; that nuance disappeared with the shaved style that came in the late 90s.

An analogy is depictions of breast nipples and areolas. During the pin-up era of the 1930s through the 50s, both in drawing and photos, artists worried over the question, to nipple or not to nipple, and if so, how distinctly. Large, dark areolas can have a strong effect as an accent mark, making breasts look spectacular when they echo larger curves with concentric ones. But close-ups of breasts tend to zoom in beyond optimal aesthetic distance. Close-up, nipples and areolas are often lumpy and unsymmetrical; this appears to be especially common for women with very large breasts. Since big-breasted women display the strongest distant marker of female form, they tend to be the favorite for lower-class pornography. Here again we see a contradiction between the object of erotic action and optimum aesthetic presentation.

Finally, we should note that erotic photographs are often manipulated post-production. I am not referring here to censoring features like pubic hair by old-fashioned airbrushing, but the opposite-- making bare bodies look sexier. Photos in sex magazines are often printed in enhanced colors, especially a golden light that makes the skin look honey-blonde or coppery. (One sees this in non-erotic photography as well, especially in tourist magazine photos of hotel lobbies.) Comparison with amateur photos shows what needs to be touched up: natural skin color (even of Caucasians) is often a dull white, yellowish or brownish; the vivid hues are added by professionals. Some magazines print pictures both of the amateur photo submission and the results of the professional photo shoot; the same woman generally looks transformed, not only better coiffed and made up, but her whole body comes across as more vivid.

For all these reasons, the more blatant or hard-core the pornography, the less likely it is to be attractive aesthetically. Sexual ugliness is a fact that is widely covered up for most people in everyday life.

Social repression of sexual ugliness

Thus we have another facet of why totally out-front sex is controlled-- by most people themselves. Concern about sexual ugliness is not unconscious Freudian repression, but a Goffmanian strategy of self-presentation.

Not to overlook all the changes that have happened historically in how much of their bodies people have displayed publicly. The bedrock limitation I am pointing to here is about people displaying their genitals. This is very rare throughout all societies, except in privacy with a person one is about to have sex with. It has often been noted that what someone looks like does not match very well with how their body feels up close, and that the quality of intercourse diverges widely from how beautiful or not the partner is. The point remains that the visual repression of genital display has been widespread, and will likely continue to be so.

What has varied is displaying the rest of the body. Bodily ugliness has changed a great deal in recent centuries. In the Middle Ages, most of the population were ill-nourished, largely unwashed, often afflicted by skin diseases and other illnesses. Medieval aristocrats regarded the peasants who worked their land as dirty animals, hardly sexual objects. The sexual status of non-elite classes improved as indoor servants became better treated. In the early 20th century, working-class people started becoming much better fed, healthier, and better looking. Upper class persons, especially women, started getting more exercise and developed fitter bodies. This is one reason why shorter clothing became popular, especially in the series of economic booms since WWII. More people can wear things like bikinis (invented in 1946), because more people look better in them. The sexual revolutions of the 20th century have a lot to do with these kinds of improvements in general physical health and bodily attractiveness. If the trend continues, some forms of bodily display will further increase in the future-- but we can expect it will be confined to showing the more attractive parts of the body.

Future Limits of Sexual Repression

Further sexual revolutions in the future are certainly possible. In fact, we have been running at the rate of one sexual revolution every 15 or 20 years, since at least the beginning of the 20th century. Gay marriage is only the latest of the series. What else can happen? Sociological predictions take more than imaginative speculation, and are best made when we have a theory of what causes what.

Sexual property regimes have shifted historically depending upon the political uses of sex for family alliances; male and female incomes and wealth-holding; and the bundling of shared household property with sex and love.

All these generate possessiveness, in the form of jealousy and anger when the existing form of sexual property is violated. Whatever else is bundled with sex may well change in the future, but it seems likely bundling of sex with some kind of property will continue.

Sexual markets exist whenever people have choices of partners; this means competition, rejection, and psychological defenses against rejection. In a sexually liberated era, this is a major reason why most people are not very blatant about offering and asking for sex.

Sexual domination, in an era when it is easy to mobilize social protest movements, typically gives rise to counter-movements that restrict sexuality in arenas like work and government. Ironically, sex becomes more scandalous in an era of gender integration.

A similar process in the future may create new arenas for scandals as discrimination against homosexuality declines. Another possible future is that as social class inequality widens-- and we are rushing down that slope-- the advantages of the wealthier occupations will give more sexual leverage to the upper classes. A version of this exists already in the black lower classes of urban ghettos, where men who have jobs or even just substantial illegal incomes have many women seeking them, and can play the sex market in a cavalier fashion. (See the forthcoming ethnography by Waverly Duck, No Way Out.)

Sexual arousal is disruptive of normal routines, and will continue to be confined to enclaves where everyone takes part and outsiders are excluded (the prototype “what happens in Vegas, stays in Vegas” junket).

The contemporary world is thus a patchwork of different arenas, some of them rigidly policed by political correctness, others blatantly displaying idealized images of sex. But sexual display is safest when it happens at a distance, not in personal relationships but for a mass audience; and when it is wrapped in aesthetic and class markers of eliteness and luxury. In the early years of the 21st century, advertisements in women’s fashion magazines-- especially those depicting the fantasy of a non-existent world of total sophistication-- show models in poses that mimic the soft porn of men’s magazines around the early 1970s.

jewelry ad, 2006

luggage ad, 2006

And the future of sexual ugliness? Further advances in electronic technology might produce virtual reality sex-- not just today’s pictures for masturbation but stimulating brain centers so as to convey the actual feelings of sexual intercourse, combined with idealized images of a beautiful body. Ordinary sexual ugliness would be side-stepped, the sexual market of person-to-person barter turned into a completely commercial market for non-human surrogate experiences. And then what? Social processes don’t go away just because of technology. Counter-movements would probably mobilize, treating virtual-reality brain-stimulation sex as dangerous as heroin. Since people generally enjoy sex most with someone they like, love and family will probably not disappear, although they would have to compete in the market with virtual sex.

In short, there will likely always be some social controls on sex. The Oedipus complex may be far behind us, along with the jealous father internalized as the Superego of the child who has to give up sexual desires for the mother. For reasons Freud could not have foreseen, there will always be some mechanisms of sexual repression.


REFERENCES

Adut, Ari. 2008. On Scandal.

Armstrong, Elizabeth A. and Laura T. Hamilton. 2013. Paying for the Party.

Bernstein, Elizabeth. 2007. Temporarily Yours. Intimacy, Authenticity, and the Commerce of Sex.

Blumstein, Philip, and Pepper Schwartz. 1983. American Couples.

Collins, Randall. 2004. Interaction Ritual Chains.

Cooney, Mark. 2014. "Death by family: Honor violence as punishment." Punishment and Society 16. http://pun.sagepub.com/content/16/4/406

Dover, K.J. 1978. Greek Homosexuality.

Finkelhor, David. 1986. A Sourcebook on Child Sexual Abuse.

Flowers, Amy. 1998. The Fantasy Factory. An Insider’s View of the Phone Sex Industry.

Ford, Clellan and Frank Beach. 1951. Patterns of Sexual Behavior.

Freud, Sigmund. 1916. Introductory Lectures on Psychoanalysis.

Gardner, Carol Brooks. 1995. Passing By: Gender and Public Harassment.

Gilmartin, Brian. 1978. The Gilmartin Report.

Grazian, David. 2008. On the Make: The Hustle of Urban Nightlife.

Herdt, Gilbert H. 1994. Guardians of the Flutes.

Keuls, Eva C. 1985. The Reign of the Phallus. Sexual Politics in Ancient Athens.

Laumann, Edward O., John H. Gagnon, Robert T. Michael, and Stuart Michaels. 1994. The Social Organization of Sexuality. Sexual Practices in the United States.

Malinowski, Bronislaw. 1929. The Sexual Life of Savages.

Martin, John Levi and Sylvia Fuller. 2004. "Gendered power dynamics in intentional communities." Social Psychology Quarterly 67: 269-284.

Mestrovic, S. G. 2006. The Trials of Abu Ghraib.

Miller, Jody. 2008. Getting Played: African American Girls, Urban Inequality and Gendered Violence.

O'Donnell, Ian. 2004. "Prison rape in context." British Journal of Criminology 44: 241-255.

Reiss, Ira. 1986. Journey Into Sexuality.

Sanday, Peggy Reeves. 2007. Fraternity Gang Rape.

Scheff, Thomas and Suzanne Retzinger. 1991. Emotions and Violence: Shame and Rage in Destructive Conflicts.

Zablocki, Benjamin. 1980. Alienation and Charisma. A Study of Contemporary American Communes.

Zelizer, Viviana. 2005. The Purchase of Intimacy.

TANK MAN AND THE LIMITS OF TELEPHOTO LENSES: OR, HOW MUCH CAN INDIVIDUALS STOP VIOLENCE?

The tank man photo is the most famous image of the 1989 Tiananmen Square democracy movement in Beijing. Indeed it is considered one of the most famous photos of the 20th century.

It has become a symbol of human resistance, a lone individual stopping a whole column of tanks. 

What the photo claims to symbolize, however, is only very partially true.

It is not a photo of Tiananmen Square, but of a boulevard nearby.

It was not taken during the crackdown on demonstrators, which took place on June 3 and the following night, but on the quiet morning of June 5, after Tiananmen Square had been cleared and government control had been reestablished in Beijing. And it was not a successful protest. The tanks stopped briefly; two men came into the street and took the protestor away.

The photo was taken by an American newsman, from a hotel balcony 800 meters distant--about half a mile. It was shot through a telephoto lens, like so many news photos of recent decades. This is one of the marvels of modern technology, and a hidden one: how seldom do we stop to think of how the photographer got so close, so near the action where history is made? Compare the equally famous, infinitely shocking photo from South Vietnam 1972 of children running from napalm—where did we think the photographer was standing? 

Telephoto lenses allow us to intrude closely into events that the participants would probably like to keep hidden. It is one of the sharpest differences between our images of the world before about 1960 and the present. The Vietnam War was the first war in history where we could see what it actually looked like. Before then, we had to be content with what officials allowed for patriotic publication, plus (as of World War II) candid shots of soldiers, generally far behind the front lines. And not just for violence in war, but violence in all its peacetime forms, telephoto lenses have brought us first-hand records of how violence really looks. And other forms of conflict, too—the expressions on faces and bodies that give us clues to how conflict plays out, and enable us to cut through the rhetoric and the mythology that have obscured it since humans first began to tell lies about violence.

I would go so far as to say that the telephoto lens, even more than the advent of television, has changed our access to reality. Even more than the camcorder which in 1991 first showed the police beating Rodney King; even more than the ubiquitous mobile phone cameras that now flood the Internet-connected world with images. The reason I make this exorbitant claim is that all the other devices depend on being up close; the telephoto lens zooms in from a great distance. It can go where it is too dangerous or too private for other devices to go. Unlike TV, it gives us photos that are not posed, since no one knows there is a camera to pose for. And it can give photos of great detail—the emotional expressions on faces, the exact postures of bodies, that are so important for a micro-sociologist’s explanation.

The purpose of my writing, however, is not to pick a fight as to which visual technology is best; they all work together to make our times the golden age of visual sociology.

Having extolled telephoto images, I want to raise a caveat about their limits. Taken out of context, they carry the danger of modern myth-making. To see what is distorted and what can be salvaged, let us examine the tank man photo in greater depth.

The Surrounding Context of the Tank Man Photo

The Beijing democracy demonstrations began on April 17, 1989, and went on for 50 days until they were crushed. The tank man photo was taken on day 51. Here I will summarize only the very last days. (More detail on the entire sequence is given in my post, Tipping Point Revolutions and State Breakdown Revolutions: Why Revolutions Succeed or Fail, The Sociological Eye, June 2013.)

Over the 50 days, the size of the crowds at Tiananmen Square rose and fell. After most of the initial enthusiasm had fallen off, on day 28 (May 13), the remaining few hundred militants launched a hunger strike, which recaptured public attention, and brought hundreds of thousands of supporters to Tiananmen. On day 34 (May 19), the Communist elite purged its dissidents and declared martial law, and began to bring troops into Beijing.

The next four days were a showdown in the streets; crowds of residents blocked the army convoys; soldiers rode in open trucks, unarmed-- the regime still trying to use as little force as possible, and also distrustful of giving out ammunition-- and often were overwhelmed by residents. Crowds used a mixture of persuasion and food offerings, and sometimes force, stoning and beating isolated soldiers. On May 24 (day 39), the regime pulled back the troops to bases outside the city. The most reliable army units were moved to the front, some tasked with watching for defections among less reliable units. In another week strong forces had been assembled in the center of Beijing.

Momentum was swinging back the other way. Student protestors in the Square increasingly divided between moderates and militants; by the time the order to clear the Square was given for June 3 (day 49), the number occupying was down to 4000. There was one last surge of violence-- not in Tiananmen Square itself, although the name became so famous that most outsiders think there was a massacre there-- but in the neighborhoods as residents attempted to block the army's movement once again. Crowds fought using stones and gasoline bombs, burning army vehicles and, by some reports, the soldiers inside. In this emotional atmosphere, as both sides spread stories of the other’s atrocities, something on the order of 50 soldiers and police were killed, and 400-800 civilians (estimates varying widely). Some soldiers took revenge for prior attacks by firing at fleeing opponents and beating those they caught. In Tiananmen Square, the early morning of June 4, the dwindling militants were allowed to march out through the encircling troops.

The Tank Man Photo and What It Shows

The Tank Man photo was taken the following morning. The revolutionary crowds had been beaten. Massive arrests were being made, especially of workers, whom the government regarded as far more dangerous than students. Hundreds of thousands of security agents were beginning to spread across the country, picking off suspects one by one, ultimately arresting tens of thousands in the following months. The tipping point had passed, and the regime had clearly won.

What then was the point of the tank man protest?

By his white shirt and dark trousers, we can surmise that he was a government bureaucrat, a class of people whose sympathies were strongly on the side of the protestors. But it is also a category of persons, numerous in all demonstrations, who offer support but do not take part in the actual confrontations with authority.

In virtually all photos of demonstrations and riots everywhere in the world, a small portion of the crowd is at the front doing the violence, while most stand at a distance and watch. Very likely tank man had seen or heard about the previous days’ violence, and came forward in the quiet atmosphere to do something to demonstrate his own commitment.

As we can see in the photo, the streets are virtually empty. He has no visible supporters, although a small audience gathered on the sidewalk to watch from a distance. On the other hand, the tank troops too are anonymous, hidden inside their armored stations. The tanks are moving slowly, making a show of force, not an actual military operation.

– One can know this, because the tanks are in column, a parade-like movement; deployed into combat they would go into line. I would surmise that the soldiers are calm; their action has been over for 24 hours or more.

Thus it is a symbolic confrontation: the lone man, respectably dressed in the garb of the urban apparatchik, stepping in front of the column of slow-moving tanks. In that atmosphere, there is little danger of being run over. The lead tank swerved to avoid him, but he kept in its path until it stopped. Very likely the troops had returned to the orders that prevailed during days 34-38, when unarmed troops were sent to assemble in the city as quietly as possible, and had given no resistance when crowds forced them back. On the whole, the regime had used a mixture of appeasing the crowds, waiting for them to dwindle away, and sporadic application of military force. On day 51, they were back into the mode of calm normality. The government machinery was operating again; bureaucratically organized investigations and individual arrests were the regime’s weapon now. The rebellious crowd has its best chance when it is assembled in huge numbers, in an atmosphere of emotional support that flows outward, dangerously lapping at the solidarity of the government apparatus. Now the crowd has dispersed; and it is in this configuration that bureaucratic authority can exercise its unrelenting and comparatively unemotional control.

And that is what happens. Tank man steps in front of the tank column; the lead driver stops; the tank drivers behind him stop because the tank in front stops. Two men in dark suits come and take tank man away.

The column grinds slowly on.

When Does Local Resistance Succeed?

It would have been an unknown incident except for the newsmen in the hotel with the telephoto lenses. Pictures of violence from the previous days were just making their way into Western newspapers and television, so little attention was paid to the exact sequence of when things happened. A famous photo showed bicycles crushed in a street where fighting had taken place—and in the absence of photos of actual bodies, these were taken as emblems of how the revolution had been crushed. It was easy to conjure up a scenario of tanks rolling over a crowd of demonstrators. And then, in the midst of this—the heroic image of the man who stopped the tank column. All was not lost: the human individual still prevails.

We are living in the realm of symbolism here, not in the realm of history. Never mind that no one stopped the tanks, or more likely the trucks that rolled over the bicycles and carried troops into the streets where fighting had taken place 36 hours earlier.

It carries a nice message, although only through the more careful retrospect of micro-sociology do we actually see what it is: the violent confrontations between crowds and army on June 3 and the early hours of June 4—confrontations in which violence was used on both sides—did not stop the army. But at the right moment, approached with the tools of non-violence, the army was stopped.

Micro-sociology, above all, attempts to be realistic.

Violence cannot be stopped everywhere. Sometimes force rolls on and crushes everything in its path. But—sometimes violence is stopped. It happens locally, and by persons acting in local conditions.

This is something to build on. What are those conditions?

Comparisons: Pockets of Successful Non-violent Peace-making in Riots

Turn now to the work of Dr. Anne Nassauer, of the Free University, Berlin. Using videos posted on-line from mobile phone cameras, plus GPS maps of streets, charting time-lines from police radio traffic, in short with the whole array of tools now available, she reconstructs protest demonstrations in Germany and the US.

With this cutting-edge array of media technology, she is able to reconstruct the micro-history of protests and to pinpoint just when and where a demonstration will turn violent.

On the whole, I should mention, Nassauer finds that most demos stay peaceful, and their peacefulness can prevail even when militant protestors announce in advance that they will use force; or indeed, when police announce a tough crack-down-on-everything policy.

That is to say, whether a protest turns violent or not depends on local and emergent conditions; violence-threatening events can end up peaceful, and peaceful demos can turn violent.

I will not try to summarize here Nassauer’s findings of the several pathways that lead to violence. Let us concentrate on a single point: if violence has already broken out, nevertheless all is not lost. It is not too late to stop the violence—not everywhere, but locally, at the place where human individuals use the right techniques.

What are those techniques?

When the police surge forward and the crowd starts running and ducking, people are likely to be beaten.

Photos often show clusters of police or soldiers, attacking anyone in their path—swinging clubs at women, old people, news reporters, anyone.

Minsk demonstration 2006

Katmandu 2006

This is an emotional rush by the police, which I have called Forward Panic.

It is like the crowd contagion of running away, except in this case forces that have been pent up by confrontational tension run forward into the vacuum left by a sudden weakness on the other side. It is an adrenaline surge that has been kept in suspense, suddenly released into action. That is why the police go out of control, swinging at anything in their path. It is important to see that this is a reciprocal emotion—the crowd running away is the counterpart of the police running forward, the display of emotional weakness feeding the surge of dominance of the attackers. In Nassauer’s data, she often finds that when one cop at the front swings at a target—it may be a person who has stumbled and fallen to the ground—the cops just behind will also swing at the same target. One policeman’s attack leads others to repeat the attack.

But—and here is the good news—Nassauer also finds that these attacks can be stopped locally.

When an individual stands still, directly facing the police, and calls out in a strong, clear voice:

“We are peaceful. What about you?” – or words to that effect, the attack almost always stops.

This does not mean that the riot as a whole can be stopped in this way.

There can be hundreds or thousands of persons spread out over a considerable space. Violence in a riot is not like one huge rugby scrum, not like huge battle-lines of ancient phalanxes, but a series of little clusters of violence here and there.

Each one of these clusters may be checked, could be rendered no longer violent, by the right local action.

To repeat: the details are important. The peace-making person must stand still, no longer moving. When almost everyone’s back is turned, he or she stands in direct eye contact with the on-coming forces. And the voice must be clear and steady, neither threatening nor fearful.

Especially important is not to scream.

Someone in the crowd, in the fear or rage of being attacked, can cry out the identical words:

“WE ARE PEACEFUL!! WHAT ABOUT YOU!!” but in this case it will not work.

The police perceive and feel the crowd as being out of control. To scream at the police does not correct this impression, but reinforces it.

Screaming is an expression of being out of control; and that is precisely the problem with the interactional situation. Tension and fear pervades everything, and the violence is coming out of the situation of one-sided emotional dominance by the police. The victim who screams does nothing to change the emotional field. It is the strong, calm tone that changes it, back towards local equilibrium, where the violence stops.

A similar technique can work when it is not a confrontation of police (or soldiers) versus a protesting crowd, but a violent attack by one crowd upon another. David Sorge, in research at the University of Pennsylvania (2014), shows that in an incident of communal violence in India, this technique stopped the violence in a specific location.

The individual under attack was a peace-maker, a citizen who had stood up in a town meeting the day before, to urge the Hindu populace not to pay attention to rumours and not to attack the local Muslims.

As often happens in the early phases of communal violence, the peace-maker became targeted as a traitor. A crowd gathered in front of his house and pelted it with stones, the usual preliminary to an attack. But the peace-maker came out of the front of his house carrying a chair. Before anyone could attack him—there is usually a time-lag of shouting before someone starts the personal assault—he stood up on the chair and started to make a speech in a loud voice. The crowd quieted down and eventually dispersed.

Notice the details. He stood up above the crowd, where he could be seen. He met them face to face. For the members of a violent crowd, usually the target is someone anonymous up there behind all the surging bodies; for the few in the front with clearer visibility, someone cringing, showing weakness and fear, usually cowering, hiding their face, or knocked to the ground where all we can see is their side and back. Standing up in a prominent position, in this instance the peace-maker remained a human individual. He spoke in a loud, strong voice, not in anger, but resolutely.

He spoke to them as individuals, and took apart the collective emotion of the crowd, where each relies on the others to carry out acts of violence that ordinarily would outrage our moral sensibilities.

Again, we must recognize, it was a local solution only.

The riot as a whole was not stopped. The crowd moved elsewhere, where emotional dominance was easier to establish. Nevertheless, this is a hopeful sign. The whole pattern of a riot consists in all its local parts; and the more of these parts that can be stopped, the less damage it does.

Practical Advice in Violent Crowd Situations

--Don't turn your back.

--In a situation of violent threat, don't hide your face.

--Don't run away in panic.

--Above all, don't fall down.

That is to say: your eyes and your face are your strongest weapon of defense.

--Keep up a clear confrontation with a potential attacker. But don't raise the level of tension; don't scream; don't make further threats; just keep it steady as you can.

--Don't get isolated as a single individual surrounded by a cluster of about half a dozen attackers. This is the configuration in photos where persons are badly beaten. Try to stay with at least a small cluster of your own side, but not in the panicky flight mode.

I should add that this advice is for non-violent participants. It is unclear whether these techniques will work if you are throwing rocks or fire-bombs, or engaging in other kinds of violence.

This advice is drawn from research on the micro-sociology of riots. Does it work in other kinds of threatening situations, both more organized or macro-structured violence such as massacres and war, and in more individualized confrontations like street fights?

Our field of research has much more to do in examining all these types. But so far, the results are optimistic. The desk clerk in the Atlanta school on August 20, 2013 [http://edition.cnn.com/2013/08/21/us/georgia-school-gunshots/] who calmed down an armed man threatening a rampage shooting shows that even the most dangerous situations may be defused. Research colleagues have told me they have walked safely through a violent riot in Tehran, by keeping in mind what emotional tone they were projecting in their body language, playing neither attacker nor victim.

Stefan Klusemann's research (2010, 2012), on the tipping-points to genocidal ethnic cleansing in Bosnia, and in Rwanda, shows that even in the midst of a mass-murder campaign, there are micro-situational stumbling blocks, and threatened victims sometimes escape by a timely show of emotional resoluteness.

Lowering the Tension: Putting the Situation Back in Emotional Equilibrium

When the confrontation is one-on-one, the prospects are especially optimistic that violence can be avoided.

We are accumulating a significant amount of data on such situations, and two patterns stand out.

First: The audience has an important effect.

Public fights rarely get very far without audience support.

Most angry arguments stay at the level of bluster and insult, unless the audience shows that it wants them to fight. The audience that cheers and urges them on will almost always get a prolonged fight. A neutral or uneasy audience, standing at a distance, usually results in a brief fight without much damage. And when an audience (or part of it) tries to intervene, it is almost always successful in stopping a fight. This pattern is shown in my comparison of fight incidents with different audience reactions (Collins 2008); in British research using CCTV videos of fights in pubs (Levine et al. 2011); and studies of what network relationships result in successful third-party interventions (Phillips and Cooney 2005).

There are limits to this pattern. It applies to arguments and fights in public, but not to domestic violence, which often takes place without much of an audience. (But there is ongoing research here, too, investigating the effects of indoor audiences that may be present.)

Since domestic violence is a considerable portion of small-scale violence, that is a serious limitation on our optimistic news. On the other hand, the following point applies both to domestic and public violence:

Second: Small-scale conflict and violence peters out when the emotional field is in equilibrium.

That means: when both sides are showing the same amount of emotional energy, the same degree of bodily agitation, the same emotional intensity.

This equilibrating effect can take place at any level of intensity, as long as both sides are evenly matched.

Research on mobile phone videos of street fights (Jackson-Jacobs) shows that even fights that have already started (and where the audience is distant and neutral) tend to wind down after both sides have thrown a few blows. On the whole, evenly matched fights do not do very much damage; the high level of adrenaline arousal makes fighters sloppy and incompetent, and Jackson-Jacobs’s videos show that fighters who have thrown a few wild punches tend to let their swings carry them out of range, where the fight devolves into threats, and eventually into mutual disengagement.

After all, in an honor fight, it is showing one’s willingness to fight that counts, not the result.

Let me conclude with a favorite photograph.

It was taken in Jerusalem during the height of the second Intifada, and it shows an Israeli soldier and a Palestinian political leader locked in angry conflict.

Jerusalem stalemate 2000

The news story tells there was no violence at this flash point of sacred territories that day. The angry confrontation wound down and ended. How?

The photo shows the two men exactly mirroring each other. Reading the facial expressions of emotion using Ekman’s methods, we see them displaying anger, and in an identical manner: both have the hard, staring eyes, the clenched eyebrows with the vertical line between them, the square, shouting mouth. As is characteristic of angry talk, both are vocalizing at the same time, not listening to what the other has to say. Their faces, like their bodies, are tensed like muscles about to strike. But they do not strike.

They are in equilibrium at a high level of intensity.

From similar incidents observed over a few moments of time, we can surmise that they eventually become tired of the situation. No one else in the crowd is taking up their level of intensity; they are doing all the audience’s work for it. It is boring to say the same thing over and over again, getting no intelligible response. They will deescalate, going down the scale of emotional intensity simultaneously, keeping in equilibrium step by step.

They will become bored. And in situations of conflict, boredom is the pathway to peace.

The Tank and the Human Face

Micro-sociology delivers some good news. Some kinds of violence we are able to mitigate. This is on the micro-level, face-to-face with a potential attacker.

Another level is harder to handle, or at least it will take another approach. This is violence at the level of the organization or bureaucracy.

If we take the column of tanks as a symbol of the hundreds of military vehicles and thousands of soldiers in Beijing, we are seeing the public face of an organizational network stretching far off into the distance. Orders to advance are given somewhere else, by a face we never see, a voice we never hear. Techniques of human face-to-face confrontation will not work here.

This is not to claim that the distant strategists are purely rational and coolly calculating. Their decisions are made in an atmosphere of emotions pervading the network of organized power, in counterpoint to the waves of emotions among the crowds who come into the streets over a period of weeks. It may be a long-distance chess game, but one played in shifting moods of anger or fear, confidence or deflation, righteousness and revenge-- and occasionally magnanimity.

The macro-level pattern is one of counter-escalation and de-escalation, and it has its time-dynamics that spread over weeks and months (Collins 2012).

There may be grounds for optimism about what sorts of processes can head off violence on the macro level too, although in some phases there is a kind of steam-roller momentum that is extremely dangerous once it gets rolling. We are learning about these kinds of time-dynamics, hopefully adding more tactics to the toolbox for peace.

In the meantime, as individuals in threatening situations, we can do our bit.

Appendix: How to regain calm when your heart is pounding

It's all very well to say, turn and face your attacker, call out in a firm strong voice, don't run and don't panic. But how do you manage to do this if your adrenaline is pumping and your heart is going 160 beats per minute?

There is a technique that will bring your heartbeat down, and with it, the panicky effects of adrenaline and the inability to control your voice. A useful version is described by Dave Grossman, a psychologist formerly with the US Army:

Repeat the following sequence of breathing:

-- breathe in slowly, counting 4 seconds (one-alligator, two-alligator, three-alligator, four-alligator)

-- hold your breath for 4 seconds (counting...)

-- breathe out slowly, counting 4 seconds

-- hold your breath out (lungs empty), counting 4 seconds

do it again:

-- breathe in slowly, 4 seconds

-- etc.

as many times as you need, until you get your breathing and heart rate under control.

Remember the details. This is not the simple cliché, take a deep breath. It is the rhythm you are after, the timing of how long each breath and holding period is. Your goal is to change your body rhythm. And after you accomplish that, to change the rhythm of the person confronting you.
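For anyone who wants to practice the rhythm against a clock, here is a minimal sketch of a timer for the 4-4-4-4 sequence described above, written in Python. The phase names and the 4-second count come from the text; the console prompts and the default of four cycles are illustrative assumptions, not part of Grossman's protocol.

import time

# A sketch of a timer for the 4-4-4-4 breathing rhythm described above.
# The phase names and 4-second count follow the text; the prompts and
# default number of cycles are illustrative assumptions.

PHASES = ["Breathe in", "Hold (lungs full)", "Breathe out", "Hold (lungs empty)"]
SECONDS_PER_PHASE = 4

def breathing_cycles(cycles=4):
    """Walk through the four phases, counting out loud on the console."""
    for cycle in range(1, cycles + 1):
        print(f"Cycle {cycle}")
        for phase in PHASES:
            print(f"  {phase}...")
            for count in range(1, SECONDS_PER_PHASE + 1):
                print(f"    {count}-alligator")
                time.sleep(1)  # one count per second, as in the text

if __name__ == "__main__":
    breathing_cycles()

Repeat the run, or raise the cycles argument, until your heart rate comes down.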

 


References

Randall Collins. 2008. Violence: A Micro-sociological Theory.

Randall Collins. 2012. “C-Escalation and D-escalation: A Theory of the Time-Dynamics of Conflict.” American Sociological Review 77: 1-20.

Paul Ekman. 1985. Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage.

Paul Ekman and Wallace V. Friesen. 1975. Unmasking the Face.

Dave Grossman. 2004. On Combat. The psychology and physiology of deadly combat in war and peace.

Curtis Jackson-Jacobs. 2011. "Social Organization in Violence." Research in progress, UCLA.

Stefan Klusemann. 2010. “Micro-situational antecedents of violent atrocity.” Sociological Forum 25:272-295.

Stefan Klusemann. 2012. "Massacres as process: A micro-sociological theory of internal patterns of mass atrocities." European Journal of Criminology 9: 438-480.

Mark Levine, Paul J. Taylor, and Rachel Best. 2011. “Third Parties, Violence, and Conflict Resolution.” Psychological Science 22: 406-412.

Anne Nassauer. 2013. Violence in demonstrations. PhD dissertation, Berlin Graduate School of Social Sciences.

Scott Phillips and Mark Cooney. 2005. "Aiding Peace, Abetting Violence: Third Parties and the Management of Conflict." American Sociological Review 70: 334-354.

JESUS IN INTERACTION: THE MICRO-SOCIOLOGY OF CHARISMA

What is charisma when you see it? Charismatic leaders are among the most famous persons of past history and today. What was it like to meet a charismatic leader? You fell under their spell. How did they do it?

One of the best-described of all charismatic leaders is Jesus. About 90 face-to-face encounters with Jesus are described in the four gospels of the New Testament.

Notice what happens: Jesus is sitting on the ground, teaching to a crowd in the outer courtyard of the temple at Jerusalem. The Pharisees, righteous upholders of traditional ritual and law, haul before him a woman taken in adultery. They make her stand in front of the crowd and say to Jesus: “Teacher, this woman was caught in the act of adultery. The Law commands us to stone her to death. What do you say?”

The text goes on that Jesus does not look up at them, but continues to write in the dirt with his finger. This would not be unusual; Archimedes wrote geometric figures in the dust, and in the absence of ready writing materials the ground would serve as a chalkboard. The point is that Jesus does not reply right away; he lets them stew in their uneasiness.

Finally he looks up and says: “Let whoever is without sin cast the first stone.” And he looks down and continues writing in the dust.

Minutes go by. One by one, the crowd starts to slip away, the older ones first-- the young hotheads being the ones who do the stoning, as in the most primitive parts of the Middle East today.

Finally Jesus is left with the woman standing before him. Jesus straightens up and asks her: “Woman, where are they? Has no one condemned you?”

She answers: “No one.” “Then neither do I condemn you,” Jesus says. “Go now and sin no more.” (John 8: 1-11)

Jesus is a master of timing. He does not allow people to force him into their rhythm, their definition of the situation. He perceives what they are attempting to do, the intention beyond the words. And he makes them shift their ground.

Hence the two periods of tension-filled silence; first when he will not directly answer; second when he looks down again at his writing after telling them who should cast the first stone. He does not allow the encounter to focus on himself against the Pharisees. He knows they are testing him, trying to make him say something in violation of the law; or else back down in front of his followers. Instead Jesus throws it back on their own consciences, their inner reflections about the woman they are going to kill. He individualizes the crowd, making them drift off one by one, breaking up the mob mentality.

The Micro-sociology of Charisma

Jesus is a charismatic leader, indeed the archetype of charisma.

Although sociologists tend to treat charisma as an abstraction, it is observable in everyday life. We are viewing the elements of it, in the encounters of Jesus with the people around him.

I will focus on encounters that are realistic in every respect, that do not involve miracles-- about two-thirds of all the incidents reported. Since miracles are one of the things that made Jesus famous, and that caused controversies right from the outset, some miracles will be analyzed. I will do this mostly at the end.

(1) Jesus always wins an encounter

(2) Jesus is quick and absolutely decisive

(3) Jesus always does something unexpected

(4) Jesus knows what the other is intending

(5) Jesus is master of the crowd

(6) Jesus’ down moments

(7) Victory through suffering, transformation through altruism

(Appendix) The interactional context of miracles

 

(1) Jesus always wins an encounter

When Jesus was teaching in the temple courts, the chief priests and elders came to him. “By what authority are you doing these things?” they asked. “And who gave you this authority?”

Jesus replied: “I will also ask you a question. If you answer me, I will tell you by what authority I am doing these things. John’s baptism-- where did it come from? Was it from heaven, or of human origin?”

They discussed it among themselves and said, “If we say, ‘From heaven,’ he will ask, ‘Then why don’t you believe him?’ But if we say, ‘Of human origin,’ the people will stone us, because they are persuaded that John was a prophet.”

So they answered, “We don’t know where it was from.”

Jesus said, “Neither will I tell you by what authority I am doing these things.” (Matthew 21: 23-27; Luke 20: 1-8) He proceeded to tell the crowd a parable comparing two sons who were true or false to their father. Jesus holds the floor, and his enemies did not dare to have him arrested, though they knew the parable was about themselves.

Jesus never lets anyone determine the conversational sequence. He answers questions with questions, putting the interlocutor on the defensive. An example, from early in his career of preaching around Galilee:

Jesus has been invited to dinner at the house of a Pharisee. A prostitute comes in and falls at his feet, wets his feet with her tears, kisses them and pours perfume on them. The Pharisee said to himself, “If this man is a prophet, he would know what kind of woman is touching him-- that she is a sinner.”

Jesus, reading his thoughts, said to him: “I have something to tell you.” “Tell me,” he said. Jesus proceeded to tell a story about two men who owed money, neither of whom could repay the moneylender. He forgives them both, the one who owes 500 and the one who owes 50. Jesus asked: “Which of the two will love him more?” “The one who had the bigger debt forgiven,” the Pharisee replied. “You are correct,” Jesus said. “Do you see this woman? You did not give me water for my feet, but this woman wet them with her tears and dried them with her hair... Therefore her many sins have been forgiven-- as her great love has shown.”

The other guests began to say among themselves, “Who is this who even forgives sins?” Jesus said to the woman, “Your faith has saved you; go in peace.” (Luke 7: 36-50)

Silencing the opposition

Jesus always gets the last word. Not just that he is good at repartee, topping everyone else; he doesn’t play verbal games, but converses on the most serious level. What it means to win the argument is evident to all, for audience and interlocutor are amazed, astounded, astonished: they cannot say another word.

He takes control of the conversational rhythm. For a micro-sociologist, this is no minor thing; it is in the rhythms of conversation that solidarity is manifested, or alienation, or anger. Conversations with Jesus end in full stop: wordless submission.

His debate with the Sadducees, another religious sect, ends when “no one dared ask him any more questions.” (Luke 20: 40)

When a teacher of the Law asks him which is the most important commandment, Jesus answers, and the teacher repeats: “Well said, teacher, you are right in saying, to love God with all your heart, and to love your neighbour as yourself is more important than all burnt offerings and sacrifices.” Jesus said to him, “You are not far from the kingdom of God.” And from then on, no one dared ask him any more questions. (Mark 12: 28-34)

A famous argument ends the same way: The priests send spies, hoping to catch Jesus in saying something so that they might hand him over to the Roman governor. So they asked: “Is it right for us to pay taxes to Caesar or not?”

Jesus, knowing their evil intent, said to them, “Show me the coin used to pay taxes.” When they brought it, he said, “Whose image is on it?” “Caesar’s,” they replied. “Then give to Caesar what is Caesar’s, and to God what is God’s.”

And they were astonished by his answer, and were silent. (Luke 20: 19-26; Matthew 22: 15-22)

As with the woman taken in adultery, again there is an attempted trap; a turning of attention while everyone waits; and a question-and-reply sequence that silences everyone. Jesus does not just preach. It is at moments like this, drawing the interlocutor into his rhythm, that he takes charge.

(2) Jesus is quick and absolutely decisive

As his mission is taking off in Galilee, followers flock to hear him. Some he invites to come with him. It is a life-changing decision.

A man said to him: “Lord, first let me go and bury my father.” Jesus replied: “Follow me, and let the dead bury their dead.”

It is a shocking demand. In a ritually pious society, there is nothing more important than burying your father. Jesus demands a complete break with existing social forms; those who follow them, he implies, are dead in spirit.

To another would-be recruit he underlines it: “No one who puts a hand to the plow and looks back is fit for service in the kingdom of God.” (Luke 9: 57-62; Matthew 8: 19-22)

Charisma is total dedication, having it and imparting it to others. There is nothing else by which to value it. Either do it now, or don’t bother.

This is how Jesus recruits his inner circle of disciples. He is walking beside the Sea of Galilee, and sees Simon and Andrew casting their net into the lake. “Come, follow me,” Jesus said, “and I will make you fishers of men.” At once they left their nets and followed him. A little further on, he sees James and his brother John preparing their nets. Without delay he called them, and they left their father in the boat and followed him. (Mark 1: 16-20; Matthew 4: 18-22. Luke 5: 1-11 gives a longer story about crowds pressing so closely that Jesus preaches from a boat, but it ends with the same abrupt conversion; here the influence of the crowd is more visible than in the truncated versions.)

Jesus recruits not from the eminent, but from the humble and the disreputable. Among the latter are the tax collectors, hated agents of the Roman overlord. There is the same abrupt conversion: As Jesus is passing along the lake with a large crowd following, he sees a man sitting at the tax collector’s booth. “Follow me,” Jesus said, and the man got up, left everything, and followed him.

They have a banquet at his house (Luke 5: 27-32; Mark 2: 13-17; Matthew 9: 9-13), with many tax collectors and others eating with the disciples. The Pharisees complained, “Why do you eat and drink with tax collectors and sinners?” Jesus replied, “It is not the healthy who need a doctor, but the sick. I have not come to call the righteous, but sinners to repentance.”

Jesus perceives who will make a good recruit, and who will not.

(3) Jesus always does something unexpected

Being with Jesus is exciting and energizing, among other reasons because he is always surprising. He does not do or say just what other people expect; even when they regard him as a prophet and a miracle-worker, there is always something else.

Pharisees and teachers of the law who had come from Jerusalem gathered around Jesus and saw some of his disciples eating food with hands that were defiled. They asked Jesus, “Why do your disciples break the tradition of the elders? They don’t wash their hands before they eat!”

Jesus replied, “You Pharisees clean the outside of the cup and dish, but inside you are full of greed and wickedness. You foolish people! Did not the one who made the outside make the inside also? But as to what is inside you-- be generous to the poor, and everything will be clean for you.”

He goes on with further admonitions, and his opponents accuse Jesus of insulting them. Jesus called the crowd to him to hear. The disciples came to him privately and asked, “Do you know that the Pharisees were offended when they heard this?” Jesus replied: “Leave them; they are blind guides. If the blind lead the blind, both will fall into a pit.”

Peter said, “Explain the parable to us.” “Are you still so dull?” Jesus asked them. “Don’t you see that whatever enters the mouth goes into the stomach and then out of the body? ... But out of the heart come evil thoughts-- murder, adultery, sexual immorality, theft, false testimony, slander. These are what defile a person; but eating with unwashed hands does not defile them.” (Matthew 15: 1-20; Mark 7: 1-23; Luke 11: 37-54)

Ritual purification is what concerns the pious and respectable of the time; Jesus meets an accusation with a stronger one. Even his closest disciples do not escape the jolt. “Are you still so dull? Don’t you see?” Everyone has to be on their toes when they are around this man.

How does Jesus generate an unending stream of jolts? He has a program: mere ritual, and the righteous superiority that goes with it, is to be brought down and replaced by humane altruism and spiritual dedication. When his encounters involve miracles, or rather people’s reactions to them, the program bursts expectations: On a Sabbath Jesus was teaching in a synagogue, and a woman was there who had been crippled for 18 years, bent over and unable to straighten up. Jesus called her forward and said to her, “Woman, you are set free from your infirmity.” Then he put his hands on her, and immediately she straightened up and praised God.

Indignant that Jesus had healed on the Sabbath, the synagogue leader said to the people, “There are six days for work. So come and be healed on those days, not on the Sabbath.”

Jesus answered him, “You hypocrites! Doesn’t each of you on the Sabbath untie your ox or donkey from the stall and lead it out to give it water? Then should not this woman, a daughter of Abraham... be set free on the Sabbath from what bound her?” When he said this, all his opponents were humiliated, but the people were delighted. (Luke 13: 10-17. Similar conflicts about healing on the Sabbath are in Luke 6: 6-11; Matthew 12: 1-14; and Luke 14: 1-6, which ends by silencing the opposition.)

It is not the miracle that is at issue; what makes the greater impression on the crowd is Jesus’ triumph over the ritualists. It is also what leads to the escalating conflict with religious authorities, and ultimately to his crucifixion.

Nearer the climax, Jesus enters Jerusalem with a crowd of his followers who have traveled with him from Galilee in the north, picking up enthusiastic converts along the way. He enters Jerusalem in a triumphant procession, greeted by crowds waving palm fronds. Next morning he goes to the temple.

In the temple courts he found people selling cattle, sheep and doves, and others sitting at tables exchanging money. So he made a whip out of cords, and drove all from the temple courts, both sheep and cattle; he scattered the coins of the money changers and overturned their tables. To those who sold doves he said, “Get these out of here! Stop turning my Father’s house into a market!” (Another text quotes him:) “Is it not written, ‘My house will be called a house of prayer for all nations’? But you have made it ‘a den of thieves.’”

The chief priests and teachers of the law heard this and began looking for a way to kill him, for they feared him, because the whole crowd was amazed at his teaching. (John 2: 13-16; Mark 11: 15-19)

One text gives a tell-tale detail: Immediately after entering Jerusalem in the palm-waving crowd, Jesus went into the temple courts.

He looked around at everything, but since it was already late, he went out to the nearby village of Bethany with the Twelve. (Mark 11: 11)

Jesus clearly intends to make a big scene; he is going to do it at the height of the business day, not in the slack time of late afternoon when the stalls are almost empty. Jesus always shows strategic sense.

Why are the animals and the money changers in the temple in the first place? Because of ritualism; the animals are there to be bought as burnt sacrifices, and the money changers are there to accommodate the crowds of visitors from distant places. But it was also the case, throughout the ancient world and in the medieval world as well, that temples and churches were primary places of business, open spaces for crowds, idlers, speculators, merchants of all sorts. In Babylon and elsewhere the temples themselves acted as merchants and bankers (and may have originated such enterprises); in Phoenicia and the coastal cities whose sins were anathema to the Old Testament prophets, temples rented out prostitutes to travelers; Greek temples collected treasure in the form of bronze offerings and subsequently became stores of gold. Jesus no doubt had all this in mind when he set out to cleanse the temple of secular transactions corrupting its pure religious purpose.

Jesus is not just shocking on the large public scene; he also continues to upend his own disciples’ expectations. In seclusion at Bethany, he is reclining at the dinner table when a woman came with an alabaster jar of expensive perfume. She broke the jar and poured the perfume on his head.

Some of the disciples said indignantly to each other, “Why this waste of perfume? It could have been sold for more than a year’s wages and the money given to the poor.” And they rebuked her harshly.

“Leave her alone,” Jesus said. “She has done a beautiful thing to me. The poor you will always have with you, and you can help them any time you want.

But you will not always have me. She did what she could. She poured perfume on my body beforehand to prepare me for my funeral.” (Mark 14: 1-10; Matthew 26: 6-13)

A double jolt. His disciples by now have understood the message about the selfishness of the rich and charity to the poor. But there are circumstances and momentous occasions that transcend even the great doctrine of love thy neighbour. Jesus is zen-like in his unexpectedness. There is a second jolt, and his disciples do not quite get it. Jesus knows he is going to be crucified. He has the political sense to see where the confrontation is headed; in this he is ahead of his followers, who only see his power.

(4) Jesus knows what the other is intending

Jesus is an intelligent observer of the people around him. He does not have to be a magical mind-reader. He is highly focused on everyone’s moral and social stance, and sees it in the immediate moment. Charismatic people are generally like that; Jesus does it to a superlative degree.

He perceives not just what people are saying, but how they are saying it; as a socio-linguist might put it, speech acts speak louder than words.

So it is not surprising that Jesus can say to his disciples at the last supper, one of you will betray me, no doubt noting the furtive and forced looks of Judas Iscariot. Or that he can say to Peter, his most stalwart follower, before the cock crows you will have denied me three times-- knowing how even strong, blustering men can be swayed when the mood of the crowd goes against them in the atmosphere of a lynch mob. (Mark 14: 17-31; Matthew 26: 20-35; John 13: 20-38)

Most of these examples have an element of Jesus reading the intentions of his questioners, as when they craftily try to trap him into something he can be held liable for. Consider some cases where the situation is not so fraught but he knows what is going on: Invited to the house of a prominent Pharisee, Jesus noticed how the guests vied for the places of honor at the table.

He told them a parable: “When someone invites you to a wedding feast, do not take the place of honor, for a person more distinguished than yourself may have been invited... and, humiliated, you will have to move to the least important place. But when you are invited, take the lowest place, so that when your host comes, he will say to you, ‘Friend, move up to a better place.’ Then you will be honored in the presence of all the other guests. For those who exalt themselves will be humbled, and those who humble themselves will be exalted.” Then Jesus said to the host, “...When you give a banquet, invite the poor, the crippled, the lame, the blind, and you will be blessed. Although they cannot repay you-- as your relatives and rich friends would by inviting you back-- you will be repaid at the resurrection of the righteous.” (Luke 14: 7-16)

It is an occasion to deliver a sermon, but Jesus starts it with the situation they are in, the unspoken but none-too-subtle scramble for best seats at the table. And he makes a sociological point about the status reciprocity involved in the etiquette of exchanging invitations.

Jesus sees what matters to people. A rich young man, inquiring sincerely about his religious duties, ran up to Jesus and fell on his knees. “Good teacher,” he asked, “what must I do to inherit eternal life?”

“Why do you call me good?” Jesus asked, as usual answering a question with a question.

“No one is good-- except God alone. You know the commandments: ‘You shall not murder, nor commit adultery, nor steal, nor give false testimony, nor defraud; honor your father and mother.’”

“Teacher,” he declared, “all these I have kept since I was a boy.” Jesus looked at him. “One thing you lack,” he said. “Go, and sell everything you have and give to the poor, and you will have treasure in heaven. Then come and follow me.”

At this the man’s face fell. He went away sad, because he had great riches. (Mark 10: 17-22; Luke 18: 18-30; Matthew 19: 16-26)

Jesus knows who to recruit, who is ready for instantaneous commitment, by watching them. As his crowd of followers passed through Jericho, a chief tax collector wanted to see Jesus, but because he was short he could not see over the heads of the crowd. So he ran ahead and climbed a tree. When Jesus reached the spot, he looked up and said to him, “Zacchaeus, come down immediately. I must stay at your house today.” People began to mutter, “He has gone to be the guest of a sinner.” But Zacchaeus said to Jesus, “Here and now I give half my possessions to the poor, and if I have cheated anyone, I will pay him back four times the amount.” (Luke 19: 1-10)

This is the theme again, recruiting among sinners. But Jesus is a practical leader as well as an inspirational one. He normally sends out forerunners to line up volunteers to lodge and feed his traveling followers (Luke 10: 1-16; Matthew 26: 17-19); in this case, he has picked out a rich man (class distinctions would have been very visible), and someone who is notably eager to see him. No doubt Jesus’ perceptiveness enables him to pick out early disciples like Peter and the other fishermen.

Jesus’ perceptiveness helps explain why he dominates his encounters. He surprises interlocutors by jumping from their words, not to what conventionally follows in the conversation, but straight to what they are really about, skipping the intermediate stages.

(5) Jesus is master of the crowd

The important events of Jesus’ life mainly take place in crowds. Of 93 distinct incidents of Jesus’ adult life described in the gospels, there are at most 5 occasions when he is with three or fewer other people.*

When he is outdoors, he is almost always surrounded by crowds; in the early part of his mission in Galilee he periodically escapes the crowds by going out on boats and climbing remote mountainsides in order to pray in solitude. The crowds increase and follow him wherever he goes. Indoors, 6 incidents take place at banquets, including an overflow wedding party; 3 in synagogues; 2 are hearings before public authorities. There are also 9 occasions when he is backstage with his disciples, although often there is a crowd outside and people get in to see him.

Altogether, for Jesus a relatively intimate gathering was somewhat more than a dozen people, and most of his famous interactions took place with twenties up through hundreds or even several thousands of people amidst whom he was the center of attention.

*John 1: 35-42; two of John the Baptist’s disciples seek out Jesus after John has pointed him out in the crowd of the Baptist’s own followers, and the two spend the afternoon visiting Jesus where he stays. This is before Jesus is baptized and starts his own mission.

Luke 9: 28-30; Matthew 17: 1-13; Mark 9: 2-13; Jesus goes up a mountain with three disciples to pray, where they see him transfigured.

John 4: 31-42; Jesus meets a Samaritan woman at a well while his disciples have gone into town for provisions; they have a one-on-one conversation, and many in her village become believers that he is the Savior of the world, among other reasons because he has broken the taboo on Jews associating with Samaritans.

John 3: 1-21; Jesus is visited at night by a Pharisee who is a member of the ruling council; no one else is mentioned as present, although the conversation leads to some of the most famous Bible passages, “For God so loved the world that he gave his only Son, that whoever believes in him shall not perish but have eternal life.” Presumably someone heard this and wrote it down; not unlikely since Jesus always stayed in a house full of his disciples.

Mark 14: 32-42; Matthew 26: 36-46; Jesus goes with his disciples (the Twelve minus Judas) plus at least some others to pray at Gethsemane. He then takes three close followers, goes a little further into the garden, and prays in anguish while the others fall asleep. This is the one important place in the narration where Jesus is alone, and the one time that he shows anxiety.

I do not count the 40 days he spent praying in the wilderness before beginning his ministry; the only incidents described for this period are not pinned down in time and circumstance and all involve talking with the devil. I will discuss these below in the section on apparitions.

Crowds are a major source of Jesus’ power. There is a constant refrain: “The crowds were amazed at his teaching, because he taught as one who had authority, and not as their teachers of the law.” (Matthew 7: 29) His enemies the high priests are afraid of what his crowd of followers will do if they attack Jesus. As the challenge mounts in Jerusalem on the last and greatest day of the Passover festival, Jesus preaches in the temple courts in a loud voice, “Let anyone who is thirsty come to me and drink.” The crowds are divided on whether he is the Messiah. The temple guards retreat to the chief priests, who ask them, “Why don’t you arrest him?” “No one ever spoke the way this man does,” the guards reply.

“The mob knows nothing of the law,” the Pharisees retort, “there is a curse on them.” (John 7: 37-49)

Judas’ betrayal of Jesus consists in telling the priests when and where Jesus will be alone, so that he can be arrested. Alone, relatively speaking; there are at least a dozen of his followers with him at Gethsemane, but it is for arranging the absence of the crowd that Judas receives his 30 pieces of silver. (Luke 22: 2-6) The signal is to mark Jesus with a kiss, so the guards will know whom to seize in the dark.

Charismatic leaders live on crowds. There is no such thing as a charismatic leader who is not good at inspiring crowds; and the micro-sociologist adds, being super-energized by them in turn. Crowd and leader are parts of a circuit, emotional intensity and rhythmic coordination flowing from one to the other: charisma as high-amp electric current. It is what the Bible, especially in the Book of Acts, calls the holy spirit.

Jesus as archetype of the charismatic leader also shows how a charismatic movement is organized. His life moves in three spheres: crowds; the inner circle of his twelve disciples; and withdrawing into solitude. The third of these, as noted, does not figure much in the narration of important events; but we can surmise, from sociological research on prayer, that he reflects in inner dialogue on what is happening in the outer circles, and forms his resolve as to what he will do next.

The inner circle has a practical aspect and a personal aspect. Jesus recruits his inner disciples, the Twelve, because he wants truly dedicated followers who will accompany him everywhere. That means giving up all outside commitment, leaving occupation, family, home town. It means leaving behind all property, and trusting that supporters will bring them the means of sustenance, day after day. In effect, they are monks, although they are not called that yet. Thus the inner circle depends on the outer circle, the crowds of supporters who not only give their emotion, but also food, lodging, whatever is needed. Jesus is the organizer of a movement, and he directs his lieutenants and delegates tasks to them. Early in his mission, when the crowds are burgeoning, he recognizes that “the harvest is plentiful but the workers are few” and sends out the Twelve to preach and work miracles on their own, accelerating the cascade of still more followers and supporters. (Luke 9: 1-6; Mark 6: 7-13; Matthew 9: 35-38; 10: 1-20.)

When Jesus travels, it is not just with the Twelve, but with a larger crowd (who are also called disciples), somewhere between casual supporters and his inner circle. These include some wealthy women-- an ex-prostitute Mary Magdalene, women who have been cured by Jesus, the wife of a manager of King Herod’s household-- and they help defray expenses with their money. (Luke 8: 1-3) Even the Twelve have a treasurer: Judas Iscariot, pointing up the ambiguity of money for a movement of self-chosen poverty.

With big crowds to take care of, Jesus expands his logistics staff to 70. (Luke 10:1-16) He concerns himself with whose house they will eat in. Jesus accepts all invitations, even from his enemies the Pharisees; he especially seems to choose tax collectors, since they are both rich and hospitable and recognize their own need of salvation. It is the size of his peripatetic crowds that brings about the need for multiplying loaves and fishes and turning water into wine. Jesus’ crowds are not static, but growing, and this is part of their energy and excitement.

The inner circle is not just his trusted staff. It is also his backstage, where he can speak more intimately and discuss his concerns and plans.

“Who do people say I am?” Jesus asks the Twelve, when the movement is taking off. They replied, “Some say John the Baptist; others say Elijah; and still others, one of the prophets.”

“But what do you say?” Jesus asked. “Who do you say I am?” Peter answered, “You are the Messiah.” Jesus warned them not to tell anyone. Jesus goes on to tell them that the Son of Man will be rejected by the chief priests, that he must be killed and rise again in three days. Peter took him aside and began to rebuke him. Jesus turned and looked at the rest of the disciples. “Get thee behind me, Satan!” he said. “Your mind is not on the concerns of God, but merely human concerns.” (Mark 8: 27-33; Matthew 16: 13-23)

There is a certain amount of jostling over who are the greatest of the disciples, the ones closest to Jesus. Jesus always rebukes this; there is to be no intimate backstage behind the privacy shared by the Twelve. Jesus’ charisma is not a show put on for the crowds with the help of his staff; he is charismatic all the time, in the backstage as well. Jesus loves and is loved, but he has no special friends. No one understands what he is really doing until after he is dead.

Jesus is famous for speaking in parables. Especially when referring to himself, he uses figurative expressions, such as "the bread of life," "the light of the world," "the shepherd and his sheep." The parables mark a clear dividing line.

He uses parables when he is speaking to the crowds, and especially to potential enemies such as the Pharisees. Their meaning, apparently, did not easily come through; but audiences are generally impressed by them-- amazed and struck speechless, among other reasons because they exemplify the clever style of talking that deflects questions in unexpected directions. “Whoever has ears to hear, let them hear!” Jesus proclaims. (Mark 4: 9)

His Twelve disciples are not much better at deciphering parables, at least in the earlier part of his mission; but Jesus treats them differently. It is in private among the Twelve that he explains the meaning of parables in ordinary language, telling “the secret of the kingdom of God.” (Mark 4: 10-34; Matthew 13: 34-52; Luke 8: 4-18) They are the privileged in-group, and they know it. Jesus admonishes them from time to time about their pride; but he needs them, too. It is another reason why living with Jesus is bracing. There is an additional circuit of charismatic energy in the inner circle.

But it is the crowds that feed the core of the mission, the preaching and the miraculous signs. As his movement marches on Jerusalem, opposition mobilizes. Now Jesus begins to face crowds that are divided or hostile.

The crowd begins to accuse him: “You are demon-possessed.” Jesus shoots back: “Stop judging by appearances, but instead judge correctly.” Some of the people of Jerusalem began to ask each other, “Isn’t this the man they are trying to kill? Here he is speaking publicly, and they are not saying a word to him. Have the authorities really concluded he is the Messiah? But we know where this man is from; when the Messiah comes no one will know where he is from.” Jesus cried out, “Yes, you know me, and you know where I am from. I am not here on my own authority, but he who sent me is true. You do not know him, but I know him because I am from him and he sent me.”

At this they tried to seize him, but no one laid a hand on him... Still, many in the crowd believed in him. (John 7: 14-31)

Another encounter: Those who heard his words were again divided. Many of them said, “He is demon-possessed and raving mad. Why listen to him?” But others said: “These are not the sayings of a man possessed by a demon. Can a demon open the eyes of the blind?” (John 10: 19-21)

The struggle shifts to new ground. The festival crowd gathered around him, saying, “How long will you keep us in suspense? If you are the Messiah, tell us plainly.”

Jesus answered, “I did tell you, but you did not believe. The works I do in my Father’s name testify about me, but you do not believe because you are not my sheep. My sheep listen to my voice; I know them, and they follow me. I give them eternal life, and they shall never perish... My Father, who has given them to me, is greater than all; no one can snatch them out of my Father’s hand. I and the Father are one.”

Again his opponents picked up stones to stone him, but Jesus said to them, “I have shown you many good works from the Father. For which of these do you stone me?” “We are not stoning you for any good work,” they replied, “but for blasphemy, because you, a mere man, claim to be God.” Jesus answered them, “...Why do you accuse me of blasphemy because I said, ‘I am God’s Son’? Do not believe me unless I do the works of my Father. But if I do them, even though you do not believe me, believe the works, that you may understand that the Father is in me, and I in the Father.” Again they tried to seize him, but he escaped their grasp. (John 10: 24-42)

Jesus can still arouse this crowd, but he cannot silence it. He does not back off, but becomes increasingly explicit. The metaphors he does use are not effective. The sheep he refers to are his own crowd of loyal followers, and Jesus declares he has given them eternal life-- but not to this hostile crowd of unbelievers. Words no longer convince; the sides declaim stridently against each other. The eloquent phrases of earlier preaching have fallen into cacophony. Nevertheless Jesus still escapes violence. The crowd is never strong enough to dominate him. Only the organized authorities can take him, and that he does not evade.

(6) Jesus’ down moments

Most of the challenges to Jesus’ charisma happen during the showdown in Jerusalem. A revealing occasion happens early, when Jesus visits his hometown Nazareth and preaches in the synagogue. First the crowd is amazed, but then they start to question: Isn’t this the carpenter’s son? Aren’t his mother and brothers and sisters among us? Where did he get these powers he has been displaying in neighbouring towns? When Jesus reads the scroll and says, “Today the scripture is fulfilled in your hearing,” they begin to argue. Jesus retorts: “No prophet is honored in his home town,” and quotes examples of how historic prophets were rejected. The people in the synagogue are furious.

They take him to the edge of town and try to throw him off the brow of a cliff. “But he walked right through the crowd and went his way.” (Luke 4: 14-30; Matthew 13: 53-58) Even here, Jesus can handle hostile crowds.

Including this incident of failure gives confidence in the narrative.

Another personal challenge comes when he performs one of his most famous miracles, bringing back Lazarus from the dead.

Jesus' relationship with Lazarus is described as especially close. He is the brother of the two sisters, Mary and Martha, whose house Jesus liked to stay in; and Lazarus is referred to as "the one you (Jesus) love." Jesus had been staying at their house a few miles outside Jerusalem, a haven at the time when his conflict with the high priests at the temple was escalating. When the message came that Lazarus was sick, Jesus was traveling away from trouble; although his disciples reminded him that the Jerusalem crowd had tried to stone him, he decided to go back. Yet he delayed two days before returning-- apparently planning to wait until Lazarus dies and then perform the miracle of resurrecting him. First he says to his disciples, "Our friend Lazarus has fallen asleep, but I am going to wake him up." When this figure of speech is taken literally, he tells them plainly, "Lazarus is dead, and for your sake I am glad I was not there, so that you may believe."

When he arrives back in Bethany, Lazarus has been dead for four days.

A crowd has come to comfort the sisters. Why were they so popular? No doubt their house was strongly identified with the Jesus movement; and thus there is a big crowd present, as always, when Jesus performs a healing miracle.

But this is the public aspect. For the personal aspect: Each of the two sisters separately comes to meet Jesus, and each says, "If you had been here, my brother would not have died." When Mary, the second sister, says this, and Jesus sees her weeping and the crowd who had come with her also weeping, he is deeply moved. (The King James translation says, "groaning in himself.") "Where have you laid him?" Jesus says. "Come and see," she answers. Then Jesus wept.

They come to the tomb; Jesus has them roll away the stone from the entrance. Again deeply moved, Jesus calls out in a loud voice, "Lazarus, come out!"

For some time afterwards, people come to Bethany to see Lazarus, the man who had been raised from the dead. (John 11: 1-46)

Leaving aside the miracle itself and its symbolism, one thing we see in this episode is Jesus conflicted between his mission-- to demonstrate the power of resurrection-- and his personal feelings for Lazarus and his sisters. Jesus let Lazarus die, by staying away during his sickness, in order to make this demonstration, but in doing so he caused grief to those he loved. At the moment when he confronts their pain (amplified by the weeping of the crowd), Jesus himself weeps. It is the only time in the texts when he weeps. It is a glimpse of himself as a human being, as well as a man on a mission.

Jesus’ next moment of human weakness comes in the garden at Gethsemane.

“Being in anguish, he prayed more earnestly, and his sweat was like drops of blood falling to the ground.” Though he left his disciples nearby with instructions to “pray that you will not fall into temptation,” they all fell asleep, exhausted from sorrow. Jesus complains to Peter, “Couldn’t you keep watch with me for one hour?” But he adds, “The spirit is willing but the flesh is weak.” But their eyes were heavy, and they did not know what to say to him. (Luke 22: 39-46; Mark 14: 32-42; Matthew 26: 36-46) Everybody’s emotional energy is down.

Particularly personal is the passage when Jesus on the cross sees his mother standing below, “and the disciple whom he loved standing nearby. Jesus said to her: ‘Woman, here is your son,’ and to the disciple, ‘Here is your mother.’ From that time on, the disciple took her into his house.” (John 19: 25-27)

What is so telling about this is the contrast to an event during Jesus’ early preaching in Galilee, when his mother and siblings try to make their way to him through a crowd of followers. Someone announces, “Your mother and your brothers are outside waiting to see you.” Jesus looks at those seated in a circle around him and says: “Here are my mother and my brothers! Whoever does God’s will is my brother and sister and mother.” (Luke 8: 19-21; Mark 3: 31-35) But on the cross he is not only thinking of fulfilling scripture, but of his own lifetime relationships.

Pierced by pain, he cries out, “My God, my God, why have you forsaken me?” “And with a loud cry, Jesus breathed his last.” (Mark 15: 21-41; Matthew 27: 30-55)

Ancient myths of dying and annually resurrecting nature-gods are not described like this-- i.e. humanly; nor are the heroic deaths of Plutarch’s noble Greeks and Romans.

Other than in the anxious hours of waiting at Gethsemane, and the torture of the crucifixion, Jesus confronting his accusers is in form and on message.

When the high priests and temple guards approach to arrest him, Jesus calmly asks who they want. “Jesus of Nazareth,” they reply. When he says, “I am he,” they shrink back. Jesus takes the initiative: “If you are looking for me, let these men go.” When they seize Jesus, one of his followers draws a sword and cuts off the ear of a priest’s servant. “Put away your sword!” Jesus says to him, “for all who live by the sword will die by the sword.” To the hostile crowd, he says, “Am I leading a rebellion, that you have come with swords and clubs to capture me? Every day I sat in the temple courts teaching, and you did not dare to arrest me. But this is your hour.” (Matthew 26: 47-56; Luke 22: 47-55; John 18: 1-12)

Then all his disciples deserted him and fled. Peter, the boldest of them, followed at a distance to the outer courtyards when Jesus was being interrogated within.

But Peter too is intimidated when servants question whether he isn’t one of Jesus’ followers. Peter’s denial of Jesus shows how Jesus’ own crowd has been dispersed, broken up and unable to assemble, and how its members, in the face of a hostile crowd, lose their faith.

Strength is in the crowd, and now the opposing crowd holds the attention space.

But indoors, in a smaller setting of rival authorities, Jesus holds his own. Before the assembly of the high priests, Jesus wins the verbal sparring, if not the verdict. Many hostile witnesses testify, but their statements do not agree. The priests try to get Jesus to implicate himself, but he keeps a long silence, and then says: “I said nothing secret. Why question me? Ask those who heard me.”

When Jesus said this, an official slapped him in the face. “Is this the way you answer the high priest?” Jesus replied, “If what I said is wrong, testify as to what is wrong. If I spoke the truth, why do you strike me?” The chief priest asks him bluntly: “Tell us if you are the Messiah, the Son of God.” “You have said so,” Jesus replies. (Mark 14: 53-65; Matthew 26: 57-63; John 18: 19-24)

Finally Jesus is taken before Pilate, the Roman governor. Jesus gives his usual sharp replies, and indeed wins him over. “Are you the King of the Jews?” Pilate asks. “Is that your own idea,” Jesus asks in return, “or did others talk to you about me?” Pilate: “Your own people and chief priests have handed you over to me. What is it you have done?” Jesus said: “My kingdom is not of this world. If it were, my servants would prevent my arrest.” “You are a king, then!” said Pilate. Jesus answered: “You say I am a king. In fact, I came into the world to testify to the truth. Everyone on the side of truth listens to me.”

“What is truth?” Pilate replies, and breaks off before an answer. (Mark 15: 1-5; Matthew 27: 11-26; John 18: 24-40)

And he goes to the crowd gathered outside the palace to say he has found no basis for a charge against Jesus. Pilate tries to set him free on a legal loophole but gives in to the crowd demanding crucifixion. After Jesus dies, Pilate gives permission for a sympathizer to take the body away instead of leaving it for ignominious disposal. Pilate’s style of behavior, too, comes across the centuries as real.

In the crises, Jesus’ interactional style remains much the same as always; but the speaking in parables and figurative language has given way to blunt explanations. Parables are for audiences who want to understand. Facing open adversaries, Jesus turns to plain arguments.

Charisma, above all, is the power to make crowds resonate with oneself. Does that mean charisma vanishes when the power over crowds goes away?*

But that would mean charisma would not be a force in drawn-out conflicts; it is more useful to say that charisma has its home base, its center, in enthusiastic crowds, even when the charismatic leader is sometimes cut off from that base.

*A historical example is the public popularity of Gorbachev, which rocketed like fireworks in the mid-1980s movement for Soviet reform, but dissipated rapidly in 1991 when he was overtaken by political events and shunted aside. Jesus is a stronger version of charisma, one that survives adversity.

More on this in a future Sociological Eye post on theory of charisma.

Charisma is a fragile mode of organization because it depends on enthusiastic crowds repeatedly assembling. Its nemesis is more permanent organization, whether based on family and patronage networks, or on bureaucracy. Jesus loses the political showdown because the authorities intimidate his followers and keep them from assembling, and then strike at him with a combination of their organized power of temple and state, bolstered by mobilizing an excited crowd of their own chanting for Jesus’ execution. But even at his crucifixion, Jesus wins over some individual Roman soldiers (Luke 23: 47; Matthew 27: 54), although that is not enough to buck the military chain of command. This tells us that the charismatic leader relates to the crowd by personally communicating with individuals in the crowd, a multiplication of one-to-one relationships from the center to many audience-members. But charismatic communication cannot overcome a formal, hierarchic organization where individuals follow orders irrespective of how they personally feel.**

**The “cast the first stone” incident shows, in contrast, how a charismatic leader takes apart a hostile crowd by forcing its members to consult their own consciences.

As we have seen, Jesus can handle hostile questioning from crowds in the temple courts, even if opponents have been planted there by an enemy hierarchy. It is not the crowd calling for crucifixion that overpowers Jesus, but the persistent opposition of the priestly administration. Sociologically, the difference is between charismatic experience in the here-and-now of the crowd, and the long-distance coordination of an organization that operates beyond the immediate situation.

(7) Victory through suffering, transformation through altruism

When Jesus is arrested in the garden at Gethsemane, he tells his militant defenders not to resist. “Do you think I cannot call my Father, who will send twelve legions of angels? But how would the scriptures be fulfilled that say it must happen in this way?” (Matthew 26: 47-56) Jesus does not aim to be just a miracle worker; he is out to transform religion entirely.

Miracles, acts of faith and power in the emotionally galvanized crowd, are ephemeral episodes. As Jesus goes along, his miracles become parables of his mission. He heals the sick, gives the disabled new life, stills the demonic howling of people in anguish. He lives in a world that is both highly stratified and callous. The rich are arrogant and righteous in their ritual correctness-- a Durkheimian elite at the center of prestigious ceremonials. They observe the taboos, and view the penurious (and therefore dirty) underclass not just with contempt but as sources of pollution. Jesus leads a revolution, not in politics, but in morals. From the beginning, he preaches among the poor and disabled, and stirs them with a new source of emotional energy. Towards the rich and ritually dominant, he directs the main thrust of his call for repentance-- it is their attitude towards the wretched of the earth that needs to be reformed. The Jesus movement is the awakening of altruistic conscience.*

*It does not start with Jesus. John the Baptist also preaches the main points: concern for the poor, and criticism of the arrogance of the rich. Earlier, Jewish prophets like Isaiah and Amos had railed against injustice to the poor. Around Jesus’ time, there may have been inklings of altruism in the Mediterranean world, but if so they had little publicity or organization. Greek and Roman religious cults and public largesse were directed to the elite, or at most to the politically active class, and do not strike a note of altruism towards the truly needy. Ritual sacrifices of children for military victory carried out by the Carthaginians took place in a moral universe unimaginable to modern people. Middle-Eastern kingship was even more rank-conscious and ostentatiously cruel. See my post, “Really Bad Family Values,” The Sociological Eye, March 2014.

The moral revolution has three dimensions: altruism; monastic austerity; and martyrdom.

Altruism becomes an end in itself, and the highest value. Giving up riches and helping the poor and disabled is not just aimed at improving material conditions for everyone. It is not a worldly revolution, not a populist uprising, but making human sympathy the moral ideal. Blessed are the poor, the mourning, the humble, Jesus preaches, for theirs is the kingdom of heaven. (Luke 6: 17-23) Altruism comes on the scene historically as the pathway to otherworldly salvation.** What is important for human lives is the change in the moral ideal: it not only gives hope to the suffering but calls the elite to judge themselves by their altruism and not by their arrogance.

**The mystery cults of the Hellenistic world (Orphics, Hermeticists, Neo-Pythagoreans and Neo-Platonists, various kinds of Gnostics etc.) had the idea of otherworldly salvation, but not the morality of altruism. Their salvation was purely selfish and their pathways merely secret rituals and symbols. They were still on the ancient side of the revolution of conscience.

The movement is under way at least a little before Jesus launches his mission at age 30.

John the Baptist preached repentance before the coming wrath. “What should we do?” the crowd asked. John answered, “Anyone who has two shirts should share with one who has none, and anyone who has food should do the same.” Even tax collectors came to be baptized. “Teacher,” they said, “what should we do?” John replied, “Don’t collect any more than you are required to.” Soldiers asked him, “And what should we do?”

He replied, “Don’t extort money and don’t accuse people falsely-- be content with your pay.” (Luke 3: 1-14) Repentant sinners were baptized in the river.

To the Pharisees and Sadducees-- who will not repent and be baptized-- John thunders, “You brood of vipers! Who warned you to flee from the coming wrath?”

Later, when John’s disciples come to visit Jesus’ disciples, Jesus speaks to the crowd about John: “What did you go out into the wilderness to see? ... A man dressed in fine clothes? No, those who wear expensive clothes and indulge in luxury are in palaces. But what did you go out to see? A prophet? Yes, and more than a prophet.” Jesus goes on to compare his mission to John’s.

“John the Baptist came neither eating bread nor drinking wine, and you say, ‘He has a demon.’ The Son of Man came eating and drinking, and you say, ‘Here is a glutton and a drunkard, a friend of tax collectors and sinners.’ But wisdom is proved right by all her children.” (Luke 7: 18-35)

Jesus not only amplifies John’s mission, he also moves into another niche: not the extreme asceticism of the desert, but among the lower and middle classes of the towns and villages.

Monastic austerity.

Jesus’ disciples give up all property, becoming (as John the Baptist did*) the poorest of the poor. But they are not like the ordinary poor and disabled. They retain their health, and have an abundance of the richness of spirit, what they call faith-- i.e. emotional energy.

Committed disciples who have left family, home and occupation rely on the enthusiasm of a growing social movement to provide them with daily sustenance. They live at the core of the movement. Since this location is the prime source of emotional energy, there is an additional sense in which living by faith alone is powerful.

*Matthew 3: 1-8 stresses John’s asceticism, a wild man living in the wilderness on locusts and honey, dressed in clothes of camel’s hair.

Later this arrangement became institutionalized as the relationship between monks and lay people.** During the missionary expansion of Christianity, monks were the pioneers, winning converts and patrons on the pagan frontiers through personal impressiveness-- their institutionalized charisma, which is to say Christian techniques of disciplined austerity generating emotional strength. Still later, movements like the Franciscans, deliberately giving up monastic seclusion to wander in the ordinary world among the poor and disabled, combined austerity with a renewed spirit of altruism and thereby created the idealistic social movement. Altruistic movements first used modern political tactics for influencing the state in the anti-slavery movement of the late 1700s, but the lineage builds on the moral consciousness and social techniques that are first visible with the Jesus movement.

**There were precedents of monasticism in the 300s BC, such as the Cynics, who lived in ostentatious austerity-- Diogenes famously living in a barrel. The Cynics denounced the pitfalls and hypocrisy of seeking riches and power, but they lacked any concern for the poor and did not advocate altruism.

 

Martyrdom.

The crucifixion of Jesus becomes, not the end of the movement, but its rallying point. The cross becomes the symbol of its members, and a source of personal inspiration for individuals in times of suffering and defeat. We are so used to this symbol that the enormity of the shift is lost on us. Crucifixion, which had existed for several hundred years in the authoritarian kingdoms of the Middle East before spreading to Rome, was an instrument of death by slow torture, a visible threat of state terrorism. When the revolt of gladiators led by Spartacus was put down in 71 BC, the Romans crucified the captured gladiators for hundreds of miles along the roads of southern Italy. To turn the cross into a symbol of a movement, and of its triumph, was a blatant in-your-face gesture of the moral revolution: we cannot be beaten by physical coercion, by pain and suffering, it says; we have transformed them into our strength. Martyrs succeed when they generate movements, and are energized by the emotional solidarity of standing together in a conflict, even in defeat.

That is why ancient cultural precedents of fertility gods who die by dismemberment but are resurrected like the coming of the crops in the following year do not contain the social innovation of Christianity.

Fertility gods may be depicted as suffering but their message is not moral strength, and their cult concerns recurring events in the material world, not otherworldly salvation.**

**Euripides is the nearest thing to an altruistic liberal in the Greek world; but his play The Bacchae-- depicting an actual contemporary movement of frenzied dancers that challenged older Greek religious cults-- breathes an atmosphere of ferocious violence and revenge, the polar opposite of the Christian message of forgiveness and charity. Euripides’ plays focus the audience’s sympathy on the sufferings of individual characters, but these are members of elite families who suffer from the shifting fortunes of the upper classes. There is not even a glance at the poor.

Martyrdom also becomes institutionalized in the repertoire of religious movements. In its early centuries, Christianity grows above all by the spectacular and well-publicized martyrdoms of its hero-leaders. (There is also a quieter form of conversion through networks attracted to its moral style, its care for the sick, and its organizational strength. Stark, The Rise of Christianity.)

Martyrdom becomes a technique for protest movements, and movement-building.

“What does not kill me, makes me stronger,” Nietzsche was to write. Ironically: for all his attacks on the moral revolution of Christianity, this is a Christian discovery he is citing. Religious techniques set precedents for modern secular politics. Protest movements win by attracting widespread sympathy for their public sufferings, turning the moral tables on those who use superior force against them. This too is world-changing. It is little exaggeration to say that the moral forces of the modern world were first visible in the Jesus movement.

 

Appendix: The Social Context of Miracles.

Some modern people think that Jesus never existed, or that the stories about him are myths. But the details of how Jesus interacted with people in the situations of everyday life consistently show a distinctive personality. All texts about the ancient past are subject to distortion and mythologizing tendencies; but an objective scholar, with no axe to grind one way or the other, would conclude that what we read of Jesus is as valid as what Plutarch summarizes from prior sources about Alexander or Pericles, or what other classical writers reported about exemplary heroes. The gospels have the advantage of being written closer to the lifetime of their subject, and possibly by several of Jesus’ close associates.

What about the miracles? I will focus on what a micro-sociologist can see in the details of social interaction, especially what happens before and after a miracle. I will examine only those miracles that are described as happening in a specific situation, a time and place with particular people present. Summaries of miracles by Jesus and his disciples do not give enough detail to analyze them, although they give a sense of what kinds of miracles were most frequent.

Let us go back to a question that has been hanging since I discussed the beginning of Jesus’ ministry. Jesus attracts big crowds by his preaching and by his miracles. He preaches an overthrow of the old ritualism; an ethic of humility and altruism for the poor and disabled; and the coming of the true kingdom of God, so different from this rank-conscious world. He also performs miracles, chiefly medical cures through faith-healing; casting out demons from persons who are possessed; and bringing back a few people from the cusp of death. There are also some nature miracles and some apparitions, although these should be considered separately because they almost never occur among crowds.

The roster of miracles described in detail includes:

22 healing miracles, all happening in big crowds;

3 logistics miracles, where Jesus provides food or drink for big crowds;

5 nature miracles, all happening when Jesus is alone with his inner Twelve disciples, or some of them;

2 apparitions: 1 with 3 close disciples; 1 in a crowd.

So is Jesus chiefly a magician? And as such, are we in the realm of wonders, or superstition, or sleight of hand tricks? I will confine the discussion to some sociological observations.

Which comes first, the preaching or the miracles? The gospels are not strictly chronological, and sequences vary among them, but clearly there are a lot of miracles early on, and this is one of the things that attracts excited crowds to Jesus. People bring with them the sick, the lame, the blind, and others among the helpless and pathetic. This is itself a sign of incipient altruism, since on the whole ancient people were quite callous, engaging in deliberately cruel punishments and routine violent atrocities, and showing a propensity to shun the unfortunate rather than help them. Jesus’ emphasis upon the lowly of the earth meshes with his medical miracles; they are living signs of what he is preaching in a more ethical sense.

Jesus’ healing miracles always happen in the presence of crowds. If that is so, how did the first miracles happen? What brought the first crowds together must have been Jesus’ preaching. This is particularly likely since John the Baptist was attracting large crowds, and had his own movement of followers. John did not perform medical miracles or any other kind, and he preached the same kinds of themes as Jesus did at the outset: humility and the poor; repentance; the coming kingdom of God-- except that John explicitly said someone else was coming to lead it.

The plausible sequence is that Jesus attracted crowds by his preaching, and it was in the midst of the crowds’ enthusiasm-- their faith-- that the healing miracles took place.* That miracles depend on the faith of the crowd is underscored by Jesus’ failure in Nazareth, his home town. “And he did not do many miracles there because of their lack of faith.” (Luke 4: 14-30; Matthew 13: 53-58)

*The origin of the word enthusiasm is the Greek enthous, possessed by a god, theos.

Jesus’ healing miracles divide into: 4 cures of fever and other unspecified sickness; 9 events where he cures long-term disabilities (3 with palsy/paralysis, crippled, or shriveled hand; 2 blind, 1 deaf/mute; 1 with abnormal swelling; 1 leper, and later a group of 10 lepers); 6 persons possessed with demons; 3 persons brought back from death. The various types may overlap.

The 3 who are brought back from death include the 12-year-old daughter of a rich man, who her father thinks is dead, though Jesus tells him she is not dead but asleep (Luke 8: 41-42, 49-56); a widow’s son who is on his funeral bier, i.e. recently pronounced dead (Luke 7: 11-17); and finally Lazarus (John 11: 1-46).

Their illnesses are not described, but could have been like the cases of fever in Jesus' other miracles.

The disabilities that Jesus cured also overlap with the persons described as possessed by demons: one is “robbed of speech” and foams at the mouth (Mark 9: 14-29; Matthew 17: 14-21; Luke 9: 37-43); another has a mute demon and is also blind (Luke 11: 14-28; Mark 9: 32-34; Matthew 12: 22-37); another is vaguely described as a woman’s daughter possessed by an unclean spirit (Mark 7: 24-30; Matthew 15: 21-28).

At least one of these appears to have epileptic fits.

Another is a naked man who sleeps in tombs, and has been chained up but breaks his chains (Luke 8: 26-39; Mark 5: 1-20; Matthew 8: 28-34). Casting out demons appears to be one of the most frequent things Jesus does, mentioned several times in summaries of his travels: “preaching in synagogues and casting out demons” (Mark 1: 39); “many who were demon-possessed were brought to him” (Matthew 8: 16). This is a spiritual power that can be delegated; when his disciples are sent out on their own, they come back and report “even the demons submit to us in your name.” (Luke 10: 17; Matthew 10: 1).**

One of his most fervent followers, Mary Magdalene, is described as having had 7 demons cast out (Luke 8: 2); possibly this means she went through the process 7 times. She is also described as a prostitute, one of the outcasts Jesus saves; we might think of her as having gone through several relapses, or seeking the experience repeatedly (much like many Americans who undergo the “born again” experience more than once).

**Sometimes the disciples fail in casting out a demon. In one case, the boy’s father says the spirit throws him to the ground, where he becomes rigid and foams at the mouth. When Jesus approaches, the boy goes into convulsions. The father says to Jesus, “If you can, take pity on us and help us.” Jesus replies: “‘If you can’? All things are possible for one who believes.” Immediately the boy’s father exclaimed: “I do believe; help me overcome my unbelief.” When Jesus saw a crowd running to the scene, he commanded the spirit to leave the boy and never enter him again. The spirit shrieked and convulsed him violently. The boy looked so much like a corpse that many said, “He is dead.” But Jesus took his hand and lifted him to his feet, and he stood up. (Similar to raising from the dead.) After Jesus had gone indoors, his disciples asked him privately, “Why couldn’t we drive it out?” Jesus replied, “This kind can come out only by prayer.” (Mark 9: 14-29) Jesus recognizes different kinds of cases and has more subtle techniques than his disciples.

What does it mean to be possessed by a demon? A common denominator is some serious defect in the social act of speaking: either persons who shout uncontrollably and in inappropriate situations (like the man who shouts at Jesus in a synagogue: “What do you want with us, Jesus of Nazareth? Have you come to destroy us? I know who you are-- the Holy One of God!” Mark 1: 21-28); or persons who are silent and will not speak at all.

We could diagnose them today as having a physiological defect, or as mentally ill, psychotic, possibly schizophrenic.

But in ancient society, there was no sharp distinction between sickness and mental illness. There were virtually no medical cures for sicknesses, and religious traditions regarded them as punishments from God or the pagan gods; seriously ill persons were left in temples and shrines, or shunted onto the margins of habitation. Left without care, without human sympathy, virtually without means of staying alive, they were true outcasts of society.

Here we can apply modern sociology of mental illness, and of physical sickness. As Talcott Parsons pointed out, there is a sick role that patients are expected to play; it is one’s duty to submit oneself to treatment, to put up with hospitals, to follow the authority of medical personnel, all premised on a social compact that this is done to make one well. But ancient society had no such sick role; it was a passive and largely hopeless position. Goffman, by doing fieldwork inside a mental hospital, concluded that the authoritarian and dehumanizing aspects of this total institution destroy what sense of personal autonomy the mental patient has left. Hence acting out-- shouting, defecating in the wrong places, showing no modesty with one’s clothes, breaking the taboos of ordinary social life-- is a way of rebelling against the system. Such patients are so deprived of normal social respect that the only things they can do to command attention are acts that degrade them still further. Demon-possessed persons in the Bible act like Goffman’s mental patients, shouting or staying mute, and disrupting normal social scenes.*

*This research was in the 1950s and 1960s, before mental patients were controlled by mood-altering drugs. The further back we go in the history of mental illness, the more treatments resemble ancient practices of chaining, jailing or expelling persons who break taboos.

One gets the impression of a remarkable number of such demon-possessed-- i.e. acting-out persons-- in ancient Palestine.** They are found in almost every village and social gathering. Many of them are curable by someone with Jesus’ charismatic techniques of interaction. He pays attention to them, focusing on them wholly and steadily until they change their behavior and come back into normal human interaction; in every case that is described, Jesus is the first person in normal society with whom the bond is established. Each acknowledges him as savior and wants to stay with him; but Jesus almost always sends them back, presumably into the community of Christian followers who will now take such cured persons as emblems of the miracles performed.

**A psychiatric survey of people living in New York City in the 1950s found that over 20% of the population had severe mental illness. (Srole 1962) It is likely that in ancient times, when stresses were greater, rates were even higher.

Notice that no one denies the existence of demons, or denies that Jesus casts them out. When Jesus meets opposition (John 10: 19-21; Luke 11: 14-20) the language of demons is turned against him. Jesus himself, like those who speak in an unfamiliar or unwelcome voice, is accused of being demon-possessed. The same charge was made against John the Baptist, who resembled some demon-possessed persons by living almost as a wild man in the wilderness. The difference, of course, is that John and Jesus can surround themselves with supportive crowds, instead of being shunned by them.

Similarly, no one denies Jesus’ medical miracles. The worst that his enemies, the religious law teachers and high priests, can accuse him of is the ritual violation of performing his cures on the Sabbath. This leads to Jesus’ early confrontations with authority; he can point to his miracles to forcefully attack the elite as hypocrites, concerned only with their own ritually proper status but devoid of human sympathy.

Jesus’ miracles are not unprecedented, in the view of the people around him; similar wonders are believed to have taken place in the past; and other textual sources on Hellenistic society refer to persons known as curers and magicians. Jesus works in this cultural idiom. But he transforms it. He says repeatedly that it is not his power as a magician that causes the miracle, but the power of faith that people have in him and what he represents.

A Roman centurion pleads with Jesus to save his servant, sick and near death. The centurion calls him Lord and says he himself is not worthy that Jesus should come under his roof. But as a man of authority, who can tell soldiers what to do, he recognizes that Jesus can say the word and his servant will be healed. Jesus says to the crowd, “I have not found such great faith even in Israel.” Whereupon the servant is found cured. (Luke 7: 1-10; Matthew 8: 5-13)

In the midst of a thick crowd pressing to see Jesus, he feels someone touch him-- not casually, but deliberately, seeking a cure. It is a woman who has been bleeding for 12 years.

Jesus says, “I know that power has gone out from me.” The woman comes trembling and falls at his feet. In the presence of the crowd, she tells why she touched him and that she was healed. Jesus says, “Daughter, your faith has healed you. Go in peace.” (Luke 8: 43-48; Mark 5: 21-43; Matthew 9: 20-22.)

While Jesus is passing through Jericho, a blind man in the crowd calls out to him repeatedly, although the crowd tells him to be quiet. Jesus stops and has the man brought to him, and asks what he wants from him. “Lord, I want to see,” he replies. Jesus says, “Receive your sight, for your faith has healed you.” (Luke 18: 35-43; Mark 10: 46-52; Matthew 20: 29-34)

Failure to produce a miracle is explained as a failure of sufficient faith. In another version of the demon-possessed boy, the disciples ask privately, “Why couldn’t we drive it out?” Jesus replied, “Because you have so little faith. If you have faith, you can move mountains. Nothing is impossible for you.” (Matthew 17: 19-20) The message is in the figurative language Jesus habitually uses, the mastery of word-play which makes him so dominant in interaction.

The faith must be provided by his followers. When asked to perform a miracle-- not because someone needs it, but as a proof of his power, a challenge to display a sign-- Jesus refuses to do it. (Luke 11: 29-32; Matthew 12: 38-39; 16: 1-4)

As Jesus’ career progresses, he becomes increasingly explicit that faith is the great end in itself. The goal of performing miracles is not to end physical pain, nor to turn spiritual power into worldly success. Jesus is not a magician, or conjurer.

Magic, viewed by comparative sociology, is the use of spiritual power for worldly ends. For Jesus it is the other way around.

Healing miracles have an element of worldly altruism, since they are carried out for persons who need them-- above all, for those who need to be brought back into the bonds of human sympathy. Miracles are a way of constituting the community, both in the specific sense of building the movement of his followers, and in the more general sense of introducing a spirit of human sympathy throughout the world. Miracles happen in the enthusiasm of faith in the crowd, and that combination of moral and emotional experience is a foreshadowing of the kingdom of heaven, as Jesus presents it.

Jesus’ logistics miracles consist in taking a small amount of food and multiplying it so that crowds of 5,000 and 4,000 respectively have enough to eat and many scraps left over (Luke 9: 10-17; Matthew 14: 13-21; Mark 6: 30-44; Mark 8: 1-10). It has been suggested that the initial few fishes and loaves of bread were what the crowd first volunteered for the collective pot; but when Jesus started dividing them up into equal pieces and passing them around, more and more people contributed from their private stocks. (Zeitlin) The miracle was an outpouring of public sharing. Jesus does something similar at a wedding party so crowded with guests that the wine runs out. He orders the empty wine jars to be filled with water, whereupon the crowd becomes even more intoxicated, commenting that unlike most feasts, the best wine was saved for last (John 2: 1-11). Possibly the dregs of wine still in the jars gave some flavour, and the enthusiasm of the crowd did the rest. Party-goers will know it is better to be drunk with the spirit of the occasion than sodden with too much alcohol.

Miracles show the power of the spirit, which is the power of faith that individuals have in the charismatic leader and his intensely focused community. Such experience is to be valued over anything in the world; it transcends ordinary life, in the same way that religion in the full sense transcends magic.

The significance of miracles is not in a particular person who is cured, but in a visible lesson in raising the wretched of the earth, and awakening altruistic conscience. After the miracle of the loaves and fishes, Jesus says to a crowd that is following him eagerly, “You are looking for me, not because you saw the signs I performed but because you ate the loaves and had your fill. Do not work for food that spoils, but for food that endures to eternal life.”

They ask him, “What sign will you give that we may see it and believe you?” Jesus answered: “I am the bread of life. Whoever comes to me will never go hungry, and whoever believes in me will never be thirsty.” He goes on to talk about eating his flesh and drinking his blood, speaking in veiled language about the coming crucifixion. It causes a crisis in his movement: “From this time many of his disciples turned back and no longer followed him.” (John 6: 22-52) Those who wanted to take miracles literally were disappointed.

Jesus’ nature miracles differ from the others in not taking place in crowds, but among his intimate disciples. Here the role of faith is highlighted but in a different sequence. Instead of faith displayed by followers in the crowd, bringing about a healing miracle, now Jesus produces miracles that have the effect of reassuring his followers.

A storm comes up while the twelve disciples are on a boat in the weather-wracked Sea of Galilee. They are afraid of drowning, but Jesus is sleeping soundly. “Oh ye of little faith, why are you so afraid?” he admonishes them, after they wake him up and the storm stills. (Matthew 8: 23-27; Mark 4: 35-41; Luke 8: 22-25)

Jesus is imperturbable, displaying a level of faith his disciples do not yet have.

In another instance, he sends his disciples out in a boat while he stays to dismiss the crowd and then to pray in solitude on the mountainside. They are dismayed while Jesus is away: the water grows rough and they cannot make headway with their oars. After a night of this, just before dawn, they are frightened when they perceive him walking across the water, and some think he is a ghost.

Jesus calms them by saying, “It is I; don’t be afraid.” He enters the boat and the wind dies down, allowing them finally to make it to shore. (Mark 6: 45-52; John 6: 16-21) In one account, Peter says, “Lord, if it is truly you, let me come to you on the water.” Jesus says, “Come,” and Peter begins to walk. But he becomes afraid and begins to sink. Jesus immediately catches him with his hand: “You of little faith, why did you doubt?” (Matthew 14: 22-33)

The pattern is: for his disciples, who are supposed to show a higher level of faith, Jesus performs miracles when they feel in trouble without him.*

*Other miracles on the Sea of Galilee: when Jesus recruits Simon and Andrew, first by preaching from their boat, then pushing off from shore, whereupon they make a huge catch of fish. (Luke 5: 1-11); and when Jesus responds to a tax demand by telling Peter to fish in the lake, where he will catch a fish with a coin in its mouth to pay their taxes. (Matthew 17: 24-27)

At the end of the miracle of curing a demon-possessed man, Jesus sends the demons into a nearby herd of swine (presumably polluted under Jewish law), whereupon they rush madly off a cliff and drown themselves in the lake. (Luke 8: 26-39; Mark 5: 1-20; Matthew 8: 28-34) One nature miracle happens on dry land: on his way into Jerusalem to cleanse the temple he curses a fig tree which has no fruit for him and his followers; and when he returns in the evening, it has withered. (Mark 11: 15-19; Matthew 21: 18-21) The miracle is a living parable on the withered-up ritualists whom Jesus is attacking.

Apparitions, finally, are subjective experiences that particular people have at definite times and places. There is nothing sociological to question about their having such experiences, but we can notice who is present and what they did. The event called the Transfiguration happens when Jesus takes three close disciples up a mountain to pray-- a special occasion since he usually went alone. They see his face and clothes shining with light, see historic persons talking to Jesus and hear a voice from a cloud. The disciples fall on the ground terrified, until Jesus touches them and tells them not to be afraid, whereupon they see that Jesus is alone.

Jesus admonished them not to tell anyone about what they had seen. (Luke 9: 28-30; Matthew 17: 1-13; Mark 9: 2-13)

As Jesus’ mission in Jerusalem builds up towards the final confrontation between his own followers and the increasingly hostile authorities and their crowds, Jesus announces “the hour has come for the Son of Man to be glorified.” A voice from heaven says “I have glorified it.” Some in the crowd say that it thundered, others that an angel spoke. Jesus tells them that the vision is for their benefit, not his; and that “you will have the light only a little while longer.” When he finishes speaking, he hides from them. (John 12: 20-36) The crowd is not of one mind; they disagree about whether Jesus is the Messiah who will rule and remain forever, while Jesus sees the political wind blowing towards his execution. The subjective experience of a thunderous voice in the crowd, variously interpreted, reflects what was going on at this dramatic moment.

Finally I will venture an interpretation of what happened when Jesus was tempted by the devil in the wilderness. (Matthew 4: 1-11; Luke 4: 1-13) This happens after Jesus hears John the Baptist preaching about the coming Son of God; Jesus must have decided he was the one. The next thing he does is to imitate John the Baptist by going to live alone in the desert. Here he has apparitions of the devil (which we read about because presumably he later told his disciples). Living in the desert for 40 days is a life-threatening ordeal, and at some point he considers that he has the power to turn stones into food. He rejects this as a thought coming from the devil, since his aim is not to be a magician; the internal dialogue ends with the kind of aphorism that Jesus would pronounce throughout his mission: “Man shall not live by bread alone.” Up on the mountain cliffs, he considers whether he should jump down and fly, and rejects that too; another devil-temptation to use magic for trivial marvels, like those in the entertaining stories of the Arabian Nights. He envisions the devil showing him the whole world spread out below, and giving him the evil thought that the Kingdom of God would make him the mightiest of worldly kings.

Mt. Temptation, traditionally where Jesus spent 40 days in wilderness

Modern research shows that internal dialogue takes place not only through talk but also through visual images taking their turn in the argument. (Wiley 1994; Collins 2004) Through these apparitions, Jesus is thinking out what kind of power he has and what he will do with it. It is the power to inspire crowds, to recruit followers, to work a moral revolution, and to reveal a life-goal that is not of the world as people hitherto knew it. It is, in short, the power of charisma.

 


 

References

Randall Collins. 1998. The Sociology of Philosophies. (chapter 3 on ancient religious and philosophical movements)

Randall Collins. 2004. Interaction Ritual Chains. (chapter 5 on internal dialogue)

Randall Collins. 2010. “The Micro-sociology of Religion.” (research on prayer) http://www.thearda.com/rrh/papers/guidingpapers/Collins.asp

Emile Durkheim. 1912. The Elementary Forms of Religious Life.

Michel Foucault. 1965. Madness and Civilization.

Erving Goffman. 1961. Asylums; 1971 “The Insanity of Place,” in Relations in Public: Microstudies of the Public Order.

Paolo Parigi. 2012. The Rationalization of Miracles. (16th century Catholic Church focus on medical miracles when nominating saints)

Leo Srole. 1962. Mental Health in the Metropolis.

Rodney Stark. 1996. The Rise of Christianity.

Geoffrey de Ste. Croix. 1983. Class Struggle in the Ancient Greek World.

Norbert Wiley. 1994. The Semiotic Self. (research on internal dialogue)

Charles Tilly. 2004. Social Movements 1768-2004.

Irving Zeitlin. 1984. Ancient Judaism: Biblical Studies from Max Weber to the Present.

Lindsay Olesberg. 2012. The Bible Study Handbook. (very contemporary methods)

WHAT MADE ALEXANDER GREAT?

A previous post considered Napoleon as CEO. It focused on how he led organizations and structures in transition, and how networks intersected for a moment in time to pump up a central individual with huge emotional energy. It took apart the genius/ talent/ ability cliché and showed what makes such careers happen. Alexander the Great is a good comparison: a chief contender to Napoleon, with an even better record of military victories, and similar historical fame. So: what made Alexander great?

Here is a preliminary checklist:

[1] His father’s army and geopolitical position

[2] Tiger Woods training

[3] The target for takeover

[4] Greek population explosion and mercenaries

[5] Alexander’s victory formula

We will also consider:

Was Alexander’s success because of or despite his personality?

Did Alexander really achieve anything?

Why did Alexander sleep well, but Napoleon never slept?

 

[1] His Father

That is to say, his father’s army and favorable geopolitical position. Alexander is famous for having conquered the Persian Empire. It was the greatest empire in the world at that time, covering 3000 miles from west to east, 1500 miles north/south. The expedition was planned and prepared by his father, Philip II of Macedon, and was ready to go when Philip was assassinated at the farewell party. The 20-year-old son took command, waited two more years to make sure the Macedonians and the Greeks were behind him, and then carried out the epic campaign of conquest in 10 years.

Instead of kicking the causal can down the road, we need to ask: how did Philip come to build this invincible army? The answer is in the organization and the opposition.

The Macedonian army was an organizational improvement on the Greek hoplite army. The Greeks had developed the practice of fighting in solid ranks, forming a combat block of shields, armor, and spears. The whole aim of battle was to keep one’s troops together in a rectangular mass; with their heavy armor, they could not be hurt by arrows, stones or javelins-- a later Roman version was called the Tortoise (testudo) because it was impervious to anything thrown at it.

The Greek phalanx, developed in the 600s and 500s BC, was a huge shift from the traditional mode of fighting depicted in the Iliad (around 750 BC). The traditional form could be called the hero/berserker style.

An army consisted of noisy crowds of soldiers clustered behind their leaders, who didn’t really give orders but led by example. Heroes like Achilles, Hector, and Ajax would work themselves into a frenzy, roaring out onto the battlefield between the armies, sometimes fighting a hero from the other side, but more often going on a rampage through the lesser troops, cowing them into a losing posture and mowing them down with sheer momentum, i.e. emotional domination. This berserker style remained the way “barbarian” armies fought-- that is to say, armies that did not have disciplined phalanxes. The hero-berserker could never beat a Greek or Roman phalanx that stood its ground; the Greeks were always victorious over the barbarians to the north and east of them, and so were the Romans over their respective hinterlands.

On the other hand, when one Greek phalanx met another, the result was a shoving match. Unless one side broke ranks and ran away, few soldiers were killed. Most battles were stalemates, and city-states could avoid combat if they wished, sheltering behind their walls. The main purpose of cities all over the ancient Middle East, many of them just fortified towns, was their defensive walls, impervious to berserkers. Phalanxes only fought by arrangement, when both sides assembled on chosen ground for a set-piece battle.

Greek hoplite battle

The main weakness of the hoplite phalanx was that it was slow-moving. Hoplites were heavy troops, quite literally from the weight of armor they carried. An enemy that hit and ran away could harass a Greek phalanx but would be beaten if it stayed to fight head-to-head. This was brought home to the Greeks when Xenophon returned from a campaign in Persia during 401-399 BC, writing up their experiences in his famous Expedition of the Ten Thousand. A contender for the Persian throne had hired them as mercenaries; but once they reached the Mesopotamian heartland, their Persian employer was killed in battle, and the Ten Thousand had to fight their way back, first against the Persian army and then against primitive hill tribes on their path to the Black Sea.

The Persian troops were somewhere between the berserker style and the disciplined Greeks. They relied on large masses to impress their enemy into submission; typically these were grouped by ethnicity, each with their own type of weapons. Among these weapons of terror were rows of chariots with scythes attached to their axles; sometimes there were war-elephants. Troops recruited from tribal regions were used on the flanks, as clusters of stone-slingers, archers, and javelin-throwers; these were light troops, without armor, since they fought from a distance. The Persian armies that Alexander fought had the same shape.

None of these troops could beat a disciplined phalanx that held its ground; the chariots could get close only if they ran onto the phalanx’s spears, which horses are unwilling to do; elephants, too, are hard to control and shy away from spears. The Greeks soon recognized they could beat armies of almost any size if they stuck together. A bigger problem was enemy light troops: attacks by tribal forces with arrows and slings could be repelled by armor and discipline, but hoplites were too heavy to chase the attackers down and keep them from repeating the attack.

The solution was to add specialized units around the phalanx: hiring their own barbarian archers and slingers, and adding cavalry, mainly to finish off the enemy when they were running away. But in the Greek homeland, most battles were simply phalanx-on-phalanx; in the democratic city-states, this was as much a display of egalitarian citizenship as a military formation.

Philip’s Macedonian army, which he put together between 360 and 336 BC, incorporated all the most advanced improvements. Most importantly, he added heavy cavalry, operating on both flanks with the phalanx in the center. Philip’s cavalry were not just for chasing down the enemy after it broke ranks, but for breaking the enemy formation itself. Philip was one of the first to perfect a combined-arms battle tactic: the phalanx would engage and stymie the enemy’s massed formation, whereupon the cavalry would break it open on the flanks or rear.

This was one of the advantages of Macedonia’s marchland location; having only recently transitioned from tribal pastoralists to settled agriculture, it could combine military styles. Philip’s phalanx was recruited from the peasant farmers, his cavalry from the aristocracy, used to spending their time riding and hunting. Philip’s-- and thus Alexander’s-- cavalry were called the Companions; they were the elite, the carousing drinking-buddies of their leader. The Companion cavalry, usually on the right wing of battle, was complemented by another cavalry corps on the left wing, recruited from the Thessalian plains people, but commanded by Macedonian officers.

In addition to improving on the best-of-the-barbarians, Philip also borrowed from the most scientifically advanced Greeks, the colonies in Sicily, for techniques of attacking fortresses. These included catapults and siege engines, underground mining to undermine walls, siege ladders, and protective roofing to cover the engineers as they worked at dismantling the fortress walls.

The third of Philip’s innovations was to travel light. Greek city-state armies, if they went very far from home, traveled with huge baggage trains: servants carrying armor and supplies, personal slaves, women, camp followers, often doubling the size of the mass. Philip made every soldier carry his own equipment; he prohibited carts, since they were slow-moving and clogged the primitive roads; he kept pack animals to a minimum, since they add to the number of attendants. When an army has to engage in long-distance expeditions, overcoming the logistics problem becomes the number-one issue. As we shall see, Alexander followed his father exactly in this regard.

The Geopolitical Position, as Philip Left It

Macedonia was a late developer, a peripheral area north of the zone of city-states.

Moreover, it was essentially an inland state, not a maritime power; its strength was its extensive agricultural lands, and its access to the plains with their horses and pastoralists.

To the south was the Greek peninsula, broken by mountains and inlets of the sea, a land of walled city-states. Rarely able to expand their land frontiers, they engaged in maritime expeditions, lived by trade and booty, and by sending out colonies around the Mediterranean littoral. The same pattern held on the eastern shore of the Aegean Sea. The result was that Greek city-states could rarely conquer each other. Some did become more prestigious than others, and forced the others into coalitions. This Athens did when it became the center for the massed fleet that repelled the Persian invasions, subsequently becoming a quasi-empire in its own right, collecting duties to support the fleet.*

But as land powers, the city-states were essentially deadlocked.

*The cultural prestige of Athens starts at this time. Before 460 BC, Greek poets, philosophers, mathematicians and scientists were spread all over; they concentrate in Athens when it becomes the biggest, richest, and most powerful city. The cultural fame of Athens is a result of its geopolitical rise. It became the place where all the culture-producing networks came together, and remained the place for centuries as the leading networks reproduced themselves.

Simultaneously, the Persian empire had reached the limits of its logistics and its administrative capacity for holding itself together. There was no longer any real danger of Persian expansion into Greece; it was just another player in the multi-player situation. The Persian invasions were in 490 and 480-79 BC; both failed because the Persians could not sustain an army across the water against navies equal to what they could raise. The last Persian forces on the European side of the straits were thrown out by 465 BC. The Athenians played up the Persian threat as the basis of their own power, down to about 400, when they lost a long domestic war of coalitions.**

**The defeat of Athens by Sparta was not the end of democracy, or anything of the sort. Greek history is dominated by Athenian propaganda, because the great historians of this period-- Herodotus, Thucydides, Xenophon-- were all Athenians or sympathizers. It helped that Socrates and Plato were Athenians, and their dialogues make the Athenian scene come alive, as do the comedies of Aristophanes. That is why we moderns, styling ourselves inheritors to Greek democracy and science, have such a narrow Athenian peep-hole view into the history of Greece.

The period from the 390s BC down to the rise of Macedonia in the 340s is one where numerous powers and coalitions vie with each other. Sparta, Athens, Thebes, the Boeotian league, the Phocians, all have a try at becoming hegemon. The term itself is revealing: it means, not conqueror or overlord, but leader, preponderant influence. The situation has settled into an unstable, ongoing set of multi-sided conflicts.

Outside the deadlocked heartland, there was opportunity for a marchland state to grow. Since the major players had their attention locked on each other, a peripheral actor could grow in its own environment, becoming dominant through a local elimination contest. This is what the Macedonian kingdom did. First its settled agricultural zone expanded inland to incorporate nearby hill tribes, recruiting them into a victoriously expanding army; then it grew north and east into Thrace (what is now Bulgaria and European Turkey) by beating barbarian kings and weak tribal coalitions. Philip, who grew up as a hostage in one of the civilized city-states, had an eye for what counted there; after returning to Macedon, he made a point of conquering barbarian land that had gold mines, as well as seaports as far as the straits through which passed the grain trade on which Athens and the other Greek city-states depended. In short, he started by becoming the big frog in a small pond, while learning the military and cultural techniques of his more civilized neighbours, and combining them with the advantages he could see on the periphery.

At a point reached around 340 BC, the city-states woke up to find that their biggest threat was not Persia, nor one of their own civilized powers, but a semi-barbarian upstart, whose armies and resources were now bigger and better than their own.

[2] Tiger Woods Training

Alexander was born in 356 BC, a time when his father was reforming the Macedonian army and beginning his conquests. Obviously Philip was away from home a lot and would not have taken a small child with him to the wars. But it is apparent that from an early age, he trained Alexander by informal apprenticeship, keeping the boy around him when he could. The famous incident with the horse Bucephalas happened when Alexander was 10 years old, and his father was buying warhorses. One magnificent horse was too shy and unruly to be ridden, and Philip was going to send it back until Alexander begged to have a try at taming it. The story goes on to say he had noticed the horse was frightened by its moving shadow, so he turned the horse’s face into the sun, soothed it by stroking, and finally jumped on its back and galloped off. Leaving aside the usual hero-foreshadowing and prophetic comments that went along with the story, we can note that Alexander was already a careful observer who figured out how to manage those around him; that he was both impetuous and calculating, biding his time for the moment to act. This was not just a colorful story of a boy and his horse; it shows a remarkably mature 10-year-old; and the qualities Alexander shows are much the same as his father’s.

Though father and son butted heads and engaged in mutual jealousy, it was clear that Philip regarded Alexander from an early age as the kind of officer he wanted to follow him. When Alexander was 16 and Philip was away on campaign, Philip left him as regent; Alexander jumped right in, putting down revolts by leading the army in person. Thereafter, Alexander accompanied his father on campaigns, commanding the key unit in battle, the Companion cavalry.

Philip had other sons he could have groomed for this role. Alexander’s mother was Philip’s fourth wife out of an eventual total of eight, and Alexander had a number of half-brothers (most of whom came to a bad end, since infighting over succession was common and bloody). We can infer Alexander had opportunities to show his aptitude early; which is to say, he picked up his father’s military art quickly and thus was given still further opportunities, in a self-reinforcing virtuous circle. He was already distancing himself from all rivals.

Famously, Alexander was taught by Aristotle, in a private school lavishly endowed by Philip. This happened between age 13 and 16. Alexander enjoyed the learning, largely in literature like Homer, rather than technical philosophy. But it could not have been too cloistered a period, since at its end, Philip gives the 16-year-old his own army command; and Alexander’s school-mates, sons of the Macedonian aristocracy, come along with him as companions and generals in his future exploits.

What is it that Alexander learned during his apprenticeship? Obviously, Philip’s tactics for leading an army in battle; also how to recruit and train it, since during Alexander’s 10-year expedition he replenished his army several times over. He must have learned how to travel with a light baggage train, since this is what Alexander did on his Persian campaign. Perhaps this was the province of his father’s generals, notably Parmenio, an older man of his father’s generation who gave cautious advice on some famous occasions. (“If I were Alexander, I would accept the offer...” to divide the Persian empire with the defeated Persian King. “And so would I,” Alexander retorted, “if I were Parmenio.”)

Parmenio was delegated ticklish problems like commanding non-Macedonian troops, arranging logistics and baggage trains; and it may well be that in the early part of Alexander’s campaign, officers like Parmenio took care of the essential grunt-work.

Even so, it would be an extended apprenticeship for the 22-year-old. What Alexander showed he had learned, when he left Parmenio and the old advisors behind for the Eastern part of his conquests, was the crucial combination of logistics and diplomacy.

Why do these two go together? We have already discussed the problem of baggage trains slowing down an army’s movement. On long-distance expeditions, the question is whether an army can get there at all. The basic problem, as modern researchers have figured out, is that the people and animals that carry food and water use them up as they go along, and the more mouths in the supply train, the less gets through to the army.*

*This had to have been understood by professional soldiers like Alexander, but ancient historians never wrote about it, since they concentrated on heroic personalities and dramatic incidents and ignored banal realities. They also exaggerated enormously the size of enemy armies, part of the hero narrative, claiming impossible numbers like 1,700,000 Persians invading Greece in 480 BC; and 1,000,000 on the battlefield at Gaugamela. These numbers are impossible because such troops would need huge empty spaces just to stand on; and stretched out marching on narrow roads they would have covered 300 miles, making it impossible to feed them.
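
The order of magnitude is easy to check with a back-of-the-envelope sketch in Python. The spacing figures here are my own illustrative assumptions, not numbers from the text:

men = 1_700_000                  # the claimed Persian invasion force of 480 BC
ranks = men / 10                 # assume the column marches 10 men abreast
column_km = ranks * 2 / 1000     # assume each rank takes up about 2 meters of road
print(f"Column of men alone: about {column_km:,.0f} km ({column_km * 0.62:,.0f} miles)")

The men alone would stretch roughly 200 miles of road; adding pack animals, carts, and the gaps between units pushes the column toward the 300 miles mentioned above.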

Using animals to do the carrying doesn’t solve anything. A horse can carry three times as much as a man, but it consumes three times the weight in food and water; camels can go four days without water, but then they have to drink four times as much.
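
To see how quickly this arithmetic turns against a large force, here is a minimal sketch, again with assumed illustrative numbers (a porter carrying about 80 pounds and eating about 3 pounds a day, a pack horse at triple both) rather than figures from the text:

# Carriers eat out of the load they carry, so less and less reaches the army
# as the march lengthens. All numbers are illustrative assumptions.
def net_delivery(load_lbs, ration_lbs_per_day, march_days):
    """Pounds of food actually handed over to the army after a one-way march."""
    return max(load_lbs - ration_lbs_per_day * march_days, 0)

for days in (5, 10, 20, 30):
    porter = net_delivery(80, 3, days)     # one man carrying grain
    horse = net_delivery(240, 9, days)     # triple the load, triple the ration
    print(f"{days:2d}-day march: porter delivers {porter:3d} lbs, horse {horse:3d} lbs")

Both carriers hit zero at the same point, a one-way march of roughly 27 days (half that if they must also carry food for the return trip): scaling up the load scales up the appetite with it, which is why the text turns next to the only ways out, living off the land and advance arrangements with the locals.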

Solution: live off the land. But there are two problems. One, it only works in good agricultural land. But ancient agriculture was mostly around the cities-- to put it the other way around, ancient cities had to be adjacent to agricultural land or to water transport, or they would starve. Inland, cities and good agriculture were like oases, with poor land in between supporting at best a sparse population. So traveling across poor land, or worse yet, deserts like those in Iran or Egypt, posed a life-or-death problem for an army. The bigger the army, the more deadly it was to itself.

The second problem is that a big army would have to keep moving, because even in fertile places, food and fodder would be exhausted in a steadily widening circle. And agriculture gets exhausted as the army passes through. The bigger the army, the more it creates a path of no return, since if it comes back (or a rival army, or a reinforcing one tries to use the same route), it will find nothing to eat. At best, it must wait until next year, the next harvest season-- assuming the army has not killed off the farmers by eating up all their food so that they starve.

How did Alexander’s army solve this problem? Essentially, by diplomacy. It would send scouts or messengers ahead, seeking out where food and water were available.* Local chieftains or government officials presented themselves at the camp as word got around about an approaching army. Typically they would surrender to the conqueror, whereupon he would usually confirm them in their positions, enlisting them as allies. This meant they were obligated to help his army pass through their territory. Diplomacy on the whole meant generosity and persuasion. Alexander didn’t have to conquer everybody; leveling one resisting city and selling the population into slavery would be enough to bring the others around. In places where there was distrust, the invaders would leave a garrison, or demand hostages. It was a mild form of conquest, which left everything locally as it had been.

*We see the same thing in the Bible, when Jesus and a growing crowd of followers travel from Galilee in northern Israel to Jerusalem, a distance of 100 miles. Jesus sends out 70 forerunners to find towns to host them. It is not a military expedition, but Jesus calls down religious sanctions on the villages that refuse to receive them. “Woe to you, Chorazin! Woe to you, Bethsaida! ... And you, Capernaum, will you be lifted to heaven? No, you will go down to Hades.” [Luke 10: 1-16] Logistical issues recur throughout Jesus’ career, since big crowds overstress local resources: hence the need for miracles of multiplying loaves and fishes, and turning water into wine.

The essential thing was that new allies or friendly natives were obligated to provide stores of food and fodder along the route, pack animals to replace those lost to malnourishment, or their own local pack trains.

For Alexander’s army, the method worked well. It also explains why it took 10 years to conquer the empire. Conquering the eastern part meant more marches through deserts and mountains, careful planning of when harvests were available, and more advance diplomacy.

Alexander fought relatively few battles. After each one, he would stay in a well-provided location, receive visits of capitulation, and arrange logistics for the way ahead. His father, building a mini-empire on the barbarian fringes of Greece, was ruthless when he needed to be, but on the whole Philip expanded by diplomacy. It all meshed together: his fast-moving army, his combined-arms victories, and his diplomatic agreements that solved problems of logistics. His son operated the same way.

Philip, too, had been a keen-eyed youth. His adolescent years were spent as a hostage in Thebes in the 360s. At the time, its famous general Epaminondas was dominating Greece by building a full-time professional army, inventing combined-arms tactics and the strategy of holding forces in reserve; instead of the one-shot shoving match between hoplite phalanxes, Epaminondas created a two-stage battle where after the initial melée had tied up the enemy, his fresh troops would hit them on the flank. Philip developed a version of this tactic, using heavy cavalry.

Philip took over as King at age 24, not much older than Alexander at 20. Both learned their craft young, from the best of the previous generation. Both hit the ground running.

[3] The Target for Takeover

Alexander's expedition 334-324 BC

Most importantly, Alexander’s success depended on the fact that the Persian Empire was there for the taking.

Let us unpack this. The Empire was already an organized entity. Cyrus, Darius I, and their successors had created a unified administrative structure out of what previously had been several major kingdoms (Media, Babylon, Egypt), plus lesser kingdoms, plus a vast area that never before had been a state in the strong sense of the term. Back in the time of Cyrus in the 500s BC, Mesopotamia and Egypt, the two great fertile river valleys of the Middle-East, had already gone through their elimination contests and winnowed down to a few strong states based on big populations held together by water transport. But Iran, the uplands of Asia Minor and Armenia, and the adjacent plains of Central Asia, were still areas inhabited by sparse populations. Some were moving pastoralists, who formed at most shifting tribal coalitions. Others lived in pockets and valleys where agriculture could support a mid-size population and therefore petty kingdoms; but they lacked the logistics to supply an army big enough to conquer anybody-- they could not carry enough food and water to get across the infertile areas between them.

Cross-section of mountain barriers to Iranian plateau

What Cyrus did was essentially what Alexander did later: starting from the major pockets of population and agriculture, he would win a few exemplary victories, then use his prestige to invite or overawe the outlying areas, with their lower level of production, to enlist as friends and allies. We could call this a system of tribute; the Great King, as Cyrus and his successors were known, was more than just an ordinary King, but overlord of lords.

He did not change much locally; the same chiefs and petty kings remained in place, but they had to pay tribute. Above all, they had to provide goods in kind, especially the animals and foodstuffs so that royal armies could pass that way.*

*In this respect, the expansive emperors, Darius and Xerxes, regarded the Greek city-states of Asia Minor and the other side of the Aegean sea as just so many more candidates for incorporation into the system of overlordship. Greek historians, and some contemporary politicians, saw this as a life-or-death struggle between democracy and despotism, but this was an exaggeration. From the Persian point of view, the Greek city-states were a version of small remote kingdoms, too much trouble to be directly controlled. The city-states of Ionia under Persian overlordship were left to run their own internal affairs; some continued to be democracies, others were oligarchies, but this was the same spectrum as on the Greek mainland. On the whole tribute was light, in fact generally less than what the Athenians demanded to maintain the anti-Persian fleet.

This was a thin administrative system.

In some places, a tributary empire could be turned into a thicker, more intrusive system. Cyrus, Darius, and their stronger successors put their own administrators in place: high-level satraps, intermediate level governors, local garrisons. In richer places, older city-kingdoms like Babylon, taxes could be collected in money for the royal treasury. Paved roads were built, facilitating the faster movement of armies to keep things under control; messengers connected administrators and sent policy edicts throughout the Empire. With only moderate success, to be sure; satraps were often near-autonomous; and since they ruled over layers of locals most of whose traditional leaders were kept in place, they often had little effect except keeping the taxes or tribute coming in.

Under the stronger Persian regimes, regional power was divided among a civilian head of government, counterbalanced by a chief treasury officer, and a military commander. There also was a service called “Eyes and Ears of the King,” roving inspectors with their own military escorts.

To repeat: Alexander’s success depended on the fact that the Persian Empire was there for the taking.

Now for the second part.

That it was for the taking was a common observation in Greece from the 390s onwards. The success of the Ten Thousand in fighting their way back from the heart of the Empire convinced the Greeks that their forces (and their democratic spirit) could always beat the servile Asians.

Spartan generals and other military stars of the following decades put themselves forward as prospective leaders of such a conquest. Such names were popular in the panhellenic movement, what was left of the Athenian anti-Persian crusade. The famous Athenian orator Isocrates proposed that the solution to Greece’s problems was just such an expedition: not because Persia was still a menace, but because it was an easy target.

Greece’s problem was that there were too many poor men wandering around joining armies. In the past Greece had taken care of its excess population by founding colonies around the Mediterranean. But that area was getting politically filled up, with menacing states in the west like Carthage and Rome. The solution was to expand eastward, conquering land from the Persians. The most recent prospective saviour to be named was Philip of Macedon. Left unsaid was the fact that Philip was becoming a threat to the Greeks; better to get him off to Asia and out of the way. After Philip’s assassination, Alexander made a foray into Greece with his army, and got himself confirmed as commander in chief (hegemon) of the panhellenic army, which now really would set out on this task.

Why an easy target? The Greeks could see clearly enough that their military forces were tactically better than the Persians. Moreover, Persia had long since stopped expanding. It had become a familiar player in Greek geopolitics, much like any other contender, taking part in one coalition, then another. Most striking of all must have been the way the Empire was periodically roiled whenever a King died. The satraps would revolt, and several years went into getting them back under control.

And Persian succession crises were filled with betrayals and assassinations, decimating the royal families several times over. The latest crisis had begun in 338 BC, and had not yet settled down when Philip was ready to launch his invasion two years later.

All this was true. Alexander was able to pick apart the Persian Empire in Asia Minor with ease. Beating one Persian army at Granicus soon after he landed, and besieging one holdout city (a Greek city, by the way, Miletus), was enough to make the rest of the polities-- Greek cities, semi-Greek kings, and Persian satraps alike-- come over to his diplomacy, and supply his logistics.

It wasn’t until the next year that the newly installed Great King could muster troops to meet Alexander in Syria, already within reach of the Mesopotamian heartland. Darius III was a survivor, not a particularly vigorous ruler, who got the crown mainly because he was almost the last of the lineage still alive.

That Alexander’s takeover of the Persian Empire went off without a single defeat was less a result of his singular qualities as a general than of the weakness of Persian administrative and military structure. Alexander spent 10 years on the takeover, not because it was difficult, but because it was so large. Logistically, he needed that much time to make a grand tour of his Iranian and Eastern possessions after, in the 4th year, he had occupied the major cities, defeated Darius, and assumed his crown.

But also, the Persian Empire had enough structure so that it could be taken over-- as opposed to crumbling to pieces. Even in the wars among Alexander’s successors, the central part remained intact, while the Macedonian/Asia Minor segment and the Egyptian segment broke off, leaving the big state outlines more or less where they were.

The Persian Empire, under whatever name, had coherence as a network, and it didn’t matter who headed it. In this perspective, the bloody, protracted and treacherous 20-year fight among Alexander’s successors continued the pattern of succession crises whenever the Persian Great King died. And this is what Alexander, in title, had become.

Sheer military force cannot take over a territory before it has developed to an economic level at which the conquering forces can be sustained. At the cusp of civilization, large armies could not even traverse places whose economic organization was not complex enough.

Conversely, a state with a strong enough infrastructure to support its military rulers also can support a conquering army.

No Greek general-- Alexander or anyone else-- could have conquered an empire spreading into the Iranian plateau and beyond into Central Asia in the 500s BC, when those places were still isolated agricultural oases amidst tribes and pastoralists. It required an intermediate step, such as Cyrus took, to build the logistics networks.

A person-centered way of saying this would be: no Cyrus, no Alexander. I have already said something similar about Philip’s relationship to his son. But to focus on names is to miss the point about how structures change.

Alexander made no changes to the Persian administration. His methods of conquest were the same as those of Cyrus: he accepted surrenders, then usually reconfirmed the former official in office-- sometimes even after they had opposed him in battle. Perhaps he did this out of gallantry, or from recognizing competence where he saw it. Also it was the easiest thing to do, much easier than trying to create an administration of his own. In some places he left garrisons, and in the heartland regions he installed his own officials as satraps, and tried to reinstitute the 3-official system (administrator, treasurer, military commander) where it had fallen into disuse. But the end result was essentially to put the organization of the Persian Empire back in working order.

The panhellenic prognosticators were right. The Persian Empire was ready for a takeover. But the end result was no different. Alexander was not great enough to make a structural change.

[4] Greek Population Explosion and Mercenaries

Greece had been in a population explosion from the 600s BC (when it had about half a million people) until 400 BC (when it reached 3 million). The city-states sponsored colonies in southern Italy, Sicily, North Africa, and around the Black Sea, without slowing down the population surge, so the overall growth must have been even larger. One big result of Alexander’s conquest was to open the Asian Middle-East and Egypt to colonization. This time it worked; Greece’s population started falling, down to 2 million by 1 AD, a loss of about one-third. During this same period, the Persian Empire and its successors (the Macedonian and Hellenistic successor states in Asia and Egypt), grew from about 14 million total in 400 BC to 17 million in 1 AD, with most of the growth in the Greek-dominated areas of Asia Minor, Syria, and Egypt.

One could say Alexander’s conquest solved the problem of Greek overpopulation relative to its resources, which Greek observers had seen as the result of growing concentration of wealth, dispossession of poorer farmers from their land, and the creation of a dangerous class of rootless warriors.

Reversing the gestalt, it was not so much Alexander who made possible the migration of the Greeks, as the other way around: the mobile Greek surplus population, already employed as warriors, made up the armies that carried Alexander to success. We see this in the growth of mercenaries, starting at least 100 years before Alexander.

Already at the time of Darius the Great, the Persian king was employing Greeks to command a fleet surveying the coast from India to Arabia, and even to survey the coast of Greece preliminary to invasion. Around 450 BC, Greek mercenaries were employed by Persia to put down an internal revolt by a satrap in Asia Minor. The famous Ten Thousand, hired by the pretender Cyrus to overthrow his brother Artaxerxes in 401 BC, marked the first time Greek hoplites were seen on the plain of Mesopotamia. By the 350s, Artaxerxes III was hiring his own Greek mercenaries to regain Egypt. Thus it was no surprise when Alexander, in his first battle of conquest, at Granicus in Asia Minor in 334 BC, fought an army made up of Persian cavalry, local infantry, plus a force of Greek mercenaries who fought longer and harder than anyone else, and were massacred by Alexander after the end of the battle. At the climactic battle of Gaugamela in 331 BC, King Darius III surrounded his personal bodyguard with Greek mercenaries.

In part this was the professionalization of warfare. Most fighters in the Greek wars of the 400s BC were part-time citizen-soldiers; the following century turned increasingly to full-time professionals. Greek hoplites acquired a reputation as the best brand, and their services were bought ever farther away in the international market. Greek generals offered to hire out to any city, tribe or kingdom that needed them. Political loyalty and nationalist ideology were nowhere near as important as the Athenian panhellenists made them out to be. Some soldiers were ideological, many were not. Already in Xerxes’ invasion of 480 BC, Arcadians (Greek soldiers from the interior) offered their services to the Persians, out of poverty. In the city-states, mercenaries were regarded with suspicion for precisely this reason; they were apolitical roughnecks, ready to fight for whoever had money to pay them; hence when the Ten Thousand completed their great escape from Persia, the Greek cities viewed them with distrust and refused to admit them. But even in ideological wars mercenaries were used; the Athenians for instance used them for the dirty work of massacring enemy cities during the Peloponnesian war.

Eventually everybody used them; Alexander too had mercenaries in his army, although he preferred to rely on his native Macedonians, and reserved officer positions for them.

The panhellenic ideology of Greeks vs. Persians was never a widespread reality. Already at the time of Cyrus the Great, most of the Greek cities of Asia Minor were brought into his empire by subsidies, bribes, and treachery. Even Athens and Sparta wavered between pro- and anti-Persian positions; it was generally the democratic faction who favored Persian protection and peace, the conservatives who favored war. After the invasions were thrown back, there was a Sparta-Persia alliance during most of the 400s BC; Persian tribute in Ionia was less than the Athenian exactions, and some cities preferred Persian to Athenian imperialism; and Persian subsidies-- Persian gold-- financed Sparta to victory in the Peloponnesian war.

During the 300s, the fluctuating Greek powers were all willing, at one time or another, to make Persian alliances.

One thing that connected mercenaries with Persian foreign policy was money. Mercenaries, by definition, fought for pay; and they flourished in the same milieu where subsidies/ aid/ bribery were a weapon of statecraft. The entire Middle-East, and its peripheral zones like Greece, were becoming better organized: in infrastructure of roads, shipping and ports; in administration, travel and communications; in agricultural production to support larger populations; in trade, tribute, and taxation. Coinage and a layer of monetary economy above the subsistence sector existed by the 500s, and were widespread by the 300s. In that sense, the argument about the spread of mercenaries is the same as the argument that the advanced organization of the Persian Empire made it a candidate for conquest. Not only was there a population explosion in Greece, but also a market flowing towards the better-organized East, where there was money to buy the thing that Greeks were best at producing: top-flight military labor. From a higher level of analysis, the growth of mercenaries, the shift of Greek population to the East, and Alexander’s conquest were all the same process.

[5] Alexander’s Victory Formula

Besides diplomacy and advance logistics, how did he actually conduct a battle? Not quite what you’d think: not just a headlong attack, but a mixture of caution and impulsiveness.

A better word would be patience. Alexander took risks once battle began, but his strategy of when and where to give battle was the opposite of risk-taking.

Alexander recognized that a big Persian army could not stay in one place very long. The bigger it is, the less it can live off the land; and bringing in supplies generates the vanishing-point mathematics of pack animals and humans eating up the supplies they are carrying, not to mention clogging the available roads.
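
To make the vanishing-point arithmetic concrete, here is a minimal sketch in Python. The loads and rations assumed below are purely illustrative, not taken from the ancient sources, and foraging is ignored; the point is only the shape of the calculation.

# A minimal sketch of the vanishing-point arithmetic of ancient supply trains.
# All numbers are illustrative assumptions, not historical figures.

def days_supported(soldiers, pack_animals,
                   load_per_animal=90.0,   # kg of grain one animal carries (assumed)
                   soldier_ration=1.5,     # kg of grain per soldier per day (assumed)
                   animal_ration=5.0):     # kg of feed per animal per day (assumed)
    """Days an army can march on carried supplies alone, ignoring foraging."""
    total_carried = pack_animals * load_per_animal
    eaten_per_day = soldiers * soldier_ration + pack_animals * animal_ration
    return total_carried / eaten_per_day

# Doubling the pack train never doubles the army's range, because the extra
# animals eat into their own loads; the ceiling is load_per_animal / animal_ration
# (18 days with these numbers), no matter how many animals are added.
for animals in (5000, 10000, 20000, 40000):
    print(animals, "pack animals:", round(days_supported(100000, animals), 1), "days of supply")

However big the host, the range it can cover away from water transport and stockpiles flattens out quickly; a few hundred thousand men could not simply sit and wait wherever they pleased.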

Facing huge armies, Alexander delayed accepting battle. Before Issus, Darius assembled several hundred thousand troops on a plain near the Syrian Gates, where the Macedonians would be expected to come out of the mountains of Asia Minor. The plain gave unrestricted maneuverability for a large army, and there had been time to stockpile ample supplies. Alexander, crossing them up, went on a 7-day campaign westward against the mountain tribes. Then he returned to a city where he was well supplied by sea, made elaborate sacrifices to the gods, held a review of the army, staged athletic and literary contests, even a relay race with torches. Finally Darius had to move, and went seeking Alexander in the narrow region of mountains and swamps, throwing away his advantage of open ground. After two weeks inland, no doubt hurting for supplies, Darius finally met Alexander at the Issus River, where the Persian army-- now down to about 150,000-- was packed in and unable to use superior numbers to outflank or surround him.

At Gaugamela 3 years later, Darius had an even bigger army, on a wide plain supplied by the main roads of Mesopotamia. They even cleared away bushes so that their scythe-bearing chariot wheels had room to roll.

Alexander brought his army, now grown to 45,000, to a hill overlooking the plain, where at night the Persian torches seemed to go on forever. Since the Persians were not going to move, Alexander gave his army four days’ rest. He was also playing psychological warfare, not letting the Persians fight in their first flush of enthusiasm (the adrenaline rush, we would say).

The Persians’ suspense grew even worse once they began to expect a night attack; after several sleep-depriving nights of this, Alexander chose to attack in daylight.

Alexander always started the battle. His formula was to seize the initiative, establish emotional domination as quickly as possible. His open-field battles all became walkovers. The units of the Macedonian army—infantry phalanx, light troops, heavy cavalry on both wings—advanced at different times, but the key was always Alexander’s assault.

Once the Companion cavalry broke the Persian ranks in an intense but usually short fight, the Persians' advantage in numbers was turned against them.

At Issus, the Persians had large numbers of troops, realistically perhaps four times the size of Alexander’s, lined up along a river bank. But most of those tens or hundreds of thousands could never engage the Macedonians, because they couldn’t get close to them. Once their defense crumbled on the right, Alexander turned obliquely against the center; this threw the Persian army into a stampede, particularly disabling when so many men trample each other in a traffic jam. In every major battle, the Persians lost 50 percent or more, the Macedonians a small fraction, perhaps 1 percent or less. The disparity in casualties seems unbelievable, but it is commensurate with complete organizational breakdown of one side, making them helpless victims. In violence on all size-scales, emotional domination precedes most physical damage.

At Granicus, Alexander positioned himself opposite where the Persian commander was surrounded by bodyguards. He waited for the moment when he saw a wavering in the Persian line, and charged his cavalry at that point. Alexander led 2000 or so cavalry splashing through the water and up a steep bank. This might seem a risky thing to do. But psychologically, relying on favorable geography for defense is a weakness; once the advantage of terrain turns out to be ineffective, the defending side has set itself up to be emotionally dominated.

In every respect, Alexander aimed at the point of emotional weakness-- a point in time and space, visible to a good observer.

Alexander did not have to fight the entire Persian army; he picked a unit about his own size, and counted on the superior quality of his troops-- the superiority they created by generating emotional domination.

All three of Alexander’s fateful victories-- Granicus, Issus, Gaugamela-- ended the same way, with the enemy commander (in the last two, the King himself) running away in his chariot, setting off a general panic retreat.

At Gaugamela, the Persian forces were so large and spread out that Parmenio, commanding on the left, had a stiff fight with Greek mercenaries and other Persian forces who did not know the rest of their army was routed. It took longer but Parmenio, too-- the other cavalry commander-- emerged victorious without Alexander’s help. This shows that the Macedonian style was not personal to Alexander alone.

There is another respect in which Alexander attacked the weakness of the Persian army. It was an army of an empire, a polyglot of 50 different ethnic groups, with their own languages, each fighting in their own formation. The army that invaded Greece under Xerxes had 30 generals, all Persian aristocrats; the armies of Darius III were probably similar. We can surmise that central control of the army, once battle began, was minimal. We can also infer that the morale and loyalty of each ethnic unit were shaky; they had been recruited by going over to the victor, and they were aware of the possibility of going over to the other side if things did not go well.* There was also the rigid hierarchy of the Persian army-- something all the Greeks commented on.

*This was the pattern of warfare in India before the arrival of European officers. Battlefields were displays of ferocious weapons-- chariots, elephants and so on-- but outcomes were decided mostly by side-switching in the midst of battle, arranged beforehand. (Philip Mason. 1976. The Indian Army.)

Why this would make a difference is illuminated by observations by Western troops serving in today’s Middle-Eastern wars. American and British forces in Afghanistan, for instance, have commented that local troops can be ferocious in combat, and like the action of getting into a fight. (I have this from personal accounts, and military publications.) Their main weakness is in their officers, especially the NCOs. Whereas American NCOs are trained to take initiative, especially when higher organization gets disrupted during the fog of battle, Middle-Eastern officers are wary of doing anything they might be criticized for.** Success as an officer is not necessarily a good thing. Outstanding success makes one a political threat; it also could be interpreted as showing up one’s superiors. Extrapolating backwards to Alexander’s time, there are numerous reasons why ethnic troops and their lower officers would not fight vigorously for their Persian commanders, if the battle started going against them. Generals who failed risked being executed; but generals who succeeded were potential rebels, and many of them got executed or assassinated in a few years anyway, in the distrustful politics of the Empire.

**In this respect, the Roman army was more like the contemporary American one. Centurions-- leaders of a company of 100-- were widely regarded as the backbone of the army, and treated as such by successful generals. Also similar were the widespread opportunities for upward mobility in the revolutionary French army at the time of Napoleon.

A puzzle: by the time of Alexander’s invasion, the Persian army had its own Greek hoplite mercenaries. At Gaugamela, King Darius deployed 15,000. Since the tactical quality of the troops was the same, why didn’t the Persians’ Greeks stymie Alexander’s? Most likely because of the organizational atmosphere of the Persian army. The Greek mercenaries were hemmed in by the status-conscious Persian command structure. Proof by comparison is in the wars that took place, in the same region, among the Hellenistic successor states after Alexander’s death. When the composition of the armies became the same on both sides, outcomes went back to pretty much even.

For Alexander, a few big battles were enough to make the loyalty structure crumble, setting in motion the massive side-switching and the diplomatic offensive at which Alexander was adept.

We should add a number of sieges, first on the Ionian coast, and then in the Levant, above all the sieges of Tyre, the harbor stronghold of Phoenician naval power, and Gaza, on the road to Egypt. Alexander’s sieges were no different from anyone else’s. They took patience: he spent 7 months at Tyre, determined to break through its strong-walled defenses. His eventual victory came by employing the most advanced Greek engineering methods of the time; but also through a strategic move. The Phoenicians could not be starved out, since they were supplied by sea. Alexander found a way around this by making diplomatic deals with other seaport cities, to bring their fleets to attack Tyre from the water. This worked; a combined land and sea attack breached the city. When he got to Egypt (which simply surrendered), he founded his most important colonial city, Alexandria, as a new naval center. This had the effect of giving him secure sea routes at his back, and quicker resupply lines for reinforcements from Greece. From a military perspective, it was a fine combination of strategic and tactical plans.

Finally, there are the stories about Alexander’s clever stratagems where his advance was blocked by an extremely strong position, like a fort in a mountain pass. As always in such stories, someone discovers a little-used pathway over the dangerous mountainside, leading around to the rear of the enemy. Alexander leads a body of intrepid troops on this action-adventure, and all is well in the end. Using bad weather as a cover also helps take the enemy by surprise. I don’t doubt the truth of these stories; but they are commonplaces about generals throughout history (there are similar stories in Xenophon). Most of these battles were minor; none of them broke the back of the enemy organization.

Was Alexander’s Success Because of or Despite His Personality?

“Personality” is a noun, but that is merely how it operates in our grammar. What we mean by personality, what the word points to, is not a thing at all but a series of actions. Personality is the sum total of someone’s personal interactions.

Some incidents of Alexander’s personal dealings with others were scandalously famous. In the seventh year of his campaign, his army was in Samarkand, far away in what is now Uzbekistan. At one of their frequent drinking-parties, Alexander got into a dispute with one of the Companions of the elite cavalry. It was his oldest friend, Cleitus-- his foster-brother, since Cleitus’ mother had nursed and brought up the two of them together. Both were drinking heavily. Cleitus began badgering Alexander about introducing Persian customs, especially making everyone who approached him prostrate themselves on the ground, treating him as a god on earth. Cleitus said it was offensive to his old friends, that an army wins as a group but Alexander was taking all the credit for himself, that he was forgetting who had saved his life at Granicus-- Cleitus himself.

Alexander grew angrier and angrier. Cleitus’ friends tried to pull him out of the room, but he barged back in through another entrance, shouting another insult. (This is the typical escalation of a bar-room quarrel; it is usually when one of the partisans in a face contest has been ejected and makes a return, that somebody gets killed.) What happens next is revealing in the way Alexander was treated by his personal companions and servants. Alexander called on his guards to sound the alarm-- the signal that would have roused the entire camp to arms. None of the guards obeyed the order; they must have been used to such quarrels, and defied their god-playing King to keep the situation from getting out of hand. Since no one obeyed him, Alexander grabbed a spear and hurled it at Cleitus, killing him.

Immediately he calmed down. He tried to kill himself with the same spear but his guards prevented it. He retired to his room, and stayed there berating himself for days. Finally his advisors prevailed on him to put the incident behind him. He resumed acting like a Persian king-god, at least in public. About this time began a series of plots, rumored assassinations and real executions. Two of his favorite Companions had drawn swords on each other; Alexander settled the matter by telling them he would execute them both if they quarreled again. He also assigned them separate tasks, one to convey orders to the Greek-speakers, the other to the Persians and foreigners in his army.

A flashback reveals something deeper in the interactional style of Alexander, and the Macedonian court where he grew up. When Alexander was 18, his father had taken a new wife, and at the wedding party the girl’s uncle-- one of Philip’s generals-- gave a toast to a new heir. Alexander threw his drinking cup at the man’s head and shouted: “What do you take me for, a bastard?” Philip drew his sword to cut down his son, but failed because he was too drunk to stand up. Alexander and his mother had to go into exile, but eventually he was recalled. Not long after, Philip was assassinated by another intimate with a dagger, Alexander’s mother had the new wife and her baby killed, and Alexander became the new King.

Heavy drinking, brawls, plots and assassinations were common at the Macedonian court (as the latter were in Persia too, although it is not clear that drinking was involved.)

There are striking similarities between Alexander killing Cleitus, and Philip trying to kill Alexander. Philip was a tough, brawling fighter, years of violence having left him with one eye, a crippled hand, and numerous wounds. Alexander was wrecking his own body the same way. Both drank heavily with the aristocratic heart of their army. Both relied on the same battle tactics, leading the charge, inspiring the cavalry attack. There was no way Alexander could avoid keeping up these drinking bouts; he continued them until he died from one of them.

Drinking was the ritual of bonding among the group that won his victories. Alexander’s carousing seems to contradict his patience in arranging logistics and awaiting the proper moment for marching or battle. But these were parts of the same thing: having to wait around so much gave occasion for carousing, a way of keeping up morale during dead time.

Now Alexander is in a structural bind. As Persian King, and in constant diplomacy playing King of Kings to the chieftains around him, he is caught in the ceremonial that exalts him. As leader of the world’s best military, he needs to keep up the solidarity of his Companions. The ambiguity of that name-- more apparent to us than it would have been at the time-- displays the two dimensions that were gradually coming apart: his companion buddies, a fraternity of fellow-carousers, fighters who have each other’s back; and the purely formal designation, members of the elite with privileged access to the King.*

*Compare the protocol of King Xerxes (reigned 485-465 BC) described in the Old Testament Book of Esther. She is a beautiful Jewish woman who has become Queen, top rank in the harem. But she risks her life in leaving her house to enter the King’s presence uninvited. Fortunately for Esther, and for her people, the King is happy to see her, and she is able to countermand an order sent out by royal messengers that would have killed all the Jews in the Empire. The storyline in Esther hinges repeatedly on who is allowed into the royal presence; at the outset, the previous Queen is deposed because she refuses to come when the King wants to show her off at one of his all-male drinking parties. Which way the royal scepter pointed meant favor, or death. Similar protocol at Babylon is described in the Book of Daniel. Alexander was moving towards being that kind of Oriental potentate-- and the Greeks were the first to formulate the ideas of Orientalism.

Thus it is striking how much freedom from deference, how much equality existed in Alexander’s drinking parties. It is astounding that his guards refused to obey his orders, and even laid hands on him forcefully to prevent his suicide. They too were part of the team.

Philip and Alexander have the same double personality.** Philip, though a bad-tempered brawler and ferocious battle leader, is also a master of diplomacy. We have seen that Alexander’s conquest would not have been possible without having learned to solve logistics problems by diplomacy. After some battles he could massacre the defeated; but he could also be magnanimous. With some conquered kings and other high aristocrats, Alexander would not only restore them to their positions, but treat them with great courtesy.

Such magnanimity would also have been good for his diplomatic reputation, encouraging side-switchers to approach him. I am not suggesting it was simply a strategy Alexander played. Personality is made from the outside in; habitual styles of interacting with people become part of the way one is. Since Alexander’s daily life fluctuates among different kinds of situations, he has many personality facets-- to fall back on talking in nouns, an unavoidable but misleading feature of our language. His life consisted of situations when he played the hard-drinking fraternity boy, and when he played the diplomat; increasingly as he took over Persian organization, he took on the role of the arrogant ruler, paranoid about plots.

**One respect in which they differ is that Alexander was not very interested in sex. He joshes his friends for their love affairs, but seems to have been a virgin until age 23, when Parmenio gave him a captive Persian woman. Plutarch records that the captive wife and daughters of the King and women of the court were “tall and beautiful”, but Alexander would say sardonically “What eyesores these Persian women are!” Nor does it appear that he was homosexual-- although that would have been normal in Greece-- since he forcefully rejected a present of two beautiful boys.

Alexander was a monomaniac about the army and dangerous physical action-- he preferred hunting lions. Very likely he regarded women as dangerous entanglements, sources of strife and assassination. Observing not just his father, but his mother, would have taught him that.

Here are some other facets, or episodes:

The Impetuous Leader

The only route from the royal city of Persepolis in southern Iran to Ecbatana, the old Median capital in the northwest mountains, led over an 8,000-foot pass, often blocked from winter until April. But in March 330 BC, Alexander was eager to get through. The ancient historian Curtius describes it with a touch of melodrama: “They had come to a pass blocked with perpetual snows, bound in ice by the violence of the cold. The desolation of the landscape and the pathless solitudes terrified the exhausted soldiers, who believed they were at the end of the world, and demanded to return before even daylight and sky should fail them.” Alexander reacted by leaping from his horse, seizing a mattock from a soldier and furiously attacking the ice, chopping a path through. It was the same way he led the cavalry charge in combat, pulling his troops behind him.

In summer they were marching through a desert, suffering from heat and lack of water. One day, an advance party found a gully stream, and were bringing water up on pack animals as Alexander marched by on foot with his soldiers, sharing their misery. A soldier filled a helmet with water and held it out to Alexander. As he was raising it to drink, he saw his cavalry soldiers looking at him thirstily. Alexander shook his head and dashed the water to the ground-- his cavalry shouted they could all go another day without water and they galloped off together. One wasted helmet of water, Plutarch comments, invigorated the whole army.

Flashforward four years. Alexander’s army is preparing to leave Bactria, in far-off Central Asia. The campaign has been successful; they are laden with booty, rugs, silks, luxuries, probably a throng of camp followers. Alexander looks at the loaded supply train, just the kind of thing that would drag them down. Burn it all! And he starts in with his own wagons and pack animals, tearing off the bundles and throwing them into a fire. There is a shocked moment: then his soldiers join in, one after another; soon they are yelling in contagious joy, throwing things into the fire.

It is a combination potlatch and display of military dedication, waking up from the soporific dreams of peace.

Why does he act so much better on campaign than he does in court or in camp? He is an action junkie; the soft life repels him. But it is part of being a great King, and that has been his life’s goal. As long as there is another battle to fight, another danger to brave, he is in tune with his men, his buddies, his Companions.

Two Mutinies

Eastward into India, crossing one tentacle of the upper Indus River after another, the army penetrates the exotic tropics. They win a great battle against a huge host, armed with elephants; Alexander receives the Indian King’s surrender, then returns his kingdom to him with an exchange of royal compliments. The war-plus-diplomacy formula is still working. Then: his troops refuse to go on. Not just the men; his officers come to explain what the soldiers are saying, they agree with it too. Alexander is devastated. He retires into his tent, refuses to talk with anyone. He announces the rest can go back; he will go on with whoever will accompany him. No one offers. It is like the days after he had murdered Cleitus. But Alexander is harder now, older too; he gets over it, reluctantly agrees to lead his army down the river to the sea, starting their return to the West.

But his mood has changed. Already, since the murders and suspicions and executions in Central Asia, Alexander had grown more personally violent. With an arrow he shot a barbarian chief brought to him for rebelling.** Later, reprimanding his administrators for corruption in his absence, he killed one with his own hand, with a javelin thrust. When Parmenio’s son was implicated in one of the alleged plots, Alexander not only killed the son but sent orders to assassinate the father.

**Millennia later, in the same part of the world, this was still a style of the super-toughguy leader; in the Russian civil war around 1920 the chief of the partisans/bandits would personally execute a captive in front of a crowd, this time with a pistol. (Felix Schnell. 2012. Räume des Schreckens. Gewalt und Gruppenmilitanz in der Ukraine 1905-1933.)

Thus we should not be surprised at the following incident: Beginning the march home in the Indus valley, Alexander fought all the tribes that would not submit. In one city, the citadel held out. Growing impatient with the siege, Alexander himself mounted one of the ladders, fending off a shower of missiles from above with his shield. He reached the top with three others when the ladders broke. His friends called on Alexander to jump back down; instead he jumped into the fortress. His tiny group fought ferociously, but were almost overwhelmed in the midst of the enemy by the time the Macedonians had frantically driven pegs into the earthen wall to make the ascent. One companion was dead; Alexander had been pierced by an arrow in the chest and fainted from loss of blood. His infuriated troops killed everyone in the place down to the women and children.

Alexander was always heedless of himself in battle, but now one wonders if he cared whether he lived or died. His soldiers had betrayed him; if they wouldn’t follow him now, they would see!

There must have been some satisfaction as his litter passed by boat along the camp shore, the army shouting as he raised a hand to show he was still alive.

After a long and devastating march, the following year they were back in Mesopotamia.**

**The big obstacle was the Gedrosian desert, the driest part of Iran. Alexander could have come back the way he had gone out, looping across the northern, more fertile edge of the Iranian plateau; but he sent a subordinate with part of his troops that way-- he wanted to try something new, maybe something especially dangerous. Usually careful of logistics, he planned for his admiral Nearchus to sail parallel to his route along the coast of the Indian Ocean, to supply him with food and water. It was a rare miscalculation: they did not know the monsoon winds blew the wrong direction that time of year, and Nearchus’ fleet was stuck in port while Alexander’s 150,000 were marching west. Three-quarters of them died in the desert. It was the worst loss of Alexander’s career, more men than all his battles put together. It was like Napoleon's retreat from Moscow.

Then came a second mutiny. He called an assembly of his Macedonian veterans, by now down to a fraction of his troops. He formally discharged those who were too old or wounded for further action, sending them home with ample rewards. The army’s mood was sullen. The cry went up: Discharge us all! And some yelled taunting insults about the Asian gods in whose name he would fight his further conquests. Alexander leaped down from the platform and pointed out the ringleaders to his guards, to seize them and put them to death.

In the silence that followed, Alexander remounted the platform and bitterly discharged the whole army. From now on Persian nobles would fill the high posts; names of Macedonian regiments would be transferred to the new army. For three days the Macedonian soldiers lingered, uncertain what to do; finally they laid down their weapons and begged to be admitted into Alexander’s presence. What followed was a tearful reconciliation. The quarrel was patched up, in the usual ritual, by massive drunken feasting.

Partying to Death

The triumphant return to the center of the Empire was one carousing celebration after another. There was a drinking contest with a prize; the winner drank 12 quarts of wine and died in three days; another 40 guests died because they were too drunk to cover themselves in a sudden storm of cold weather. At another great feast, featuring 3000 entertainers imported from Greece, Alexander’s closest friend Hephaestion fell ill after swallowing an entire flagon. When he died, Alexander went into a veritable potlatch of grief; he had the battlements of nearby cities pulled down, and massacred the entire population of a nearby tribe who had been causing trouble; the physician who failed to cure his friend was crucified. Hephaestion was more than a friend; he was his fellow Persianizer, the one who like himself wore Persian robes, the one who had fought the leader of the pro-Greek faction after the murder of Cleitus. Now Alexander was alone, the Persian King of Kings, without a friend. Someone stepped forward, one of the original Macedonian Companions, inviting him on an all-night drinking binge. They did it again the next night. Alexander woke up with a fever, steadily worsened, and died.

It was alcohol poisoning, of course-- literally drinking himself to death, like his companions.

Copy of a statue of Alexander, regarded as a good likeness.

Are we surprised at how he looks? The statue made by his favorite sculptor is certainly not of a youth; probably from the last years of his life when he was back from campaigning. He stood out from his bearded contemporaries because he kept himself clean-shaven. Alexander was short but stocky, with something twisted looking in his face and neck. He was thirty-two years old when he died. Is this dying young? Think of him as an aging athlete, engaged in the roughest action for 16 years; about the time professional athletes start to retire, beat up from injuries. Alexander had been wounded in almost every battle, sometimes severely; wounded in the leg, bludgeoned in the head and neck, arrows that shattered bones and had to be painfully removed from shoulder, thigh and chest. It accumulates; and there were no steroids to prolong an athletic career.

Alexander did not die of disappointment, or for want of places yet to conquer. His fatal drinking binge took place days before another expedition was to be launched, the conquest of Arabia, preliminary to Carthage and the western Mediterranean. But the atmosphere was different. The court was swarming with priests and soothsayers, making all manner of sacrifices and oracles for the upcoming expedition. Alexander was conventionally religious for his time-- i.e. giving ample display of rituals before and after battle, no doubt enjoying the Durkheimian center of attention. But there is no indication he ever let the oracles tell him what to do. Flashback one last time, to Alexander in Greece, 21 years old, getting ready for his Persian expedition. Following good form, he visits the oracle of Delphi. But the oracle is closed; it is not a propitious day. Alexander forcefully drags the priestess to her shrine. “My son, you are invincible,” she protests. It is all he wants to hear.

Did Alexander Really Achieve Anything?

He took over the Persian Empire. He did not change its structure or even its extent. The mutiny in India happened when his army passed the Persian frontier; it was just too far, by everyone’s sense of what the Empire could hold. I take this to mean a logistics sense. There is a silly conjecture that if Alexander had not died, he would have conquered Carthage and Rome, and created a true world empire. This is hero-rhetoric of historians. As if anyone had the administrative capacity at the time: Italy had still not gone through the elimination contest that would have made it the kind of target Persia had become. Even 500 years later, when the Romans shifted from a thin tributary overlordship to a degree of bureaucratic penetration, they never could get beyond the western edge of Iran.

Could anyone else have done as much as Alexander did? Very likely. His father Philip was all set to do it; and he probably could have carried it out, had he not gotten killed at some other drinking party along the way. They used the same army, the same tactics, the same diplomacy of rule. Perhaps the only difference was that Alexander was somewhat better, after all, at holding his liquor at drinking parties.

The main structural innovation that Alexander attempted was to promote mutual assimilation between Greeks and Persians.* The god-king protocol was what his Greeks objected to; but it was a necessary form of rule in the tributary overlord structure of the Persian Empire, depending on impressiveness and ceremonial obeisance that left local potentates in place. Greek city-state democracy (and even the version of egalitarian equals inside the Macedonian aristocracy) was structurally incompatible with the vertical hierarchy of an oriental empire. What Alexander’s innovation came down to specifically was an academy to train Persian noble youths, by making them, in effect, into Persian-speaking Macedonian officers.

This is what the second mutiny was about. He couldn’t integrate the Empire; the best he could try was to integrate the officer corps. Even this didn’t take.

*Alexander was not so “Greek” as Greek-centered historians have assumed. Certainly he was not a panhellenic anti-Persian. When he left Macedon, he gave away all his property, acting like he expected never to come home. His heroes were the Persian empire-builders, Cyrus and Darius I, even though the latter invaded Greece. In fact, Macedon became a client state of Persia at the time. Macedon was a buffer zone between two culture areas, and such locations can go either way.

The biggest consequence of the Macedonian conquest was creating a zone on the eastern and southern edges of the Mediterranean in which the dominant language and culture were Greek; and thus a zone where travel was facilitated, and social movements could spread. The main results were two: when the Romans started being drawn into Greek coalition-wars, starting with Epirus (on the Adriatic side of Greece-- near present-day Albania, and the place where Alexander’s mother came from), they were drawn onwards until they were interfering in the alliance system of the Greek-speaking states, all the way around to Syria and Egypt. And when Rome interfered, it never withdrew.

The second result can be seen in where Christianity spread: exactly these Greek-speaking places. Paul the great missionary to the Gentiles is a native of Tarsus in Asia Minor, near where Alexander fought the battle of Issus. The letters that make up the New Testament (itself written in Greek) are almost entirely to Christian congregations in Asia Minor and Greece. The subsequent great centers of the church, and of the monastic movements, were Greek-speaking Antioch and Alexandria. The inadvertent consequence of Alexander’s conquest was to create the conditions for the linguistically unified networks that became the great universalistic religion of the West. The panhellenic Greek spokesmen who in the 300s BC advocated colonizing land won from the Persian Empire thought they were exporting Greek democracy.

This did not happen. What got created, instead, was a cosmopolitan network structure, with Greek as its lingua franca. In it the very idea of universalism-- of a religion free from worldly entanglements and local loyalties-- could take hold.

Why did Alexander Sleep Well, but Napoleon Never Slept?

The preceding blog observed that Napoleon was so energized that he worked 20 hours a day, and on campaign never slept for more than 15 minutes at a time. Alexander was not at all like this. Alexander bragged that he never slept better than the night before a battle; that Parmenio had to shake him three times to wake him up before they went out to fight at Gaugamela. This is entirely plausible. Alexander was much more physical than Napoleon, a muscle man who tired himself out with vigorous exercise.

They both had high emotional energy, pumped up with confidence and pumping up others around them. But they did it by different means. Napoleon got his energy in center-of-the-network rounds of meetings, taking care of all the many branches of administration and moving around the pieces that had to assemble for battle. Things were simpler in Alexander’s day; administration was a thin ceremonial hierarchy; battle preparations were simple, and he did not so much direct his forces as launch the attack and create a bolt of energy that would stream behind him into the heart of the enemy army.

Who was the greater general? Consider this a way of seeing how much had changed from ancient organizational structures to incipient modern ones. If we imagine Napoleon going up against Alexander, it would have to be in either one era or the other. On an ancient battlefield, Napoleon would have been too small to play much part. On a modern battlefield, Alexander would have been one of the wild barbarians whose cavalry charge got mowed down by Napoleon’s artillery. Maybe he was, in the form of one of the native armies Napoleon annihilated in Egypt or Syria. Alexander won all his battles, Napoleon lost at least one big one. But Alexander fought perhaps a third as many battles, all of them one-sided, the most advanced military organization of its day against inferior ones. Napoleon fought armies much like his own, and towards the latter part of his career, his enemies caught up with his best techniques. It is foolish to attribute their respective records to such a transcendental impossibility as sheer decontextualized talent.

Bottom line: Heroic leaders, if we unpack the designation of what we are talking about, have to be energy stars. They are persons in the center of gatherings, where they recycle emotions into group action. It can be done in different ways. Napoleon did it mainly by turning enthusiasm into speed; Alexander by spreading a reputation that mixed domination, sudden anger, and magnanimous generosity.

They lived on opposite sides of a moral divide. Alexander was far more personally cruel than Napoleon, or other modern people, could be. Getting into Alexander’s world makes us realize how different human beings are under different social circumstances. Today someone like Alexander would be on death row. Napoleon one could have liked. As Durkheim explained, morality, as well as emotional energy, is a product of social morphology.

 

References

Arrian. History of Alexander.

Plutarch. Life of Alexander.

Xenophon. Anabasis.

Old Testament Bible. Book of Esther; Book of Daniel.

J. B. Bury. 1951. A History of Greece to the Death of Alexander the Great. Chapters XVI-XVII.

Donald W. Engels. 1978. Alexander the Great and the Logistics of the Macedonian Army.

Peter Green. 1970. Alexander of Macedon, 356-323 BC.

R. Ghirshman. 1954. Iran: From the Earliest Times to the Islamic Conquest.

Colin McEvedy and Richard Jones. 1978. Atlas of World Population History.

H.W. Parke. 1933. Greek Mercenary Soldiers.

Geoffrey de Ste. Croix. 1983. Class Struggle in the Ancient Greek World.

Randall Collins. 1998. The Sociology of Philosophies. Chapter 3.

Randall Collins. 2008. Violence: A Micro-sociological Theory.


SANDY HOOK SCHOOL SHOOTINGS: LESSONS FOR GUN-OWNING PARENTS

The aftermath of the school shootings at Newtown, Connecticut has focused on searching for a motive. Why did 20-year-old Adam Lanza kill 20 children and 6 adults at the school, after shooting his mother in her bed? The report of the State’s Attorney, released in November 2013, concluded that the motive will never be known. Searching for a motive is misguided; what we want to know are causes. Whatever went through the head of Adam Lanza in the hours or years leading up to the shooting on December 14, 2012 is less important than the conditions that made it happen. Above all, what was the chief cause, or causes, which, had they been different, could have prevented the shootings?

Clandestine Back-stage Cult of Weapons

In a previous post [Clues to Mass Rampage Killers: Deep Backstage, Hidden Arsenal, Clandestine Excitement; posted Sept. 1, 2012], I argued that the most distinctive clue that someone is planning a rampage killing is that they lead a secret life of amassing weapons and scripting the massacre. The point is not that they acquire a lot of guns; many people do that. But mass killers keep them secret; their life becomes obsessed with plans and fantasies of the attack, and energized with the excitement of being able to dupe other people about their secret life. Foremost among those who are duped is their family.

The Sandy Hook shooting had all these traits, with a few additional twists. The would-be shooter kept the bedroom windows of his room taped over with black trash bags; so were the windows of the nearby computer room where he spent most of his time. No one was allowed into his room, not even to clean, not even his mother. In fact, no one was allowed into the house; all workers and deliveries had to meet the mother outside in the yard or at the end of the driveway.

What did he keep hidden in there? A semi-automatic rifle; two semi-automatic pistols; a semi-automatic shotgun; another rifle;  a large amount of ammunition.  Also, many weapons such as knives, swords, and spears. He also had books and newspaper photocopies about shootings of school children and university students;  a spreadsheet of mass murders; a large amount of information especially about the Columbine shootings; Hollywood movies about mass shootings; a video dramatization of children being shot; and a computer game “School Shooting” in which the player enters a school and shoots students.

He also had in his secret chambers: photos of a dead person covered in blood and wrapped in plastic; images of himself with rifle, shotgun, and pockets stuffed with ammunition magazines; images of himself holding a pistol or rifle to his head; videos of suicide by gun. His fantasy life apparently centered not only on a large number of violent video games which he played at home-- like millions of other boys-- but on famous school shootings of the past, on killing children, on portraying himself armed to the teeth, and on a scenario that we can infer was to end in suicide.

Violent video games are so ubiquitous that he did not have to keep them hidden. But for this young man they had a special meaning, private enough that he played shooting games only at home; at a local theatre he played video games 4 to 10 hours a day, but only non-violent ones in the presence of other people. His clandestine excitement would have come from his ultra-private fantasies and preparations, never spoken about to anyone including his game-playing compatriots at the arcade. Even things that he innocently could have talked about-- such as different kinds of weapons, or shooting at gun ranges-- were never mentioned, although he was obsessed with them in his closely-guarded rooms at home. He was very guarded about his on-line activities too, frequently reformatting his computer hard drive to minimize his Internet trace. As I argued in the earlier post, it is this kind of secret life centered on weapons that indicates the pathway to mass killing, whereas more normal gun-owners keep their weapons above-board.

The Killings: Superfluous Arsenal and Emotional Domination

When Adam Lanza shot his way into the Sandy Hook Elementary School by breaking through the locked glass entry doors, he was wearing a black shirt over a black T-shirt, black cargo pocket pants, black socks, black sneakers, black fingerless gloves. This all-black costume-- probably depicting a fantasy-culture image of the outlaw or avenger-- was supplemented by some practical items: a green pocket vest to carry ammo; a combat camouflage holster; and yellow earplugs of the kind used on shooting ranges. As I have pointed out about other mass shootings, covering oneself with layers of gear and shutting out sounds of firing has the effect of insulating the shooter from ordinary human contact, letting him descend into the deep emotional tunnel of self-propelled violence.

Like other rampage shooters, he brought far more weapons than he actually used. He had four of his five guns with him-- the only one lacking was the rifle he had used to kill his mother, which he left on the floor by her bed, three bullets still loaded. (Why did he leave it? Was it specific to that particular fantasy scenario, the gun he had planned to kill her with?) Altogether, the shooter carried over 30 pounds of guns and ammunition-- a significant weight for a bean-pole of a young man, six feet tall and weighing 112 pounds. After he committed suicide, police found he still had over 250 live rounds on his body, with more in his car. He had used up about 150 rounds in breaking in and killing 26 people. He could have kept on firing, but he stopped. He could have fought it out with the police, but he did not-- rampage shooters virtually never do, either killing themselves or giving up when real opposition arrives.

Two implications:  Much of the weaponry he carried was not for practical purposes. It was his symbolic accoutrement, like his black costume and his earplugs, his fantasy surrounding him in the real material world, his comfort zone hugging his own body-- even the weight of the ammo he didn’t need. It was a continuation of the clandestine playacting that had filled his life for the months leading up to the attack. Ordinary street fighters don't wear this kind of gear, and they don't wear earplugs-- they have other kinds of social support for their violence. Loners need more symbolic support.

And secondly: He fired only when he had emotional domination over the people around him. We are revolted at someone killing small children. He chose them as victims precisely because they were defenseless, because they would be afraid of him, because they were the only people he could emotionally dominate. The adults-- the teachers and aides, the principal who tried to stop him in the hallway-- were shot because they were in the way. His fantasy materials hidden at home were all about shooting children.

He had no fantasies, it appears, about shooting it out with the cops. After his 11-minute rampage, and within one minute of the police arriving, he shot himself. He was dead before the police reached the classroom; he never had to confront them.

Jack Katz, in a conference presentation at the University of Giessen (October 2013), pointed out that rampage shooters never have an escape plan. This is very unlike most other criminal plans-- armed robberies, revenge murders, hit-man assassinations, guerrilla attacks-- where getting away is a major part of the prepared scenario. Nor can the rampage killer use anonymous weapons, like bombs or poison; his problem is a spoiled self, and he can only correct his social image by appearing in person, confronting the scene of his humiliation, and making others see him as the powerful figure he has now transformed himself into. And that is the whole aim of the project. Getting away, escaping-- back into ordinary life? As what? As a fugitive, a clandestine shadow-- would be to fall back into the spoiled self he wants to transform. That is why the rampage shooting is a dramatic climax, end of story.

Katz’s analysis is correct, as far as it goes. I would add that there is always the interactional problem of all violence: confrontational tension. Straight-on face-to-face confrontations threatening violence are hard for everybody. Professional criminals, hardened tough guys and military combat experts are a minority who learn how to master their adrenaline and keep down their heartbeat to the level where they can actually shoot straight (at least some of the time-- Adam Lanza, who fired about 140 shots at persons at close range, missed with about half of them). It is easier to carry out violence at a distant target, especially one that is never seen personally; but a rampage shooter has to confront, because his aim is to get a social acknowledgement of his new self. The solution is to find a weak target. And the weakness is not just physical: in violence of all kinds, a close micro-analysis of the event in time shows that emotional dominance is what precedes and allows physical violence to happen.

For Adam Lanza, targeting small children was the only way he could gain emotional dominance. He was described, by everyone who knew him, as timid, compliant, never aggressive, never threatening. The only persons he could converse with were other computer nerds and video gamers.

Is there a puzzle about why he chose Sandy Hook Elementary School?  He had attended grades 1-5 there; reportedly he wasn’t bullied or teased, and he is said to have liked the school. It was in middle school and high school that he had more trouble, becoming more withdrawn; he developed an aversion to sports-- the popularity-setting and attention-dominating collective ritual of those age groups-- and to noisy crowd activities in general.  So if his self-image problem was with the higher schools, why didn’t he take out his revenge there? The answer seems clear enough if we try to imagine Adam Lanza going into a school basketball game-- which presumably he would have hated-- and shooting up the cheerleaders and players.  In fact, turn-the-tables shooters never confront their opposition on its territory of greatest emotional strength; they always seek to catch their enemies in a down moment.  This is what Adam Lanza did by attacking elementary school children-- and in fact the weakest of them, the first graders. It was the only target he could manage.

Mental Illness

Of all the cases of rampage killings, the Sandy Hook case is most clearly characterized by mental illness. Does that settle it? Hardly.

Adam Lanza was diagnosed with Asperger’s Disorder at age 13, with social impairment, lack of empathy, rigid thought processes, literal interpretation of communications, and extreme anxiety about noises and physical contact with others. At age 14, the diagnosis added Obsessive Compulsive Disorder.  His mother adopted this characterization of her son, and collected books on Asperger’s syndrome. She said he was “unable to make eye contact, was sensitive to light, and couldn’t stand to be touched...” He wouldn’t touch door knobs and had somebody else open doors for him, or else pulled his sleeves over his hands to touch objects. He repeatedly washed his hands and changed his clothes during the day-- although at school and in the video parlor he always wore the same clothes, so it wasn’t style he was concerned about.

But these features of mental illness were not directly connected to the shooting. Asperger’s syndrome is considered a mild form of autism, and is not related to violence. The report mentions that in preschool (before the family moved to Connecticut from New Hampshire) “his conduct included repetitive behaviors, temper tantrums, smelling things that were not there, excessive hand washing and eating idiosyncrasies.” That is to say, obsessive compulsive behaviors were allegedly seen early, although this is something different from autism; and obsessive compulsives are typically not violent.

His social behavior was not constant over time. Some remembered him in elementary school as participating in play groups and parties, enjoying music and playing saxophone (not yet sensitive to noise, touching, etc.) In middle school (i.e. the years when he was diagnosed with Asperger’s), he became more of a loner, began to dislike sports; although for a while he performed in concerts, he dropped out of the school band and stopped playing soccer. In high school (10th grade) he went to meetings of a “Tech club” and even hosted a party for it at his house. (He wasn’t yet hyper-secretive.) This looks like the typical adolescent status system split between jocks and nerds.

He did not like the noise and confusion of having to walk through school halls to change classes-- exactly the occasions when the sociable kids are greeting each other, jokes and snubs are made, and the non-regulated teen status system is in full display. The noise he disliked was the chatter of the other students; he could sit through a teacher’s lecture. In 9th and 10th grades, he stopped riding his bicycle and climbing trees and mountains, and began to shut himself in his bedroom and play video games all day long. At school he was excused from physical education. He was labeled a Special Education Student (in teen culture, the lowest of the low). His mother began to home-school him; combined with individual tutoring at the high school and classes at a local college, he was able to graduate at age 17 and escape from the teen status system.

It was at the time of transition from the relative comfort of elementary school to the competitive world of adolescent status ranking that his school writing became obsessed with the topic of violence.

So what does the mental illness analysis help explain? Nothing about violence. At most, it adds to the difficulty of negotiating passage through the social system of adolescence.

Mental illness, although it is a noun, is not a thing; it is a behavior pattern that is seen when persons interact. Some of these patterns may have a physiological component. But interaction is a two-way, indeed multi-sided process; it isn’t just ruled by something in one person’s physiology.

This comes out most clearly in Adam Lanza’s way of interacting with his mother. She regarded him as mentally ill, and had said she did not work because she needed to care for her son, and worried about what would become of him without her. In his school years, she drove him everywhere. Only after he had been out of school for a year, ostensibly doing nothing but playing video games, did he finally begin to drive (did she put her foot down on this, for once?) At home, he ruled her life. Because of his obsessive changing of his clothes, she had to do the laundry for him every day. No help was allowed into their home; she had to meet all deliveries outside. Although he could cook for himself, he demanded his mother make very specific combinations of food, which had to be served in just the right position on the plate, and certain dishes were prohibited for certain foods. One could call this obsessive compulsive; one could also see it as using mental illness as a form of control. He made her get rid of her cat because he did not want it in the house. He vetoed celebrating birthdays and holidays, and would not allow her to buy a Christmas tree. Since these are festive family occasions, he was attacking any effort she might make to celebrate their family.

Freud referred to taking advantage of the effects of neurotic behavior on other people as “secondary gain.” Goffman goes more deeply into the process: something happens in people’s lives, a particular kind of trouble that can’t be resolved and that wrecks everyone’s life, until it ends with someone being labeled “mentally ill.”

“Mental symptoms... are neither something in themselves nor whatever is so labeled; mental symptoms are acts by an individual which openly proclaim to others that he must have assumptions about himself which the relevant bit of social organization can neither allow him nor do much about... Havoc will occur even when all the members are convinced that the troublemaker is quite mad, for this definition does not in itself free them from living in a social system in which he plays a disruptive part.”  (Erving Goffman, “The Insanity of Place,” in Relations in Public, 1971, p. 356.)

Mental illness in the home, then, is a conflict, a struggle over control; and the strongest weapon on the side of the wrecker of conventional amenities is that the others love him or her, or at least want to keep the peace.

The Mother as Facilitator: Folie à Deux

Adam Lanza not only ordered his mother around in all sorts of trivial but insistent ways.  He also got her to buy into his violent fantasy.  All five guns that he possessed were bought by her, along with the large supply of ammunition. She also bought him all the weapons in his cult collection of swords and such. She was the perfect buyer, a respectable citizen, no criminal record, possessor of a pistol permit. She kept on buying him guns up to the very end. Even though tension had been building up, in December 2012, just before he shot her, she wrote a check for him to buy a pistol as a Christmas present. Was this her way of trying to get around his prohibition of Christmas? If so, it was clever in a delusional sort of way, since she obviously knew how much he liked guns.

Police investigators in the aftermath of the murders spent much time looking for an accomplice, anyone who had aided Adam Lanza in his plan. They missed the main accomplice, perhaps out of respect for the dead, the long-suffering, devoted mother.

How could she be so blind? Everything her son did, she interpreted as a manifestation of his illness. The windows taped shut with black plastic were to her just a sign of sensitivity to light-- even though he could go outdoors when he wanted to. The possibility that he was hiding something in the rooms she was forbidden to enter was masked in her own mind by the feeling that she must do everything possible for her son. He had drawn her into his mental illness, building up a family system where he was in complete control. She may have felt something was wrong, wronger even than having a mentally ill son she loved. Though it seems unlikely that they quarreled in an overt way, some signs of tension came through. According to the report, “a person who knew the shooter in 2011 and 2012 said the shooter described his relationship with his mother as strained” and said that “her behavior was not rational.” He told another that he would not care if his mother died. As usual, when one person loves the other much more than is reciprocated, the power is all on the side of the less loving.

The mother entered into and supported his obsession with weapons, while carefully staying out of his clandestine world. In this, as in the rest of their arrangements, they tacitly cooperated.  The mother lost her capacity to make independent judgments. This is very close to the classic model of the mental illness shared among intimates, the folie à deux.

Shooting Together: the Only Family Ritual that Worked

One feature of the mother’s background accidentally facilitated her complicity with her son’s violent plot. She had grown up in rural New Hampshire, in a culture where hunting and shooting were popular pastimes. For her, an interest in guns was normal, and the fact that her son began collecting them was a good thing. It appears that his gun collecting developed after age 14, when he had already been diagnosed as mentally ill and had started becoming obsessed with violent fantasies. His mother saw his guns as a healthy sign, since it was something the family could do together.

During the time when his older brother was living at home (i.e. up until 2006, when he went off to college and Adam Lanza was 14), the three of them would go to a shooting range, the mother and her two sons. The father, who had separated and divorced when Adam was around age 9-11, would come and visit him until he was age 18; besides hiking together, they sometimes went shooting.

Altogether, it appears that as family relationships deteriorated and Adam withdrew more into video games and seclusion, guns were the one thing that mother and son positively had in common. It was the one interaction ritual that worked, where they focused on something they both liked. For her, it must have been the last remaining marker of mother-son solidarity.

The Precipitating Process

Why did it all come to a head on December 14, 2012? Most of the surrounding social relationships for Adam were disappearing-- whether to call them supporting relationships may be questionable, but his world was shrinking down to little more than video games and his violent fantasies. His brother, 4 years older, went away to college when Adam was 14. When his brother first left for college, Adam began to think about joining the military, but this never happened. After college (Adam was now 18), the brother moved out of state; though he tried a few times to maintain contact with Adam, they had not spoken for 2 years at the time of the shooting. At the time of their break, Adam had already been out of high school for a year, but had no plans to go to college or get a job. The gap between his brother’s status and his own was widening; his brother was no longer a role model for his own future, if he ever was.

Adam’s father had visited regularly since the separation, but he remarried in 2011. Adam apparently reacted negatively, and they never saw each other again after the end of 2010; though the father tried to reach him by email and proposed places they could meet, Adam stopped responding.

His sole remaining link was his mother. Relations between them were strained in fall 2012. She worried because he had not left the house for 3 months. He was treating her worse and worse; he would no longer talk to her directly, and communicated with her only by email. In November she notified him she planned to sell her house and move to another state, where Adam could go to a special school or get a computer job. Adam at least overtly agreed to the move. But there were conditions; he refused to sleep in a hotel during the move, so the mother planned to buy an RV where he could sleep. The issue had already come up in October 2012, when Connecticut was hit by Hurricane Sandy and Adam refused to leave the house even when the electricity was out. Now his familiar place of refuge, his backstage guarded against all comers, the place where he kept his plots and weapons, was being taken away from him.

On December 10, the mother went on a 3-day trip to New Hampshire. She arrived home late in the evening of December 13, and went to bed without seeing her son, who was still incommunicado. He had had 3 days to finish his plan. Apparently she was part of it. Next morning, before 9 a.m., he went into her bedroom and, while she slept, killed her with 4 shots to the head. Now he was on a roll, emotionally taking the initiative, starting with the easiest target of all, the one person he could dominate. At 9:30 a.m. he was shooting his way in through the front door of Sandy Hook Elementary School.

Can We Learn Anything That Will Head off Mass Shootings?

The single outstanding cause of the murders that took place in Newtown, Connecticut that day-- the condition without which the murders would not have happened-- was the behavior of Adam Lanza’s mother. He had neither the contacts nor the interactional competence to acquire the guns and ammunition on his own. Without her complicity, he would have been just another alienated nerd, sunk in the world of computer games and violent fantasies. None of the other conditions-- his mental illnesses, the problems of adolescent transition, the ubiquitous entertainment culture of fantasy violence-- in itself is strongly correlated with mass killings; the latter two conditions in particular affect tens of millions of youths, but only a minuscule fraction of them turn it into a program of murder.

As Katherine Newman and colleagues have pointed out, in virtually all rampage killings the plot leaks out somewhere; clues are evident, although missed at the time. Newman et al. are particularly concerned with clues missed by teachers, and hushed up by the teen peer culture. What we have here are clues that are strongly visible in the home. Anyone without this mother’s particular way of relating to her son could have seen that something was being concealed, and that it had something to do with stockpiling firepower.

Given that most school shootings are perpetrated by students or recent ex-students, the home is where most clandestine preparations are made. And this is no sudden episode; in every case we know, there was a long period of build-up. The deep tunnel of self-enhancing motivation is dug for months at least.  And that means that parents and other members of the household are in the best position to read the cues for what is going on.

The lesson should be taken to heart above all by parents who own guns. Almost all school shootings happen in communities where gun ownership is widespread, where guns are part of the local culture. The vast majority of gun owners in such communities are respectable and non-criminal (for the statistics, see my Sept. 2012 post). Nevertheless, teens on the path of alienation, with the underground culture of prior mass killings to guide them, find it easiest to get guns when their parents and neighbors have them.

To avoid misunderstanding, let me repeat my previous conclusion. It is not the possession of guns that is the warning sign; it is hiding an arsenal, and a clandestine obsession with scenarios of violence. When clues like these appear in one’s own home, the gun-owning parent should be in the best position to recognize them.

It is not simply a matter of teaching one’s children proper gun safety. One can be well trained in an official gun-safety course-- as Adam Lanza was, along with his mother-- and still use the gun to deliberately shoot other people.

What is needed, above all, is a commitment by gun-owners to keep their own guns completely secure, and not to let them fall into the hands of alienated young people, including one’s own children or their friends.

My recommendation is to gun-owners themselves. The issue of gun control in the United States has been mainly treated as a matter of government legislation. That pathway has led to political gridlock. That does not mean that we can do nothing about heading off school shootings. Simply put: keep alienated youths from building a clandestine arsenal where they nurture fantasies of revenge on the school status system, or whatever problems they have with their personal world. Gun-owning parents are closest to where this is most likely to happen. We need a movement of gun-owning parents who will encourage each other to make sure it doesn’t start in our home.


----------

References:

Report of the State’s Attorney on the Shootings at Sandy Hook Elementary School, Newtown Connecticut, December 14, 2012

http://cbsnewyork.files.wordpress.com/2013/11/sandy_hook_final_report.pdf

Katherine S. Newman, Cybelle Fox, David Harding, Jal Mehta, and Wendy Roth. 2004. Rampage: The Social Roots of School Shootings. Basic Books.

Jack Katz. 1988.  Seductions of Crime. Basic Books.

Jason Manning.  2013.   "Suicide and Social Time."  [unpublished, Department of Sociology & Anthropology, West Virginia University]

Randall Collins, "Clues to Mass Rampage Killers: Deep Backstage, Hidden Arsenal, Clandestine Excitement."

GOFFMAN AND GARFINKEL IN THE INTELLECTUAL LIFE OF THE 20th CENTURY



PART 1.   DOWNSTREAM FROM GOFFMAN: SOME RESEARCH TRENDS

Erving Goffman pioneered numerous intellectual trends in the close analysis of everyday life. Of course he didn't do it all by himself.  Intellectual life is prone to retrospective hero-worship; yet this can be defended, practically speaking, as a convenient simplification whereby a particular person becomes an emblem for a broad front of intellectual advance.  In talking about Goffman and his following, we are talking about an extended family, and a somewhat quarrelsome one at that. Goffman happens to be the most memorable representative of this family; his work abounds in crisp formulations: frontstage and backstage, facework, total institutions, interaction rituals, frames, and more. I will sketch a select few of the downstream channels opened up by Goffman's students and successors.

Micro and macro 
           
The terms were not used by Goffman and others of his generation.* But the theme was in the air. Already in the 1950s, George Homans was declaring-- against Talcott Parsons-- that society is no more than the actions of individual persons; ergo sociology reduces to the explanations of behaviorist psychology. Herbert Blumer was spearheading a militant version of symbolic interactionism: attacking statistics and "the variable" because they do not actually do anything. In class at Berkeley, he used to say things like: where is social class? Where do you see it? In another part of the battlefield, Garfinkel developed his position (not yet called ethnomethodology) in the 1950s, but he was a shadowy figure until the mid-1960s, when he acquired creative followers such as Harvey Sacks, Emanuel Schegloff, David Sudnow and others. But they were at Berkeley, not UCLA; Harold was notoriously difficult to work with, and the young radical ethnomethodologists were Goffman's PhD students, even though Goffman was teaching a different line.

* At any rate, not until the last year of Goffman's life.  I was among the first to use this pair of terms (in the sociological sense) in a 1981 article in Amer. J. Sociol. "The micro-foundations of macro-sociology." This has nothing to do with the economists' sense; micro-economics studies the behavior of the firm in a market; macro-economics studies the movement of entire economies over time. From a sociological point of view, both of these are macro. As Greg Smith pointed out (at the Cardiff University symposium, 2013) Goffman's comment in his 1982 ASA presidential address was a rejection of my claim about the micro-foundation of macro-sociology, published a year before. "...some... argue reductively that all macrosociological features of society, along with society itself, are an intermittently existing composite of what can be traced back to the reality of encounters-- a question of aggregating and extrapolating interactional effects. This position is sometimes reinforced by the argument that whatever we do know about social structures can be traced back to highly edited summaries of what was originally a stream of experience in social situations.... I find these claims uncongenial." [Goffman, 1983. "The Interaction Order." Amer. Sociol. Rev. 48: 9]

The common denominator of these figures is that all pointed to everyday life, where the action is. Having said this does not settle what the research program would be.  Homans and his followers said it's the actor's calculus of maximizing rewards over costs; at the time it was called exchange theory, later renamed rational choice, after economists got on board. Blumer's symbolic interactionism emphasized actors' definition of their situation, stressing the possibility of reinterpreting the situation, and thus giving volatility to social life. This line of analysis was brilliantly developed by Norbert Wiley in his 1994 book, The Semiotic Self, with its empirical elaboration of the process of internal dialogue (AKA verbal thinking) and internal rituals inside the mind.  Garfinkel's ethnomethodology locates the key in commonsense everyday reasoning. The model is conservative in just the opposite of the sense in which Blumer is radical; Garfinkel's famous breaching experiments show that persons do not like to have their taken-for-granted assumptions upset, and they try to restore order as quickly as possible.

Garfinkel raised a lot of hackles by declaring sociology does not exist, and should be replaced by ethnomethodology. A more acceptable development came from Sacks and Schegloff, who invented a new research method and field-- Conversation Analysis-- using tape recorders to capture exactly what people say to each other in real situations, getting at the local production of everything. And since the data are recorded and minutely inspectable, this led to discoveries such as the importance of rhythms in talk-- points not necessarily brought out by CA theorists themselves, but by the broader movement of micro-sociologists-- which show the micro-mechanisms by which solidarity is manifested, as well as alienation and conflict. This is the pattern no gap, no overlap (originally stated as a fundamental rule of conversation, by Sacks, Schegloff, and Jefferson, 1974); and the ways this can be violated: no gap/no overlap generates a strong rhythm which is the ultra-micro mechanism establishing solidarity; long gaps evince alienation; persistent overlap is conflict.
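
To make the mechanism concrete, here is a minimal sketch-- my own illustration, not a tool from the Conversation Analysis literature-- of how turn transitions might be classified once the start and end times of speaking turns have been transcribed from a recording. The 0.2-second tolerance and the sample timings are arbitrary assumptions for the example.

# Illustrative sketch (hypothetical data): classifying turn transitions
# from recorded turn timings. The 0.2-second tolerance is an assumption,
# not a standard from the CA literature.

def classify_transition(prev_end: float, next_start: float, tol: float = 0.2) -> str:
    """Label the hand-off between two consecutive speaking turns."""
    latency = next_start - prev_end
    if latency > tol:
        return "gap"                  # long gap: alienation
    if latency < -tol:
        return "overlap"              # persistent overlap: conflict
    return "no gap / no overlap"      # tight rhythm: solidarity

# Hypothetical turns as (start, end) times in seconds.
turns = [(0.0, 2.1), (2.15, 4.0), (3.6, 5.2), (6.5, 7.0)]

for (_, prev_end), (next_start, _) in zip(turns, turns[1:]):
    print(f"{prev_end:5.2f} -> {next_start:5.2f}: {classify_transition(prev_end, next_start)}")

Real CA work of course rests on far finer transcription conventions; the point of the sketch is only that the rhythm-and-solidarity claim is in principle measurable from timings alone.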

In this array of theoretical possibilities, Goffman took an ostensibly modest position. Society exists and is primary, he said; I'm just studying the social in everyday life, sideshow though it may be. What kind of intellectual impression management was this? In one of his few explicit references to his own intellectual ancestry, in an early paper Goffman cited Durkheim on the point that society constructs the individual self and makes it a sacred object; hence minor rituals like saying hello, goodbye, handshakes and kisses, mirror the grand rituals of religion, but in this case they give obeisance to the self. More precisely, they give ritual respect to the other's self, in return for reciprocity in upholding one's own. Moreover, these everyday rituals are obeisance to the collective definition of the situation-- adding a touch of symbolic interaction, even though Goffman was generally critical of that theory.

Although Goffman said society is primary, he never studied it in the large. He shifted the center of gravity to the situation itself; social life becomes a string of situations. This is not an ontological claim: it is a research strategy. If you want to understand mental illness, go into the schizophrenic ward and see how people so labeled interact with each other and with their guards and medics. If you want to see social class, go in and out the kitchen doors of a resort hotel and see how the waiters put on a face for their upper-middle class clientele; and then see how these proper Brits dress for dinner in their own private backstage regions, putting on costume and manner to act out their frontstage identities. Goffman had such a powerful influence because he led by dramatic example; he provided both a research strategy and a theoretical mechanism for what causes what and with what consequences.

What is the value of reframing all this as micro and macro? In my 1981 article, I argued it is not a matter of what kind of research people should do, or be prohibited from doing; if you want to study revolutions à la Marx and Skocpol, or world-systems à la Wallerstein, go ahead and do it; by their fruits you shall know them, and if they come up with something important let's learn it. By the same token, the followers of Goffman, Garfinkel et al. were making discoveries and opening up frontiers. Pragmatically it is pointless to demand that we should all do this instead of that. But many sociologists at the time said exactly that, the positivist methodologists on one side putting down researchers on everyday life as either unscientific or trivial; and Garfinkel at the other extreme saying society is nothing but a gloss on commonsense reasoning. Calling the choices of what to study micro-and-macro was a way of saying that all sociologists occupy the same empirical universe; some of us are looking at it up close, through ever-more-sharply-focused micro-scopes; others are widening the vision, to larger swaths of time and space.

Ontologically micro and macro are not distinct realms. The macro can be zoomed in on everywhere. A young ethnomethodologist at San Diego (Ken Jennings) once convinced me of this by saying: since you want to do historical sociology, if you could get in a time machine, exactly what would you want to see when you got there? This made me realize not only that you always enter the macro at some micro point; but that what macro-historical sociologists are doing is grappling with a scale where stretching out beyond every micro situation are other situations in the past and future; other situations spread out horizontally, contemporaneously populated by other interactions, as far out to the horizon as we have the methods to see. Macro is not different from micro, it is just more micro-situations, viewed as they are clumped together in larger slices of time, space, and number. 

Pragmatically there are always going to be different kinds of sociologists doing research at different scale, not to mention anthropologists, linguists, historians etc. So what is the point of saying that micro is the foundation of macro, while at the same time saying let everyone approach the micro-macro continuum from their own angle?   I proposed a bet on the micro-researchers: because macro-sociologists deal with things that they refer to by nouns-- states, world-systems, societies, organizations, classes-- they can be led astray when they write as if these are entities that don't depend on the people whose actions make them up. The more positivistic act as if statistics are more real than the actions that they summarize. In this respect, micro-sociologists have made a dent in how macro-sociology is now perceived. There has been a shift from looking at societies as entities, to viewing them as networks-- and networks of varied and changing shape and extent, with ties of differing strengths.*  It is not accidental that the era during which network sociology has risen in importance has also been the era of militant micro-sociology.

* Two representative works are Michael Mann, The Sources of Social Power, 1986-2013, demonstrating that across world history there are no such things as unitary societies, especially as conventionally glossed by the names of states or ethnicities, but a shifting and overlapping mesh of networks of economic, political, military, and cultural exchange and power; and John Levi Martin, Social Structures, 2009, which derives social change as well as dead-ends from the transformative possibilities of various kinds of networks. For a summary of the latter, see my June 2011 post, "Why Networks Change their Shape, or Not."

Macro-words such as "the government of France" or "the Wall Street stock exchange" may be convenient terms for referring to networks that tend to hang together and reproduce themselves from year to year; but if macro is really composed of the linkages of micro-situations in time and space, the dynamics of macro should be found in the micro. And this means, when governments rise and fall, or markets go into booms and busts, we should zero in, get into the streets and palaces of Paris on February 23rd 1848, or the streets of Cairo on a sequence of notable days between 2011 and the present, and look for the mechanisms by which micro processes drive macro events.

Sociology of emotions

This becomes clearer with the theory of emotion work, developed by Arlie Hochschild, one of Goffman's students at Berkeley in the mid-60s. Emotions are often performed rather than simply experienced, Hochschild notes (The Managed Heart, 1983). There are professions whose chief skill is putting on a particular emotional tone; she studied airline flight attendants and bill-collectors; her students studied lawyers and strip-tease dancers. If one wants to take on the core dynamics of macro political and economic power, one could focus on those professional mood-spinners, politicians and investment counselors. Hochschild's inspiration is Goffmanian; people work on themselves to project emotions that fit the situation, or that serve to control other people in a situation. In short, emotions are performed on a frontstage, they are impression management, dramaturgy.

Hochschild has been criticized for ignoring emotions which are spontaneous, but not so; persons have to do emotion work precisely in situations where their spontaneous emotions don't fit what is expected of them. There is an emotional backstage, but here emotions are not just spontaneous, but scrutinized, strategized as to how they can be transformed into frontstage emotions. Arlie has a wonderful argument that, contrary to stereotype, men are more emotional than women. At least in their relations towards each other, where men are more powerful, they can express what they feel, or at least what they lust after, while women have more to lose if they let their romantic emotions carry them away. Women do more emotion work than men, precisely because they talk more about their emotions-- especially in backstage privacy with their girlfriends, trying to talk each other into calculating which man is a good choice to let one's emotions roam free with. Thus women are considered to be more emotional than men, but this really means women talk about emotions more, in backstage situations; men talk less about their emotions, but simply act on them. The evidence is that men are more likely than women to fall in love at first sight. The whole question of who is more emotional is simplistic, unless one considers the front and backstage dimensions of emotion work.

From Hochschild and other contemporary researchers there developed the field of emotion research-- Tom Scheff (another of Goffman's former protégés), Theodore Kemper, Jonathan Turner, Jack Barbalet and many others. What makes this more than just another subspecialty? It has a driving theoretical significance because emotions are the glue of the social order, and the energy of social change. Perhaps the social change aspect is more visible, with the anger, enthusiasm, and exalted self-sacrifice found in political and religious movements; but there are also the quiet emotions that sustain the social structure when it does not change, i.e. when it repeats itself day after day and year after year. Garfinkel and the ethnomethodologists had a blind spot for emotions, but they are apparent in the breaching experiments: when commonsense assumptions are breached, the reaction is bewilderment, shock, even outrage. One can reformulate Garfinkel as holding that the merest glimmer of these negative emotions-- these breaching emotions-- causes people to recoil and put back normal social order as fast as they can. Tony Giddens picked this up and turned it into the existentialist formulation, that ontological anxiety is what holds the social order together. How far can we go with this? Recall, Garfinkel has a conservative view of social institutions, where other micro-sociologists are more inclined to a volatile view. Garfinkel's world, in my summary exposition, rests on a crude exaggeration, since people don't always succeed in putting social routine back together; and "as fast as they can" refers to just the temporal magnitude of the breaching experiments, more or less a few minutes or an hour at most. A frontier area of research now is time-dynamics, how long emotional sequences take, and what happens to the emotions after a few days, a few weeks, a few months. The shifting moods in Tahrir Square over 30 months give some indication of the kind of pattern we are trying to capture.

Here again a path looks more promising that comes via Goffman (and behind him, from a combination of Durkheim and Blumer)-- more promising in giving us the mechanism, the switch that shifts social situations between reproduction and change. This brings me to my third point, the micro-sociology of stratification.

Interaction Rituals as the mechanism of stratification
           
Let us go back to 1967. Goffman had just published his book Interaction Ritual, composed of papers which he had published in the 1950s in relatively offbeat places. Goffman was the subject of much discussion and gossip among Berkeley graduate students, and not just for his quirky personality-- there was gossip about such matters as why his wife committed suicide by jumping off the San Rafael Bridge. Interaction Ritual opened our eyes to what we now could see all around us: everything that people were doing, minute by minute, was not natural but socially constructed; it was all social rituals, and they all operated (more or less unconsciously) to enact a certain kind of social order. And that social order was power, it was class, it was organization and authority. (This was a few years before it occurred to us that it was also gender and sexuality.) And we jumped to the conclusion-- not necessarily shared by Goffman himself-- that if social order was constructed it could also be de-constructed (not that we used that term), it could be challenged, it could be torn down. Whether something else would be put in its place was an open question, since this was the age of the cultural revolution, AKA the psychedelic revolution, and some in the counter-culture proclaimed that artificially constructed social order would be blown open just by tuning in on how it is done, and dropping out from it. The utopian phase of this revelation was relatively short-lived.

Goffman and Garfinkel became intellectual heroes of a generation that had experience challenging the taken-for-granted institutions of macro-power. Some of us had taken part in campaigns against racial discrimination in the South, and in the North as well, and had found that institutions of deference and demeanor that supported white dominance could be broken. At Berkeley, and many other places, students found that the traditional authority of university administration could be successfully challenged, by collectively bringing the organization to a halt.  In the student movement of the 1960s, sociologists were prominent-- not because they were the most alienated, but because they had the intellectual tools to see what they were facing. Herbert Blumer took no part in the university demonstrations, but in his classes he would refer to students taking over the administration building as an example of his point: an organization does not exist just because it is a thing-like noun, but exists only to the extent that people act it out; when they stop interpreting it as existing it stops existing.  This is not idealist solipsism-- it's all in your mind-- but rather it is situationally real or not depending on how a group of persons act together to change a collective definition of a situation.

In the event, universities did pull themselves back together, although in ways that incorporated some of the newer definitions of what they should be doing. My purpose is not to trace the activist counter-culture politics of the 1960s and its permutations in following decades, but to note that many then-young sociologists saw Goffman and Garfinkel as having apocalyptic implications. Its political fate is not the crucial point for this intellectual development; the New Left did not win in the end; there were swings to the New Right, the Neo-Liberalism of the 1990s, the revival of religious activism, and so on. The fact that the counter-culture did not win in the long run is no disproof of Blumer's radical symbolic interactionism; the social order of any given time is the result of all the various groups of people who mobilize themselves to define what social institutions are. The student Left had no monopoly on mobilization; religious mobilization shifted techniques for stirring up collective effervescence from Left to Right; political movements were mobilized not just in the name of oppressed races, ethnicities and sexual preferences but also in tax revolts by a self-defined economically oppressed middle class; Western techniques of revolt spread to the Soviet bloc and many other places, with results that may seem paradoxical in macro perspective but which show the power of micro-techniques to transform so-called big structures.

The story I am telling is about Goffman's downstream; the main point is that Goffman's theory of interaction rituals became radicalized, used in the service of stripping existing practices of their legitimation. And then, when politics settled down and it became apparent radical definitions were not going to go unchallenged by opposing definitions, Goffman's micro-analysis came to be seen as a tool for seeing how the dominant order makes itself dominant.

My take on it was as follows. What makes classes strong or weak happens in micro-situations. Some people get more out of their micro-interactions than others. Why? Goffman had already given some clues: polite rituals like introducing oneself, leaving calling cards, gentlemen taking off their hats to ladies, were means by which stratified groups constitute themselves; persons who didn't carry out these rituals properly were left outside their boundaries. Goffman's own examples were historically rather backward-looking, and it seemed ironic that he was taking them from old etiquette books at the very time when a massive shift towards informalization was going on-- I used to think of it at the time as the "Goffmanian revolution". I attempted to generalize the model by expanding on its Durkheimian basis, the theory of religious rituals which constitute religious communities. I emphasized that rituals succeed or fail; some gods are deserted because their worshipers no longer find their ritual attractive, or because a rival ritual draws them away. The ingredients that go into rituals must be seen as varying in strength: bodily assembly (i.e. opportunities to mobilize as a group); techniques for generating a mutual focus of attention, and for stirring up a shared emotion; when these ingredients are favorable, they accelerate through mutual feedback, generating collective effervescence, which, seen through a micro-sociological lens, is visible in rhythmic entrainment of people's bodies. Goffman helps us see that little micro-rituals are going on all the time, varying in strength. Where these interaction rituals are strong, they generate feelings of group membership-- in this case, membership in a social class; feelings of moral solidarity-- the belief that their group is right, what Weber would call legitimate, and should be defended against rival ways of life. On the individual level, a successful interaction ritual gives what I called, modifying Durkheim, emotional energy: feelings of confidence, initiative, enthusiasm; conversely, failed rituals are emotionally depressing.

All these are elements in what makes some social classes dominate others. Dominant classes are better at rituals, or monopolize the successful rituals; dominated classes are weak in ritual resources, because they have no opportunity to assemble for rituals of their own. This fitted well with Goffman's analysis of the British resort hotel (in The Presentation of Self in Everyday Life, 1959), where servants are underlings in higher class rituals. But ritual resources can shift; sometimes subordinated groups get more ingredients for themselves to mobilize; in the 50s and 60s there were vivid examples before our eyes in the black mobilization for civil rights, turning themselves into a committed, energized, self-sacrificing group that gathered supporters by staging massive public rituals which swung legitimation to themselves and away from the segregationists.

One could go on with a histoire raisonnée of dramatic social movements,  analyzing their micro-techniques for successful interaction rituals; as well as the decline of movements as they are undercut by other ritual mobilizations  (e.g. the proliferation of gang rituals after the 1950s, which made black people appear threatening, at just the time the civil rights movement was making them respectable).  To drop to the level of spare analytical abstraction, let me list a number of ways in which researchers in Goffman's wake have explained who situationally dominates whom:

-- One version is that higher classes are frontstage personalities, while lower classes are backstage personalities.  Higher classes appear in the center of attention, on the stage of big organizations and networks, where they get to formulate the topics and set the emotional tone.  The lower classes are audiences for the higher. It does not necessarily follow that lower classes are taken in by upper class rituals; insofar as lower classes have enough privacy to gather on their own backstages, they can carry out little interaction rituals among themselves, complaining and satirizing their bosses. The result is a difference in class cultures: the higher classes portray themselves in lofty ideals, the lower classes are cynical.

-- Another version is higher classes have more refined manners, and spend more time policing their boundaries. They generate refined rankings, some persons being judged more polite or sophisticated than others; persons who might challenge class domination become drawn into elite tournaments of micro-interactional skill. Goffman wrote incisively about techniques like the aggressive use of face-work, coolly insulting others in ways that those of lesser sophistication cannot respond to except by losing emotional control and damning themselves by their own outbursts. Such techniques, Jennifer Pierce (Gender Trials: Emotional Lives in Contemporary Law Firms, 1995) has shown, are part of the interactional skill that make a successful courtroom lawyer.

-- A related argument is in research by Lauren Rivera (Northwestern University) on how some candidates come to be chosen in job interviews for elite financial and consulting firms; the key is not so much how the interviewer rates the candidate's technical background or skills, but whether the two of them have emotional resonance. Manners, an easy flow of topics to talk about, all serve as ingredients that make for successful interaction rituals, which act as gatekeepers to the elite.

-- Another argument is that the higher classes have more emotional energy-- they have gone through a sequential chain of interaction rituals where they have been successful; this gives them a store of confidence and enthusiasm that enables them to dominate the next interactions in the chain. Conversely, the lower classes have less emotional energy; they are less confident of themselves, take less initiative, are poorer at impressing others with their emotional tone. Successful entrepreneurs and financial manipulators are not just cold calculators but energy centers with investors chasing at their heels. Careers of successful politicians, seen through the microscope of interaction rituals, show them developing the techniques that make others into followers; but the trajectory of the chain can shift; rising politicians can become overmatched, undergoing crises where they can no longer control the emotional tone of situations, and lose their charisma. (Instances to ponder include the rise and fall of Gorbachev, and the ongoing vicissitudes of Obama.)

These mechanisms of class domination are not mutually exclusive; together they may give an overwhelming impression that class domination is impregnable. Nevertheless, class orders do shift historically; individuals do move up and down in their lifetime; and in the very short run of daily situations, interactional dominance can fluctuate. I have referred to the latter as situational stratification. Without trying to summarize all the ways micro-mechanisms can change stratification, let me move to my fourth and last point, violent conflict.

Violence and conflict as impression management

Conflict is the main way the caked-on sediment of custom is broken; not that the challengers always win, but conflict is volatile, and it can rearrange the resources that make up stratification, so that when order is renewed in the aftermath it is different than it was before. I will concentrate on violence, which is both the most extreme and perhaps the best studied form of conflict. To say that violence hinges on impression management is to say that the success or failure of violence is based on micro-mechanisms.

Elijah Anderson in Code of the Street (1999) gives an ethnography of the most violent zone of the inner-city black ghetto, in such cities as Philadelphia and Chicago. His most striking argument is that most people are faking it. The code of the street is a style of presenting oneself: tough, threatening, quick to take offense. But Anderson shows, by years of careful observation, that most people in the ghetto consider themselves to be "decent"-- pursuing normal middle-class goals of a job, education, family; but under conditions of the ghetto, where policing is non-existent or distrusted, decent people-- this is a folk term in Philadelphia-- have to be ready to defend themselves. "I can go for street, too," they would say, referring to a different category of people who are called "street"-- people who are committed to violence and crime as a way of life. These are two different styles of presenting oneself, and most people can code-switch. The switch is micro and patently visible: a young man walking on an empty street at the edge of the ghetto appears relaxed and happy, but at the sight of another male drops into a hard demeanor, muscles tensed, shoulders swinging and torso dancing with a nonverbal message emphasizing ownership of his personal space.

This is situational impression management. Anderson developed his analysis under the influence of Goffman, who was his colleague at University of Pennsylvania. It is putting on a public face, don't mess with me.  Anderson goes on to argue that performing the street code is an attempt to avoid violence, and in two different ways. One is to protect oneself from being a victim by looking tough.  But this can backfire on occasion, since two men (or two women) can become locked into a contest of escalating face-work that leads to violence.*  Anderson gives a second technique: when both persons show that they know the street code, they can establish membership, and both can pass through the situation with honor without violence. 

* Shown also by research on homicides arising from escalated face contests. Luckenbill, David F. 1977. "Criminal Homicide as a Situated Transaction." Social Problems 25, pp. 176-186.

More detailed field observations on this point are given by Joe Krupnick's research on the streets of Chicago.** When gun-carrying gang members approach each other, their concern is generally to avoid violence, since they are experienced enough to know its cost. They use a micro-interactional technique when getting into hailing distance-- brief visual recognition, no prolonged stares, studied nonchalance, brief formulaic greeting, moving on past without looking back. Failing to play this particular interaction ritual can result in getting rolled on, with indignant charges-- "who he think he is, act like he rule the street!" The proper performance of interaction rituals is fateful in the most violent neighborhood. Similarly detailed participant observation of Philadelphia street gangs has been done by Alice Goffman, Erving's daughter, in a dissertation rich with all the ways that performing the impression of violence is more important than the violence itself.

** Krupnick, Joe, and Chris Winship. 2013. "Keeping Up the Front: How Young Black Men Avoid Street Violence in the Inner City." In Orlando Patterson (ed.) Bringing Culture Back In: New Approaches to the Problems of Disadvantaged Black Youth.

In Violence: A Micro-sociological Theory, I argued that violence is difficult, not easy; the micro-details of a threatening confrontation show that persons come up against a barrier of confrontational tension and fear that makes them hesitant and incompetent even if they consciously want to commit violence. This shared emotion will inhibit violence from happening, unless one side or the other finds a way to get around the emotional barrier. Violence is successfully carried out when one side establishes emotional dominance over the other. And this is done chiefly by a dramatic performance; emotional dominance precedes physical dominance. A typical way to seek dominance is bluster: threatening, angry gestures, loud voice, an attempt to dominate the communicative space. If the two sides are in equilibrium, violence does not usually come off. The crucial technique of violent persons, then, is violent impression-management: subtle micro-moves in gesture and timing to get the other side into a passive, de-energized stance, thereby allowing them to be subjected to violent force. Violence is a learned skill, rather than merely a lack of self-control or an outburst of past resentments; and what is learned is a specific way to manage the micro-cues of self and other in violence-threatening situations.

Violence is an area where we can palpably demonstrate that the micro makes a difference. This is a counter to the position often taken in the micro-macro debate that micro merely reflects macro-- that micro behavior just reproduces the macro structure.  Several versions of the argument have been recently popular: the Bourdieu version is that habitus is the individual's disposition to act in accordance with their sense of position in the structural field. Another version is that people act out cultural scripts, that they know what to do in any particular micro-situation because they follow a cognitive script or schema. The trouble with both these types of argument is that they are static; nothing ever changes because the same habitus, the same script constantly operates; social structure simply reproduces itself. Against this is the dynamism of symbolic interactionism, bolstered by Goffman's tools for looking at the micro-details that determine what happens in interactional situations. It is not a matter of looking inside the individual for what habitus or script each happens to carry, but the interaction between those individuals in the emotions and rhythms of the situation itself.  We see this most clearly in the micro-contingencies that determine whether violence will break out or not.  And violence so often is a cutting point, spreading through escalations and reactions, a micro-point that gets dramatic attention and permeates the macro-space, sometimes setting off major structural changes.  At such times, micro really does causally determine the form of the macro.

Goffman, by Goffman

In conclusion, a few words about Goffman himself. I said at the outset that Goffman is an emblem, a figurehead for the moving front of researchers who explored the micro-sociology of everyday life. Can we turn a Goffman lens on Goffman himself? I once asked him what he thought would be the micro-sociology of the intellectual world, but he brushed it aside with a characteristically sarcastic remark.  Perhaps he did think about it, in his own backstage. He did seem to operate with a strategy as if designed to make himself the leader and emblem. He rarely cited any predecessors; he did not put himself in the lineage of those who went before-- whereas we ourselves rule out the possibility of claiming to be emblems, precisely because we talk so much about our predecessors. Goffman occasionally criticized his rivals, but only in dismissive footnotes: trashing Schutz and his followers (which is to say Garfinkel's movement, which Goffman never mentions) in a few lines appended to Frame Analysis; occasional lines about taking the role of the other that only the cognoscenti would recognize as a putdown of Blumer and Mead. Goffman never gave his rivals even the attention that comes from direct attack.  Some prominent micro researchers have complained that Goffman never cited them when it would have been appropriate, citing instead more obscure sources. No other intellectual loomed up on Goffman's pages.

To be sure, he himself did not loom up overtly; he affected a modest manner (in his writing, that is-- his behavior in everyday life is legendary for his aggressive face-work), but with an artfulness, indeed archness to his words. Goffman is difficult to connect from book to book because he never used the same terminology over again, nor explained how newer concepts might have improved on the older ones-- how frontstages related to rituals and then to frames. This may be part of the impression management that Goffman engaged in about his own intellectual career. He was always reincarnating himself as an innovator, covering his own tracks. As sociologists downstream from Goffman, we have learned to see some of the tricks. He rested on a larger movement more than he himself ever admitted. In that sense, he was an ordinary intellectual, closer to ourselves. But as a personality-- his life was a masterpiece of singularity.

Then again-- can't we say the same about Harold Garfinkel?


PART  2.  GARFINKEL:  RIDING TWO WAVES OF INTELLECTUAL REALIGNMENT 

Harold Garfinkel’s work can be located in the two great waves of realignment that took place during the 20th century, the first in the 1920s and 30s, the second in the 1960s and 70s. Garfinkel, I am going to argue, was one of a very few sociologists who centered himself on the realignment in philosophy of the 1920s and 30s, when he was growing up. But his reputation did not take off until the 1960s and 70s, when a school-- indeed a cult-- formed of ethnomethodologists who made up the radical wing, in the Anglophone world, of the second big realignment to hit the human sciences.

The First Realignment: From Neo-Kantians and Idealists to Phenomenologists and Logical Positivists

Let us start with the realignment that took place in philosophy when the dominant positions at the turn of the 20th century gave way to a new set of oppositions in the 1920s and 30s. At the beginning of the century the major schools were the Neo-Kantians, along with vitalists and evolutionists (the latter two sometimes combined, as in Bergson). In the Anglophone world, the center of attention remained the Idealists: the most famous philosopher in Britain was F.H. Bradley—who performed a dialectical dissolution of all concepts as incapable of grasping Absolute reality, with a capital A. Idealists included Bertrand Russell’s teacher McTaggart, and Whitehead, who published an Idealist system as late as the 1920s. In the US, Idealism was even more dominant, and was part of the worldview of persons we otherwise think of as pragmatists: William James, Charles Sanders Peirce, the early John Dewey, and George Herbert Mead on down to his death in 1931. Earlier, the most famous Idealist was Josiah Royce, whose name is on Royce Hall next door to the sociology building at UCLA. Idealism was partly a defense of religion in rationalized form-- one reason why Idealism was so important in the transition of American higher education from Bible schools to research universities. Idealism was also a sophisticated epistemology, holding that no one ever sees the so-called real world, but only through the eyeglasses of one’s categories. The only sure reality is the mind.

Neo-Kantianism dominated on the European Continent, led by such figures as Dilthey, Windelband, and Cassirer. Unlike the older Idealists, it was no longer concerned with defending religion and no longer built metaphysical systems, and had made its peace with natural science. Neo-Kantians took their topics from investigating the constitutive logics of the various disciplines; Dilthey distinguished Geisteswissenschaft from Naturwissenschaft, each valid in its own sphere, but using distinctive methods of hermeneutic interpretation or seeking causality. In Windelband’s view, they wore different eyeglasses, idiographic or nomothetic, seeing particulars or general laws. The newly organized social sciences were especially good territory for Neo-Kantian meta-theorizing. Economics might seem naturally to be in the Naturwissenschaft camp, but in Germany, economics had been historical, not mathematical, and the so-called Methodenstreit-- the battle of methods in which Max Weber took part in the early 1900s-- concerned what approach should govern economists’ work. Weber’s ideal types were a Neo-Kantian solution, designed to allow bifocal eyeglasses, so to speak. Psychology was a favorite Neo-Kantian hunting ground; sociology and anthropology also became targets. In the founding generation of sociologists, Weber, Simmel, and to a degree Durkheim were all Neo-Kantians.

Vienna Circle positivists of the following generation rejected the Neo-Kantian way of drawing borders, and launched an imperialist campaign for unification of all the sciences, including the social sciences. But for the moment we need to focus on the earlier positivists, figures like Ernst Mach in the late 19th century. In our own day, positivism has become a term of abuse, for number-crunchers, dogmatic materialists and naive objectivists who regard natural science as the only true reality. But positivism at the time of Mach meant almost the opposite. Mach held that scientists do not observe reality, but only construct it out of readings of laboratory instruments; hence the reality of science might as well be abolished, replaced by instrument readings, which are always provisional. Machian positivism was close to Neo-Kantianism, and a popular expression of the position was published by Vaihinger in 1911 as The Philosophy of As-If.   

The 1920s and 30s swept away the dominance of the Neo-Kantians and their allies, and replaced them with a new opposition: phenomenology, and the much more radical logical positivism of the Vienna Circle. At first glance, phenomenologists like Husserl and Scheler seem similar to Neo-Kantians: the same search for the conceptual eye-glasses through which we see the world. One difference is that the Neo-Kantians were much more concerned with academic disciplines, whereas the phenomenologists shifted towards everyday life. Was phenomenology, then, the stream of consciousness, just then, in the early 1920s, breaking into the literary world in the novels of James Joyce, Marcel Proust, and Virginia Woolf? An interesting question, which I will pass by, with the remark that only Proust had much philosophical input, and that was from Bergson. Phenomenology seems closer to psychology, and one might be tempted to link it with the Freudian movement, or with the Gestalt psychology that was being developed in Germany in the teens and twenties. But no, phenomenology was militantly anti-psychological; psychology was merely the phenomenal level of experience, governed by causal laws on the level of the natural sciences; phenomenology was deeper-- in Husserl’s famous epoché, bracketing the phenomenal contents of consciousness in order to seek the deep structures, the forms in which consciousness necessarily presents itself.

The roots of phenomenology in the foundations of mathematics

The pathway into the phenomenology movement was not from psychology, but from elsewhere: its predecessors in the previous generation were in the foundational crisis in mathematics. (There is an echo of this in Husserl’s 1936 title The Crisis of the European Sciences and Transcendental Phenomenology.) Issues had arisen in the late 19th century because the new highly abstract mathematics had invented concepts with no counterpart in ordinary 3-dimensional reality, concepts impossible to grasp intuitively: imaginary numbers, non-Euclidean geometries and alternative algebras, higher orders of infinities called transfinite numbers, etc. Some mathematicians declared these monstrosities, products of illegitimate operations and lack of rigor; others held that higher mathematicians had broken into a Platonic paradise where they could create new objects at will. The dispute eventually would become organized, around 1900, into the camps of formalists and intuitionists, each with a program for how to logically carry through the foundations of all mathematics. The most important moves were made in the 1880s and 90s by Gottlob Frege, a German mathematician who distinguished between sense and reference in the manipulation of symbols. In the verbal expression, “The morning star is the same as the evening star,” the reference makes this a mere tautology, since the two stars are both the planet Venus; but the statement is not meaningless [Venus is Venus], because the two star-names differ in sense, being used differently in the syntax of the sentence. Frege was concerned above all with mathematical symbolism, for instance the meaning of the equals sign [=] at the center of a mathematical equation, or the plus sign [+] used in addition, which is not simply the word “and” used in ordinary language. The various mathematical symbols are not on the same level, but are different kinds of operations, place-holders, and pointers. In short, mathematics is a multi-level enterprise; things we had thought were clear, such as numbers, have to be reanalyzed into a much more meticulous system of formal logic.

Husserl was in the network of Frege’s allies, and simultaneously connected with its most hostile critics; he eventually left mathematics for philosophy and generated the phenomenological program with an aim to provide secure foundations not only for mathematics and science but for all knowledge. This proved to be an endlessly receding finish line, as Husserl launched one program after another down to the 1930s; its chief results, as far as we are concerned, were offshoots such as Schutz and Heidegger. But for a moment, let us pursue Frege’s connections in a different direction. In 1902, Bertrand Russell, who had been working on a program deriving basic mathematics from a small number of concepts and axioms of symbolic logic, began to correspond with Frege over a paradox in his attempt to build a system of numbers out of the logic of sets. The conundrum is the set of all sets that are not members of themselves; is it a member of itself? If yes, no; if no, yes. The point is not trivial, on the turf of set theory, since this was Frege’s way of defining zero and beginning the ordered number series which is the basis of all mathematics. Frege threw up his hands, 20 years of work down the drain!-- but Russell worked out a solution, in the spirit of Frege’s distinctions between levels of operations, and what is allowable on each level. Russell’s theory of types led to further controversy; and at this point, Ludwig Wittgenstein, a young Austrian engineer who had interested himself in Frege’s work, arrived at Cambridge and took up Russell’s problem. Published in 1921 as the Tractatus Logico-Philosophicus, Wittgenstein’s argument hinges on the distinction between what is sayable and what is unsayable, which we can see as a widening of the kind of distinction among incommensurable operations such as Frege’s sense vs. reference, or later what linguists would call use vs. mention, and echoed still later in ethnomethodology as resource vs. topic.
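
To see the knot tied in symbols, here is a minimal formal restatement in modern set-builder notation-- an illustrative addition of mine, not the symbolism Frege or Russell actually used:

\[
R = \{\, x \mid x \notin x \,\} \qquad\Longrightarrow\qquad \bigl( R \in R \iff R \notin R \bigr)
\]

Assuming R is a member of itself violates its own defining condition, while assuming it is not a member satisfies that condition and puts it back in; no consistent answer exists, which is why some restriction on set formation-- Russell’s theory of types was one-- became unavoidable.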

Within the realm of the sayable, however, Wittgenstein’s approach is like that of the mathematical formalists, building a system on a logically perfect language, starting with simple elements (proper names with purely internal properties, logical atoms unaffected by external connections), out of which all meaningful elementary sentences can be constructed, and so on until giving complete knowledge of the world. By the late 1920s, the Vienna Circle logical positivists were welcoming Wittgenstein’s methods as a means of unifying all science on a secure basis. Not that the Vienna Circle’s leaders were followers of Wittgenstein; Schlick, Neurath and others were already well-launched, with their own networks coming from physicists like Planck and Einstein, from leading mathematicians of both the formalist and intuitionist schools, from Neo-Kantians like Dilthey, and late pupils of Frege such as Carnap. I will skip over the internal struggles of the Vienna Circle, including such explosive developments as Gödel’s undecidability proof and Popper’s falsifiability criterion, and only note that the outcome of the Vienna Circle for social science, above all in America, was a kind of militant positivism that declares meaningless anything that cannot be put into the strict methodology of empirical measurement, statistics, and derivation of testable observation statements from covering laws. (Carnap, the most militantly reductionist of the old-line logical positivists, became a professor at UCLA in 1954 [and died at Santa Monica in 1970]—apparently he and Harold had nothing to do with each other.) The people who wrote sociology methods textbooks around 1950, like Hempel, laid it down that what sociology needed to become a true science was the guidance of Vienna Circle positivism.

Wittgenstein and Ordinary Language

This sounds like the triumph of the Evil Empire. But I can only expound one side of things at a time. The intellectual world operates by rivalries and conflicts; and I should mention here the movement in England of ordinary language philosophy. By the 1930s G.E. Moore was reacting against the tendency of mathematically-inspired philosophers to move further and further from the world of ordinary experience, into a realm of abstract sets and meta-rules about what is permissible or impermissible in operations upon them. Moore began to argue for simple statements of ordinary language as incontestable truths (“Here is one hand, and here is another...” 1939)-- and therefore as a better standard of epistemology than convoluted systems of logical axioms. Wittgenstein, distancing himself from his Vienna Circle admirers, switched over to the anti-formalist side, repudiating much of what he had written in the Tractatus-- but retaining the key distinction of sayable vs. unsayable. His own later comments describe mathematics as an everyday practice that one can observe in detail, stressing that the key to all the foundational disputes is to be found by this method (what we would now call micro-observation of situated practices), rather than by elaborating long hierarchical derivations from concepts of sets. This emphasis on the ordinary practice of language was made into an organized program by John Austin, whose How to Do Things with Words (delivered as lectures in 1955, published in 1962) resonates with Frege’s use vs. mention, now elaborated as speech acts and illocutionary forces. (And in fact Austin had begun by publishing, in 1950, a translation of Frege’s Foundations of Arithmetic.)

Husserl's followers branch into everyday life

I need to fill in one more pathway, and we will have arrived at Garfinkel. This is the pathway of Husserl’s followers. I will single out two: Alfred Schutz and Martin Heidegger. Schutz set out to examine Max Weber’s notions of verstehen, and the ideal types of rational and non-rational action that Weber proposed as tools for analyzing the social bases of modern capitalism. But Weber had operated as a typical Neo-Kantian, more or less inventing these ideal types out of his own head; whereas Schutz applied the more rigorous phenomenology of Husserl. The result was Schutz’s 1932 book, The Phenomenology of the Social World, which attempts to lay out some basic rules of the everyday construction of reality, such as the reciprocity of perspectives. Garfinkel encountered Schutz, who was teaching at the New School for Social Research, around 1950.

Heidegger was a pupil of Husserl who had been given the task of making a phenomenological analysis of the experience of time. His Sein und Zeit, in 1927, is the first famous statement of what became existentialism. What is striking about Heidegger is the religious dimension, perhaps not surprising for a former Catholic seminary student, but one who had thrown off religion. In effect, Heidegger propounds a theology for atheists, where God is dead and there is no afterlife and no transcendence of the world. Nevertheless, the human individual is Dasein, being-there, thrown into the world at a particular time and place, with no fundamental reason for the arbitrariness of why we are here; more broadly, in the background, no reason why anything should exist at all rather than nothing. This is like the sheer arbitrariness of why God created the world in the first place, a question that is no more answerable if one translates it into the naturalistic language of the Big Bang or some other scientific cosmology. Dasein is being-towards-death, the conscious being that projects itself towards the future but knows it is going to die. Hence the underlying motive, or at least deepest human experience, is existential angst. Heidegger resonates with the most sophisticated positions of philosophical rivals: with the paradoxes plaguing the foundations of mathematical logic, with Gödel’s soon-to-be-discovered incompleteness theorem; with Wittgenstein’s unsayability and the inability of language to encompass practice. Heidegger well dramatizes the philosophical realignment: a long way from the comfortable world of the Neo-Kantians, as well as standing as the strongest possible opponent of the science-is-all viewpoint of the Vienna Circle positivists. Above all, Heidegger’s existential phenomenology holds that meaningfulness does not exist in any objective sense; it has to be created and posited, at every step of the way. He doesn’t say, created collectively, as an interactional accomplishment; Heidegger was not yet a sociologist. But the step was there to be taken.

Garfinkel as existentialist micro-sociological researcher

Garfinkel, in my view, is largely a combination of Schutz and existential phenomenology. Of course there are other strands: Garfinkel at Harvard was impressed with his teacher Talcott Parsons’ argument [in The Structure of Social Action, 1937] that the basic problem of sociology is how society is possible in the first place, given the Hobbesian problem of order, and Durkheim’s argument that society is held together not by conscious, rational contracts but by pre-contractual solidarity. But what is this tacit level and how does it operate? Garfinkel set out to discover this by phenomenological methods, just as Schutz had done for Weber’s categories of action. Moreover, by the 1950s, Garfinkel was operating in a milieu in which studies of everyday life were growing, with or without philosophical impetus: in France, Henri Lefebvre, a Marxist philosopher who published in 1947 a Critique of Everyday Life; Fernand Braudel and the Annales School grounding history in the details of ordinary activities and things; Jean-Paul Sartre, doing phenomenology of everyday life with the eye of a naturalistic novelist; in America, Goffman’s early ethnographies, and those of Howie Becker and other symbolic interactionists; George Homans and others abjuring grand abstractions in favor of studying behavior in small groups. Some of this incipient micro-sociology was done in the laboratory, but so were a number of Garfinkel’s breaching experiments, under grants from the US Air Force office of research. One might say Garfinkel exploded the American research establishment from within, breaching the walls of the laboratory and making the entire world of everyday life a laboratory for experiment on the order-making and meaning-constructing methods of folk actors.

I was struck, visiting Harold’s home library in the 1980s, by how few sociology books he kept there-- mainly a few by Durkheim, but shelves full of the philosophy and literature of phenomenology and existentialism. Here I want to suggest how much Garfinkel resonated with Heidegger, even translating existentialist concepts into findings of ethnomethodology. All action is situational, arbitrarily thrown into a context. The human reality constructor projects towards the future, assuming that ambiguities will eventually be resolved in retrospect. But ambiguity lurks everywhere, as key aspects of communicative action, with others and with oneself, are indexical, not capable of translation into an objective system of references; Dasein is by definition indexical, inhabiting the thus-ness of the world in those exemplary indexicals, here and now. But actors avoid questioning what they tacitly feel, hiding from the unsayable. Human practical actors assume meaning, take it for granted, and interpret even the most contrived or accidental events as if they had meaning.

In Heidegger’s terms, persons strongly prefer to inhabit the world of the inauthentic, what Sartre called ‘bad faith’; the primary ethnomethods are all about keeping up comfortable appearances, a gloss of normalcy. Why? Breaches are highly uncomfortable; we rush to restore order, especially cognitive order, first by socially acceptable accounts, and if these fail, by labeling, exclusion, and attack. The most striking detail for me, reading Harold’s breaching experiments, is the reaction of the victims of the breach: bewilderment, shock, outrage. And not just because of momentary embarrassment, but because the arbitrary foundations of the social construction of reality have been temporarily revealed. What breaching reveals is Heidegger’s world of Dasein, thrown-ness, Being-towards-death, existential anxiety. Ethnomethods for finding and restoring order look like a way of pasting over Heidegger’s world lurking just below the surface.

If you want more evidence for the crucial importance of Heidegger in opening the way for Garfinkel, bear in mind that Heidegger overturned the primacy given to mind by both Idealists and phenomenologists. Existential phenomenology is embodied, inhabiting the material world in the sense of the here-and-now Umwelt; this means physicality not as a theory or philosophy about matter-- which Neo-Kantians could easily dissect as a dogmatically asserted Ding-an-sich-- but as the primary existential experience of Dasein. This conception is central in Garfinkel’s repeated admonitions on how to do ethnomethodology, always focusing on “incarnate, embodied activity”-- not the primacy of mind (the mistake of superficial critics who called ethnomethodology mere subjectivity) but the mind/body doing something practical in the lived bodily world. And “incarnate” also has a religious resonance, since Jesus is incarnated, not transcendental; and mystics-- especially in many lines of Zen-- emphasize that Enlightenment is not elsewhere but in grasping the here and now as such.

I am aware that my existentialist reading of Garfinkel is not the only one. There is also a Wittgensteinian reading, rather more optimistic in tone, which marvels at the ongoing creativity of human actors in creating order out of situations, again and again, “for another first time.” Here, the tacit, unsayable processes are all to the good. This has been ably argued by John Heritage, and may be more characteristic of the Conversation Analysis branch. Nevertheless, I am inclined to think an existentialist theme is more central to Harold himself, in his own intellectual biography, in the distinctive emotional quality of his work. This aspect of his personality struck those who encountered him personally, and made up an important part of his charisma.

The second realignment: from existentialism to language-centered structuralism and deconstruction
           
I am near the end of my account, and so far have arrived only at the doorstep of the second “great turn” in the human sciences, that of the 1960s and 70s. Harold, born in 1917, and having spent a number of years in World War Two, is an intellectual product of the post-war years, beginning graduate work at Harvard in 1946. The early 60s found him still a voice crying in the wilderness. What made Garfinkel famous was not merely the publication of Studies in Ethnomethodology in 1967, and the emergence of a network of former students with a program of ethnomethodological research, but another great realignment in the larger intellectual world.

The shift of the 1960s and 70s is still too close to us for unpolemical analysis. It has no generally agreed-upon name. Most famously, it was the rise of the counter-culture, attacking the academic and every other Establishment, throwing off traditional manners, politics, and the hegemony of science. It was a time of political radicalism, spearheaded this time by student movements rather than workers or peasants. But although radicalism penetrated the intellectual world in a revival of Marxism and in other politically engagé stances, these had little direct influence on ethnomethodology, with its resolutely high-intellectual outlook. A major component of the reception of ethnomethodology comes from winds blowing from a very different direction. Above all was the rise of linguistics, as a formal discipline, harking back to earlier mathematical formalisms.

In America, the new-found prestige of linguistics centered on the program of Chomsky [1955 Univ. of Pennsylvania PhD; 1957 Syntactic Structures]. This had started by the late 1950s (and got its first fame in polemical opposition to the behaviorist-reductionism program of B.F. Skinner), but the Chomskyian movement became a beacon for other disciplines only in the 60s and 70s. Anthropological linguists, of course, had long been cataloguing languages, but the field was dispersed and lacked widespread interest, until the Chomskyian program of generative grammar, proposing to unify all language studies around layers of deep structures and transformational rules. This resonated with the burgeoning of cognitive psychology and the incipient field of cognitive science fed by the computer revolution, and gave new prestige to anthropologists who took a linguistic-theory approach to their materials.

In Europe, above all in France, American developments were paralleled by movements that even more strongly gave the linguistic model a kind of hegemony over the human sciences. Structuralist linguistics had existed since the 1910s in Saussure’s work, although not recognized as widely important for another half-century. In 1949, Lévi-Strauss produced The Elementary Structures of Kinship, a formalist comparison of kinship structures as systems of rules that might be combined in various ways and result in distinctive sequences; an appendix by the mathematician André Weil tied this to mathematical theory of groups. In the 50s and 60s, Lévi-Strauss rose to fame as figurehead of a structuralist program; his method was to compare tribal myths for their underlying combinations, oppositions and sequences of formal elements, thereby proclaiming a universal code of the human mind.

The structuralist movement was widened by the influx of Russian Formalism into France in the 1950s. The origins of the Russian Formalists go back to the early 1900s, among literary critics and folklorists; their accomplishments included Vladimir Propp’s demonstration of the basic elements from which folk tales are produced, and Viktor Shklovsky’s analysis of literary texts as a combination of devices that migrate from one text to another. One result was to radically downplay the author: if Cervantes had not lived, nevertheless Don Quixote would have been written. This alliance of literary theorists and folklorists combined into a distinctive school of linguistics, migrated to Prague with Roman Jakobson, and eventually on to Paris. By the 1960s, Foucault, Lacan, Barthes and others were using Formalist methods to decode texts of all kinds, focusing on the textuality of the entire world, and declaring the death of the author, now seen as mere conduit for a stream of intertextual rearrangements.

On the whole, the structuralists were rivals of the existentialists, attacking their subjectivism and focus upon the individual consciousness. But a tie-in with phenomenology was made by Derrida, whose earliest book, an introduction to and translation of Edmund Husserl’s Origin of Geometry, in 1962, brought textually oriented structuralism into connection with the deep roots of the early 20th century intellectual transformation, the mathematical foundations crisis and the grappling between formalist and intuitionist programs. Derrida thus became the philosophical heavyweight of the late structuralist, or deconstructionist, movement. Derrida’s own texts are famously multi-leveled and self-ironicizing, but one could also say this is in keeping with the whole tendency of philosophy from Frege to Wittgenstein: having emphasized what is unsayable, then going on if not to say it, at least finding devices for talking around it.

The anti-positivist wars in America and the fame of ethnomethodology

Back in the USA, Garfinkel’s ship was finally floating on the crest of a flood tide. A more adequate metaphor might be a multiplicity of raging rivers overflowing their banks: the relatively quiet stream from Husserl to Schutz, and the more dramatic one from Heidegger and the existentialists; the purely academic growth of Chomskyian linguistics, and the growing community of anthropologists and cognitive scientists who became the best audience for ethnomethodologically-inspired work in conversation analysis. Then also the raging flood of the academic political revolts, and the anti-establishment psychedelic revolution, with its slogans “It’s all in your mind” and “Blow their minds.” Back inside the serious work of scholarship came a growing academic stream of micro-sociologists and ethnographers of daily life-- it is not coincidental that the first young ethnomethodological stars-- Sacks, Schegloff, Sudnow-- were Goffman’s protégés at Berkeley. (According to Manny Schegloff, they were all brought there by Philip Selznick to staff his new Institute for Law and Society, but quickly rebelled.) And finally, by the 1970s, all this was enveloped in the European tide, the prestige of structuralist/deconstructionist literary theory, although in America largely confined to departments of literature and anthropology, and the institutionalized insurgencies of feminist theory and ethnic studies.

I have tried to contain this whirlwind tour of the 20th century in two metaphorical bags, a sequence of two big realignments in philosophy and the human sciences. It would be misleading to think of this as a shift from one gestalt to another, a simple Kuhnian paradigm revolution. One way this is inadequate is that there is never a single dominant Zeitgeist or “turn”, but always rival positions; and each of these big camps is always a mixture of various campaigns jockeying among themselves. The first big realignment, in the 1920s and 30s, replaced Neo-Kantians with the opposition between phenomenologists and logical positivists, plus an ordinary language movement revolting against both. Garfinkel combined much of the impetus in phenomenology, coming from the foundational crisis in mathematics and therefore in theory of symbolism and symbol-use, with an increasing micro-focus of research on the details of everyday life. The second big realignment, the 1960s and 70s, was in some respects an advance by a later generation of phenomenologists, with further support from literary formalists and code-seeking structuralists, in a fairly successful attack on the logical positivists. But of course other versions of positivism survive and prosper, in the worlds of statistics, biology, economics and rational choice, and so on.

A turn or realignment is not the end of history. All the other movements of the huge contemporary intellectual world do not go away; they are there in the absences I have not mentioned, in psychology and political science, in all the branches of sociology that go their own way, not very surprisingly, and maintain their own research programs. My title turns out to be merely rhetorical, if it is taken to imply one big triumphal turn in the later 20th century. But even with all the caveats, it has been a big movement, a major part of the action. Unpacking Garfinkel’s trajectory and influences connects him to much of the most serious and profound intellectual life of the 20th century. Unlike so many others on this side of the Atlantic, Garfinkel was not merely transplanting Francophone influences. He got there first, in his own way. And he launched a research program, one that announced most dramatically the presence of militant micro-sociology under sail.

Downstream from Garfinkel and Goffman, we are beginning to appreciate the channels they ripped open, and the flood-plains on which we float today, towards the always approachable but never-attained sea.


For sources on movements of 20th century philosophy see: Randall Collins. 1998. The Sociology of Philosophies: A Global Theory of Intellectual Change. Harvard University Press.
To witness the continuing insights of ethnomethodological research: Kenneth Liberman. 2013. More Studies in Ethnomethodology. SUNY Press.

TIPPING POINT REVOLUTIONS AND STATE BREAKDOWN REVOLUTIONS: WHY REVOLUTIONS SUCCEED OR FAIL

In the last few years, many people have come to believe they have a formula for overthrowing authoritarian governments and putting democracy in their place. The method is mass peaceful demonstrations, persisting until they draw huge support, both internally and internationally, and intensifying as government atrocities committed in putting them down are publicized by the media. This was the model for the “color revolutions” (orange, pink, velvet, etc.) in the ex-Soviet bloc; for the Arab Spring of 2011 and its imitators; further back it has roots in the US civil rights movement.

Such revolutions succeed or fail in varying degrees, as has been obvious in the aftermath of the different Arab Spring revolts. Why this is the case requires a more complicated analysis. The type of revolution consisting in the righteous mobilization of the people until the authoritarians crack and take flight may be called a tipping point revolution. It contrasts with the state breakdown theory of revolution, formulated by historical sociologists Theda Skocpol, Jack Goldstone, Charles Tilly and others, to show the long-term roots of major revolutions such as the French Revolution of 1789 and the Russian Revolution of 1917, and which I used to predict the 1989-91 anti-Soviet revolution. Major revolutions are those that bring about big structural changes (the rise or fall of communism, the end of feudalism, etc.). I will argue that tipping point revolutions, without long-term basis in the structural factors that bring state breakdown, are only moderately successful at best; and they often fall short even of modest changes, devolving into destructive civil wars, or failing outright to change the regime at all.

Tipping Point Revolutions with Easy Success

Tipping point revolutions are not new. Some of the early ones were quick and virtually bloodless. For instance the February 1848 revolution in France: there had been agitation for six months to widen the very restrictive franchise for the token legislature. The government finally cracked down on the main form of mobilization-- a banqueting campaign in which prominent gentlemen met in dining rooms to make speeches and drink toasts to revolutionary slogans. The ban provided a rallying point. The day of the banquet, a crowd gathered, despite 30,000 troops called out to enforce the ban. There were minor scuffles, but most soldiers stood around uneasily, unsure what to do, many of them sympathetic to the crowd. Next morning rumors swept through Paris that revolution was coming. Shops did not open, workers stayed home, servants became surly with their masters and mistresses. In the eerie atmosphere of near-deserted streets, trees were chopped down and cobble-stones dug up to make barricades. Liberal members of the national legislature visited the king, demanding that the prime minister be replaced. This modest step was easy; he was dismissed; but who would take his place? No one wanted to be prime minister; a succession of candidates wavered and declined, no one feeling confident of taking control.

Mid-afternoon of the second day, just after the prime minister’s resignation was announced, a pumped-up crowd outside a government building was fired upon. The accidental discharge of a gun by a nervous soldier set off a contagious volley, killing 50. This panicky use of force did not deter the crowd, but emboldened it. During the night, the king offered to abdicate. But in favor of whom? Other royal relatives also declined. The king panicked and fled the palace, along with assorted duchesses; crowds were encroaching on the palace grounds, and now they invaded the royal chambers and even sat on the royal throne. In a holiday atmosphere, a Republic was announced, and the provisional assembly set plans to reform itself through elections.

In three days the revolution was accomplished. If we stop the clock here, the revolution was an easy success.  The People collectively had decided the regime must go, and in a matter of hours, it bowed to the pressure of that overwhelming public.  It was one of those moments that exemplify what Durkheim called collective consciousness at its most palpable. 

This moment of near-unanimity did not last. In the first weeks of enthusiasm, even the rich and the nobility-- who had just lost their monopoly of power-- made subscriptions for the poor and wounded; the conservative provinces rejoiced in the deeds of Paris. The honeymoon began to dissipate within three weeks. Conservative and radical factions struggled among the volunteer national guard, and began to lay up their own supplies of arms. Conservatives in the countryside and financiers in the city mobilized against the welfare-state policies of Paris. Elections to a constitutional assembly, two months in, returned an array of conservatives and moderates; the socialists and liberals who had led the revolution were reduced to a small minority, upheld only by radical crowds who invaded the assembly hall and shouted down opponents. In May, the national guard dispersed the mob and arrested radical leaders. By June there was a second revolt, this time confined to the working-class part of the city. The Assembly was united against the revolution; in fact the Assembly had provoked it by abolishing the public workshops set up for unemployed workers. This time the army kept its discipline. The emotional mood had switched directions. The provinces of France now had their own collective consciousness, an outpouring of volunteers rushing to Paris by train to battle the revolutionaries. Within five days, the June revolution was over; this time with bloody fighting, ten thousand killed and wounded, and more executed afterwards or sent to prison colonies.

The tipping point mechanism did not tip this time; instead of everyone going over to the victorious side (thereby ensuring its victory), the conflict fractured into two opposing camps. Instead of one revolutionary collective consciousness sweeping up everyone, it split into rival identities, each with its own solidarity, its own emotional energy and moral righteousness. Since the opposing forces, both strongly mobilized, were unevenly matched, the result was a bloody struggle, and then destruction of the weaker side. In the following months, the mood flowed in an increasingly conservative direction. Elections in December brought in a huge majority for a President-- Napoleon’s nephew, symbol of an idealized authoritarian regime of the past-- who eventually overturned the democratic reforms and made himself emperor. The revolutionary surge had lasted just four months.
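
The bandwagon logic here can be made concrete with a toy model. The sketch below is an editorial illustration, not Collins’s own formalism: it borrows Granovetter’s threshold model of collective behavior, in which each person joins a protest once the share of people already mobilized passes that person’s private threshold. The two hypothetical threshold distributions stand in loosely for February and June 1848: one spread evenly enough that each wave of joiners recruits the next, one polarized so that the cascade stalls and leaves two hostile blocs.

def cascade(thresholds):
    """Return the fraction of the population mobilized once no one else will join."""
    n = len(thresholds)
    mobilized = 0.0
    while True:
        joiners = sum(1 for t in thresholds if t <= mobilized) / n
        if joiners == mobilized:   # no new recruits: the bandwagon has stopped
            return mobilized
        mobilized = joiners

# "February": thresholds spread evenly from 0.00 to 0.99 -- each wave of joiners
# pulls in the next, and the whole population tips over.
february = [i / 100 for i in range(100)]
print(cascade(february))   # 1.0

# "June": a polarized population -- 30 percent join at once, the rest hold out
# (in 1848 they mobilized against the crowd rather than with it).
june = [0.0] * 30 + [0.9] * 70
print(cascade(june))       # 0.3 -- the cascade stalls, leaving two opposed camps

The point of the toy is only that the same mechanism which produces near-unanimity under one distribution of dispositions produces stalemate and fracture under another; nothing in it predicts which distribution a real population has.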

Tipping Point Revolutions that Fail

The sequence of revolts in 1848 France shows both the tipping point mechanism at its strongest, and the failure not so far downstream to bring about structural change. Modern history is full of failed revolutions, and continues to be right up through the latest news. I will cite one example of a tipping point revolution that failed entirely, not even taking power briefly. The democracy movement in China centered on protestors occupying Tiananmen Square in Beijing, lasting seven weeks from mid-April to early June 1989. Until the last two weeks, the authorities did not crack down; local police acted unsure of themselves, just like the French troops in February 1848; some even displayed sympathy with the demonstrators.

The numbers of protestors surged and declined several times. Initially, students from the prestigious Beijing universities (where the Red Guards movement had been launched 20 years earlier) set up a vigil in Tiananmen Square to mourn the death of a reform-oriented Communist leader. This was China’s center of public attention, in front of the old Imperial Palace, the place for official rituals, and thus a target for impromptu counter-rituals. Beginning with a few thousand students on April 17, the crowd fell to a few hundred by the fourth day, but revived after a skirmish with police as militants took their protest to the gate of the nearby government compound where the political elite lived. Injuries were slight and no arrests were made, but indignation over police brutality renewed the movement, which grew to 100,000-200,000 for the state funeral on day 5. Militants hijacked the ritual by kneeling on the steps of the ceremonial hall flanking Tiananmen Square, in the style of traditional supplicants to the emperor. The same day rioting broke out in other cities around China, including arson attacks, with casualties on both sides. Four days later (day 10) the government newspaper officially condemned the movement-- the first time it had been portrayed negatively; next day 50,000-100,000 Beijing students responded, breaking through police lines to reoccupy the Square. So far counter-escalation favored the protestors.

The government now switched to a policy of conciliation and negotiation. This brought a two-week lull; by May 4 (day 18) most students had returned to class. On May 13 (day 28), the remaining militants launched a new tactic: a hunger strike, initially recruiting 300; over the next 2 days it recaptured public attention, and grew to 3000 hunger strikers. Big crowds, growing to 300,000, now flocked to the Square to view and support them. The militants had another ritual weapon: the arrival on May 15 of Soviet leader Gorbachev for a state visit, then at the height of his fame as a Communist reformer. The official welcome had to be moved to the airport, but the state meeting in the ceremonial hall flanking Tiananmen was marred by the noisy demonstration outside. On May 17, as Gorbachev left, over one million Beijing residents from all social classes marched to support the hunger strikers. The militants had captured the attention center of the ceremonial gathering; the bandwagon was building to a peak. Visitors to Tiananmen were generally organized by work units, who provided transportation and sometimes even paid the marchers. A logistics structure was created to fund the food and shelter for those who occupied the Square. The organizational base of the Communist regime, at least in the capital, was tipping towards revolution. Around the country, too, there were supporting demonstrations in 400 cities. Local governments were indecisive; some Communist Party committees openly endorsed the movement; some authorities provided free transportation by train for hundreds of thousands of students to travel to Beijing to join in.

The tipping point did not tip. The Communist elite met outside the city in a showdown among themselves. A collective decision was made; a few dissenters, including some army generals, were removed and arrested. On May 19, martial law was declared. Military forces were called from distant regions, lacking ties to Beijing demonstrators. The next four days were a showdown in the streets; crowds of residents, especially workers, blocked the army convoys; soldiers rode in open trucks, unarmed-- the regime still trying to use as little force as possible, and also distrustful of giving out ammunition-- and often were overwhelmed by residents. Crowds used a mixture of persuasion and food offerings-- army logistics having broken down because of the unreliability of passage through the streets-- and sometimes force, stoning and beating isolated soldiers. On May 24, the regime pulled back the troops to bases outside the city. But it did not give up. The most reliable army units were moved to the front, some tasked with watching for defections among less reliable units. In another week strong forces had been assembled in the center of Beijing.

Momentum was swinging back the other way. Student protestors in the Square increasingly divided between moderates and militants; by the time the order to clear the Square was given for June 3, the number occupying was down to 4000. There was one last surge of violence-- not in Tiananmen Square itself, although the name became so famous that most outsiders think there was a massacre there-- but in the streets as residents attempted to block the army's movement once again. Crowds fought with stones and gasoline bombs, burning army vehicles and, by some reports, the soldiers inside. In this emotional atmosphere, as both sides spread stories of the other’s atrocities, something on the order of 50 soldiers and police were killed, and 400-800 civilians (estimates varying widely). Some soldiers took revenge for prior attacks by firing at fleeing opponents and beating those they caught. In Tiananmen Square, in the early morning of June 4, the dwindling militants were allowed to march out through the encircling troops.

International protest and domestic horror were to no avail; a sufficiently adamant and organizationally coherent regime easily imposed its superior force. Outside Beijing, protests continued for several days in other cities; hundreds more were killed. Organizational discipline was reestablished by a purge; over the following year, CCP members who had sympathized with the revolt were arrested, jailed, and sent to labor camps. Dissident workers were often executed; students got off easier, as members of the elite. Freedom of the media, which had been loosened during the reform period of the 1980s, and briefly flourished during the height of the democracy protests in early May, was now replaced by strict control. Economic reforms, although briefly questioned in the aftermath of 1989, resumed, but political reforms were rescinded. A failed tipping point revolution not only fails to meet its goals; it reinforces authoritarianism.

If the Chinese government had the power to crack down by sending out its security agents and arresting dissidents all over the country, why didn't it do so earlier, instead of waiting until Tiananmen Square was cleared? Because this was the center of the tipping-point mechanism. As long as the rebellious assembly went on, tension existed as to which way the regime would go. If it couldn't meet this challenge, the regime would be deserted. This was in question as long as all eyes were on Tiananmen. Once attention was broken up, all those security agents could fan out around the country, picking off suspects one by one, ultimately arresting tens of thousands. That is why centralized and decentralized forms of rebellion are so different: centralized rebellions are potentially very short and sudden; decentralized ones are long, grinding, and much more destructive.

We like to believe that any government that uses force against its own citizens is so marred by the atrocity that it loses all legitimacy. Yet the 1990s and the early 2000s were a time of increasing Chinese prestige. The market version of communist political control became a great economic success; international economic ties expanded and exacted no penalty for the deaths in June 1989; domestically Chinese poured their energies into economic opportunities. Protest movements revived within a decade, but the regime has been quick to clamp down on them.  Even the new means of mobilization through the internet has proven to be vulnerable to a resolute authoritarian apparatus, which monitors activists to head off any possible Tiananmen-style assemblies before they start.

The failure of the Chinese democracy movement, both in 1989 and since, tells another sociological lesson. An authoritarian regime that is aware of the tipping point mechanism need not give in to it; it can keep momentum on its own side by making sure no bandwagon gets going among the opposition. Such a regime can be accused of moral violations and even atrocities, but moral condemnation without a successful mobilization is ineffective. It is when one’s movement is growing, seemingly expanding its collective consciousness to include virtually everyone and emotionally overwhelm its opponents, that righteous horror over atrocities is so arousing. Without this, protests remain sporadic, localized and ephemeral at best. The modest emotional energy of the protest movement is no rushing tide; and as this goes on for years, the emotional mood surrounding such a regime remains stable-- the most important quality of “legitimacy”.

A Contested Tipping Point: The Egyptian Revolution

Egypt in January-February 2011, the most famous of the Arab Spring revolutions, fits most closely to the model of 1848 France. Egypt took longer to build up to the tipping point-- 18 days instead of 3; and there were more casualties in the initial phase-- 400 killed and 6000 wounded (compared to 50 killed in February 1848)-- because there was more struggle before the tipping point was reached. Already from day 7, troops sent to guard Tahrir Square in Cairo declared themselves neutral, and most of the protestors’ casualties came from attacks by unofficial government militias or thugs. By day 16, police who killed demonstrators were arrested, and the dictator Mubarak offered concessions, which were rejected as unacceptable. On the last day of the 18-day revolution, everyone had deserted Mubarak and swung over to the bandwagon, including his own former base of support, the military. This continuity is one reason why the aftermath did not prove so revolutionary.

Again, the honeymoon did not last long. By day 43, women who assembled in Tahrir Square were heckled and threatened, and Muslim/Christian violence broke out in Cairo. Tahrir Square continued to be used as a symbolic rallying point, but largely as a scene of clashes between opposing camps. Structural reforms have not gone very deep. The Islamist movement elected in the popular vote relegated to a minority the secularists and liberals who had been most active in the revolution. President Morsi bears some resemblance to Louis Bonaparte, who rose to power on the reputation of an ancestral movement-- both had a record of opposition to the regime, but were ambiguous about their own democratic credentials. The analogy portends a reactionary outcome to a liberating revolution.

Bottom line: tipping point revolutions are too superficial to make deep structural changes.  What does?

State Breakdown Revolutions

Three ingredients must come together to produce a state-breakdown revolution.

(1)  Fiscal crisis/ paralysis of state organization. The state runs out of money, is crushed by debts, or otherwise is so burdened that it cannot pay its own officials. This often happens through the expense of past wars or huge costs of current war, especially if one is losing. The crisis is deep and structural because it cannot be evaded; it is not a matter of ideology, and whoever takes over responsibility for running the government faces the same problem. When the crisis grows serious, the army, police and officials no longer can enforce order because they themselves are disaffected.

This was the route to the 1789 French Revolution; the 1640 English Revolution; the 1917 Russian Revolution; and the 1853-68 Japanese revolution (which goes under the name of the Meiji Restoration). The 1989-91 anti-Soviet revolution similarly began with struggles to reform the Soviet budget, overburdened by military costs of the Cold War arms race. 

(2) Elite deadlock between state faction and economic privilege faction. The fiscal crisis cannot be resolved because the most powerful and privileged groups are split. Those who benefit economically from the regime resist paying for it (whether these are landowners, financiers, or even a socialist military-industrial complex); reformers are those who are directly responsible for keeping the state running. The split is deep and structural, since it does not depend on ideological preferences; whoever takes command, whatever their ideas, must deal with the reality of organizational paralysis. We are not dealing here with conflict between parties in the public sphere or the legislature; such partisan squabbling is common, and it may also exist at the same time as a state crisis. Deadlock between the top elites is far more serious, because it stymies the two most powerful forces, the economic elite and the ruling officials.

(3) Mass mobilization of dissidents. This factor is last in causal order; it becomes important after state crisis and elite deadlock weaken the enforcement power of the regime. This power vacuum provides an opportunity for movements of the public to claim a solution. The ideology of the revolutionaries is often misleading; it may have nothing to do with the causes of the fiscal crisis itself (e.g. claiming the issue is one of political reform, democratic representation, or even returning to some earlier religious or traditional image of utopia). The importance of ideology is mostly tactical, as an emotion-focusing device for group action. And in fact, after taking state power, revolutionary movements often take actions  contrary to their ideology (the early Bolshevik policies on land reform, for instance; or the Japanese revolutionary shifts between anti-western antipathy and pro-western imitation). The important thing is that the revolutionary movement is radical enough to attack the fiscal (and typically military) problems, to reorganize resources so that the state itself becomes well-funded. This solves the structural crisis and ends state breakdown, enabling the state to go on with other reforms.  That is why state breakdown revolutions are able to make deep changes in institutions: in short, why they become “historic” revolutions.

Reconciling the Two Theories

Tipping point revolutions are far more common than state breakdown revolutions. The two mechanisms sometimes coincide; tipping points may occur in the sequence of a state breakdown, as the third factor, mass mobilization, comes into play. In 1789, once the fiscal crisis and elite deadlock resulted in calling the Estates General, crowd dynamics led to tipping points that are celebrated as the glory days of the French revolution. In 1917 Russia, the initial collapse of the government in February was a crowd-driven tipping point, with a series of abdications reminiscent of France in February 1848; what made this a deep structural revolution was the fiscal crisis of war debts, pressure to continue the war from the Allies who held Russian debt, and eventually a second tipping point in November in favor of the Soviets. But state breakdown revolutions can happen without these kinds of crowd-centered tipping points: the 1640 English Revolution (where fighting went on through 1648); the Chinese revolution stretching from 1911 to 1949; the Japanese revolution of 1853-68.  Conversely, tipping point revolutions often fail in the absence of state fiscal crisis and elite deadlock; an example is the 1905 Russian Revolution, which had months of widespread enthusiasm for reform during the opportunity provided by defeat in the Japanese war, but nevertheless ended with the government forcefully putting down the revolution.

A tipping point mechanism, by itself, is a version of mass mobilization, which is the final ingredient of a state breakdown revolution. But mass mobilization also has a larger structural basis: resources such as transportation and communication networks that facilitate organizing social movements-- sometimes in the form of revolutionary armies-- to contend for control of the state. If such mobilization concentrates in a capital city, it may generate a tipping point situation. But such mobilization can also take place throughout the countryside, in which case the revolution takes more the form of a civil war.

Tipping Point Revolutions and Imitative Revolutions

At times, waves of revolution spread from one state to another, the success of one igniting enthusiasm for the next. It is the mass mobilization of the tipping point, the huge crowds and the widespread feeling of solidarity in the pro-revolutionary majority, that encourages imitations. We can see this because some of the famous ignition-revolutions were not very effective in making changes, but they were still imitated. One such wave was in 1848, spreading from Switzerland and Sicily to the fragmented states of Italy, and most spectacularly to France. Soon after news spread of events in Paris, Europe’s most famous city, crowds demanded constitutional reforms in Vienna, Berlin, most of the German states, and the ethnic regions of the Austrian Empire. Some rulers temporarily fled or made concessions; troops mutinied; parliaments and revolutionary assemblies met. All of these were put down within a year and a half. Some were extirpated by the intervention of outside troops, as conservative rulers supported each other in regaining control. Of these revolutions, hardly any had a permanent effect.

The wave of Arab Spring revolts began with a successful tipping point revolution in Tunisia, imitated with temporary success in Egypt; but failed in Bahrain; had little effect on an ongoing civil war in Yemen; led to a full-scale military conflict in Libya that was won by the rebels only through massive outside military intervention with airpower; in Syria generated a prolonged and extremely destructive civil war sustained by outside military aid to all factions. The lesson is that if tipping point revolutions themselves are not very decisive for structural change, further attempts to imitate tipping points in other countries have even less to go on. Regimes may or may not be removed but the downstream situation does not look very different, although there may be a prolonged period of contention amounting to a failed state.

The major exception would appear to be the wave of imitative revolts from 1989-91, as the Soviet bloc fell apart. The states of eastern Europe overthrew their communist regimes one after another; some with relatively easy tipping point revolutions, as in Czechoslovakia, Hungary, Poland, and East Germany, others with bloodier battles, as in Romania and eventually Yugoslavia. A second round of revolts began in 1991 as the USSR disintegrated into its component ethnic states. Here was indeed a structural change, dismantling communist political forms and replacing them with versions of democracy (some continuing control by ex-communist elites), and shifting the property system to capitalism. But this series of revolutions was not a matter of tipping points alone; they were all effects of a deep structural crisis in the lynchpin of the system, the Soviet empire, which underwent a state breakdown revolution. Revolts can spread by imitation; but what happens to them depends on what kinds of structural conflicts are beneath the surface.

The Continuum of Revolutionary Effects, from Superficial to Deep

If we use the term “revolution” loosely to mean any change in government which is illegal-- outside the procedures provided by the regime itself-- there are many kinds of revolutions. They range from those with no structural effects at all to those which change the deepest economic, political, and cultural institutions.

Coup d’etat is the most superficial; there is no popular mobilization, only a small group of conspirators inside the circles of power, or in the military, who replace one ruler with another. Often there is not even the pretence of structural change or appeal to the popular will.

Tipping point revolutions are more ambitious; emotional crowds who are at the center of the mechanism for transferring power are enthusiastic for grand if often vague ideological slogans. But such revolts often fail, if the government is not itself paralysed by a structural crisis. When tipping points succeed, the new regime often has only ephemeral support, and may peter out in internal quarrels, civil war, or reactionary restoration.

State breakdown revolutions have a less ephemeral quality. The state cannot come back into equilibrium until its own organizational problem is solved; and since this means its fiscal, military, and administrative basis, reforms must go deep into the main power-holding institutions. Whether or not the same ideological brand of revolutionaries continues in office, these structural changes lay down a new order that tends to persist-- at least until another deep crisis comes along.

Today: the Era of Tipping Point Revolutions

After the fall of the Soviet Union and its empire, there have been many repetitions of tipping point revolutions (Ukraine 2004, Georgia 2003, Kyrgyzstan 2005, Serbia 2000) mixed with personal power-grabs that are little more than coups masked as popular revolutions. The Arab Spring revolts have relied heavily on the tipping point mechanism. Where the government has had a strong faction of popular support, tipping point attempts have brought no easy transition; the result has been full-scale civil war (Syria), or defeat of the revolutionary mobilization by a mass counter-mobilization (the Green uprising in Iran 2009). Tipping-point revolts remain popular, but the latest of them, the anti-Islamist uprisings in Turkey and Egypt, appear to have all the weaknesses of their genre.


 

References

Cambridge Modern History, Vols. 4 and 11. 1907-1909.

Collins, Randall. 1999. “Maturation of the State-Centered Theory of Revolution and Ideology” and “The Geopolitical Basis of Revolution: The Prediction of the Soviet Collapse.” Chapters 1 and 2 in Macro-History: Essays in Sociology of the Long Run. Stanford University Press.

Collins, Randall. 2012.  “Time-Bubbles of Nationalism.” Nations and Nationalism 18: 383-397.

Goldstone, Jack. 1991. Revolution and Rebellion in the Early Modern World. University of California Press.

Harris, Kevan. 2012. “The brokered exuberance of the middle class: an ethnographic analysis of Iran’s 2009 Green Movement.” Mobilization 17: 435-455.

Tocqueville, Alexis de. 1987 [1852]. Recollections of the French Revolution of 1848. Transaction Publishers.

Weyland, Kurt. 2009. “The diffusion of revolution: ‘1848’ in Europe and Latin America.” International Organization 63: 391-423.

Zhao, Dingxin. 2001. The Power of Tiananmen. University of Chicago Press.

CLUES TO MASS RAMPAGE KILLERS: DEEP BACKSTAGE, HIDDEN ARSENAL, CLANDESTINE EXCITEMENT


DRUG BUSINESS IS NOT THE KEY TO GANGS AND ORGANIZED CRIME: WITH A PROGNOSIS FOR THE MEXICAN CARTEL WARS

It has become conventional to refer to all types of gangs and organized crime as if they were synonymous with the illegal drug business. Popular terms like "drug gangs" and "narco-cartels" obscure a fundamental point: crime organizations are political organizations. They are rivals to the legitimate state, and their fundamental asset is their capacity for wielding violence. Their rise and fall, and the amount and kind of violence they perform, are explainable by political sociology.

It is easy to demonstrate that drug business is not central, since most crime organizations historically have had little to do with it.

The Mafia in the U.S., in its most flourishing period between 1930 and 1980, was primarily involved in extortion/protection rackets. It drew most of its income and spent most of its time on rake-offs from illegal gambling, prostitution, loan-sharking, and debt-collecting, as well as from extortion of legal businesses such as construction, waste collection, wholesale food markets, and restaurants, and from corruption of labor unions and local government. One of the five families of the New York Mafia, the Lucchese Family, was a major conduit of heroin smuggling and wholesaling in the 1940s-60s. But this was a small part of the overall operation; the prosperity of the Mafia did not rise or fall with the heroin business, and the occasional power struggles in the five families were not over the drug business. One might argue, analogously, that alcohol prohibition during the 1920s was the stimulus to the formation of the Mafia; but Mafia organization dates from 1931 when Lucky Luciano set up the Commission, and its period of greatest dominance was in the 40 years after the end of Prohibition.

The Mafia families in Sicily, from their reestablishment after the fall of the Fascist regime in World War II through their demise in the 1990s, were involved in what Diego Gambetta [1993] calls the business of protection. In the absence of government regulation and in an atmosphere of pervasive distrust, all economic transactions needed a protector or guarantor, and this was provided by traditional secret societies of "men of honor" who received payoffs in return. The scope of businesses under Mafia protection was even wider than in the U.S., and consisted more of legal activities than illegal ones-- gambling was not prominent in Sicily, though an illegal source of income was smuggling cigarettes to avoid taxation, and turfs for pickpocket gangs were enforced in Palermo. Sicilian Mafia families did become important pipelines for heroin processing and smuggling from the Middle East across the Atlantic. But this was not the key to the Sicilian Mafias; they existed long before the heroin trade, and their protection business remained centered on the local economy.

The Russian crime organizations of the 1990s arose during the period of privatizing the Soviet economy. Their main focus was selling protection/extortion to licit businesses, ranging from small street market vendors to commercial and industrial companies, and they had a large role in taking over former state enterprises. The biggest criminal protection businesses amalgamated with the crumbling government bureaucracy and became legitimate as the economic "oligarchs". By the early 2000s, Russian crime-orgs had ploughed back so much of their illegal gains into the above-ground economy that they abandoned illegal force and became a major part of normal business. Even during their outlaw period of the mid-1990s-- when they were most violent-- they were relatively little involved in running illicit businesses such as drugs. Why focus on drugs when you can take over oil and gas?

The Japanese Yakuza originated historically in gangs allocating stalls in outdoor markets. During the US occupation after WWII, Yakuza families expanded to provide ordinary consumer goods on a black market-- i.e. making scarce goods available outside of government rationing. Since that time, Yakuza activities have resembled those of the NY Mafia, including strong-arm debt collecting, labor disciplining for manufacturers, and protecting illicit businesses such as strip clubs and prostitution. Drugs were at most a minor part of their activities.

Similarly with smaller crime organizations: historically most gangs concentrated on other activities than drugs; even today, when many street gangs are involved in some aspect of drug distribution, for most gangs it is not their central activity. For instance, some motorcycle gangs (i.e. gangs whose members are white) are involved in distributing methamphetamine, but the primary purpose of these gangs is showing off and fighting against rival motorcycle gangs. Local gangs are sociable groups in it for action and fun, but they have a political aspect because they organize violence to back each other up-- and with organized violence, we enter the realm of politics and the state.

There are several kinds of crime organizations, and their differences are explainable by their politics. This means their internal politics-- how they control violence inside their own ranks-- and their external geopolitics-- how they deal with rival organizations. Drug business affects the structure and violence of crime-orgs only as it feeds into their politics.

What makes some gangs small, while others grow into large but loose alliances, and a few become underground governments? What makes some crime-orgs centralized and hierarchic, while others are localized and egalitarian? Why do some engage in lengthy wars of expansion and extermination, while others confine themselves to local crime and skirmishing at the borders of their turf, and still others-- the most successful ones-- make their violence terrifying but limited and stealthy? Let us compare.


Types of Crime Organizations:

Mafia-style syndicates -- protection/extortion, govt. corruption; underground govt.
New York, Sicily, Russia, other ex-Soviet republics, Japanese Yakuza, Chinese Triads

Local neighborhood turf gangs (10-20-50-100 members)

Symbol-based alliances
Crips (30,000), Bloods (15,000), Mara Salvatrucha (MS-13 & other factions: 50,000 in US)

Multi-gang alliances with political / religious ideologies
People Nation (comprised of Blackstones, Vice Lords, Latin Kings, Bloods)
Folk Nation (comprised of Gangster Disciples, Dieciocho, Hoover Crips)

Large Corporate Gangs (Chicago) -- hierarchic, mainly drug business
Blackstones / Black P. Stone Nation (60,000), Vice Lords (60,000), Latin Kings (30-45,000)



Mafia-style syndicates

After a crime war among rival Italian, Irish, and Jewish gangs in the New York area, the victorious alliance in 1931 set up a Commission consisting of the heads of five Mafia families, with all other crime families around the USA under the jurisdiction of one of the New York families. It was called the Peace Commission because its aim was to prevent civil wars. It kept a low profile. All killings inside the Mafia had to be authorized by the Commission; if not, the perpetrator himself would be killed. Killing policemen and government officials was strictly prohibited, and anyone found to be plotting such a killing was reported to the Commission and executed. The policy of the Commission was to corrupt local government by reaching accommodation, not to fight with it and draw down the force of the Federal government. Executions were ordered as a matter of organizational discipline, targeting specific individuals as a warning to the rest to stay in line. This was strictly Mafia business; women and personal relatives of those punished were to be left alone and even supported after their man was dead. The aim was to cut off chains of revenge killings by imposing a political solution. And it was all done through layers of secrecy: the technique of murder was not a risky gunfight but a stealthy shot at point-blank range or a bomb set off underfoot, usually arranged through betrayal by one's closest friends.

The system worked but not perfectly. Occasionally aging Mafia chiefs were killed or forced out by the younger generation, and some families had brief civil wars. The Commission usually legitimated the outcomes of these wars and carried on. It was able to rule with a high degree of secrecy for 30 years, with near immunity from the corrupt New York government. Publicity began to expose the Mafia in the 1960s, and in the 1980s and 90s Federal prosecutors using RICO (Racketeer Influenced and Corrupt Organizations) law reduced the Mafia to a trivial presence in a few American cities.

The Yakuza also operated more with a show of toughness than widespread use of violence. There were some periods of war between rival organizations, but for the most part the Yakuza stabilized in separate territories. They achieved a modus vivendi with Japanese government officials, and even operated openly out of offices with their names on them. Unlike in the US, where Federal law enforcement was a counterpoise to corrupt state government, in Japan the national government monopolized police functions and rarely made much effort to eliminate organized crime, probably because of its long-standing alliance with the ruling party against leftist labor organizations.

Russian crime-orgs in the 1990s began with a large number of small gangs, with shoot-outs peaking during 1993-95. They never achieved a centralized governing body like the New York Mafia. But they evolved a relatively peaceful (if threatening) technique of negotiating rival claims over who was protecting which business; as the gangs winnowed down to an oligopoly, their above-ground organizations increased their staffs of business and negotiation specialists, and eventually reduced their reliance on force as they acquired legitimate property and made alliances with government officials. Here too the path to success lay in limiting violence and substituting political arrangements. The Russian crime organizations were the world's most successful Mafia. They had unusual opportunities-- the chance to take over state properties during a time when socialism was privatizing-- and relatively weak opposition-- a crumbling government bureaucracy without a legal tradition setting boundaries between crime and the new capitalist system. Ordinarily, big crime-orgs expose themselves to government attack because their sheer size becomes hard to keep under cover; in the case of the ex-Soviet Union (here we should include most of the successor states besides Russia), they were able to move above ground and become the new Establishment.

In contrast to these successful Mafias, the Sicilian Mafia fought a series of bloody civil wars among themselves in the early 1960s, and again in the late 1970s-1980s, killing up to 1000 Mafia members including many bosses. There were over 100 separate Mafia families, and they were never able to achieve stable alliances. On the advice of lordly American mafiosi visiting in the late 1950s, an attempt was made to set up a Peace Commission, but it broke down immediately when it was unable to resolve disputes. A split in the Commission over money owed in a heroin smuggling case set off the first Mafia War. This was a political war, not a drug war; the issue was the suspicion that the Commission was just a means for one faction to impose its domination over the others. (A similar issue would break up the Crips, when that attempt at a black-unity gang alliance fell apart in accusations of a power grab.)

Spiraling Mafia violence attracted government attention from the mainland; when investigating officials were assassinated (chiefly by bombs in roadways and cars), the Italian government escalated its crackdown, with mass arrests and maxi-trials. The Sicilian mafias fought not only with each other but also waged war against the Italian government, probably because they had been made arrogant by long-standing local political control and the failure of the national state to penetrate Sicily. But the Italian government mobilized military and police forces far exceeding the strength of the divided Sicilian groups; mafiosi under death threat from rivals began to break their oaths of silence and provide evidence. By the end of the 1990s, the Sicilian mafia was largely destroyed; on the mainland, it was displaced by Calabrian and Neapolitan crime orgs. Fighting against the government proved to be a fatal mistake. As the New York Mafia and the Russian crime-orgs demonstrate, the most successful do best by corrupting the state rather than fighting with it.

Local neighborhood turf gangs

Turn now to the other end of the scale. Gangs are relatively small groups, controlling a few blocks of a city. Structures vary; some have elected leaders, others are informal. Their origins are in children's play groups, dedicated to having fun and getting into mischief. They may engage in crimes, but this is not necessarily an organized activity of the group. The clubhouse or hangout is a place where gang members can form ad hoc crews to carry out car thefts, burglaries, robberies, or extortions; i.e. the gang is more of an umbrella under which its members come together to commit their own crimes. Some gangs claim a percentage of their criminal profits, others collect dues, chiefly for the clubhouse and for parties. Neighborhood gangs are often like social clubs, fraternities for poor people, who support themselves to an extent with crime.

The identity of the gang is based on its violence, or more precisely, on ostentatious bluster and threat. Elijah Anderson argues that the code of the street in the black inner-city is mainly projecting a tough demeanor; when this is done correctly, it establishes membership and deters violence. But the gang has to demonstrate some violence, in initiation fights, in threats against rivals who enter their territory, and in occasional incursions into enemy territories. Gang "rumbles" have existed since violent teen-age gangs appeared in the 1950s, first among Puerto Rican gangs in New York City. The early gangs were not fighting over drug business, nor even protection rackets; they existed largely for the prestige of fighting, and their main concern was "action".

Since the 1970s and 80s, it has become common for local street gangs to be connected with the drug business, usually at the low retail end, and violence can take place over prime vending locations. But this is not a necessary gang activity; gangs have existed without the drug business, either as an umbrella for crime crews, or just for sociability and the excitement of fighting. In the 1950s, gangs were often consumers of heroin, but not dealers. It is still the case now, when gangs sell drugs, either as a collective enterprise or as individuals protected by the gang, that gang members consume much of the drugs themselves; such gangs do not derive much profit from the drug business. Gangs with a lavish, hedonistic lifestyle accumulate very little wealth from crime-- including non-drug crimes. Here is another way gangs differ from successful Mafias, which are much more disciplined in their lifestyle. The most self-disciplined of all were the Russian crime-orgs, which reinvested their income in buying up legitimate businesses, and eventually raised themselves out of the crime world.

Symbol-based Alliances and Multi-gang Alliances: Diplomatic Peace Treaties

These are loose, horizontal alliances between gangs who adopt the same symbols. Symbol-based gangs, with their distinctive gang colors and signs (hand gestures, ways of strutting, etc.) are diplomatic peace treaties among those who belong. Instead of every local gang being the enemy of every other, half the gangs they encounter may be their allies. It is suggestive that the cities with the highest homicide rates-- Detroit, Baltimore, Philadelphia-- have little structure beyond small gangs fighting each other every few blocks.

The Crips were created in Los Angeles in the early 1970s, during the Civil Rights movement, in an attempt to stop fighting among Black gangs and concentrate on their racial enemy-- hypothetically whites, but in practice Hispanic gangs. The Crips alliance did not hold together, and a split produced the Bloods, a rival gang who spend their time chiefly in hostility to the Crips, thus producing a grand division between two different segments of black gangs. In areas where there are only Crips and no Bloods, the alliance does not work, since its outside enemy does not exist; in these places Crips divide into local factions who fight each other. As is typical of gangs, most violence is inside their own ethnic group; black gangs usually fight each other, Hispanic gangs are against each other, with white gangs in yet a third arena (such as rivalries between motorcycle gangs). Racial segregation of violence in the gang world is evidence that their violence is largely honorific; despite occasional ideological claims of rebellion against dominant white society, gang violence is a local game in a racial arena, a way of claiming status simply by being in the action.* Compared to Mafias, who are after real power and use violence strategically to uphold their organization, gang alliances and their violence are largely adolescent-style bravado.

* Crips-vs-Bloods conflict was called off during the major race riot in Los Angeles in 1992, following the Rodney King verdict. For the moment, racial alliance was reestablished while there was open violence against whites and Korean store-owners.

Symbol-based alliances have no hierarchy. There is no national leader of the Crips or the Bloods. They are umbrellas joining together all those who display the same symbolism. They are simply dividing lines, telling you who to fight, and who not to fight. The component gangs inside these alliances may engage in various crimes, including the drug business. But the symbolic alliance per se does not engage in the drug business, or any other collective activities. An analogy in political history would be wars between Catholics and Protestants, with further subdivisions of radical Protestants (Calvinists, Evangelicals) against conservative Protestant churches (Anglicans or Lutherans).

A stronger form of alliance has been attempted in multi-gang alliances such as the People Nation (comprised of Blackstones, Vice Lords, Latin Kings, Bloods) and the Folk Nation (comprised of Gangster Disciples, Hoover Crips, Dieciocho). These mega-alliances brought together already large gang structures, and were created by gang leaders in prisons-- by imitation one right after the other in November 1978. These too were offshoots of the Civil Rights movement; they had explicit political or sometimes religious ideologies, keeping the peace among gangs to concentrate against white society. But criminal gangs are not well equipped to contest political power on the state level, and in practice the People Nation and Folk Nation operated as diplomatic alliances grouping many of the big gangs into two blocs, fighting each other. For the most part these mega-gangs just concentrated on displaying their identities by flashing symbols, and performed no coordinated action; leaders like Larry Hoover (who held the honorific title of Chairman) were figures of prestige rather than authorities giving orders.

Some mega-gangs pushed further. The Latin Kings began as an organization of Puerto Rican gangs in Chicago, with a breakaway eventually transferring its center to New York. The Latin Kings attempted a political hierarchy, with top leaders adopting titles like the Inca-- identifying with non-Western symbols, much as some of the big Chicago gangs identified with Islam. In the Puerto Rican neighborhoods of New York, Latin Kings ventured above ground, taking part in Puerto Rican nationalist rallies and political movements. Their move into legitimate politics was not successful, since it did not deter government officials from arrest sweeps and RICO prosecutions. This points up an important difference between even very large gang alliances and the various Mafias: Mafias prospered where they were able to corrupt government, acting surreptitiously and keeping their identity secret; whereas all of the American-style gang alliances have been very ostentatious, flaunting their colors and gang signs in public. This left them in no position to corrupt the government secretly; and it made them easy to identify when the government cracked down.

Multi-gang alliances were not organizational bases of the drug business. Their members might be in the drug business, but the mega-gang gave them little help in this respect. In New York in the 1990s, the Latin Kings became antagonistic to Dominican drug wholesalers who supplied Puerto Rican street dealers. With their increased political consciousness, the leaders of the Latin Kings recognized that the Dominican wholesalers were franchising out drug sales to another ethnic group, who received the smallest share of profits, and took the greatest risk of street violence and arrest. In effect, the Dominican dealers were capitalists exploiting Puerto Rican labor. The ideology of the Latin Kings shows that a large, politically conscious crime-org may actually oppose the drug business.

As Naylor [2004] has pointed out, wholesale production and distribution of illegal drugs is most effective when it is decentralized. It is prudent to employ isolated individuals or small groups, who have little information about supply chains as a whole. An expectable percentage of smugglers are arrested and their drugs confiscated; so a wholesaler needs to be decentralized, especially on both sides of an international border crossing. Similarly with the manufacture of raw materials into drugs such as heroin, cocaine, and methamphetamine: economies of scale are not desirable, due to the danger of detection and exploitation (either by the government or by rival crime organizations); many decentralized plants are better than a few large ones. Ethnic networks are often useful, since they provide a degree of cultural filtering to provide trustworthy connections, but to speak of a Dominican mafia or cartel would be misleading: its effectiveness is in staying decentralized. It is not surprising that a big, unusually centralized (and out-front) organization like the Latin Kings would be in low-end drug retailing, and exploited by a decentralized (and much more secretive) network of wholesalers (who are unified only by being Dominican).
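
To make the arithmetic of decentralization concrete, here is a back-of-the-envelope sketch in Python. The cell sizes, and the assumption that an arrested courier can inform only on the people he personally knows, are illustrative assumptions of mine, not figures from Naylor; the point is simply that the share of an operation exposed by any single arrest shrinks as the network is split into more independent cells.

    def exposure_if_one_arrest(cells, couriers_per_cell):
        # Fraction of the whole operation exposed if one courier is arrested
        # and informs on everyone in his own cell -- the only people he knows.
        total_couriers = cells * couriers_per_cell
        return couriers_per_cell / total_couriers  # equals 1 / cells

    # Centralized: a single group of 100 couriers who all know one another.
    print(exposure_if_one_arrest(cells=1, couriers_per_cell=100))    # 1.0 -> the whole chain is exposed

    # Decentralized: 20 cells of 5 couriers; one arrest exposes only 5% of the operation.
    print(exposure_if_one_arrest(cells=20, couriers_per_cell=5))     # 0.05

The same logic applies on each side of a border crossing and at each processing plant: many small, mutually ignorant units lose less to any one arrest or seizure than a few large ones.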

Large Corporate Hierarchical Gangs: an exception, but temporary

For a time during the 1960s through the 1990s, large hierarchic gangs built up, chiefly in Chicago. Among the most famous were the Blackstones (with various name changes) and the Vice Lords, each reaching 60,000 members. As we know from Venkatesh, who penetrated the middle levels of the organization, there was an unusual amount of top-down control: low-level members were paid salaries, assigned regular hours, and inspected by managers. Earnings went upwards through middle ranks to the top of the hierarchy, a board of directors who made considerable incomes and held property through their mothers or families. These corporate gangs were to a large extent based on the drug business, not only staking out particular territories (typically staircases in the public housing projects) but also controlling access to wholesalers. It wasn't all drugs; the corporate gang collected protection/extortion money, not only from illicit businesses such as prostitution but also from "gray-market" or "off the books" businesses inside gang turf, such as persons who operated hair salons, grocery stores, or other businesses without getting licenses from the city. It would be misleading to think of the corporate gang as a drug business that operated protection rackets on the side; the key is having sufficient political power to control a territory. If it has this power, it can turn it to any criminal use, including the drug business; but those are outcomes of the criminal organization, not determinants of it.

The Chicago corporate gangs are the major exception to my argument: crime organizations that are indeed centrally based on the drug business. But the big Chicago gangs were largely destroyed in the 1990s; not by eradicating the drug business, but by attacking the political vulnerabilities of the crime-org and its territorial base. Federal prosecutors used RICO indictments pioneered against the Mafia. Especially fatal was the city of Chicago tearing down the massive public housing projects in the black neighborhoods-- huge buildings that were effectively no-man's-land where the police would not venture. It was like a real-life experiment, changing the crucial variable. Once their protected turf was gone, the big corporate gangs broke up. Today the old gang names remain, but without economic clout or violent backup from the hierarchy; Gangster Disciples and others mingle on the streets, warily negotiating mutual threats on the level of small groups. [see forthcoming analysis by Joe Krupnick and Chris Winship]

The Mexican Cartels: Theory and Predictions

The situation of Mexican crime organizations is distinctive because there are so many organizations-- at least 11 major ones. The original cartels were in geographically distinct areas-- the Gulf cartel along the south-east Texas border; the Sonora cartel on the Arizona border; the Arellano Felix organization in Baja California; the Sinaloa cartel around the major port cities on the Pacific side such as Mazatlan. They were cartels in the true sense, dividing up non-competing territories. More recently they stopped being cartels, as they have invaded each other's territory. The name "cartel" serves as a term of convenience, but it is inaccurate. To call them "drug cartels" involved in a "drug war" is doubly inaccurate. Popular terms are impossible to eradicate, but they hide reality. Mexican crime-orgs are less like rival businesses than rival warrior states of the pre-modern era.

Over time, the situation of territorial cartels developed into multi-sided wars. This came about by a combination of political processes.

Because there are so many players, various alliances were possible. When any one crime organization attempted territorial expansion, counter-alliances were created. Some organizations sent expeditionary forces-- military aid-- to their allies, thereby also expanding their territorial reach, and provoking further defensive reactions. Alliances could change when the host organization felt threatened by the outsiders and made new alliances to throw them out.

A defeat in war, or arrests by the government, would result in a power vacuum. When one organization was weakened, outsiders could move into its territory; or a split would occur within the organization, creating a power struggle among new factions that might crystallize into separate organizations if there were no clear-cut winner. Thus a multi-sided struggle became even more multi-sided as time went on.

Examples: this happened when the Arellano Felix Organization in Baja California split after its older leaders were arrested; when the Sinaloa cartel was split by the Beltrán Leyva brothers; and when Los Zetas broke from the Gulf Cartel. The government acts as a complicating factor in the multi-sided situation, bringing the total number of players to 12 or more, depending on how many independent factions there are in the Mexican government. Above all, the government's strategy of decapitating the most prominent cartels kept stirring the mix; far from destroying organizations, it kept creating new opportunities for local factions and rival cartels to expand.

These multi-sided conflicts and repeated power vacuums push cartel members into switching sides. Such switches of allegiance are typical in multi-sided geopolitics; the history of European diplomacy is full of them, with the English, French, Germans, Russians and others switching between friends and enemies numerous times over the centuries. The Zetas have been the most aggressively opportunistic. They began as elite airmobile Special Forces set up by a Mexican government distrustful of its own regular military and police. Their very isolation made them amenable to being recruited as bodyguards by one of the quarreling leaders of the Gulf cartel in the early 2000s. Within a few years, as the Gulf cartel was weakened by the government offensive, the Zetas broke free of their underground employers and became a crime-org in their own right. This disloyalty is typical of what happens to paramilitary forces set up by governments outside of normal bureaucratic channels-- for instance in Africa and the Middle East [Schlichte 2009]. Not only are the Zetas not a cartel in the technical sense, but they lack a distinctive territorial base-- being tied neither to particular drug routes nor to local roots. They have expanded into anywhere in Mexico where there has been the hint of a power vacuum.

Similarly, vigilante groups set up to keep out intruders or punish criminals the weak state cannot touch can themselves turn into crime organizations. This has been the pathway of La Familia in Michoacan (far from the major drug routes, but including the Pacific port nearest to Mexico City). It started as a religiously-based rehabilitation movement for drug addicts (not unlike the Synanon movement in California during the 1960s and 70s, which evolved from a group psychotherapy drug cure into a religious cult and eventually into a violent organization). La Familia Michoacana (which means "the Michoacan family") gradually shifted from a local self-protection group, defending the region against incursions by the Zetas and others, into the business of protection/extortion and eventually into drug dealing itself. Local loyalties have remained especially strong, since La Familia not only maintained its public image as a protector but siphoned its criminal income back into supporting local churches and institutions. The pattern resembles the Latin American tradition of a populist revolutionary movement gone corrupt.

We now have a structural mechanism to explain a distinctive feature of the Mexican cartel wars: spectacular public violence. These are not just killings but tortures and beheadings; displaying corpses-- or parts of them-- in public places; leaving notes on the bodies giving warnings, insults, or explanations.

This raises a technical question in the sociology of violence. In order to torture and mutilate someone, it is necessary to capture them first. This is difficult to do. Many kinds of criminal organizations are unable to do this. American street gangs do not engage in torture and mutilation of their enemies, because their style of violence is brief confrontations or drive-by shootings. Street gangs have narrow territorial boundaries, and lack information about what happens outside their turf.

To capture an enemy requires stealth and planning; this requires information, and therefore informants. A high level of information is displayed also in extortion threats; for instance phone calls to victims telling them details of their families' daily routines.

The structural situation of multi-sided conflicts produces much switching of sides; and this generates useful informants with information about enemies' routines. Internal splits in an organization also create informants, since whoever leaves one faction can tell the other faction details about the routines of those they have left.

The situation worsens. As the cycle of defeats and victories, power vacuums, and side-switching goes on, informants themselves are targeted. Informants are the most important weakness of an organization; therefore it tries to deter them with spectacular punishments-- more tortures and beheadings.

Public violence has a second purpose: it can be used to send propaganda. Messages attached to bodies can proclaim that the killing was done to protect the local communities against terrorists, drug cartels, and criminals-- as in the propaganda of La Familia Michoacana. This is propaganda to create local legitimacy. Or the propaganda of violence can be used to intimidate possible enemies. But the intimidation is usually not definitive, since in a situation of multi-sided instability, side-switching inevitably happens and the cycle of spectacular violence continues.

What can we predict, using our comparisons among the political structures of crime organizations?

First: drug business is not the key determinant of what they will do. Drugs provide one source of income, and the variety of drug routes provides bases on which some-- but not all-- of the cartels originated. But the main resource of a crime-org is how much violence it can muster, and that depends on its political reach and strength of organizational control. Once the military/political structure exists, it can be turned to different kinds of criminal businesses, whether these involve running a drug business (or any other illegal business); or merely raking off protection money from those who run an illegal business; or extorting protection money from ordinary citizens, including kidnapping. A crime-org might start out in the drug business and shift its activities elsewhere, or vice versa. Randol Contreras has shown that in the 1990s, when the crack cocaine business dried up in the Bronx, former street dealers went into other areas of crime. In Mexico, when increased border surveillance cut into drug deliveries to the US, crime-orgs expanded into extortion and kidnapping for ransom. The Zetas, because of their organization as Special Forces, were less directly connected with the drug business itself; when they became independent of the declining Gulf cartel, they moved aggressively into more purely predatory uses of violence against other cartels' territories. This is not so much an effort to monopolize the drug trafficking business as a different political strategy, leveraging their special skill: highly trained military violence.

It follows that ending the illegal drug business-- whether by eradication or by legalization-- would not automatically end violence. Mexican crime-orgs could intensify other types of violent extortion (and so they have, with increased pressure on the US border), as long as they still held territories out of government control; and wars between the cartels would not cease. It is a non sequitur to argue that if the US would stop drug consumption, Mexican cartels and their violence would disappear.

A second prediction comes from historical similarities. The Mexican situation from 2000 onwards resembles the Sicilian Mafia wars that took place from the 1960s through the 1980s. The Sicilian mafias also engaged in corruption of local governments, and a system of protection/extortion covering all economic activities. Surveillance was provided by a large network of informants; and spectacular violence was used when mafiosi were challenged. As in Mexico, there were a large number of Sicilian Mafia families. The wars were multi-sided, both among the different mafia coalitions, and against the Italian national government.

The situation in Sicily was precisely what the US Mafia had organized the Commission to avoid: lengthy civil wars; public violence; and attacks on government officials. After the fall of the crime regime in 1920s Chicago headed by Al Capone-- who was too blatantly public about his political control and his machine-gun shootings in the streets-- the New York Mafia ruled by secrecy. There was no effective Peace Commission in Sicily, no centralized organization by which the top Mafia bosses exercised selective violence to keep discipline inside the organization.

The Prognosis: The Italian government won the Sicilian Mafia war; as the Mafia's local wars escalated into attacks on officials of the national government, the government counter-escalated until the Mafia was largely destroyed. Mexico could go the same path. It would be a long war, but winnable-- it took 20-30 years for the national government to reassert control in Sicily.

An alternative pathway could be a change in government policy. Since the PAN administration of President Calderón took office in 2006, the policy has been to pursue all-out war against all the crime organizations, resulting in at least 50,000 persons killed through 2011, with the yearly number rising to 15,000. It is possible that an electoral victory by the PRI, the long-time former ruling party known for its history of corruption, would result in a new policy of accommodation. Some of the more stable and least expansive cartels would be given tacit recognition in their regions, while the more aggressive and territorially expansive groups-- above all the Zetas-- would be targeted for extermination. The result might be something like the fate of the Russian mafia of the 1990s: ending their mutual turf wars, reducing conflict among themselves, and finding acceptance by corrupt government officials sharing in their wealth. In the case of Russia, the whole process was finished in about 10 years. Given that the intense cartel wars in Mexico have so far gone on for about 5 years, the Russian example suggests that a decline in violence within a few years might be feasible.


References


Code of the Street.
The Almighty Latin King and Queen Nation.
Chinatown Gangs: Extortion, Enterprise, and Ethnicity.
The Triads as Business.
The Stickup Kids.
Gangs and Society.

The Sicilian Mafia. The Business of Private Protection.
Andean Cocaine. The Making of a Global Drug.
Mexico: Narco-violence and a Failed State?
Islands in the Street.
Yakuza: Japan's Criminal Underworld.

Wages of Crime. Black Markets, Illegal Finance, and the Underworld Economy.
Five Families. The Rise, Decline, and Resurgence of America’s Most Powerful Mafia Families.
Vampires, Dragons and Egyptian Kings. Youth Gangs in Postwar New York.
Smack: Heroin and the American City.
In the Shadow of Violence: The Politics of Armed Groups.
Off the Books. The Underground Economy of the Urban Poor.
Gang Leader for a Day.
Violent Entrepreneurs. The Use of Force in the Making of Russian Capitalism

MOBY DICK AND HEMINGWAY’S BULLS: ON THE LEARNING OF TECHNIQUES OF VIOLENCE

Moby Dick is usually regarded as a novel of deep symbolism. No doubt this accounts for much of its literary appeal. But it is built on a practical observation. Herman Melville, through his experiences on whaling ships, recognized that a harpooned whale essentially kills itself. By running away, the whale dragged a boat-load of sailors for several miles until the whale was exhausted, and this eventually allowed the harpooner to close in and finish it off. A whale is much bigger and stronger than its pursuers; if it fought them head-to-head in the water it would win. But whales are not belligerent animals, and they are frightened, and this is what drags them to their death.

Moby Dick is a thought-experiment. Melville imagines what it would be like if a whale were as intelligent as a human. Instead of running away it would turn and fight. Moby Dick, the white whale, is scarred with harpoons still tangled on his back; these are wounds-- or trophies-- from previous encounters with humans, in which he always turned and wrecked the harpooners' boats. As literary critics have generally recognized, he is white to indicate he is nearly human. But no one in the novel explicitly recognizes wherein his humanness lies-- that he recognizes the tactic humans rely on to kill whales. The limits of humans' perceptiveness of animals come out in their seeing Moby Dick only as supernatural or diabolical (and in the case of the critics, as symbolic). Moby Dick is not necessarily malevolent, but he is intelligent enough to see that running away will kill him, and that his only chance is to turn and counter-attack.

In this respect, Moby Dick also illustrates a main principle of human-on-human conflict. Winning a fight generally begins with establishing emotional dominance; and most of the physical damage occurs after one side emotionally dominates the other (Collins, 2008, Violence: A Micro-sociological Theory).

Ernest Hemingway gives a parallel but much more explicit analysis of bullfighting (1932, Death in the Afternoon). A mature fighting bull is much bigger, stronger, and faster than the humans who try to kill him in the bull ring. The bull weighs 800 to 1000 pounds, runs faster than humans for short distances, and has horns that are as sharp and penetrating as the swords and lances bullfighters use against him. The only way humans can kill a bull (without resorting to guns or poisons, that is) is to wear him out. The team of bullfighters lures the bull one way and another by waving bright-colored capes and cloths on sticks at him, getting the bull to chase the human while getting out of the way of his horns (if the bullfighters are skilled and lucky) as the bull follows the lure of the cloth. Near the beginning of the fight the humans also stab the bull in the shoulders with lances and mini-harpoons so that the bull will spend the rest of the fight goaded into anger, and indeed looking a little like Moby Dick; this is also done to tire the bull of carrying his head high, where he can use his horns to stab a human in the chest. The bullfighter's main technique for wearing down the bull, however, is the fancy spins and side-steps with the cape, which not only cause the bull to miss the man-- and make the audience cheer-- but make the bull turn abruptly in his tracks. This is a way of using the bull's weight and speed against himself; the bull cannot turn in a radius less than the length of his own body, and if he tries, he twists his spine and eventually reaches a point where he can barely charge, and cannot keep his head up where he can kill the man with his horns. At this point, the matador lures the bull's head downwards with the bright red cloth in one hand, while he reaches in over the horns with a sword and kills him through a spot in the back of his neck.

Hemingway explicitly mentions that the bullfighters’ technique is like that of a big-game fisherman, tiring a big fish by letting it run on a rod until it is so exhausted that it can be hauled out of the water to die. He doesn’t mention Moby Dick, and presumably no fish have been intelligent enough to use Moby Dick’s tactic against the fisherman. But bulls appear to be good learners. In fact, Hemingway states, the cardinal rule of the modern bullfight is that the bull should never have fought a human before it enters the bullring. It has never seen the inside of a bullring before, nor a crowd, nor the bright-colored capes nor the bullfighters and their weapons. It follows the lure of the bright-colored cloths and misses the humans it aims at with its horns. It tires itself out chasing and dies when it is worn out.

For bullfighting, this is more than a thought-experiment, as Hemingway describes occasions where experienced bulls fight humans again and again. The first-rate bullrings in the big cities use only new, inexperienced bulls, but bullfights in smaller towns, and especially those where amateurs fight, use cheaper bulls-- used bulls. As with used cars, the principle is buyer beware, since used bulls are experienced bulls, and they have learned the tricks humans are up to [pp. 19-21, 94, 104, 111-114]. Hemingway mentions a bull that killed sixteen men and wounded another sixty; no humans were ever skilled enough to kill it in the ring, and it was eventually disposed of in a meat slaughter-house. Experienced bulls soon recognize that they are only being lured by the bright cape; they will stand still and refuse to charge, then pick out one particular human in the throng and chase him down, refusing to be distracted by the others, until the bull has caught his victim and tossed and gored him as many times as he can. The bull who has fought before no longer charges straight at the bright cloth, but chops sideways and cuts with his horns looking for the man behind it. Hemingway comments that even inexperienced bulls can learn during the course of a 15-minute bullfight, so that if a bull is not sufficiently worn down by the bullfighters' tactics he becomes increasingly dangerous and able to kill the man before the man kills him. The most intelligent bulls are the most dangerous, and a bull that has successfully gored a man gains confidence and aims to gore him again.

There is another way in which bulls learn their fighting skills, although this part has nothing to do with humans. Bulls out on the range fight with each other, head to head, using their horns like fencers, blocking and parrying; if one bull gets through and gores the other, he may not let it get up but keeps it off balance and gores it again until it is dead. Once he gets the momentum-- the dominance of energy and psychology-- the victorious animal may push his advantage to the death. If two sparring bulls develop their skills at the same rate, their fights end in stalemates, like skilled boxers who block all the dangerous blows and wind down in respectful equality. But a bullfight with humans is designed to end in death on one side or the other; and if a bull is able to get through the humans' tricks of distracting his attention with bright colored cloths, he has the skilled moves with his horns that he learned against other bulls. Bulls who are only two or three years old are not very dangerous yet, and novice bullfighters can use them to practice their own skills on; but five-year-old bulls have learned too much and only the best bullfighters are supposed to fight them (and vice versa).

Hemingway comments that a bullfight is not designed to be a fair fight or a sport, but a spectacle to show off certain human skills and generate emotions from the apparent risk of human death. Hemingway was a meticulous observer and did not take other people’s word for anything, but checked out the details himself. (In this case, he saw 1500 bullfights.) He was one of the great sociologists of micro-interaction, 30 years before Goffman and 50 years before we started examining human-on-human interactions in detail with audio recordings and now photos and videos. Not many sociologists have studied human-animal interaction (although recently an increasing number; see especially the works of Colin Jerolmack on pigeons). My chief caveat is about claiming that human-animal confrontations-- i.e. violence-threatening confrontations-- are the same as human-on-human confrontations. Humans confronting each other come up against a wall of confrontational tension/fear (ct/f), a tension arising from the hard-wiring in humans that makes us especially susceptible to rituals of mutual solidarity, Interaction Rituals in the specifically sociological sense. (This is very different from the way ‘ritual’ has been used by animal ethologists where it means genetically determined gestures of dominance and submission; see Collins, Violence, pp. 25-29). Successful instances of human violence come from getting around the barrier of ct/f, sometimes by chance, but also by techniques that persons skilled in violence learn to use.

The kinds of violence that Hemingway describes in the case of bullfights come close to the claim that both humans and animals can lose fights through fear, and through the dominant side taking advantage of the other side's fear (elsewhere Hemingway makes somewhat similar arguments in the case of big-game hunting: see especially The Short Happy Life of Francis Macomber). Apropos of bulls, Hemingway also claims that some animals and humans are just inherently braver than others (how would we prove this?); and that they get killed not out of being emotionally dominated but because of reckless moves they make out of bravery, which get taken advantage of by the other side's conscious anticipation and trickery (which is probably true). Thus the skilled human bullfighter finds it easier to fight a brave bull than a less aggressive but more intelligent, strategic bull. With big-game fish like marlins, the equivalent of bravery seems to be the energetic drive that makes them run and pull to their deaths. With whales, the animals seem easily put into a state of fear and avoidance; and for a whale to act like Moby Dick would take an unrealistically human quality of intelligence that tells him the best strategy is not to run but to use his superior strength in a counter-attack.

The comparison makes it puzzling why whales, which are regarded as closer to humans in their intelligence, are not as good at learning fighting skills (and seeing through the techniques that humans use on them) as bulls, of a species never regarded as very intelligent. The entire question suggests that we are dealing with multiple dimensions, and that neither a biological generalization across all species, nor a gradation of nonhuman-to-human intelligence, will get at the key sources of variation. In my own work on the micro-sociology of human violence, I have avoided generalizing beyond humans, because I have not systematically looked at the primary data on infra-human animal interactions; and judging from my own experience with what other researchers have said about human violence, I don’t trust many researchers to make a well-justified theory out of what first-hand observations actually show.

With Hemingway on bulls, and Melville on whales, we have a couple of careful observers who came to their own conclusions. I’m not prepared to go much further than this, but one point seems justified: skills at violence are learned, certainly by humans, and apparently by animals where we can carefully observe them in various situations. The main aspect of violent skill is a social skill: being able to pick out a favorable opponent, recognize the opponent’s weakness-- above all the moment of emotional weakness-- and in more highly skilled forms, to recognize the tactics the opponent is using and make allowance for them. Fighting bulls, who otherwise don’t seem very intelligent, are good at this, just as humans who fight bulls have collectively evolved a set of techniques for fighting animals who also have such capacities. Perhaps this is an unusual case in humans-vs-animals conflict. But it illustrates well the character of human-vs-human conflict in our own history.

THE INFLATION OF BULLYING: FROM FAGGING TO CYBER-EFFERVESCENT SCAPEGOATING

Bullying was once a fairly well-defined phenomenon. Recently the term has been expanded by journalists, politicians, and in popular expression. What difference does it make what we call these events? The word is being used to cover differing types of conflict, which have different causal paths, and thus very different implications for what to do about them, and for the damage done.

Traditional bullying is picking on network isolates-- victims who are lowest in the group status hierarchy, who lack friends and allies, and lack the emotional energy to defend themselves. Bullying is a repetitive relationship, the same bullies persistently domineering and tormenting the same victims. The classic version was in British boarding schools, where older boys were allowed to make a younger boy into a servant, carrying their books, cleaning their rooms, and generally deferring and taking orders. Nineteenth century school administrators regarded this system as a salutary way for boys to learn discipline; but it often intensified into maliciousness, physical abuse, and commandeering the younger boy’s possessions. Some boys became school bullies. [sources in Collins, Violence: A Micro-sociological Theory, chapter 4.] The system was called fagging and the younger boys were called fags; this was the origin of the slang term for homosexuals, although that was not its original connotation.

Bullying is not a single event but an ongoing relationship, i.e. a network tie with asymmetrical content: one side bullies the other, never vice versa. It has a specific network location: bullies are not at the top of the status hierarchy, but middle-ranking, not very popular themselves, but aggressors rather than victims. Bullying should not be confused with a dominance contest over who is the top-ranking male, which centers on the top contenders, and matches good fighters and leading personalities against each other. Bullying is exploitation by a particularly predatory type of individual from the middle against the bottom. In effect bullies make up for not very good social skills by picking on those who are even worse. A bully is not just anybody who fights; being a bully is a specialized role in the status hierarchy, and not a very honorific one.

Classic bullying arises in total institutions like prisons, boarding schools or camps. Key conditions are: there is no escape from close contact with the same set of people; reputations are widely circulated; and the split between control staff and inmates creates a code of no snitching which cuts off victims from protection by authorities. The totalness of institutions is a continuum; as the strength of these variables increases, we may expect bullying relationships to be more frequent.

Classic bullying should be distinguished from scapegoating, where everyone in the group gangs up on a single victim. Usually this is someone who is blamed for a community catastrophe, or otherwise becomes the center of hostile attention. Scapegoating tends to be a single-shot event, rather than an ongoing relationship. The scapegoat might be low-ranking, an isolate, new arrival or cultural deviant; but scapegoats can also be selected from the elite. This happens in scandals, where the secondary scandal-- threatening supporters of the scandalous individual with contagious blame if they don’t join the condemnatory majority-- can rapidly strip even eminent persons of support.

Scapegoating is not carried out by bullies seeking individual dominance, but is a genuinely mass-participation ritual of community solidarity, self-righteous Durkheimian unity at its least attractive. Scapegoating tends to arise in tightly integrated communities-- not the hierarchic ones characteristic of bullying; in complex societies, scapegoating requires a huge media frenzy to generate a comparable amount of focus and social pressure. On a smaller level, there is some evidence that girls focus their attacks (mainly verbal) on the lowest-ranking girl-- i.e. a collective action of the entire group against the bottom. This fits with females being more solidarity-oriented than males, using verbal and emotional attack to keep up group unity at the expense of a common target. In contrast, boys tend to fight it out over individual status at the top [studies reviewed in John Levi Martin, Social Structures, 2009, chapter 4].

What Isn’t Bullying?

It is misleading to refer to all kinds of personal conflict as bullying, even if it does happen in school or among young people. Bullying, as a repetitive, unequal relationship among individuals, where distinctive bullies target low-status isolates, has a very different structure and causality from two-sided fights. Among the latter are:

Individual honor contests: two rivals square off against each other, whether with fists, blades or guns, informally or under conventional rules like a duel. Honor contests are almost never top against bottom, because there is no honor to be gained unless you show you can beat someone of considerable prowess, or at least stand up to them. This is a reason why bullies have mediocre status at best.

Intergroup fights: horizontal struggles between rival gangs, ethnic groups, schools, or neighborhoods. These can be pretty nasty, in part because the antagonists tend to be mutually closed Durkheimian communities, so they have no moral compunctions against vicious tactics; on the verbal level, they are prone to derogatory stereotyping, including racial slurs. And because confrontational tension makes fighting difficult to carry out in real life, groups are most successful when they engage in ambushes, drive-bys, or ganging up on outnumbered members of an opposing group who happen to stray into vulnerable territory. Thus actual incidents between gangs or ethnic groups may have something of the look of bullying, where a stronger group beats up on a weaker one. News stories about a single incident cannot tell us whether it is bullying or not. Horizontal conflict is not a repetitive relationship of institutionalized inequality, but generally a sequence of alternating tactical advantages.

Another important difference is that inter-group violence chooses its targets as members of a group, not as low-status isolates. For this reason, intergroup violence is probably not as psychologically debilitating as being a bully victim, and may even give emotional energy and solidarity. In contrast to bullying, which leaves victims with very negative self-images, intergroup violence often gives members meaningful self-narratives-- one of the main attractions of belonging to a fighting group. [This is brought out vividly in Curtis Jackson-Jacobs, Tough Crowd: An Ethnographic Study of the Social Organization of Fighting; unpublished ms, UCLA.]

Some intergroup fights combine with aspects of bullying, where a weaker group is repeatedly attacked by a stronger one. Instances include majority black students attacking academically better-performing Asian minority students (e.g. in Philadelphia high schools in 2009-10). But although one side is dominant in the violence, there is an element of horizontal conflict as well, as the two groups compete with different resources-- violence vs. academic capital.

Insult contests: individuals bragging, boasting and making gestures about their alleged superiority to others. This can be done in a tone of entertainment and humor, or it can be hostile and malicious, attempting to establish emotional dominance; it can remain contained, or escalate in emotional tone and physical violence. Ethnographies of gangs and youth culture show a great deal of this. Insults can be part of a bullying relationship, where they serve to maintain emotional dominance, or to provoke the victim into futile and humiliating outbursts; nevertheless many insults are not part of an unequal relationship. Moves in an insult contest are often reciprocal, and may be compatible with equality and even a ritualistic form of play producing solidarity. An observer cannot simply classify all insults as bullying, without seeing what kind of relationship they are part of.

Malicious gossip: This is a form of insult, but instead of being in your face, allowing the possibility of direct response, negative gossip is indirect. Gossip is felt to be more unfair, because it is harder to counteract. Nevertheless, malicious gossip is not necessarily bullying. It does not always, or even generally, take the form of attacks on those at the bottom; often it is an attack on those at the top, and on leaders of rival groups. Nor need gossip originate from bullying specialists (although it could-- persons who initiate malicious gossip might be structurally analogous to bullies, although we lack good data on this aspect of gossip networks). Most importantly, malicious gossip is often two-sided, between factions mutually attacking each other.

Research on children’s and adolescent status systems shows that girls tend to engage in more verbal attacks than boys. This is sometimes referred to as bullying, but before deciding, we need to examine the structure of relationships. Outcomes can be quite different, depending on whether the target is isolated, or herself a well-integrated member of a clique. Girls’ two-sided quarrels in the goldfish bowl of school or neighbourhood may well be the equivalent of gang fighting for boys, manufacturing a sense of excitement and meaningful narratives for their lives. How you experience this depends on your network location.

Homophobic attacks: conceptual confusions and real consequences

With increasing public focus on attacks based on sexual orientation, there has also been considerable muddying of what is actually going on. Homophobic attacks are bullying if there is a repetitive pattern of attacks by individuals or bully cliques on isolated, low-status individuals, and the target is homosexual. What if the target is not an individual but an entire group of gay persons? Most of the reported evidence among school children is not about group confrontations, but attacks on isolated individuals, although group violence sometimes happens among adults when a gay bar or hangout is attacked by homophobic outsiders. [For examples see Elizabeth A. Armstrong and Suzanna Crage, 2006, “Movements and Memory: the Making of the Stonewall Myth,” American Sociological Review 71: 724-751.] What difference does it make, if it’s all bad? From the sociology of violence we can infer that isolates are much easier targets than groups; unless there is an extreme imbalance in numbers or weapons, attacks by one group on another usually abort. Homosexuals are more likely to be harassed and attacked in classic bullying conditions-- a reputational goldfish bowl of a quasi-total institution, with isolated individuals at the bottom of the status hierarchy. A reasonable hypothesis is: where there are groups (i.e. real networks) of homosexuals in schools, they are less likely to be attacked.

What if the attack is not repetitive but an ephemeral incident? Again, what difference does it make? But differences in degree do matter; subjectively, most of the damage of being on the receiving end of a bullying relationship comes from the constant harassment, leaving a feeling of hopelessness and inability to act on one’s own volition.

Homophobic attacks where the entire community unites in ganging up on a victim are not bullying, but scapegoating. In schools, this ranges from mocking and jeering, to pranks (playing keep-away with a boy's things, stealing his possessions, locking him in his locker, dumping him into a trash bin), in a progression to varying degrees of violence. What difference does it make how we classify it? It makes a big difference in terms of practical counteractions. Scapegoating versus bullying is the difference between trying to change the entire culture and dynamics of a school, and trying to control or remove a small group of bullying specialists. It is the difference between a lynch mob (and the community structure and mentality that fosters it), and dealing with a small number of criminals, and not very popular ones at that. [On the structural conditions for lynch mobs, and the network relationship between them and their victims, see Roberta Senechal de la Roche, 1997. “The Sociogenesis of Lynching,” in W. Fitzhugh Brundage, Under Sentence of Death: Lynching in the South.] If the problem is homosexual bullying rather than homosexual scapegoating-- and most detailed evidence seems to indicate the former-- this is an easier problem, giving optimism for the future.

Another variant may be called pseudo-homophobic insult. Here homosexuals are not involved at all, but only invoked rhetorically. Such homosexual taunting operates as part of a repertoire of insult. It can be used both horizontally and vertically.

Insults are a major part of horizontal conflict, between rival groups or individuals [patterns summarized in Collins, Violence, chapter 9], and are more common than violence itself. Ethnographic literature on black gangs shows it is fairly routine to call someone ‘nigger’, either playfully or as a degrading expression. To call someone ‘gay’ can operate in a similar way. We lack good comparative evidence on the contexts in which this is likely to happen; but the following gives an example where pseudo-homophobic insult is routine. [personal communication June 2010, from Anthony King, Univ. of Exeter sociologist engaged in research on training and combat practices of US Marines and UK Marines.] In combat training to clear a building, British Marines line up closely one behind another in a “stack”, ready to fan out once they are inside. American Marines deride the stack as “gay”. This does not mean they literally believe the Brits are gay; they are critical, in part on practical grounds that the stack makes them more vulnerable to all being shot in the doorway, and even more so out of inter-service rivalry between elite, high-solidarity combat teams that are otherwise quite similar. The close bodily formation of the stack brings an ironic association with homosexuality as a readily available insult. The insult itself is a ritual of rivalry among equals.

Vertical pseudo-homophobic insult can be a way that boys mock low-status isolates. To call someone gay is a form of rhetoric, an escalated insult meant to be especially wounding. The perpetrator may or may not believe the insult to be true (another detail awaiting good research). The consequences can be severe; there is evidence that in a considerable proportion of school rampage shootings, the shooters are striking back at those who insulted them in this fashion [Katherine Newman et al. 2004, Rampage: The Social Roots of School Shootings]. But investigating cases of school rampages is sampling on the dependent variable. Some recipients of pseudo-homophobic insults do not strike back, but commit suicide. And if calling someone gay is a popular-- and therefore frequent-- form of insult, in the great majority of instances it must be the case that the recipient neither strikes back with murder nor commits suicide. Here the pattern is similar to the sociology of violent interaction: most conflicts go no further than verbal bluster. It remains to be found: what are the conditions under which pseudo-homophobic insults result in escalation, or not?

In both non-sexual violence and homophobic conflicts, there are many contingencies between occasions for taking offense and subsequent escalation. Some persons are more touchy than others, and this touchiness is sociologically grounded in situational dynamics and network positions. Understanding what makes a chain of events worse, or better, is not simply a matter of a static culture of homophobia. The interactional patterns and locations of the individuals involved are what is fateful for the paths they follow.

Research Methodology Makes a Crucial Difference

There are widely disparate reports on the amount of bullying in schools. Some recent reports reach as high as 80% of students claiming they are victims of bullying-- if true, this would be a huge break from the well-documented pattern of low-status isolates as victims. Detailed studies of traditional bullying found about 16-18% of second graders as victims, and 3-5% of 9th graders, with girls always showing lower rates than boys (Dan Olweus, 1993, Bullying at School). High estimates come from using survey questions that ask whether someone is subjected to being left out of activities, name-calling, rumours, teasing, sexual comments, threats, pushing or hitting [e.g. Bradshaw et al., “Assessing rates and characteristics of bullying through an internet-based survey system.” Persistently Safe Schools, 2006]. But we have no way of knowing from such answers whether these are two-sided fights, insult contests, or teasing games; or whether they fit the bullying pattern of repeated, asymmetrical aggression between specialists in domineering and isolated low-status victims. We can only tell the dynamics of bullying-- and other varieties of violence-- if we explicitly ask whether these aggressive actions are reciprocated; whether they are repeated, and between whom; and what the network positions of these individuals are in the status hierarchy.
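To make the criterion concrete, here is a minimal sketch of how such a classification could be operationalized; the names, incident reports, friendship nominations, and threshold are invented for illustration and are not drawn from any of the instruments or studies cited above.

```python
from collections import defaultdict

# Hypothetical incident reports: (aggressor, target) pairs collected over a school term.
incidents = [
    ("Dana", "Lee"), ("Dana", "Lee"), ("Dana", "Lee"),   # repeated, one-sided
    ("Sam", "Alex"), ("Alex", "Sam"),                    # reciprocated quarrel
]

# Hypothetical friendship nominations, used to gauge network position.
friendships = {
    "Dana": {"Sam", "Alex"},
    "Sam": {"Dana", "Alex"},
    "Alex": {"Dana", "Sam"},
    "Lee": set(),            # a network isolate
}

def classify(incidents, friendships, min_repeats=3):
    """Label an aggressor-target pair as 'bullying' only if the aggression is
    repeated, not reciprocated, and directed at a target with no friends in the
    network; otherwise call it a two-sided or ambiguous conflict."""
    counts = defaultdict(int)
    for aggressor, target in incidents:
        counts[(aggressor, target)] += 1

    labels = {}
    for (aggressor, target), n in counts.items():
        reciprocated = counts.get((target, aggressor), 0) > 0
        isolated = len(friendships.get(target, set())) == 0
        if n >= min_repeats and not reciprocated and isolated:
            labels[(aggressor, target)] = "bullying"
        elif reciprocated:
            labels[(aggressor, target)] = "two-sided conflict"
        else:
            labels[(aggressor, target)] = "one-off or ambiguous"
    return labels

print(classify(incidents, friendships))
# {('Dana', 'Lee'): 'bullying', ('Sam', 'Alex'): 'two-sided conflict', ('Alex', 'Sam'): 'two-sided conflict'}
```

The point of the sketch is simply that the same reported act is classified differently once reciprocity and network isolation are taken into account-- exactly the information the high-estimate surveys do not collect.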

As it stands, there is no good evidence that bullying is any more widespread than in the past; conceivably real bullying could be lower, as schools have become more control-oriented. What seems certain is that the appearance of an epidemic of bullying has been created by inflating the definition, so that it now includes all kinds of horizontal fighting, and indeed any negative expressions at all among school children.

Cyber-bullying

This is a recent journalistic term for insults, malicious rumours, and degrading images and videos spread via the Internet and other electronic media. The effects of high-tech character assassination can be very negative, including some widely publicized suicides. Since such instances constitute sampling on the dependent variable, however, it is not clear what proportion of the presumably vast amount of negative postings lead to what kinds of results. If nasty cyber-communication is very widespread (with surveys ranging from 20% to 50% of youth saying they have been targets: Sameer Hinduja and Justin Patchin, Bullying Beyond the Schoolyard, 2009; National Crime Prevention Council, report Feb. 2007), the majority of victims may simply accept it as the new normal and shrug it off-- or not, depending on their network position.

Is it bullying, or another kind of common conflict? Let us go through a brief check-list, reversing order and starting with the major types of two-sided fights:

Individual honor contests: conceivably the e-media can be used for public quarrels between two individuals, by means of which they get greater honor and eliteness. But the e-media are more widely participatory, giving audiences a chance to take part; unlike duels, where there is a sharp break between the audience who are supposed to keep their place, and the duelists in the center of attention, e-media audiences tend to spill into the fight itself, usually in an unrestrained and undignified way. Anonymity and lack of physical presence make it easy to do so. I conclude it is difficult to get elite status by fighting on the internet.

Intergroup fights: These horizontal brawls seem fairly common on the e-media. Much of what is called cyber-bullying may be of this sort.

Insult contests: In the early days of the Internet, so-called flame wars were common; given the opportunities for making long-distance connections with persons one is unlikely to ever meet, plus the use of pseudo-identities, insults were much more common than they are in everyday discourse. This is the opposite of what happens in real-life talk; conversation analysis (CA) concludes from a wide survey of evidence that there is a preference for agreement in face-to-face encounters. The social media make it easier to spread negative messages. Here intergroup fights and insult contests collapse into the same thing, since there is little one can do in a fight in cyberspace except make insults-- along with trying to release damaging information, or to hack the communications device itself, or its finances. IMPORTANT TANGENTIAL POINT: There is little in the pattern of hackers’ behavior to suggest they follow any of the conflict patterns I have listed. The topic of hackers remains a gaping hole in our sociological knowledge of the contemporary world.

Malicious gossip: Compared to direct word-of-mouth insult, indirect gossip has more of a Durkheimian quality, constructing the collective reality of a community defining an individual, especially if the network is large and its boundaries are vague, so circulated insults take on the anonymous and objective quality of “what everyone knows”. Gossip could be ganging up on an individual; but it is likely that much of the negative gossip spread by E-media is mutual recrimination among rival gossip networks. Probably most of this is horizontal, or even upward carping (as is most hacking), rather than the downward pattern of bullying.

Now for the forms of conflict that are asymmetrical, picking on an isolated individual:

Traditional bullying: there is little evidence of what proportion of cyber-negativity is repetitive aggression by habitual bullies against isolated, low-status victims too cowed to retaliate. Certainly there is no cyber equivalent of the original fagging pattern, where the bully made his victim into a servant, or a sexual slave as in prisons. Cyber attacks can hurt, but they seem incapable of forcing anyone to do the attacker’s will. From particular cases, we know there is some genuine cyber-bullying; some of it adds cyber-mediated insults and rumours to face-to-face harassment, jeering, pushing and hitting. How much bullying is exclusively on-line, without personal contact? Research is needed to tell us which is worse, and what difference it makes.

Note that mediated bullying, or at least harassment, is not new. The tools of harassment include anonymous telephone calls (probably most prevalent in mid-20th century; some data is available since this is a category routinely collected in police reports, even now). This should remind us that all communications media, historically, could be used for harassment-- further back in time, it was poison-pen letters.

The chief difference with cyber media is that they are so widely networked that negative rumours can spread very rapidly and leave permanent records, giving the victim the sense that a huge, impersonal collective consciousness has them skewered in its scornful attention. Thus the pressure of cyber-attacks may be stronger and more emotionally damaging than other kinds of mediated reputational attacks. But we don’t know this from systematic evidence; and it may not be true, since there are other kinds of exacerbating and mitigating factors in the realm of social relationships and resources. The research is yet to be done.

Scapegoating: the entire community, or a large segment of it, gangs up on a target of its outrage. This sounds like what happens when cyber-attacks go into a feeding frenzy, drawing in more and more participants. But maybe cyber-attacks only give the appearance of a community-wide feeding frenzy. Email can append long lists of recipients, and by carrying along a growing tail of previous messages, a dozen persons may generate the illusion of a huge number of messages, when in fact they are mostly recycling the same messages with additions.

I have personally observed such email cascades develop on a half-dozen occasions during my presidency of the American Sociological Association in winter-spring 2011. (Most of these were campaigns from inside ASA membership; one was an attack on the ASA and its leaders from a right-wing political movement.) This is not to say that these were all character assassination campaigns, but they shared the pattern of a flurry of messages being sent in a period of days, growing rapidly more importunate, denouncing a particular situation or policy and urgently demanding something be done. What I want to emphasize is a common pattern: once a critical mass was reached (sometimes after a slow start), messages came more rapidly and with more vehement content; but then the flurry dropped off again within 3-5 days. Upon careful inspection of each set of messages, I concluded that fewer than 20 people were sending and resending the great bulk of the messages. Especially during the up-phase, the process gave the impression that a huge and growing number of persons were involved, an exploding Durkheimian collective consciousness promising to engulf everything in its path. Where did this sudden bout of emotional enthusiasm and mutual entrainment come from, and why did it die off so rapidly? The timing of the emails (conveniently time-stamped for researchers) showed them getting closer and closer together, until the peak; thereafter, as the intervals between messages began to lengthen, the emotional urgency in their contents began to drop off precipitously, as did their numbers. At its height, a cyber-flurry exemplifies the “circular reaction” that Herbert Blumer and other collective behavior researchers have described for the flow of mutually supporting emotions in an excited crowd.
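As a minimal sketch of what such an inspection involves-- with invented senders and timestamps standing in for the actual messages, not the ASA data-- inter-arrival intervals can be computed from the time stamps to locate the peak of the flurry and to show how few senders account for the bulk of the traffic:

```python
from datetime import datetime, timedelta
from collections import Counter

# Hypothetical time-stamped messages (sender, timestamp); not actual ASA data.
start = datetime(2011, 3, 1, 9, 0)
messages = [
    ("A", start),
    ("B", start + timedelta(hours=10)),
    ("A", start + timedelta(hours=16)),
    ("C", start + timedelta(hours=19)),
    ("B", start + timedelta(hours=21)),
    ("A", start + timedelta(hours=22)),
    ("D", start + timedelta(hours=22, minutes=30)),  # shortest gaps cluster here
    ("A", start + timedelta(hours=26)),
    ("B", start + timedelta(hours=40)),
    ("C", start + timedelta(hours=70)),              # intervals lengthen again
]

# Inter-arrival gaps in hours: shrinking gaps mark the build-up of the
# circular reaction, lengthening gaps mark its die-off.
times = [t for _, t in messages]
gaps = [(b - a).total_seconds() / 3600 for a, b in zip(times, times[1:])]
peak_index = gaps.index(min(gaps))

# How concentrated is the traffic? Often a handful of senders do most of the posting.
sender_counts = Counter(sender for sender, _ in messages)

print("gaps (hours):", [round(g, 1) for g in gaps])
print("shortest gap follows message", peak_index + 1)
print("messages per sender:", dict(sender_counts))
```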

Although a set of people linked only through their computers or hand-held media devices lacks the physical co-presence that I have argued is a precondition for a successful interaction ritual, it can generate a high level of collective effervescence when participants ramp up their sending and resending of messages to a rapid rate. At peak moments, I felt the excitement myself, even though I was more on the fending-off side than the side mobilizing the cascade, finding myself anticipating the period of hours, then minutes, until the next message would come in. When the pace slowed down, so did my excitement. Rhythmic entrainment generates emotional excitement, amplified by Durkheimian solidarity, as if we are all collectively pedaling a set of interlinked bicycles together, mutually rushing toward a speed record. But as Durkheim noted, periods of collective effervescence are limited in time. The collective process of building excitement passes a second turning point (second after the critical mass of takeoff), where the sense of rushing forward together begins to flatten out, loses its enthusiasm and then begins to dissipate. The energy and entrainment depend on the sense that our collective enterprise is continually growing, adding more members (which, as noted, may well be an illusion of the cyber-format). This ephemeral community of communication relies on the sense of acceleration; when this palpably falls off, its jolt of emotional energy declines.

My inference is the following: cyber-bullying is really cyber-scapegoating, or rather a cyber-effervescent version in which a moderate size group of people become excitedly entrained in their common enterprise of trashing someone via E-media. Unlike bullying, where the chief link is between bully and victim, the former draining the latter of emotional energy and thereby getting a little status surge, in cyber-effervescent-scapegoating, the important emotional tie is among the posters of the negative messages. The victim is just a focal point, virtually a non-person, who serves only as content to circulate messages about.

A lurid example is the so-called “Kill Kylie” campaign. In 2004 a group of classmates ganged up on an eighth-grade girl in Vermont, proliferating websites and posts filled with homophobic attacks, and urging her to commit suicide. [www.deseretnews.com/schoolyard-bullying-has-gone-high-tech, August 19, 2006; analysis in paper by Jason Haas, University of Pennsylvania, 2011.] In my terminology, this was probably pseudo-homophobic insult, since it is unclear that Kylie was gay; nor did she appear to be low status or a social isolate.

The middle-class kids who tried to drive her to her death might seem unusually vicious. But their behavior, I would suggest, is largely to be attributed to the collective effervescence of the cybernetwork experience, very likely a higher emotional rush than anything they had experienced previously. In this respect they are like members of lynch mobs, who often describe their experience retrospectively as unreal, a lapse from their normal consciousness. (In his lectures at Berkeley in 1964, Blumer described in this way a lynch mob that he observed in Missouri in the 1920s; see also “forward panic” in Collins, Violence, chapter 3). The perpetrators probably experienced their network scapegoating cascade more as antinomian than as evil, an exciting alternative reality, a festive holiday from morality (AKA “moral holiday”) but bound together in an emotional community of primitive Durkheimian solidarity. In short, they were doing it for the shared buzz. In lay terms, as some survey evidence suggests, kids do cyberattacks because they are fun.

Girls take part in cyber-bullying more than boys [Hinduja and Patchin, 2009]. This is not surprising, since girls do more scapegoating than bullying. An ironic conclusion is that girls’ greater concern for group solidarity makes them more attracted to the effervescence of cyber-scapegoating. The other major form of cyber-troublemaking, hacking, is much more the province of boys, and is oriented not to their group but to the disruption they can cause to authoritative organizations.

Bottom Line

Many different kinds of conflicts can take place in closed communities like schools, both in direct confrontation and via old and new media. Bullying has the most severe results for its victims, chiefly because they are in isolated network positions. Other kinds of conflict may actually generate a good deal of solidarity and meaningfulness for participants, albeit at the cost of some physical casualties and organizational disruption.

But bullying can only be recognized if one knows the location of participants in their social networks. Teachers may not have a very good sense of the network and status structure that is the context for any particular event of name-calling, exclusion or violence. School administrators, who are even further from the action, are even less likely to know the social realities on the ground. To them, it may seem that the best policy is simply to ban everything that is the slightest bit aggressive or negative.

Kids themselves can generally tell the difference between the class bully being mean to an isolate, and playful teasing among friends, honor contests, or group rivalries. Officials trying to impose discipline by blanket orders, prohibiting everything short of ideal middle-class politeness, may get a certain amount of surface compliance-- if they invest enough resources in monitoring. But such authorities also convince the kids that adults are rigid doctrinaires, clueless about what is really going on. The result may be nothing worse than to reinforce the normal suspiciousness on the part of the youth underground against official authorities. More seriously, it may make some kids feel they are being unjustly punished for acts misunderstood by self-righteous adults, reinforcing a spiral of alienation and defiance that is a component of criminal careers.

The practical advice may not be easy to carry out, but it is this: learn the network structure of the group, and judge all conflict in terms of its location.

INTERACTION RITUALS AND THE NEW ELECTRONIC MEDIA

The question I am most frequently asked about Interaction Ritual (IR) theory is whether new electronic media are changing the conditions for IRs. After all, what allows an IR to be constructed is assembling human bodies in face-to-face interaction. Further ingredients are the rapid back-and-forth of micro-behaviors (voice tones and rhythms, bodily movements); focusing attention on the same thing and thereby recognizing mutual intersubjectivity; feeling the same emotion or mood. When these ingredients reach a sufficiently high level, they intensify through a system of feedbacks: emotions grow stronger; bodily gestures and voice patterns become closely coordinated, down to the level of micro-fractions of a second. A successful IR builds up to a condition of high entrainment in a shared rhythm that Durkheim called collective effervescence. At high levels, this is what humans experience as the most powerful force in their lives; it constitutes the great moments, and shapes their most deeply held views and values. Thus human action is oriented around the attractiveness of different situations of social interaction: we are motivated towards those that are more successful IRs, and away from those that are mediocre or failed IRs. The human world is organized as a landscape of centers of social attraction, repulsion, and indifference.

What happens, then, when more and more of our interaction takes place at a distance, mediated by mobile phones, text messages, computer posts to a network of perhaps thousands of persons? When interaction is mediated rather than face-to-face, the bodily component of IRs is missing. In the history of social life up until recently, IRs have been the source of solidarity, symbolic values, moral standards, and emotional enthusiasm (what IR theory calls Emotional Energy, EE). Without bodily assembly to set off the process of building IRs, what can happen in a mediated, disembodied world?

There are at least 3 possibilities. First, new kinds of IRs may be created, with new forms of solidarity, symbolism, and morality. In this case, we would need an entirely new theory. Second, IRs fail; solidarity and the other outcomes of IRs disappear in a wholly mediated world. Third, IRs continue to be carried out over distance media, but their effects are weaker; collective effervescence never rises to very high levels; and solidarity, commitment to symbolism, and other consequences continue to exist but at a weakened level.

Empirical research is now taking up these questions. The answer that is emerging seems to be the third alternative: it is possible to achieve solidarity through media communications, but it is weaker than bodily, face-to-face interaction. I argued in Interaction Ritual Chains, chapter 2 [2004] that mediated communications that already existed during the 20th century-- such as telephones-- did not replace IRs. Although it has been possible to talk to your friends and lovers over the phone, that did not replace meeting them; a phone call does not substitute for a kiss; and telephone sex services are an adjunct to masturbation, not a substitute for intercourse. When meaningful ceremonies are carried out-- such as a wedding or funeral-- people still assemble bodily, even though the technology exists to attend by phone-plus-video hookup. Research now under way on conference calls indicates that although organizational meetings can be done conveniently by telephone, nevertheless most participants prefer a face-to-face meeting, because both the solidarity and the political maneuvering are done better when people are bodily present.

The pattern turns out to be that mediated connections supplement face-to-face encounters. Rich Ling, in New Tech, New Ties [MIT Press, 2008] shows that mobile phone users talk most frequently with persons whom they also see personally; mobile phones increase the amount of contact in a bodily network that already exists. We have no good evidence for alternative number two-- solidarity disappearing in a solely mediated world-- because it appears that hardly anyone communicates entirely by distance media, lacking embodied contact. It may be that such a person would be debilitated, as we know that physical contact is good for health and emotional support. The comparative research still needs to be done, looking at the amount of both mediated and bodily contact that people have; moreover, such research would have to measure how successful the IRs are which take place, in terms of their amount of mutual focus, shared emotion, and rhythmic coordination. Face-to-face encounters can fail as well as succeed; so we should not expect that failed face-to-face IRs are superior to mediated interactions in producing solidarity, commitment to symbols, morality, and EE.

Compare now different kinds of personal media: voice (phones); textual; multi-media (combination of text and images). Voice media in real time allow for some aspects of a successful IR, such as rhythmic coordination of speaking; voice messages, on the other hand, because there is no rapid flow of back-and-forth, should produce less solidarity. There is even less rhythmic coordination in exchanging messages by text; even if one answers quickly, this is far from the level of micro-rhythms that is found in mutually attuned speech, taking place at the level of tenths of seconds and even more fine-grained micro-frequencies of voice tones which produce the felt bodily and emotional experiences of talk. Adding visual images does not necessarily increase the micro-coordination; still photographs do not convey bodily alignments in real time, and in fact often depict a very different moment than the one taking place during the communication; they are more in the nature of image-manipulation than spontaneous mutual orientation. Real time video plus voice is closest to a real IR, and should be expected to produce higher results on the outcome side (solidarity, etc.), although this remains to be tested.

Many people, especially youth, spend many hours a day on mediated communication. Is this evidence that mediated interactions are successful IRs, or a substitute for them? I suggest a different hypothesis: since mediated IRs are weaker than bodily face-to-face IRs, people who have relatively few embodied IRs try to increase the frequency of mediated IRs to make up for them. Some people spend a great deal of time checking their email, even apart from what is necessary for work; some spend much time posting and reading posts on social network media. I suggest that this is like an addiction; specifically, the type of drug addiction which produces “tolerance,” where the effect of the drug weakens with habituation, so that the addict needs to take larger and larger dosages to get the pleasurable effect. To state this more clearly: mediated communications are weaker than embodied IRs; to the extent that someone relies on mediated rather than embodied IRs, they are getting the equivalent of a weak drug high; so they increase their consumption to try to make up for the weak dosage. Here again is an area for research. New kinds of electronic media appear rapidly, and are greeted with enthusiasm when they first spread, hence most of what is reported about them is wild rhetoric. The actual effects on people’s experience of social interaction are harder to measure, and require better comparisons: people with different amounts of mediated communication, in relation to different amounts of embodied IRs (and at different levels of IR success and failure); and all this needs to be correlated with the outcome variables (solidarity, symbolism, etc.)

Theory of IRs is closely connected with sociological theory of networks. Networks are usually conceived on the macro or meso-level, as if they were an actual set of connecting lines. But seen from the micro-level, a connection or tie is just a metaphor for the amount and quality of micro-interaction which takes place between particular individual nodes. What we call a “strong tie” generally means people who converse with each other frequently about important matters-- which is to say, people who frequently have successful IRs with each other. A “weak tie” is some amount of repeated contact, but with less strong solidarity and emotion-- i.e. moderate IRs. With this perspective in mind, let us consider two kinds of electronic network structures: those which are node-to-node (an individual sends a message to another specific individual-- such as email); and those which are broadcast, one to many (such as posts on a blog or social media site). Popular social media in recent years have created a type of network structure that is called “friends”, but which differs considerably from traditional friendships that take place through embodied IRs.

Traditional embodied IRs can be one-to-one. This is typically what exists in the most intimate kinds of friendships, such as lovers, partners, or close friends. In Goffman’s terms, they share a common backstage, where the nuances and troubles of how they carry out frontstage social performances are shared in secret. In my formulation, close friends are backstage friends. An intermediate type of friend might be called a “sociable friend”, someone who meets with others in an informal group (such as at a dinner table or a party); here the conversation is less intimate, more focused on items of entertainment, or in more serious circles, discussing politics or work. Research on networks indicates that most people have a very few intimate friends (sharing backstage secrets), and perhaps a few dozen sociable friends.

What then is the status of “friends” defined as those with whom one exchanges posts on a social media site, typically with hundreds or thousands of persons? This is a broadcast network structure, not one-to-one; thus it eliminates the possibility of strong specific ties. In addition, because these interactions do not take place in real time, micro-coordination does not exist; no strong IRs are created. It is true that persons may post a good deal of detail about their daily activities, but this does not necessarily lead to shared emotions, at the intensity of emotional effervescence that is generated in successful IRs. Pending the results of more micro-sociological research, I would suggest that broadcast-style social media networks have generated a new category of “friendship” that is somewhere on the continuum between “sociable friend” [itself a weaker tie than backstage friend/strong tie] and “acquaintance” [the traditional network concept of “weak tie”]. The “social-media friend” has more content than an “acquaintance tie”, since the former gives much more personal information about oneself.

As yet it is unclear what the effects of this kind of sharing of personal information are. The information on the whole is superficial, Goffmanian frontstage; one possibility that needs to be considered is that the social media presentation of self is manipulated and contrived, rather than intimate and honest. This is nothing new; Goffman argued that everyone in traditional face-to-face interaction tries to present a favorable image of themselves, although this is mostly done by appearance and gesture, whereas social media self-presentation is based more on verbal statements, as well as photo images selected for the purpose. One could argue that Goffmanian everyday life interaction makes it harder to keep up a fake impression because flaws can leak through one’s performance, especially as emotions are expressed and embarrassment may result; whereas a social media self-presentation gives more opportunity to deliberately contrive the self one wants to present. It is, so to speak, Goffmanian pseudo-intimacy, a carefully selected view of what purports to be one’s backstage.

It is true that young people often post things about themselves that would not be revealed by circumspect adults (sex, drugs, fights, etc.). But this is not necessarily showing the intimate backstage self; generally the things which are revealed are a form of bragging, claiming antinomian status-- the reverse status hierarchy of youth cultures in which official laws and restrictions are challenged. Nevertheless, talking about illicit things is not the same as intimate backstage revelation. To say that one has gotten into a fight can be a form of bragging; more intimate would be to say you were threatened by a fight and felt afraid, fought badly, or ran away. (The latter is a much more common occurrence, as documented in Randall Collins, Violence: A Micro-sociological Theory, Princeton Univ. Press, 2008.) To brag about one’s sex life is not the same as talking about the failures of a sexual attempt (again, a very common occurrence: David Grazian, On the Make: The Hustle of Urban Nightlife, Univ. of Chicago Press, 2008). The antinomian selves posted by many young people are a cultural ideal within those groups, not a revelation of their intimate selves. In fact research here would be a good site for studying the contrived aspect of the presentation of self.

Let us consider now the relationship between IR theory and social conflict; and ask whether the new electronic media change anything. Consider first the personal level, as individuals get into conflicts with other individuals, or small groups quarrel and fight each other. In the early days of the Internet, people used to frequently insult each other, in so-called “flame wars”. The practice seems to have declined as participation in the Internet has become extremely widespread, and people configure their networks for favorable contacts (or at least favorable pretences, as in spamming). Insulting strangers whom one does not know face-to-face fits quite well with the patterns of violent conflict [Collins, Violence]: violence is in fact quite difficult for persons to carry out when they are close together, and is much easier at a distance. Thus in warfare, artillery or long-distance snipers using telescopes are much more accurate in killing the enemy than soldiers in close confrontation. Contrary to the usual entertainment media mythology about violence, closeness makes antagonists incompetent; they often miss with their weapons even if only a few meters away, and most antagonists are unable to use their weapons at all.

I have called this emotional pattern “confrontational tension/fear”, and have argued that this difficulty in face-to-face violence comes from the fact that violence goes against the grain of IRs. Humans are hard-wired in their nervous systems to become easily entrained with the bodily rhythms and emotions of persons they encounter in full-channel communication; hence the effort to do violence cuts across the tendency for mutual rhythmic coordination; it literally produces tension which makes people’s hands shake and their guns fail to shoot straight. Professionals at violence get around this barrier of confrontational tension/fear by techniques which lower the focus of the confrontation: attacking their enemy from the rear; or avoiding the face and above all eye contact, such as by wearing masks or hoods.

Thus, to return to the case of conflict over the Internet, it is easier to get into a quarrel and to deliver insults from a distance, against a person whom you cannot see. Internet quarrels also have an easy resolution: one simply cuts off the connection. This is similar to conflict in everyday life, where people try to avoid conflicts as much as possible by leaving the scene. (The style of tough guys who go looking for fights applies only to a minority of persons; and even the tough guys operate by micro-interactional techniques which enable them to circumvent confrontational tension, especially by attacking weak victims. The fearless tough guy is mostly a myth.)

On the individual level, then, electronic media generally conform to larger patterns of conflict. What about on the level of small groups? Little groups of friends and supporters may get into conflicts in a bar or place of entertainment, and sometimes this results in a brawl. The equivalent of this in the electronic media seems hardly to exist. There are fantasy games in which the player enacts a role in a violent conflict--- but this is not a conflict with other real people; and furthermore it is in a contrived medium which lacks the most basic features of violence, such as confrontational tension/fear. Violent games only serve to perpetuate mythologies about how easy violence really is. In my judgment, such games are more of a fantasy escape or compensation for the real world than a form of preparation for it.

It is not clear from sociological evidence that gangs use the social media much. A major component of the everyday life of a criminal gang is the atmosphere of physical threat. Gang members do not statistically engage in a lot of violence-- contrary to journalistic impressions, murders even in very active gang areas happen at a rate of only about 1 per 100 gang members per year [evidence in Collins, Violence p. 373]-- but gangs spend a great deal of time talking about violence, recalling incidents, bragging and planning retaliation. Moreover, gang members have territory, a street or place they control; they must be there physically, and most of their contacts are with other persons in their own gang, or its immediate surroundings. Gang members are very far from being cosmopolitans, and do not have wide networks. Studies of network usage rarely show gang members involved. (There are some incidents of gangs monitoring a neighbourhood information network, to see who is away from home so that their houses can be burglarized; but this is more in the nature of using the Internet to locate victims, rather than for ties within the gang.) I would conclude that gangs are too concerned about maintaining a high level of solidarity inside their group, and with physical threat against outsiders, to be much concerned with weak-IR media.

Let us consider another level of conflict, that involving official or formal organizations. On one side are hackers, individuals who use their electronic expertise to hack into an organization, either purely for the sake of disruption, or for financial gain. Sociologists and criminologists know relatively little about hackers. They do not appear to be the same kinds of people who belong to gangs; as indicated, gangs are very concerned about their territorial presence, and are most concerned to fight against rival gangs; hackers seem to be from a different social class and are more likely to be isolated individuals. (This needs investigation-- do hackers connect with each other via the internet? Are they underground groups of close friends? Some are probably more isolated than others; which type does the most hacking and the most damage?)

On the other side, officials also use electronic media to attack and counter-attack. Leaving aside the issue of how organizations defend themselves against hacking and cyberwar, the point I want to emphasize is that official agencies of control have an abundance of information about individuals who most use electronic media. Social networking sites especially, where young people post all sorts of information about themselves, are open to scrutiny by police, as well as by employers and investigators; as many naïve youth have discovered to their disadvantage, their antinomian bragging can get them disqualified from jobs, or even arrested (for example, by contact with forbidden porn sites). Sociologically, it is best to conceive of the electronic media as a terrain on which conflict can take place between different forces. For many people, especially youth in the first flush of enthusiasm for new possibilities of connections and self-presentation, the electronic media seem to be a place of freedom. But this depends on the extent to which official agencies are constrained from invading the same media channels in search of incriminating information. Here the electronic media have to be seen in the perspective of surrounding social organization: political and legal processes influence how much leeway each side of the conflict has in being able to operate against the other.

The technology of the media is not a wholly autonomous force; it is chiefly in democracies with strong legal restrictions on government agencies that the electronic media give the greatest freedom for popular networks to operate. It is sometimes argued that network media favor social movements, allowing them to mobilize quickly for protests and political campaigns; thus it is claimed that the network media favor rebellions against authoritarian regimes such as China or Iran. But these same cases show the limits of electronic networks.

One weakness is that networks among strangers are not actually very easy to mobilize; social movement researchers have demonstrated that the great majority of persons who take part in movements and assemble for demonstrations do not come as isolates, but accompanied by friends; a big crowd is always made up of knots of personal supporters. It is this intimate structure of clusters in the network that makes political movements succeed; its absence makes them fail. Thus electronic media are useful for activating personal networks, but are not a substitute for them. (This parallels Ling’s conclusion about mobile phones: that they supplement existing personal contacts rather than replacing them.)

A second weakness of electronic networks for mobilizing political protests is that a sufficiently authoritarian government has little difficulty in shutting down the network. China and Iran have shown that a government can cut off computer servers and mobile phone connections. The more democratic part of the world can protest; and the commercial importance of the Internet gives the protests some economic allies. But mere disapproval from the outside has not been a deterrent for authoritarian regimes in the past. It is not at all impossible that a Stalinist type of totalitarian dictatorship could emerge in various countries. The multiple connections of the electronic media would not prevent such a development; and indeed a determined authoritarian government would find the Internet a convenient way of spying on people. Especially as the tendency of technology and capitalist consolidation in the media industries is to bring all the media together into one device, it would be possible for government super-computers to track considerable details about people’s lives, expressed beliefs, and their social connections. In George Orwell’s Nineteen Eighty-four (published in 1949), the television set is not something you watch but something that watches you, at the behest of the secret police. The new media make this increasingly easy for a government to do. Whether a government will do this or not does not depend on the media themselves. It is a matter for the larger politics of the society. In that respect, too, the findings of sociology, both micro-sociology and macro-sociology, remain relevant for the electronic network age of the future.

I will conclude with an even more futuristic possibility. Up to now, the electronic media produce only weak IRs, because they lack most of the ingredients that make IRs successful: bodily presence is important because so many of the channels of micro-coordination happen bodily, in the quick interplay of voice rhythms and tones, emotional expressions, gestures, and in more intense moments, bodily touch. It is possible that the electronic media will learn from IR theory, and try to incorporate these features into electronic devices. For instance, communication devices could include special amplification of voice rhythms, perhaps artificially making them more coordinated. Persons on both ends of the line could be fitted with devices to measure heart rate, blood pressure, breathing rate, perhaps eventually even brain waves, and to transmit these to special receivers on the other end-- each individual would receive physiological input electronically from the other person into his or her own physiology. Several lines of development could occur: first, making electronic media more like real multi-physiological-channel IRs; mediated interaction would become more successful in producing IRs, and could tend to replace bodily interaction since the latter would no longer be superior. Second is the possibility of manipulating these electronic feeds, so that one could present a Goffmanian electronic frontstage, so to speak, making oneself appear to send a physiological response that is contrived rather than genuine. Ironically, this implies that traditional patterns of micro-interaction are still possible even if they take place via electronic media. More solidarity might be created; but also it might be faked. Interaction Rituals have at least these two aspects: social solidarity, but also the manipulated presentation of self. The dialectic between the two seems likely to continue for a long time.


EMOTIONAL ENERGY AND THE CULT OF FREE WILL

Free will is a long-standing philosophical question. Although often regarded as intractable, the issue becomes surprisingly clear from the vantage point of micro-sociology, the theory of Interaction Ritual Chains. Every aspect of the free-will question is sociological. Will exists as an empirical experience; free will, however, is a cultural interpretation placed upon these experiences in some societies but not in others. Since both the experience of will, and the cultural interpretations, vary across situations, we have sociological leverage for showing the social conditions that cause them. I will conclude by arguing that our goal as sociologists is to explain as much as we can, and that means a deterministic position about will. Nevertheless, not believing in free will does not change anything in our lives, our activism, or our moral behavior.

I will not concentrate on philosophical arguments regarding free-will. A brief summary of the world history of such arguments is in the Appendix: The Philosophical Defense of Free Will. Its most important conclusion is that intellectuals in Asia were little interested in the topic; it had a flurry of discussion in early Islam, but then orthodoxy decided for determinism; free will was chiefly a concern of Christian theologians, and has become deeply engrained in the cultural discourse of the modern West.

One philosophical point is worth making at the outset, in order to frame the limits of what I am discussing. Most philosophical argument takes the existence of free-will as given, and concentrates on criticizing viewpoints which might undermine it. It is notoriously difficult to say anything substantive about free-will itself, and indeed it is defined mainly by negation. The same can be said about the larger question of determinacy and indeterminacy; most argument is about the nature and limits of determinacy, with indeterminacy left as an unspecified but often militantly supported residual. I will bypass the general question of determinacy/indeterminacy, with only the reminder that the indeterminist position in general does not necessarily imply free-will. A universe of chance or chaos need not have any human free-will in it. One long-standing philosophical argument (shared by Hume and J.S. Mill) is that for free-will to operate, there must also be considerable determinacy in the world, otherwise the action of the human will could never be effective in bringing about results. Some intellectuals today believe that sociology cannot or should not try to explain anything, since the social world is undetermined. If so, they should recognize they are undermining the possibility of human agency they so much admire. In any case, what I am concerned with here are the narrower questions: what is will; and why some people think it is free.

 

Three Modern Secular Versions of Free Will

There are three main variants of what we extol as free-will.

First, the individual is held to be responsible for his or her acts. This is incorporated in our conception of a capable adult person, exercising the rights of citizenship and subject to constraints of public law. Above all, the concept of free will is embedded in modern criminal law; for only if someone is responsible for his/her actions is it considered just to impose a criminal punishment. Similarly in civil law the notion of the capability to make decisions is essential for the validity of contracts. This is characteristic of modern social organization and its accompanying ideology: individual actions are to be interpreted in terms of the concept of free will if these institutions are to operate legally.

Responsibility in the public sense is related to self-discipline in the private sphere. Will-power is when you "resist temptation," keeping away from the refrigerator when on a diet, forcing yourself to exercise to stay in shape, doing your work even though you’d rather not. In both public and private versions, free-will of this sort is actually a constraint. You might want to do something differently, but you pull yourself together, you obey the law, you do what you know you should. One might describe this with a Freudian metaphor as "superego will"; under another metaphor, "Weberian Protestant Ethic will". Such will power has a psychological reality: the feeling of putting out effort to overcome an obstacle, the feeling of fighting down temptation. But is it free, since it so obviously operates as a constraint, and along the lines of the official standards of society?

A second, almost diametrically opposed notion of free-will is spontaneity or creativity. Here we extol the ability to escape from socially-imposed patterns, to throw off the restraints of responsibility, seriousness and even morality. This is private will in opposition to official will, or at least on holiday from it. This could be called "Nietzschean will", or in Freudian metaphor, "Id-will". Again one could question its freedom. The Freudian metaphor implies that such will is a drive, perhaps the natural tendency of the body to fight free of constraints and pursue its own lusts. Schopenhauer, whose metaphysics rests on the will as Ding-an-sich, explicitly saw will as driven rather than as free. Neither Greek nor Christian philosophy would regard spontaneity-will as free, but as bondage to the passions. The value of spontaneity is a peculiarly modern one, connected with romanticism and counter-culture alienation from dominant institutions.

A third conception of will is reflexiveness: the capacity to stand back, to weigh choices, to make decisions. Reflexive deliberation no doubt exists, among some people at some times. Some philosophical and sociological movements give great emphasis to reflexivity (including existentialism, ethnomethodology, post-modernism), but apart from intellectual concerns, it is not clear how much reflexivity there is in everyday life.

The three types of free-will are mutually opposed to each other; they are all distinctively Western and modern; and they all have moral loadings of one kind or another. It is easy to find a social basis for all three components: for moral ideals and commitments to self-discipline; for feelings of spontaneous energy; and for reflexive thinking. All three can be derived from the theory of Interaction Rituals.

 

Interaction Rituals Produce Varying Emotional Energy, the Raw Experience of Will

The basic mechanism of social interaction is the Interaction Ritual (IR). Its ingredients are assembly of human bodies in the same place; mutual focus of attention; and sharing a common mood. When these ingredients are strong enough, the IR takes off, heightening mutual focus into intersubjectivity, and intensifying the shared mood into a group emotion. Voice and gesture become synchronized, sweeping up participants into rhythmic entrainment. Successful IRs generate transituational outcomes, including feelings of solidarity, respect for symbols recalling group membership, and most importantly for our purposes, emotional energy (EE). The person who has gone through a successful ritual feels energized: more confident, enthusiastic, proactive. Rituals can also fail, if the ingredients do not mesh into collective resonance; a failed ritual drains EE, making one depressed, passive, and alienated. Mediocre IRs result in an average level of EE, bland and unnoticed.

EE is the raw experience that we call “will”. It is a palpable feeling of body and mind; “spirit” in the sense of feeling spirited, in contrast to dispirited or downhearted (among many metaphors for high and low states of EE). When one is full of emotional energy, one moves into action, takes on obstacles and overcomes them; the right words flow to one’s tongue, clear thoughts to one’s head. One feels determined and successful. But will is not a constant. Some people have more of it than others; and they have more of it at some moments than other times. The philosophical doctrine that people always have will is empirically inaccurate. And precisely because it does vary, we are able to make sociological comparisons and show the conditions for high, medium, or low will power.

Persons participating in a successful IR generate more EE, more will. Thus one dimension of variation is between persons who have a steady chain of successful IRs as they go through the moments of their days, and those who have less IR success, or no success at all. I have called this process the market for interactions; persons do better in producing successful IRs when they are able to enter bodily assemblies and attain mutual focus and shared mood; this in turn depends on cultural capital and emotions from prior interactions. IR chains are cumulative in both positive and negative directions; persons who are successful in conversations, meetings and other shared rituals generate the symbolic capital and the EE to become successful in future encounters. Conversely, persons who fail in such interactions come out with a lack of symbolic capital and EE, and thus are even less likely to make a successful entry into future IRs.

A second dimension of relative success in IRs comes from one’s position inside the assembled group: some are more in the center of attention, the focal point through which emotions flow; such persons get the largest share of the group’s energy for themselves. Other persons are more peripheral, more audience than a leader of the group’s rhythms; they feel membership but only modest amounts of EE. Qualities traditionally ascribed to “personality” or individual traits are really qualities of their interactional position.

IR chains tend to be cumulative, making the EE-rich richer still, and the EE-poor even less energized. But the EE-rich may stop rising, and even fall, depending on the totality of conditions for IRs around them. Extremely high-EE individuals have a trajectory that makes them the center of attention, not just in small assemblies such as two-person conversations, but as the orator or performer at the center of crowds. Persons who channel the emotions of large crowds by putting themselves in the focus of everyone’s attention are variously called “hero”, “leader”, “star”, “charismatic”, “popular”, etc. But charisma can fall, if crowds no longer assemble, or their attention is diverted by other events or by rival leaders. The strongest focus of group attention happens in situations of conflict, and it is in periods of danger and crisis that charismatic leaders emerge; but a conflict can be resolved, or the leader’s efforts to control the conflict may fail. Thus the social structure of conflict, which temporarily gives some individuals high EE, can also deprive them of those conditions and hence of EE.

In a conflict, will power consists in imposing one’s trajectory upon opponents; in Weber’s famous definition, power is getting one’s way against others’ resistance. This is true on the micro-level of individual confrontations in arguments and violent threats [how this is done is documented in Violence: A Micro-Sociological Theory]. Conflict is a distinctive type of IR in which someone’s EE-gain is at the expense of someone else’s precipitous EE-loss. In the conflict of wills, some find circumstances that give them even more will power, while others lose their will. Sociologically, will power is not merely one’s own; not an attribute of the individual, but of the match-up of all the individuals who come together in an IR. Religious and philosophical conceptions of will, abstracted from the real social context in which it is always found, create a myth of the individual will.

Famous generals, politicians, and social movement leaders are lucky if they die at the height of their energy; many fade away when the crowds no longer assemble for them, or they can no longer move the crowd. Napoleon, during his meteoric career as victorious general, government reformer and dictator, was noted for his extreme energy: taking on numerous projects, inspiring his followers, moving his troops faster than any opponent; he was known for sleeping no more than a few hours a day, in snatches between action. He got his EE by being constantly in the center of admiring crowds, in situations of dramatic emotion, focusing the energy of military and political organization around himself. Yet when Napoleon was finally defeated and exiled to a remote island, he lasted only six more years, dying at the age of 51. [Felix Markham, Napoleon] The swirl of crowd-focused emotion that had sustained him was gone; the interactional structure that had given him enormous powers of will in far-flung organization now deserted him, leaving him fat and indolent, eventually without the will to live.

Naïve hero-worship ignores the interactional structures that produce high-EE individuals in moments of concentrated assembly and social attention. Will is socially variable; and the IR patterns that give large amounts of will to some few individuals thereby deprive many others of having similar amounts of EE. Very few can be in the center of big crowds. Will power is not entirely a zero-sum game, since successful IRs can energize, in some degree, everyone who takes part in the enthusiastic gathering. But such hugely energizing gatherings are transitory; and moments of collective will become ages of remembered glory, because they are rare.

 

When is EE Experienced as “Free Will”?

EE is a real experience, and thus will power is an empirically existing phenomenon. One mistake is to interpret will experiences as solely a characteristic of the individual. As many philosophers have noted from Hobbes (1656) to O'Shaughnessy (The Will, 1980), you cannot will to will. The sociological equivalent is: the structure of social interaction, as you move through the networks in an IR chain, determines how much emotional energy you have at any given moment, hence how much will.

Another mistake is to identify “will” with “free will.” Across world history, experiences of EE are sometimes interpreted as free will, sometimes not.

Julius Caesar had a high level of EE. During his political career and military campaigns he was extremely energetic: fast-moving, quick to decide on a plan of action, confident he could always lead his men through any difficulty. Like Napoleon, he needed little sleep, and carried out multiple tasks of organizing, negotiating, dictating messages incessantly even while traveling by chariot. In combat, he wore a scarlet cape, letting enemies target him because it was more important to make himself a center of inspiration for his troops. His EE came from techniques not only of putting himself in the center of mass assemblies, but also of dominating them. A telling example occurred during the civil war when he received a message that his troops were mutinous for lack of pay. Although they had attempted to kill the officers Caesar sent to negotiate with them, he boldly went to the assembly area and mounted the speaking platform. The troops shouted out demands to be released from their enlistments. Without hesitation, Caesar replied, “I discharge you.” Thousands of soldiers, taken aback, were silenced. Caesar went on to tell them that he would recruit other soldiers who would gain the victory, then turned his back to leave. The soldiers clamored for Caesar to keep them in his army; in the end, he relented, except for his favorite legion, whose disloyalty he declared he could not forgive. [Appian, The Civil Wars; see also Caesar, Gallic Wars] Caesar was famously merciful to those he defeated, but he always reserved some for exemplary punishment. Caesar’s success as a general depended to a large extent on being able to recruit soldiers, including taking defeated soldiers from his opponents into his own army. Thus his main skill was the social technique of dominating the emotions of large assembled groups. He was attuned to the rhythm of such situations, playing on the mood of his followers and enemies, seizing the moment to assert a collective emotional definition of the situation.

In the language of modern hero-adulation, we would say Caesar was a man of enormous will power. But ancient categories of cultural discourse described him in a different way. Caesar was regarded as having infallible good luck-- whenever disaster threatened, something would turn up to right the situation in his favor. This something was no doubt Caesar’s style of seizing the initiative and making himself the rallying point for decisive action. But the ancient Romans had no micro-sociology; nor did they have a modern conception of the autonomous individual. Caesar was interpreted as possessing supernatural favor-- the kind of disembodied spirit of fate that augurs claimed to discern in flights of birds or the organs of animals in ritual sacrifices. In the ancient Mediterranean interpretive scheme, outstanding individuals were explained by connection to higher religious forces. Religious leaders were interpreted as mouthpieces for the voice of God. Hebrew prophets, pagan oracles, as well as movement leaders such as John the Baptist, Jesus, Mani, and Muhammad, were charismatic leaders in just the sense described in IR theory: speakers with great resonance with crowds of listeners, able to sway their moods and impose new directions of action and belief. But instead of regarding these prophets and saviors as possessing supreme will of their own, all (including in their own self-interpretation) were held to be vessels of God-- “not my will, but Thy will be done.”

EE arising from being in the focal point of successful IRs has existed throughout world history. When does this EE become interpreted as will inhering in an individual? And when does it become interpreted as “free will”? I have noted three species of “free will” recognized in the Christian/post-Christian world: self-disciplined will against moral temptations; spontaneous will against external restraints; and reflexivity in considering alternatives. All of these have the character of a conflict between opposing forces inside an individual. Free will is not just will power in the sense of Caesar or Napoleon being more energetic and decisive than their enemies, and imposing their will upon their troops and followers. Free will is not Jesus or Muhammad preaching moving sermons and inspiring disciples. These are phenomena of EE, arising out of a collective IR, in which the entire group is turned in a direction represented by the leader at the focal point of the group. Although such EE is the raw material of experiences that can be called “will”, it does not inhabit the conceptual universe in which the issue of “free will” arises.

Micro-sociologically, free will is an experience arising where the individual feels opposing impulses within him/herself. Consider the scheme of self-disciplined free will, doing the right thing by rejecting an impulse to do the wrong thing. Such conflict is serious when both impulses have strong EE; you want to drink, take drugs, have sex, steal or brawl; but another part of yourself says no, that is the way of the devil (immoral, unhealthy, illegal, and other cultural phrasings). In terms of IRs, on one side are interactional situations of carousing, drug-taking, hanging out with criminals, etc; and this IR chain has been successful in generating enthusiasm, appetite, drive for those desired things. [As argued in Interaction Ritual Chains, addiction, sex and violence are not primordial drives of the unsocialized human animal, but social motivations, forms of EE developed by successful IRs focusing on these activities.] These desires set up a situation for free will decision-making when there is another chain of experiences-- such as religious meetings, rehab clinics, etc.-- which explicitly focus their IRs on rejecting and tabooing such behaviors. IRs generate membership and thereby set standards of what is right; simultaneously they define what is outside of membership, and hence is wrong.

Conflicts inside an individual which set up the possibility of the free will experience must come from a complex social experience. Individuals are exposed to antithetical IR chains, some of which generate the emotional attractiveness of various kinds of pleasures; others of which generate antipathy to those pleasure-indulging groups. In an extremely simple society, such conflicts do not arise. In a tribal group where everyone participates in one chain of rituals, there is no conflict between rival sacred objects; no splits between different channels of EE for individuals. The individual self in such a network structure would simply internalize the symbols and emotions of a single group; there would be no divided self, no inner conflict, and no occasion for free will.

Self-discipline will can exist only when individuals participate in rival, successful IRs. Self-discipline as a moral choice probably originated in religious movements of conversion. Such movements are first found in complex civilizations; not earlier, since a tribal religion is not an option one joins, but a habitual framework of rituals that structures the activities of the tribe. Religious movements that recruit new followers, pulling them out of household and family-- for instance into a church that shall be “father and mother in Christ”-- put the emphasis on individuals, by their mode of recruitment, extracting persons from unreflective collective identities, and requiring them to make a deliberate choice of membership. This is one source of the conception of individuality, which grows up especially strongly in the Christian tradition. The emphasis on the moral supremacy of free will is heightened when there are rival movements, each seeking to convert followers. Augustine, around 400 A.D., is one of the first theologians to emphasize free will as superior to the intellect, since will is one’s power to choose among alternatives; in his autobiographical Confessions, he dramatizes his moments of rejecting his early carousing, and his conversion from the rival sect of Manicheans.

The doctrine of free will as a choice of the good against evil privileges key life-events, the moment of conversion from one intense IR community to another. But once established in a new IR chain of righteous rituals, there is relatively little tension, hence little of the peak experience of choosing one against the other. The choice of God and the righteous life can be kept in people’s consciousness in a pro forma way by sermons on the topic; and more strongly by church practices which raise the level of tension artificially, by preaching about the danger of temptation and back-sliding at any moment in one’s life, hence the need for continued vigilance and self-discipline. Missionary activities, by attempting to convert others, also ensure a chain of IRs focused on the boundary between the group of the righteous and that of the non-righteous. Revival meetings, where individuals are oratorically called to come forward in public to repent and be saved, use a large-scale IR to repeat the image of the fateful decision; here the repentant sinner becomes an emblem kept before the eyes of church members even if most of them are no longer wielders of free will but merely an audience.

Free will, as an act of self-discipline, is a cultural concept arising from the social experience of choosing the cult of the good against the cult of the bad. It is socially constructed above all by Christianity, as a religion of public conversion in mass assemblies. In secular, post-Christian society, this conception of free will has weakened, although structurally similar practices have carried over into the methods of rehabilitation programs and applied psychology. Health-conscious persons invoke will power to keep themselves from over-eating, or to promote exercising; even here, the source of self-discipline typically comes from social organization-- the diet counselor, gym, or exercise group which carries out IRs around the sacred object of health. Something approaching the intensity of religious rituals, with their righteous subordination of wrong behavior to right action, is found in political and social movements, especially those that gather for confrontational demonstrations that generate strong emotions. It is here that the secular conception of “free will”-- now sociologically labeled “agency”-- continues to extol the self-disciplined pursuit of group-enforced higher ideals, against the selfish pleasures of the non-righteous.

The major innovation in conceptions of free will dates from the turn of the 19th century, becoming prominent after 1960. I have called this spontaneity-will. It derives from the same antithesis as self-discipline will, but reverses the moral emphasis: official society now becomes the dead hand of coercion and emotional repression. Instead of converts pulling themselves up from the gutter into respectable society, its image is the neurotically self-controlling individual breaking free of convention, into spontaneity and freedom. Since the time of the Romanticist movement, and continuing through Freudian therapy and 20th century movements for sexual liberation, the emphasis has been on pleasure, precisely because it was prohibited. In the late 20th century, counter-culture movements idealized moments of intoxication-- whether from drugs, carousing, dancing, sex, or fighting. This has a structural base in its own IRs, especially the large popular music concert; in some styles, its IR locus is an athletic event, a clash of fans, or a gang confrontation.

Counter-culture conceptions of spontaneity are a type of EE, generated by mass IRs. The social technology of putting on successful IRs has changed throughout history; mass spectator sports and popular music gatherings are inventions, elaborated especially in the 20th century, for generating moments of high collective effervescence. Sound amplification, antinomian costumes, mosh pits, light shows, and other features enhance the basic IR ingredients of assembly, focus, shared emotion, and rhythmic entrainment. The ideology that goes along with the social technology of mass entertainment IRs is the ideal of opposition to the official demands and duties of traditional institutions. Rebellion-- or at least a break in the routinized conformity of straight society-- becomes the current ideal of free will.

Here again, the experience of “freedom” that individuals have depends on how much tension is felt between rival IRs. It is at the historical moment when a new feature of counter-culture rituals is created that there is the strongest sense of breaking away, of liberating oneself from traditional controls. Pop concerts, like sporting events and gang fights, can become routinized; they pay homage to an image of themselves as spontaneous and antinomian, even when they become local cultures of conformity. Analytically, the process is parallel to the self-discipline will of the old Christian/post-Christian tradition of conversion from evil to good: intense moments of choice are rare in people’s lives, but ritual re-enactment of the ideals of self-discipline found ways of keeping the drama before people’s eyes. In the cult of spontaneous will, early moments of rebellion are re-enacted in institutionalized form.

To summarize: the experience of will is real. It exists wherever successful IRs generate EE. Historically, those individuals who were socially positioned to have the most EE generally were not culturally interpreted as exercising free will, but were glossed with some depersonalized label, usually supernatural. The cultural category of free will was invented for moments of choice by individuals, in abjuring particular kinds of EE-generating IRs (those considered self-indulgently pleasurable), in favor of self-disciplined rejection of temptation. Following centuries of dominance by Christian and post-Christian disciplinary regimes, movements became prominent by the late 20th century, based on new mass-entertainment IRs, with an ideology defining freedom as the rejection of self-discipline. The fact that the two conceptions of free will are diametrically opposed seems ironic, but it comes from the conception of free will as a choice between two impulses within the self. Both sides of the choice are attractive-- and hence generate enough tension to make it a dramatic choice-- because each is grounded in successful IRs creating their own form of EE, which individuals carry within themselves. Ideologies, historically fluctuating as they are, can seize on either side of the conflict and extol it with the words of high moral praise, “free will”.

 

Reflexiveness and the Sociology of Thinking

What people think and when they think it is also determined by micro-sociological conditions. I have made this argument in detail in The Sociology of Philosophies and in Interaction Ritual Chains, chapter 5; here I will summarize key points bearing on free will.

Thought is internalized conversation; conversation is a type of IR, in which words and ideas become symbols of social membership. Successful IRs charge up symbols with EE, so that they come more easily to mind. Unsuccessful conversations deflate the symbols used in them, so they become harder to think with.

This is easiest to document for intellectual thinking, since the most successful thoughts come out in texts. So-called creative intellectuals-- those who produce new ideas that become widely circulated-- have distinctive network patterns, close to intellectuals who were successful in the previous generation, and close to intense arguments of new movements of intellectuals. What makes someone creative is to start by participating in successful IRs on intellectual topics, so that they take these ideas especially seriously, and internalize them. Since ideas represent membership in the groups who use them, intellectuals’ own thinking reproduces the structure of the network inside their minds. Such a person creates new ideas by recombining older ideas, in various ways: translating ideas to a higher level of abstraction; reflexively questioning them; negating some ideas and redoing the resulting combination; applying existing ideas to new empirical observations. These techniques for creating new ideas out of older ideas are also learned inside the core intellectual networks. Creative intellectuals learn from their network, not only what to think but how to think creatively-- they learn the art of making intellectual innovations.

Critics sometimes boggle at the sociological point, that creativity itself is socially determined-- surely, doesn’t creativity mean something new, that didn’t exist before? But just because something is new does not mean we can’t find social conditions under which it appears; and when we examine it-- as in studying the history of philosophy or other fields-- the new is always a rearrangement of older elements. Ideas that have no point of contact whatsoever with previous ideas would not be recognized by anyone else, and would not be transmitted.

There is a sociology of creativity, and it does not require the concept of free will. Will, yes-- the famous intellectuals are full of EE; I have called them energy stars. They exemplify the sociology of EE that comes from being at the core of networks of successful IRs, in this case, intellectual IRs.

Turn now to the paradigm of free will as reflective thinking, not for intellectual innovation, but for ordinary life-decisions. Just because someone thinks about alternatives does not mean that what they think is undetermined, an act of free will, beyond explanation. Psychological experiments show that people have typical biases in making choices; these are so-called non-rational choice anomalies [Kahneman, Slovic and Tversky, Judgment Under Uncertainty, 1982], since most persons do not calculate the way an economist says they should. Persons who have economics training, however, do use more of the prescribed methods of calculation. What this shows is not that economists have more free will, but rather that they follow a disciplinary paradigm, a type of social influence.

The classic free will model of thinking is: someone brings together alternatives and decides among them. Thinking is mostly carried out in sequences of words; and these phrases have a history in previous conversational IRs with others. [I neglect here thinking in images and non-verbal formulations; most likely, such thinking is even more clearly determined by socially-based emotions than verbal thinking.] Hence most of the time we don’t really weigh alternatives: some ideas are already much more charged with EE than others. Some thoughts pop into one’s head and dominate the internal conversation. Internal thinking thus reproduces the social marketplace of IRs.

Sometimes a decision is genuinely hard because alternatives on both sides are equally weighted by prior IRs. This can happen in two ways. In one version, you have been in strongly focused rival camps that think very differently, and so there are two successful IR chains pitted against each other. This is a situation of anguished decision-making, because symbols for both alternatives are highly charged and vie for attention in your thoughts. In other cases, the rival ideas are not very intense, because the situations in which they emerged were weak or failed IRs. Such ideas are hard to grapple with, hard to keep in mind; an attempted decision would be a vague and unfocused experience, maundering rather than decisive.

The practical advantages of free will, in the philosophical paradigm, are grossly exaggerated. On the whole, it is the individual who is not caught up in reflexivity, but who seizes the moment and throws him/herself into the emotional entrainment of the relevant IRs, who becomes the political leader, the financial deal-maker, the irresistible lover. Ironically, it is just those persons who manifest the least freedom in the philosophical sense who are extolled in our public ideology as the controllers of destiny.

It should not be taken for granted that, just because someone can pose a choice between alternatives in the mind-- the archetypal situation imaged by philosophers-- they will actually come to a decision. We lack sufficient research on what people actually experience, but no doubt on many occasions the decision-making process fails; he/she vacillates, is stuck, paralyzed in indecision. Micro-sociological indications are that high levels of self-consciousness about choices lead to irresolvable discussions, and thus either to paralysis or to a leap back into the stream of unreflective routine or impulse. Garfinkel's breaching studies [Studies in Ethnomethodology, 1967], which force people into reflecting upon taken-for-granted routines, show them floundering like fish out of water, and indeed placing considerable moral compulsion upon each other to get back into the unreflective methods of common-sense reasoning.

Norbert Wiley [The Semiotic Self, 1994] proposes that the parts of the self can mesh into harmonious internal IRs, creating solidarity among the parts of the self. When this happens, there is a feeling of decisiveness, what I would call self-generated EE, and thus the experience of “will”. In other social configurations, the parts of the self fail to integrate into internal IRs; Wiley notes that this is the process of mental illness, in extreme cases of dissociation among parts of the self, schizophrenia. At less intense levels of disharmonious inner conversations, the result may be described simply as a lack of will. Once again we see that all the experiences that we call will or free will vary by social conditions.

Whatever sociologists can study empirically, we can explain, by making comparisons. Research on internal conversation or inner dialogue is now proceeding, by various methods [e.g. Margaret Archer, 2003, Structure, Agency and the Internal Conversation; in some cultural spheres, such inner experience is socially shaped as prayer: Collins, 2010. “The Micro-Sociology of Religion.” Association of Religion Data Archives Guiding Papers. http://www.thearda.com/rrh/papers/guidingpapers.asp]. We will learn more about the conditions under which people have internal IRs that succeed and fail, and thus produce inner, self-generated EE.

 

What is Inconsistent about Denying Free Will?

Among intellectuals, philosophers have usually felt that denial of free will is incompatible with other necessary philosophical commitments. I deny this incompatibility. Recognizing a complete sociological determinism of the self changes nothing in everyday life, or in politics or in social movements, nor does it introduce any logical or empirical inconsistency.

Recognizing sociological determinism of emotional energy, of the structural constraints of all interactional situations, and of the network structure that surrounds us, does not make any of these phenomena less real. We are still subject to the up and down flows of emotional energy. Recognizing my own flows of emotional energy does not make me any less likely to join in a political movement, to become angry about a legal injustice, or to perform any other action that our networks make available. How could it be otherwise, since all instances of what are regarded as "free will" are also phenomena of EE arising at particular points in the network of social interactions?

A traditional philosophical argument is that people who do not believe in free will are fatalistic, passive, and lethargic; instead of doing good and improving the world, they loll around indulging in immoral pleasures or mired in poverty. This is a purely hypothetical argument, with no evidence to back it up. Historically, most people in most civilizations did not believe in free will; nevertheless, people in ancient Greece, Rome, China, the Islamic world, and elsewhere had emotional energy just as in the Christian West, and produced a great deal of political and economic action. The argument that we are superior movers of world history because of our free will conception is just another instance of cultural bias.

Is there an inconsistency, then, purely at the level of concepts? There is no inconsistency in the following statements: It is socially determined that people in some networks feel it is morally right to punish criminals harshly, and that people in other networks feel it is morally right to be lenient. Believing in free will is determined. Not believing in free will is determined. Feeling individually responsible (in a given social structure) is determined. Feeling collectively responsible (in a different social structure) is determined. Feeling alienated and irresponsible is socially determined (in the counter-culture groups of the last half century).

We may feel that it is unjust to be punished if one does not have free will. But this is not a logical inconsistency. It is consistent to recognize that the actions of punishment are as much socially determined as the actions which are being punished. If we call this unjust, then injustice is determined. It is also consistent to recognize social circumstances which lead some groups to attempt to eliminate punishments; whether such movements succeed or fail is also socially determined. There is no logical inconsistency in this. It offends our folk methods of thinking about our own moral and political responsibility, but those ways of thinking are also socially determined. Feeling offended by sociological analyses such as this is socially determined.

The final step is to fall back on epistemology, and to claim that social determinism undermines knowledge and therefore undercuts itself. Again I argue that this does not follow. A theory is true or not depending on the condition of the world, however one arrived at the theory. Symbols have reference as well as sense; discussions of truth belong to the former; the social construction of thinking belongs to the latter. Granted, I have not guaranteed that my theory of will is true; but it is consistent with a widely applicable theory of interaction grounded in a full range of micro evidence and historical comparisons. Social determinism, extending even to the sociology of the intellectual networks that produce such theories as this, does not imply that it must be false. It is only a prejudice that theories must somehow exist independently of any social circumstances if they are to be true. To argue thus is to take truth as a transcendent reality in the same unreflective sense as we take our popular conception of "free will."

To insist on the ontological reality of free will has been the source of philosophical inconsistencies. Philosophers have been willing to accept the inconsistencies, just as theologians retreated to the mysteries of faith, because of extra-intellectual commitments. We will have those extra-intellectual commitments in our lives no matter what we do. But there is no need to admit more inconsistency into our intellectual beliefs than we have to.

In today’s intellectual atmosphere, it is widely regarded as morally superior to be a believer in agency, and to treat any discussion of determinism with disdain. I suggest there is more intellectual boldness, more sense of adventure, in short more EE in going beyond agency. Once we have broken the intellectual taboo on treating everything, including human will and human thought, as subject to exploration and explanation, a frontier opens up. Quiet your fears; we lose nothing morally or politically by doing so. Moral commitments and political action will go on whether we think they are explainable by IR chains (or some improved theory) or not. And as a matter of experience, it is entirely possible for you, as an individual, to be a participant in any form of social action while holding the belief, in one corner of your mind, that what you are doing right now is the working out of IR processes. This is one of the things that makes everything interesting to a sociologist.

 

 

APPENDIX: The Philosophical Defense of Free Will

The predominant type of argument for free-will has been a defensive one. Necessitarian philosophies have made general claims, on logical, theological, and ontological grounds; libertarian philosophies have always been in the position of seeking an exception. There must be reasons supporting freedom of action, because it seems such an important part of the human condition, and the idea of lack of freedom is so repellant. In Hellenistic philosophy, the Megarian logician Diodorus Cronos and after him the Stoic school argued that every statement about the future is either true or false, and hence everything that will happen is already logically determined. Debaters stressed the point that this would leave the world to Fate, and undermine the motivation of the individual to do anything at all; to which the Stoic Chrysippus made the rejoinder that the actions of the individual are also part of this determined pattern.

We see here the general mode of argument: necessitarian arguments are put forward on grounds of logical consistency, and libertarians reply that they find the results of this reasoning unacceptable because it undermines deeply held values. When libertarian philosophers have tried to bolster this by positive arguments, they have led themselves into metaphysical difficulties. The Epicureans countered the Stoics by introducing into their atomistic cosmology a swerve of the atoms, which should allow for human will. But even aside from the difficulty of demonstrating how this consequence follows, their construction was seen as arbitrary special pleading. Similarly Descartes posited a willing substance operating in parallel to the body with its material causality; but the point of contact between the two substances remained mysterious and arbitrary.

The same kinds of problems arose in theological formulations. The metaphysical argument centering on the attributes of a supreme being leads with seeming inevitability to the power, omniscience, and foreknowledge of God, and these would appear to exclude human free will. Against this, primarily moral arguments were set forth: that humans should be responsible for their own salvation, or that God should not be responsible for evil. How to safeguard these beliefs without undermining God's absolute power has always been a conundrum. The orthodox Muslim theologians, after an early period of challenge from one of the philosophical schools, opted for the omnipotence of God in every respect. Christianity usually attempted to walk a tightrope between both sides, typically retreating from the claims of consistent reason into the claims of faith; free will was upheld theologically as a sacred mystery.

Secular philosophers have tended to compromise by asserting that freedom can coexist with necessity. Plato argued that all men desire to do what is Good; though this would seem to imply that every action is determined by the Good, Plato interpreted freedom as action of just this sort. Acting under the thrall of the passions would be both bad and unfree. Aristotle argued in parallel fashion that man as the reasonable animal is free when he follows reason rather than appetites. Kant displayed the traditional themes of Christian theology by locating free will both in the sphere of morals rather than of empirical causality, and in an epistemologically mysterious realm of the Ding-an-sich. This position proved unstable in philosophical metaphysics. Fichte almost immediately attempted to derive all existence from self-positing will. But Fichte's position too was unstable, and the more enduring form of his dialectic was that reformulated by Hegel, which expresses the usual metaphysical compromise in its most extreme form: the world is simultaneously the unfolding of reason and freedom, and freedom on the human level is identified with consciousness of necessity.

Most Western philosophers have felt it desirable to defend free-will, but at the cost of either metaphysical difficulties or of diluting the meaning of freedom. Spinoza is a rare instance of a philosopher who valued intellectual consistency highly enough to embrace complete determinacy. Spinoza responded to the metaphysical difficulties of Descartes' two substances by positing a single substance with thinking and material aspects in parallel, subject to absolute necessity. His argument was generally considered scandalous.

At best, libertarians have been able to counter deterministic philosophies by taking refuge in a skeptical attack on the possibility of any definite knowledge at all. Carneades in the skeptical Middle Academy argued against Stoic determinism on the grounds that not even the gods can have reliable knowledge of events. And twentieth century analytical philosophy, pursuing a skeptical tack, tended to poke holes in any definitive conceptions of causality. But skepticism cuts both ways, and post-Wittgensteinian philosophy (Ryle, Hampshire and others) has undermined the concept of "will" because it hypostatizes an entity -- "will" or an "intention" -- as a cause of subsequent actions. In actual usage, "will" or "intention" is merely a cultural category for interpreting an action in terms of its ends. In general, philosophical arguments have been unable to uphold free will directly; and the skeptical strategy of attacking the implications of the opposite position has tended to undermine free will as well.

Two patterns stand out: First, defenders of free-will have based their arguments heavily upon moral considerations to be upheld at all costs, even the destruction of philosophical coherence. And second, this concern for free-will is overwhelmingly a Western consideration. In fact, this is largely a Christian concern, although it is approached by some pre-Christian instances (notably the Epicureans). Plato, Aristotle and most other Greek philosophers were less concerned with human will than with issues of goodness and reason; Hellenistic debate was more focused on Fate and equanimity in the face of it. Freedom of the will was not much explicitly raised until the main tradition of Christianity was formalized in late antiquity; in fact, this is one of the features that differentiates the victorious Christian church from rival contenders such as Gnostic sects and Manicheans. Freedom of the will goes along with distinctively Christian doctrines of the soul and what humans must do for salvation.

Christianity did not end up monopolizing free-will doctrine until after the traditions of Mediterranean monotheism had turned several bends of their road. In the early intellectual history of Islam, a group of rationalistic theologians, the Mu'tazilites, argued for human responsibility and free will, and developed a philosophy of causality in an attempt to reconcile these with God's omnipotence (Fakhry, 1983). They were opposed by more popular schools of scriptural literalists, who declared only predestination compatible with God's power. The secular schools of rationalistic philosophy which emerged at this time were predominantly Neo-Platonist, continuing the typical Greek indifference to the free-will issue. The Mu'tazilites lasted about 200 years (800-1000 A.D.), whereafter predestination went unchallenged. The Sufi mysticism which became prominent after 1000, although opposed to scriptural orthodoxy, was oriented away from the self and towards absorption in God.

Thus Christianity is not the only locus of free-will doctrines. On the other hand, within Christianity the importance of free will has fluctuated. It seems to have been unimportant in early Christianity before doctrine was crystallized by the theological "Fathers" of the late 300s A.D. Again, the early medieval period emphasized ritualism and contact with magical artifacts such as the bones of saints. But the Catholicism of the High Middle Ages (1000-1400) strongly focused upon free will, and went to great lengths to explore the implications of free will as applied to God -- as if projecting human free will to the highest ontological level. Although the Protestant Reformation initially stressed predestination, this emphasis was soon undermined and Protestantism became very "willful."

In sum: Islam and Christianity both explored the free-will and determinist sides of activist monotheism, but came out with contrasting emphases. Islam began with many of the same ingredients as Christianity. It was during the urban-based and politically centralized Abbasid caliphate that the free-will faction existed, although even then it was never dominant. After the disintegration of the unified Islamic state around 950, the scriptural literalists and the Sufis came to dominate cultural space, and the doctrine of individual responsibility and free will disappeared. Social conditions appear to underlie both the period of similarity among Muslim and Christian doctrines, and the period in which they diverged, the one towards the free-will pole, the other towards determinism. (This is treated in greater depth in Collins, The Sociology of Philosophies, 1998, chapters 8-9.)

In contrast to the activist religions of the West, free-will is almost never a consideration within polytheistic or animistic religions; nor is it important within the Hindu salvation cults, nor in Buddhism, Taoism, or Confucianism. Buddhism, for example, builds upon the concept of karma, chains of action which constitute worldly causality; a willful self is one of those illusions which binds one to the world of name-and-form. Confucians argued at length about whether humans are fundamentally good or fundamentally evil, but not whether they could choose between them. Closest to an exception is the idealist metaphysics formulated around 1500 by the Neo-Confucian Wang Yang-ming. Yet although Wang's doctrine conceives of the world as composed of will, in a fashion similar to Fichte, Wang's emphasis is not on freedom but on the identity of thought and action in a cosmos consisting of the collective thoughts of all persons. These doctrinal comparisons show that free-will is insisted upon only where there is a high value placed upon the conception of the individual self. And that happens chiefly in Western social institutions.

Simmelian Numbers

3 -- The lowest sociological number. This is the cornerstone of Georg Simmel’s “The Significance of Numbers in Social Life” [1908]. 3 is the lowest number for social structures, because it makes possible coalitions of 2-against-1, creating a dimension which transcends the immediate 1-on-1. It also creates the sense of a group, since no one individual can destroy a group of 3 or more by leaving. John Levi Martin [Social Structures, 2009] argues that dyads are always experienced against the background of other possible relations. Dyadic withdrawal-- such as lovers turning to each other and ignoring the rest of the world-- is special precisely because there is a rest of the world, a third party that makes an outside against their inside. Sociologically, 2 is really derivative of 3.

ca. 7-10 -- The maximum group size for a multi-sided conversation. In informal sociable situations like a party or a group gathered around a dinner table, there is often a single conversation in which everyone is paying attention. Beyond size 7-10, the party breaks up into smaller groups carrying on their own conversations; or else it loses its informal character and turns into a single-speaker/audience (hub-and-spokes) structure-- for instance when someone makes a speech or is given an award, or a guest of honor monopolizes everyone’s attention. The number zone 7-10 is the borderline between informality and formality, as we observe it micro-interactionally.

ca. 30 -- The dividing line between the size of an audience where it is comfortable for a speaker to give an informal, spontaneous-sounding talk, and where it feels more appropriate to read a formal, written, pre-packaged speech. Below 20 or 30 people, it feels artificial to read a speech; above 30 or so, it is hard to talk casually -- the speaker is carried along by what Durkheim called the demon of oratorical inspiration. The borderline is higher for professional speakers, but these are persons who have had a lot of practice, and hence their speech is in fact pre-packaged. Announcers, actors, or modern-day politicians who sound comfortably casual before a large audience are putting on a frontstage performance of casualness based on a large amount of backstage training.

ca. 75 -- The Stark-Bainbridge breaking point for cult implosion. According to the analysis of Rodney Stark and William Sims Bainbridge [The Future of Religion, 1986] a religious cult recruited by a charismatic leader’s appeal can grow to about size 75; but to get beyond this size, it must delegate recruitment to disciples and their own networks. At size 75, the leader becomes smothered by the attentions of his/her followers, and is unable to spend much time adding more members. Cults of this kind implode-- they break off network connections with the surrounding society, turn in on themselves, and take their ideology to an extreme polarization between themselves and the world. This is organizational suicide for a social movement, since it can no longer grow; sometimes it literally leads to mass suicide.
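[Purely as an illustration, and not Stark and Bainbridge’s own formulation: the growth ceiling can be caricatured in a few lines of Python. Here the leader has a fixed budget of attention (set arbitrarily at 75), recruitment slows as existing members absorb that attention, and only delegated recruitment through members’ own networks lets the group keep growing. All the rates below are made-up assumptions.]

    def grow(delegated, years=30, attention=75.0, leader_rate=0.2, member_rate=0.02):
        """Toy growth model: leader-driven recruitment stalls near the attention ceiling;
        delegated recruitment through members' networks keeps compounding."""
        members = 5.0
        for _ in range(years):
            free_attention = max(attention - members, 0.0)  # leader is smothered as the group grows
            recruits = leader_rate * free_attention          # what the leader alone can add
            if delegated:
                recruits += member_rate * members            # members recruit through their own ties
            members += recruits
        return round(members)

    print("leader-only recruitment after 30 years:", grow(delegated=False))  # stalls near 75
    print("delegated recruitment after 30 years:  ", grow(delegated=True))   # passes the ceiling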


3-6 to 1 ratio -- The most dangerous number. Photos, videos and narratives of fighting [Collins, Violence: A Micro-Sociological Theory, 2008] overwhelmingly show the pattern of a group of between 3 and 6 members beating up a single victim. This is the pattern in riots, police beatings, gang fights, bullying and other close-range violence. In a riot, there may be large numbers of other people standing at a distance, but the actual beating takes place by little clusters of 3-6 against an isolated one. 1-on-1 is not a very dangerous ratio; most such confrontations are standoffs, or if there is an audience who encourages the fight, it usually is a restricted and rule-bound “fair fight.” The most vicious beatings, the atrocities of piling on and overkill, happen in the 3-6 to 1 ratio. 3-on-2 or even 6 or 8-on-2 are not very dangerous ratios; the minority here has the solidarity of backup, and the majority can’t get into the frenzy of emotional domination that they achieve with a sufficiently outnumbered single victim.

[The main exception to the 3-6 to 1 ratio is in domestic violence, where emotional dominance is usually established by an adult male who is much bigger, stronger, and more vehement than his victims. The dangerous number-ratio holds in public violence. Why the domestic arena has a different structure needs further analysis.]


Group-to-group numbers: (Here we are not concerned with the size of a group, but with the number of groups interacting in a field.)

2 -- The number of mutual enemies or factions at the moment of violent confrontation. No matter how many different opposing gangs, ethnic groups, social movements, or armies there are, when it comes to actual fighting, the structure simplifies down to 2 sides. Hobbes’ war of all-against-all is purely mythical; it has never been observed in any serious violence. [Water-splashing fights or food fights are an exception, but these are non-serious and playful in tone.] The polarization of violent conflict into 2 sides has the effect that only one line of difference can be recognized while the fight is going on; other issues get dropped or pushed into the background. This applies also to other kinds of intense conflict, such as political factions in a moment of crisis; thus skilled politicians manipulate coalitions by making some of their enemies into allies whose issues are temporarily submerged.

Why intense conflict polarizes to 2 has not been well explained; but violence is tense and confusing, and fighters seem to become too disoriented if they have to pay attention to more than one fight at a time. (See also Violence, chapter 7, for evidence on the one-fight-at-a-time limitation on fights in bars and entertainment venues.)

3-to-6 -- The Law of Small Numbers in long-term intellectual attention space. My evidence on networks of master-pupil chains of philosophers and other intellectuals [Collins, The Sociology of Philosophies, 1998] shows that if there is a period in history where new ideas are produced, there are always between 3 and 6 networks linking the creative figures of one generation to the next. There are always at least two or three major figures at the same time; if a single network dominates, it is not creative (because creativity is negating what exists, taking up an oppositional position in a field). There is an upper limit of 6 such networks; in exceptional periods where more than 6 schools thrive at the same time, several of them fail to recruit new followers and die out in the following generation.

Some version of the Law of Small Numbers appears to exist in other fields, such as art, music and literature. But the numbers at the upper limit may differ. Politics appears to operate more like a severe conflict field, with a tendency towards polarization into 2 factions at a moment of crisis; what happens in more routine periods of action still needs to be formulated in theoretical terms. In economic production markets, Harrison White [Markets from Networks, 2002] argues that no single production firm can create a market without having at least one major rival to define the business they are in; he implies there is an upper limit in the form of a diminishing tail of market share, dropping off sharply above about 6 or 8 firms.

These formulations of a Law of Small Numbers (or a family of such laws) are still primitive and need to take into account time patterns. Collins’s 3-6 Law of Small Numbers for intellectual networks deals with intergenerational networks which reproduce themselves for longer than 30 years. Numbers of competitors in an economic production market (such as personal computers in the 1980s) can be much larger; we need to specify the time period (which may be only a few years), and the dynamics that move us from one time-spread to another. Political parties and social movements go through periods when there are many small contenders; how long does it take for them to winnow down to a small number? Stefan Klusemann’s research [After State Breakdown -- Dynamics of Multi-party Conflict, Violence, and Paramilitary Mobilization, UPenn Ph.D. 2010] shows how large numbers of competing revolutionary and paramilitary movements over a period of 10-15 years consolidate into dominant movements like the Bolsheviks and the Nazis, creating an authority structure with the ideal-typical number of state monopoly: One.


Numbers in the structure of the self:
3 -- The triadic structure of the self. George Herbert Mead formulated this as I, Me, and Generalized Other; Norbert Wiley [The Semiotic Self, 1994] uses Peirce to reshape the triad as I, You, Me, and to embed a second reflexive triad inside the primary triad of interior dialogue. Ogden and Richards’ The Meaning of Meaning [1923] argued that any unit of significance must have the triangular structure of sign, object signified, and larger context of discourse-- a formulation made by different thinkers in various terminologies. The human self is reflexive because it incorporates a social viewpoint, along with its own action-viewpoint and an image of itself from outside. This underlines again Simmel’s point that 3 is the first sociological number. One might argue that an infant-with-mother is a primal dyad; but the baby grows into a social actor and human thinker by acquiring the 3-pointed structure of the self.

0 -- As Durkheim put it: “The individual, the zero of social life.”

Explaining why such Simmelian number-patterns exist will advance us deeply in sociological theory.