A Joyful Noise

22 March 1459:  Born this day, the young sprig of the Habsburg family who grows up to be Maximilian I, Holy Roman Emperor.

Max masterfully practiced the art of the dynastic marriage, both for himself and for his descendants, sweeping large swathes of Europe under either direct Habsburg sovereignty or collateral affiliation, most notably direct kingship over Hungary after the disaster of Mohács in 1526.  It was Hungary which provided the “and royal” tag in the Habsburg “imperial and royal” descriptor after the Compromise of 1867.  On the other hand, it was in large measure Hungarian intransigence which forever derailed what feeble attempts Franz Joseph and his advisors made to drag the empire forward as a viable geopolitical force.  I forget now which German senior commander (or was it a chancellor? I’ve slept since then) observed during the Great War that Germany was “shackled to a corpse.”  Magyar refusal to entertain any measure which might impair their oppression of the crazy-quilt of ethnicities within Hungary has to bear a good portion of the responsibility for the truth of that statement.

Gentle Reader will perceive how easily that for which we strive mightily, and sacrifice nearly all to defend once in our possession, can turn out to be a poison chalice in the end, after all.  Be careful what you wish for, I suppose.

Max also is a pretty good example of the Habsburg penchant for eccentricity.  He spent a large amount of effort on a couple of lengthy epic poems as well as a novel.  The purpose, in addition to patting himself on the back for being An All-Round Swell Guy, was to glorify what he presented as the traditions of chivalry and, more to the point, the Habsburgs’ role as principal exponents of ditto.  There is a fascinating history of the family which takes for its focus the means and media through which successive Habsburg rulers used their representation in the visual and written arts to establish, explicate, and fix in permanence their role and claims in the European power system.

History has been less impressed with Max as author than he might have desired.

What Maximilian did do, and what to this day remains an enduring legacy, perhaps his only enduring legacy, is the direction he gave to one of his court flunkies in 1498 to go hire, as a permanent fixture at court, some musicians and young male singers.  Just over 500 years later the Wiener Sängerknaben — better known in English as the Vienna Boys Choir — is still going.  Roughly 100 strong, they of course perform concerts in and around Vienna; they also split into four separate touring groups and travel all over the world performing.  A couple of years ago, one of them visited the city near where I live and as a bucket-list item I took my mother to see them.  They put on a pretty good show.

In addition to concerts at home and abroad, they also play a significant part in the cultural life of what has as good a claim as any to the title “Music City”.  Here’s a video including them performing at the 1989 funeral of Zita, the last Empress of Austria-Hungary.

[Here I will confess to a bit of a personal preference.  I understand that musicians must perform what their audiences want to hear.  Thus I do not take it ill of the Sängerknaben that so much of the program they presented that evening we saw them was newer settings of newer things.  But I prefer a greater homage to the towering music of the past.  I mean, let’s face it:  Just about anyone who can carry a tune in a dump truck — and I own that I am not among them, not at all, even a bit, by any standard — can sling together a passable setting of “contemporary” music, showtunes, and so forth.  It’s just not all that challenging.  The great music of the past, however?  That takes a bit more in the way of chops.  I prefer the focus of the Thomanerchor, which is even older than the Sängerknaben (they trace their roots back to 1212, I think) and which concentrates above all on the music of their one-time Kantor, one J. S. Bach.  Not to take anything away from their colleagues in Vienna; it’s just that I sort of wish they’d devote their undoubted talents to challenges more worthy of them.  Purely personal taste.]

Perhaps Maximilian did achieve his earthly immortality, and through the medium of art.  It just wasn’t his own, or even about him.  Irony will out.

Go make a joyful noise, in memory of H.I.M. Maximilian.

Neptune’s Inferno; or, “If You Get Hit, Where Are You?”

I finished reading this morning, while camped out in front of the (closed) Turkish Airlines counter at Dulles (they have one single flight out of here, at 11:10 p.m., and they don’t open their counter for check-in until 7:20 p.m., and you can’t get through security without a boarding pass, which you can’t get without check-in, and did I mention that all the restaurants in Dulles are on the far side of security and I’ve been here since 4:00 a.m.?), a book given to me for Christmas, Neptune’s Inferno, by James D. Hornfischer, a history of the naval battle for Guadalcanal, from early August through the end of 1942.

This is the third book of Hornfischer’s which I’ve read. I have his Ship of Ghosts, about the survivors of USS Houston. She was part of the ABDA fleet which was annihilated in the opening weeks of the war. She survived the first few battles only to come to grief in the Sunda Strait. She, in company with HMAS Perth, stumbled across the entire Japanese invasion fleet coming ashore in Java, including a destroyer force and a squadron of heavy cruisers covering the transports. Both Allied ships were sunk, each taking roughly half her crew with her. Both captains were killed in the action, Houston’s by taking a shell splinter that just about eviscerated him. Houston’s survivors ended up in no small part working on the Burma-Siam railroad line, the construction of which forms the setting for Bridge on the River Kwai. The battle was so sudden – the Allied ships hadn’t expected to come across hostiles – and occurred so deep in the middle of the night that Houston and her consort effectively just disappeared, as far as Allied high command could tell. It wasn’t until the end of the war that it was known anyone had survived, and who.

A couple of vignettes from that book.

One of the eventual survivors from Houston had his battle station in the mast top, manning a heavy machine gun with a Marine sergeant. As the ship was heeling over, on her death ride and with the order to abandon ship having been given, the sailor was getting ready to drop into the water (by that point the top was well out over the water), and he noticed the Marine wasn’t. Come on, let’s go, was the thrust of his observations. The Marine just pointed out that he couldn’t swim. So over the sailor goes, striking out with might and main to avoid the suction when the ship went down. He later recalled that among his last glimpses of Houston was the sight of tracers still pouring forth from the mast top, as the Marine fought his station to the very last. You can’t teach that kind of tough.

The other vignette speaks volumes about how the Dutch (who owned Java as of the war’s beginning) were viewed by the locals, and how the Japanese were viewed (at least as of that time). Houston sank so close to the beach that many of the sailors who got off in time were able without too much trouble to swim ashore. The current in Sunda Strait is pretty ferocious, but since the swimmers were swimming perpendicular to it, those who weren’t swept out into the open ocean were able to make shore. To a man they were turned in to the invaders by the local villagers who found them hiding in the woods, and it wasn’t out of fear of the Japanese. The Dutch had behaved in the East Indies much as the Belgians had in the Congo, and with very similar results, in terms of how the native population reacted when they had the chance for regime change. In short, the Japanese Greater Southeast Asia Co-Prosperity Sphere was very much not looked upon as being a cynical euphemism by its purported beneficiaries.

The third book of Hornfischer’s I have is The Last Stand of the Tin-Can Sailors, the story of the destroyers and destroyer escorts screening the light carriers whose job it was, during the Battle of Leyte Gulf, to cover the landing forces and provide in-shore close air support.  Admiral Halsey having been snookered into taking all his fleet carriers and all his heavy screening forces (he flew his flag in New Jersey, sporting nine 16″/50-cal guns) far to the north, well away from the critical focus of his actual mission, to chase Japanese carriers which weren’t carrying any planes – in other words, suicide decoys – all that was left to guard the San Bernardino Strait was a group of escort carriers, whose magazines were full of anti-personnel and other “soft” (in other words, not armor-piercing) ordnance, along with a squadron of destroyers and one of even smaller destroyer escorts. And here comes Admiral Kurita with the Center Force, consisting of the bulk of the remaining Imperial Japanese Navy heavies: battleships and heavy cruisers. It actually took them two tries to get through the strait; it was on the first effort that Musashi was sunk (her sister, Yamato, didn’t go on her own death ride until later). Kurita had turned back but then reversed course after all, and on the morning of October 25, 1944 (metaphor alert: this was the anniversary of Agincourt in 1415, when a badly outnumbered Henry V opened a can of whip-ass and flat smeared it all over the French – we few, we happy few, we band of brothers, anyone?) all that stood between him and the helpless American invasion fleet at anchor, frantically unloading the invasion force, were a dozen or so tin cans, with the escort carriers several miles further off.

Hornfischer uses the story of USS Johnston (DD-557), commanded by Commander Ernest E. Evans, to construct the narrative framework of the story. He was from Oklahoma, half-Indian (and so of course his Academy nickname was “Chief”). When he took command of Johnston, he offered any man in the crew who wanted off a transfer, no questions asked.

On that October morning, by chance Evans’s ship happened to be the closest in the formation to the Japanese battle line as it came out of the strait. Without waiting for orders, he turned his destroyer to engage a line of battleships and cruisers. Maneuvering at flank speed, he engaged with such of his 5″ mounts as could be brought to bear, chasing the Japanese shell splashes (on the theory that your enemy will have corrected his fire control solution away from that spot so he won’t hit there again) and trying to get close enough to launch his torpedoes. Chasing shell splashes only works if your enemy doesn’t figure out what you’re doing, and if there are enough people shooting at you, then you’re out of luck in any event; there’s no place to dodge to where someone’s not likely to drop a 14″ round onto your unarmored deck. Which is what happened to Johnston. She started taking large-caliber shell hits.

Evans gave the order to launch the torpedoes and then turned away to open the range. By that time all Johnston’s 5″ mounts were out of commission, the ship had been badly holed, was on fire, and was losing speed. As she steamed away from the Japanese, she came upon the other small boys, likewise riding hell-for-leather to engage the enemy battleships with their destroyers and destroyer escorts. Notwithstanding he had nothing left to fire at the Japanese, Evans turned Johnston around and went back into the fight. After all, Kurita had no way of telling she was a sitting duck; every turret that fired at Johnston was a turret not firing at a ship still capable of action. When last seen, Evans was standing on Johnston’s fantail, severely wounded (as I recall, among other things, he had a hand shot off by that point), shouting rudder orders down a hatch into the rudder room where crewmembers were manhandling the rudder, all other steering control having been shot away.

Evans received a posthumous Medal of Honor. And the small-ship Navy acquired an immortal example of gallantry.

They’re called “tin cans,” by the way, because that’s how easily they open up. When I was on an Adams-class guided missile destroyer back in the day, we had an A-6 that was supposedly bombing our wake for practice put a practice bomb onto us instead. The idea is they drop these dummy bomblets that have a saltwater-activated smoke flare in the nose into your wake, 500 yards or so astern of you. They’re aiming at the centerline of your wake and it’s easy to see how good their aim is. Well, this ass-hat, in the words of the JAGMAN investigating officer’s report – which I saw – “released his bomb with a friendly ship filling his windscreen.” This practice bomb weighed less than 10 pounds and, except for the smoke flare in the nose, was completely inert. A chunk of metal, no more and no less. It went completely through our ship. It penetrated a bulkhead on the O-2 level, blew up the Mk-51 fire control radar’s power panel, penetrated the O-2 level deck in that space, crossed the small office space beneath that and went through the far bulkhead out into the open air, penetrated the O-1 level deck, went across the main passageway (almost taking out our chief boatswain’s mate), penetrated the inboard bulkhead of the chief petty officers’ mess, ripped up their refrigerator, penetrated the far bulkhead back into open air, and would have kept right on going over the side except it hit the inboard side of one of the davits for the captain’s gig, and bounced back into the scuppers.

Neptune’s Inferno, as mentioned, deals with the specifically naval engagements of the Guadalcanal campaign. The Marines ashore make an appearance only to the extent of their interaction with the navy, consisting mostly of their outrage when, two days after the Marines splashed ashore, Vice Admiral Frank Jack Fletcher (most recently seen relinquishing command of the American carriers to Raymond A. Spruance half-way through the Battle of Midway back in early June 1942, when his flagship, Yorktown, was put out of action and eventually sunk) took the carriers, which were pretty much all the flat tops the Navy had in August 1942, away from the battle in order not to risk them against Japanese aircraft. It was a decision Admiral Ernest King never forgave him for (and for which he was relieved). From a strategic perspective it was the right choice. If those carriers had been put out of action at that point, the Navy’s operations in that entire portion of the Pacific would have been crippled. You can always get some reinforcements ashore, get some more supplies ashore. In fact the Japanese did more or less exactly that with the night-time runs of the “Tokyo Express”; because of the Marines’ Henderson Field on the island, and the back-up of the American carriers just out of reach of their land-based aircraft flying from Rabaul, they couldn’t make day-time landings or even use slower transport ships because they couldn’t get in, un-load, and be gone from the danger zone before the American aircraft would be back in the air the next morning. So they used destroyers . . . and managed to put well over 20,000 troops ashore, together with artillery and related supplies.

The Marines came to forgive the Navy, more or less, when the light surface forces (destroyers and the new anti-aircraft cruisers, bristling with 5″ rapid-firing guns) showed a gleeful willingness to plow up great swathes of Japanese-bearing tropical jungle. They’d literally hose out corridors through the undergrowth with their gunfire. No less than Lt. Col. Lewis D. “Chesty” Puller expressed his gratitude after having observed the fun from one of the firing ships. The sub-title of this post is his reply to his host’s reaction when, just prior to going back ashore, he observed to the captain that he, Puller, wouldn’t have the captain’s job for anything.  The captain was amazed; surely he’d prefer to have a shower and a bed when the day’s work was done?  Puller asked him, when he got hit, where was he, and then pointed out, “When I get hit, I know where I am.”

And then after the night-time surface actions all the bodies would wash ashore.

In the end, for every Marine who died defending Guadalcanal dirt, three sailors died defending its waters. USS Juneau, her keel already broken by a torpedo strike and shot all to hell, was limping away the morning after the Night Cruiser Action, on November 13, 1942, when a submarine found her. She literally disappeared in a single flash of explosion. Out of her crew of almost exactly 700, all of ten men survived. Among the dead were the five Sullivan brothers, of Waterloo, Iowa.

For all the valor of the surface navy – and the naval fight was overwhelmingly a surface fight; the airplanes were mostly consumed (and “consumed” is the word) defending Henderson Field – the senior leadership really comes across as bumbling, in Hornfischer’s telling. Most of the action went down at night, an environment the Japanese had spent years aggressively training to own. And they did, even without the benefit of search or fire-control radar, both of which the Americans had in abundance, and which all but one of the OTCs (officer in tactical command: the guy out on the water who’s actually ordering the formation and steaming directions, and controlling – supposedly – the action) studiously ignored. It started with the Battle of Savo Island (a gob of island several miles to the northwest of Guadalcanal proper), when a fast-moving Japanese cruiser squadron got the jump on not one but two American formations of cruisers and destroyers, and sent four out of five Allied cruisers (USS Quincy, USS Vincennes, USS Astoria, and HMAS Canberra) to the bottom in a maelstrom of fire lasting barely an hour from start to finish.

The eventual verdict on Savo Island (the waters between it and Guadalcanal acquired the nickname “Ironbottom Sound” by the time it was all over) was that the Americans simply had not been ready for combat, eight months after Pearl Harbor. They just didn’t know their craft. The Americans got a little of their own back off Cape Esperance when Rear Admiral Norman Scott was put in charge of a scraped-together force to challenge the night-time deliveries of the Tokyo Express. But for all of his drilling his ships in gunnery exercises (including off-set firing at each other, where two ships would shoot at each other’s wakes, or at target sleds towed by each other, much like that A-6 pilot was supposed to have done to my ship 40-odd years later), and all his aggressive instincts, even he couldn’t quite get it all in one sock when it came to a real, live, shoot-em-up night action. He bungled some maneuvering signals, put his flag in a ship which did not have the 10-cm search radar (a vast improvement over its predecessor; it was actually useful for running a naval fight, as was later demonstrated), and before anyone knew it, what should have been a smoothly unfolding fight turned into a chaotic slug-fest, with individual commanders more or less picking their targets of opportunity and seeing how many rounds they could pump into them. Scott’s forces did manage to cripple the heavy cruiser Furutaka badly enough that she went down. But it was otherwise an opportunity mostly lost.

Then the mistakes got worse. Rear Admiral Dan Callaghan, a real swell guy but a desk admiral, was put in charge of the cruisers, over Norman Scott, who – even if he’d stumbled a bit his first time out of the gate – at least had spent countless hours pondering the dynamics of modern naval action. There is not much indication that Callaghan did. He owed much of his advancement to senior rank to his connections, not least with FDR himself. During the Night Cruiser Action of November 13, 1942, he made an absolute pig’s breakfast of his formation, his handling of it, and his conduct of the battle. But he did have the decency to get killed that night, along with all but one of his staff and his flag captain (Cassin Young, who had won his own Medal of Honor at Pearl Harbor). Norman Scott was also killed that night. But the Americans bagged one of the two battleships the Japanese had sent.

In the aftermath of the Night Cruiser Action, the Americans had so few heavy surface forces left that Halsey finally decided to pull his two battleships – Washington and South Dakota – away from escorting carriers and transports, and shove them into Ironbottom Sound. And not a moment too soon. Admiral Yamamoto had decided to try one final all-out push to destroy Henderson Field through naval gunfire (they’d made a pretty good run at it back in October). This time the American commander, Rear Admiral Willis Lee, was a radar geek who knew exactly what use his radar could be. The Americans shot them all to hell and gone, saving Henderson Field and thereby guaranteeing that the Japanese simply could not maintain their forces on the island.

By the time the Japanese evacuated, many of their units had only a handful of men left who were not so starved or sick or both as to be completely out of action.

What I found interesting about the book, other than the very well-written narration, is the portrayal of William Halsey. In Last Stand, Halsey comes off as a blustering buffoon, who was so gung-ho to Get Him Some Carrier Scalp that he abandoned what was actually his principal strategic function – safeguarding the Leyte Gulf invasion – and but for the courage of the small boys could have cost the Americans enormously. Gentle Reader will also not overlook that it was during this time and shortly thereafter that Halsey came within an ace of losing not one but two battle groups to typhoons, by reason of his mismanagement of refueling. In Neptune’s Inferno he comes across as something of a naval cross between Nathan Bedford Forrest and Omar Bradley. Perhaps it’s the difference between 1942 and 1944. By the time of Leyte, Halsey had worn four stars for almost two years and was a fleet commander. Perhaps with Leyte he had risen to his level of incompetence.

In any event, Neptune’s Inferno is a tremendous read. Hornfischer does an excellent job of narrating surface naval action. This is more complicated than it sounds, I suggest. If you’re describing the Battle of Shiloh, for example, or the First Marne, you can hook your narrative onto place names that can easily be shown on a map in geographic relationship to each other. Not every author has this talent. The first time I tried to read August 1914 I gave up because Solzhenitsyn’s description of the run-up to Tannenberg is nearly unintelligible without a map to refer to (and then some time later I discovered that – in the very back of the book, exactly where you would not look for it – his publisher had put just such a map; it made all the difference in the world). In describing a naval surface action, however, all you’re left with is “port” and “starboard,” and it’s very difficult even to draw it out on a map because the relative positions of the ships to each other at any specific moment are of such critical importance. I think Hornfischer does as good a job of conveying the actual movements of the ships over the trackless water as anyone I’ve ever run across.

Can’t recommend too highly, in round numbers.

It Would Take a European to Concur in Both

The Frankfurt International Book Fair began recently.  It’s among the largest of its kind in the world and is regularly the setting for important doings in the world of literature and books.

This year’s fair was opened with an address from Salman Rushdie.  You’ll recall him; he was the author who found himself the subject of a fatwa in 1989 because some Islamic cleric didn’t like something he’d written.  For years he’s had to live quasi-underground, well-guarded.  Rushdie, by the way, is far from the only author who’s found himself the target of the Islamofascists; Ayaan Hirsi Ali, born Muslim and the victim of genital mutilation, has written extensively about what we may gently call Islam’s woman problem.  There is now a price on her head.  To show their understanding and support for her ordeal and her courage in speaking plainly and publicly, in 2014 Brandeis University first extended and then withdrew, at the request of an unindicted terrorist co-conspirator organization (which is to say, the Council on American-Islamic Relations), the offer of an honorary degree.

Be all that as it may, Rushdie seems to have spoken pretty plainly, and in favor of freedom of expression.  The link above is to The New York Times write-up of his address.  It contains only the most bland of his statements:  “Limiting of freedom of expression is not just censorship; it is also an attack on human nature.”  True enough.  But it wouldn’t be the NYT we know and love so well if they didn’t suppress things that didn’t support The Narrative.

So let’s go to the Frankfurter Allgemeine Zeitung’s coverage.  Rushdie categorically denied that freedom of expression is a culturally-specific human value; it is, he says, “universal.”  In fact Rushdie characterized as “the greatest attack” on freedom of expression exactly that conceit of Western thinkers that the freedom is somehow specific to Western culture.  Ouch.  He specifically called out the rising tide of bullshit “trigger warnings” on American campuses and the general intent and effect of political correctness, which he firmly placed among attacks on freedom of expression.  And he apparently didn’t spare the examples, calling out the law students who don’t want to read case books and other materials that use the word “rape,” or the Columbia University (!!) undergraduates who object to reading classical poetry because it depicts the gods having their way with women.  And so forth.  Rushdie also called out the “remarkable alliance between parts of the European Left and radical Islamic thinkers.”  When an ideology — Islam — labels itself a religion, its enmity towards women, Jews, “and others” (homosexuals? Christians? apostates?), for some magical reason, gets “swept under the rug.”

Rushdie pointed out that while authors who are truly persecuted seldom survive, their art lives on.  He named the examples of Ovid in the Roman Empire, Osip Mandelstam’s death in the GuLAG at the hands of Stalin, and one of Franco’s victims.  I will point out that he named no Western author . . . could that be because in fact we don’t kill our authors?  No matter how much they may bellyache about how awful it is to be black/Central American/homosexual/female, etc.?

In the FAZ‘s gloss, linked above, the author asserts that Rushdie’s address confronts the “error” that at the center of human freedom are well-being and “the good life,” in which each may do as much of what he pleases as he will.  To demonstrate that this is an “error” the author cites us to the characters of slaves in Roman comedies.  They run the household, they go shopping, they celebrate; yet they remain slaves, because everything is subject to the master’s reservation of approval (or not).  This demonstrates, says our newspaper article’s author, that freedom is not a hallmark of private action but rather of a political state of being.  And thus freedom of expression is the “test case” for freedom, because with “the impression that politics is more important begins self-enslavement.”  I do wish the editors had allowed the author to write at greater length, because I find those last sentences tantalizing.  Would it not be more correct to say that private actions are a hallmark of freedom?  In fact, the very notion of “private action” does not exist in the absence of freedom; Solzhenitsyn writes in his magnum opus of the politicization of sleep itself under Stalin.  What is more private than one’s opinions, formed from the processes of one’s own mind?  In other words, you cannot suppress opinion and expression without a receding, pro tanto, of freedom itself.

And here let’s pause again to point out that none of Rushdie’s points above made it into the NYT write-up.  Why not?  Well, what legacy media institution is more invested in precisely the kinds of self-censorship in the name of a political superstructure condemned by Rushdie than the left-extremists at the Gray Lady?  For them, the personal truly is political.

Well, so much for Salman Rushdie and his slap at the face of the apologists for Islamofascism.  From Tuesday’s FAZ we have another article, on a Pegida demonstration in Dresden.  The supra-headline is “Pegida radicalizes itself,” and for Exhibit A they trot out a photograph, at the linked article, of a toy gallows carried to the demonstration.  On it are two miniature hangman’s nooses, with — what? an effigy? a photograph? — no, with two placards reading “Reserved for Siegmar Gabriel” (actually they even misspelled his name: it’s “Sigmar”) and “Reserved for Angela Merkel” printed on them.  Take a real good look at the “gallows”:  You couldn’t hang a slab of bacon from it.  It’s a model, fer Chrissakes.

As Lutz Bachmann, the movement’s founder, correctly points out, every year during the Carnival parades around Germany there are many more explicit, and explicitly grisly depictions of currently-hated politicians.  Geo. W. Bush was a favorite target.

But hist! we must not allow this expression to stand, must we?  And sure enough, the prosecutor’s office is “investigating” the incident.  As of press time no name had been announced of who made or who brought or who was carrying the gallows and its — O! the horror — two placards.  And what is the alleged crime?  Breach of the peace through threat of criminal action, and encouragement to criminal action.  Really?  This toy gallows was being carried in the middle of a hetzed-up public demonstration; if the peace had been disrupted then precisely in what increment did that toy increase the disturbance?  And “encouragement”?  Where, exactly, is the encouragement?  Where exactly is there a statement that, “I’m going to hang Angela Merkel,” or “I want you to go fetch Siegmar Gabriel so I may hang him”?  How in the name of illogic can you get any further than, “I think Merkel and Gabriel should hang”?

Remind me again how this pursed-lipped investigation by the prosecuting attorney’s office squares with the paean to freedom of expression that was so praised when it came from Salman Rushdie’s mouth?

It’s hard to escape the conclusion that, no less than for the NYT, the commitment of Europe to freedom of expression has to be written down in the “pious platitudes” column.

Indictment or Lament?

A very dear friend of mine, whom I met years ago in New York City, is an Artsy Person.  By that I mean he has overwhelmingly made his living in and around the visual and aural arts.  Back in the day his day job was as an animator, and he played drums in a band at night (jazz and swing, mostly).  I’d met him through the Navy Reserve.  I went to see his band once, and among my favorite memories is of him sitting behind his drum set, slinging sticks into the air and flailing away (he’d cringe to hear me use that verb), wearing a USS Guadalcanal ball cap and a black t-shirt with a huge Bugs Bunny head on it.  Wrap your mind around those two organizing principles and you were well on your way to knowing and loving this buddy of mine.  He’s since transferred to the National Guard where he plays in the 42nd Division concert and parade band.

I haven’t heard him mention working in animation for years now, from which I deduce that the trend he commented on all those years ago — a combination of computer animation and out-sourcing any residual drawing to scut-work hack-shops overseas — finally killed enough of the industry here that he couldn’t make a go of it any more.  For years he kept up his band; now that he and his wife have moved upstate he doesn’t play in that particular band any more, either.  But he’s still very much engaged with the State of the Art (pun intended), and so he puts stuff up on his Facebook page from time to time on the subject.  His most recent post is of this article:  “The Devaluation of Music: It’s Worse Than You Think,” from a blog called Medium.

The overall thrust of this article is that American society at least (the foreign market is not addressed) has forgot how to value music, and not just in a purely monetary sense.  The upper and nether millstones of paltry royalties from streaming services and digital piracy get a look-in, of course.  The article’s central claim, though, is that we as a society simply no longer put forth the effort to integrate what the author calls “the sonic art form” into the fabric of who we are individually.

Which is to say, the author paints and protests the elision of music as an art from our culture.

My buddy’s Facebook post was, “…and THIS, folks, is one reason western civilization is doomed. The suits run EVERYTHING these days. No wonder I am a culture snob…”  I think that, with one exception, he trivializes the article’s point.  [Here I should note that at some point during the past couple of decades, my buddy went from being fairly conservative in his economics and politics, as well as socially tolerant, to being a pretty flaming quasi-Marxist and sucker for PC demagoguery.  That “the suits” are running and ruining everything is a steady background theme to much of his discourse.  He of course has a point, to some degree, but then it’s not an invalid point that the bills have to be paid by someone, and no one is in anything for free, and it’s the job of “the suits” to figure that part out.  I’ve never explored in depth with him the waystations on his journey, but the contrast between the friend I made and the friend I have is about as stark as you can imagine.  Emblematic:  About the first conversation with him that I can recall, all those years ago, he was ranting about how “the Masons” were controlling the world and everything was a Masonic conspiracy to X, Y, and Z.  He’s now a very committed Mason.]

The one exception mentioned is the pernicious influence of commercial radio.  From the article, in full, the relevant passage:

“It’s an easy target, but one can’t overstate how profoundly radio changed between the explosion of popular music in the mid 20th century and the corporate model of the last 30 years. An ethos of musicality and discovery has been replaced wholesale by a cynical manipulation of demographics and the blandest common denominator. Playlists are much shorter, with a handful of singles repeated incessantly until focus groups say quit. DJs no longer choose music based on their expertise and no longer weave a narrative around the records. As with liner notes, this makes for more passive listening and shrinks the musical diet of most Americans down to a handful of heavily produced, industrial-scale hits.”

Can’t argue with the author’s description of what happened, but I would suggest a more depressing take than his on why it happened.  The author seems to imply that how commercial radio changed was the product of conscious choice, which implies, of course, that a conscious choice could be made to return to the Good Old Days.

I don’t think the author has given due consideration to the realities of the world that gave rise to those Good Old Days, and how that reality has changed since then.  Consider:  Until the rise of the 8-track tape in the mid-1970s, the radio was your only source of third-party entertainment in a car.  Around the house, unless you wanted to pop for a great big bulky CRT television or expensive vinyl record player (the el-cheapo ones produced crappy sound that made anything other than The Archies absolutely unbearable) in every room, if you wanted entertainment or even just background noise in any room outside your living room, your choice came down to . . . radio.  Because more people listened to radio, any given radio station could afford to specialize, or experiment, or really be what it felt like being, and still make a go of it attracting only a smaller percentage of the total listening market.

What started to change in the late 1970s and early 80s?  The 8-track player and even more importantly, the automobile cassette tape deck, for starters.  Now you had a highly portable, large capacity (90-minute cassette tapes, anyone?) medium for the music you wanted, without commercials or other interruptions, that you could start, stop, pause, and replay at will.  Tired of Miles Davis and want to get your Mozart on?  Push the eject button, flip open a jewel case, shove in the new cassette, and in a matter of seconds you’ve gone from 20th Century jazz to 18th Century classical.  Radio just can’t keep up with that.  Beginning in the early 1980s you had fairly economical high-quality portable stereos that you could strew around the house, with one in the kitchen, one in the laundry room, one in each bedroom, in the basement, in the garage, in the shop building.  I’ve never seen actual numbers, but I’d bet someone else’s monthly income that the proportion of the U.S. population that regularly listened to radio began to plummet.

Nowadays you have inexpensive flat-screen televisions, iPods and similar devices, most of which you can now plug into your car even if they’re not built-in standard on even low-end vehicles, high-quality sound coming out of your laptop or desktop, etc. etc. etc.  And of course you can access hours upon hours upon hours of music, organized to be heard however you choose (listen straight through albums in sequence, or shuffle among albums, or shuffle among individual tracks, and of course with the ability to start, stop, pause, and replay at the touch of a button), and all in a highly portable format.  I’d be surprised if the proportion of radio-listeners hasn’t dropped even further.  And all we’re talking about is music alternatives to broadcast music radio; how about talk radio, after all?  Or subscription satellite radio, with its hundreds of channels?

So what’s a radio station to do, which has to meet its bills?  You’ve got to capture a greater share of a smaller audience.  And how do you capture a greater share?  You go after what most people like most of, most of the time — what our author describes as “cynical manipulation of demographics and the blandest common denominator,” to use the cacophemism.  That of course produces a feedback loop.  If you provide lowest-common-denominator fare, then the overall population’s preferences migrate toward that denominator, which means that there’s less to be gained from aiming outside that target area, which means that what’s provided gets even more relentlessly uniform.  And so forth.

Granting the article’s point that the proletarianization of broadcast radio is every bit as disastrous as presented, there remains a reason that enormous chunks of people quit listening:  Even a top-flight radio station simply cannot compete in control, quality, and choice with low-cost music storage and reproduction.  In my car’s CD player right now, I have Brahms, The Who, Don McLean, Jim Croce, Dietrich Buxtehude, and Mozart.  If I want to go back and listen to the Variations on a Theme by Haydn three times in a row, straight through, just because it almost moves me to tears, and then jump right on over to “Everybody Loves Me, Baby” because it makes me, a child of the 70s and 80s, chuckle, to be followed by “Gelobet seiest du, Herr Jesu Christ,” which was played at my wedding, and “Won’t Get Fooled Again,” which you can describe as the theme song of the Dear Leader Administration, I can do that, and there has never been and never will be any third-party provider/selector who can keep up with me.  The dynamic the author’s describing cannot be stopped or undone without going back to the days of the captive audience.  Very respectfully, I decline to endorse that proposal.

So much for the commercial radio angle, as to which my buddy’s complaint about “the suits” ruining everything is by and large valid.  Of course, whenever you complain about So-and-So Doing X, you must, if you are honest, describe what So-and-So ought to be doing other than X, and how So-and-So can make the house payment by doing Other-Than-X.  I’m not hearing that alternative universe outlined with any convincing detail.

The linked article then goes on to describe several other trends its author identifies as contributing to the devaluing of music, trends as to which I think he’s on very firm ground, but as to which I think the conclusions to be drawn are even more pessimistic than his own.  The author describes as “conflation” of music with other aural or video entertainment the trend of shoving music alternatives in with those other forms of entertainment.  Music is not presented as something precious in its own right, but rather as just one more item on an ever-lengthening menu of Stuff to Pay Attention To, More or Less.  Gentle Reader is reading this blog at the moment, no?  Gentle Reader could be watching a favorite movie streamed or on DVD, or be playing a video game either alone or live with other players around the globe, or be working on his/her own blog . . . or be listening to the sonic art form.  And all those options are just a click away from each other.

The article’s author decries the lack of what he calls “context,” or more prosaically, the absence of intelligent, useful, or thought-provoking liner notes to the music.  If Bach’s C minor Passacaglia is reduced to an icon on a screen, then without some extra programming there’s no way to pop open the liner notes (and this was a massive advantage of the CD format over others; you could get 20 pages or more of liner notes into the jewel case) and read as you listen.  Of course, this problem is actually among the most curable the author describes.  Computer memory is cheap, and with devices getting ever-more-closely linked to each other, both locally and over the internet, what would prevent me from writing the code to tap or right-click that icon on my screen and access not 20 pages, but an entire menu of “context”?  It could easily range all the way from scholarly treatment to comparative reviews (this performer’s interpretation of a classical piece, or a comparison of Miles Davis’s rendition of the piece on this recording relative to some other recording of the same piece) to fan-based reviews to suggestions for further listening, and so forth.  Every piece a portal, in other words.

Another trend the author identifies is what he characterizes as “anti-intellectualism,” which he treats thusly:

“Music has for decades been promoted and explained to us almost exclusively as a talisman of emotion. The overwhelming issue is how it makes you feel. Whereas the art music of the West transcended because of its dazzling dance of emotion and intellect. Art music relates to mathematics, architecture, symbolism and philosophy. And as such topics have been belittled in the general press or cable television, our collective ability to relate to music through a humanities lens has atrophied. Those of us who had music explained and demonstrated to us as a game for the brain as well as the heart had it really lucky. Why so many are satisfied to engage with music at only the level of feeling is a vast, impoverishing mystery.”

I do like his phrase “dance of emotion and intellect.”  Jacques Barzun’s magisterial From Dawn to Decadence: 1500 to the Present: 500 Years of Western Culture has an extensive discussion of the emergence of this dance in the late 18th and early 19th Centuries.  I think the author’s spot-on with his observation about music being presented as a talisman of emotion, and how that presentation has adversely affected the intellectual component of the experience.  I disagree with him, however, that it’s a mystery why this is satisfying to so many people.

I know nothing of the author’s politics, of course, but unless he’s really, really an outlier in the arts world, he’s probably several standard deviations to the left of the bulk of the U.S. population.  The elevation of feeling and emotion — what makes me feel good about myself — is at the core of leftist politics.  From third-wave feminism to environmentalism to the “war on poverty” to social justice warriors, “micro-aggressions,” “safe spaces,” and so forth, the common denominator in all is that the political policies which grow out of these movements invariably do two things: (i) they make the actual problems worse, but (ii) they allow the proponent to feel good about himself for supporting them, and to trumpet his membership among the Saved.  Leftism today is simply no longer about results on the ground, but rather a quasi-religious series of rites of purification and sanctification the design of which is to signal the proponent’s moral superiority.

Like it or not, American politics and public discourse is well to the left of where it had been before the FDR administration.  William Graham Sumner’s lecture, “The Forgotten Man,” was mainstream political discourse back in the day.  Find me anyone widely regarded in the public sphere since 1932 who could, or would, pen the following:

“When you see a drunkard in the gutter, you are disgusted, but you pity him. When a policeman comes and picks him up you are satisfied. You say that ‘society’ has interfered to save the drunkard from perishing. Society is a fine word, and it saves us the trouble of thinking to say that society acts. The truth is that the policeman is paid by somebody, and when we talk about society we forget who it is that pays. It is the Forgotten Man again. It is the industrious workman going home from a hard day’s work, whom you pass without noticing, who is mulcted of a percentage of his day’s earnings to hire a policeman to save the drunkard from himself. All the public expenditure to prevent vice has the same effect. Vice is its own curse. If we let nature alone, she cures vice by the most frightful penalties. It may shock you to hear me say it, but when you get over the shock, it will do you good to think of it: a drunkard in the gutter is just where he ought to be. Nature is working away at him to get him out of the way, just as she sets up her processes of dissolution to remove whatever is a failure in its line. Gambling and less mentionable vices all cure themselves by the ruin and dissolution of their victims. Nine-tenths of our measures for preventing vice are really protective towards it, because they ward off the penalty.”

Modern political discourse would categorically declare itself “horrified” (which is to say, its emotions would be excited) at the proposition that we should leave the drunkard in his gutter, the gambler in his den.  And from that “horror” it then proceeds immediately to the conclusion that we have an affirmative obligation to mulct that Forgotten Man (or someone, anyone other than the person demanding we “rescue” the drunk) to “save” the drunk or the gambler.  This is government by emotion, not intellect.  It requires an intellectual effort to confront the truth and implications of Sumner’s moral point that the actual, measurable effect of much of what government does to “prevent” the consequences of private misfortune — all too often the results of years, and in many cases generations, of bad private decision-making — is actually to protect and perpetuate it by enabling the people making those bad decisions to keep on as usual.  It requires a moral effort to ask who pays the price, and in what form, and what portion of that payer’s prospects and future are taken from him because we have forced him to pay.  And of course, it’s not just the drunkard or the guy shooting craps behind the gas station, nowadays.  Now it’s everybody and his cousin, and the more zeroes come with the bad decisions, the more likely it is that the people being protected will have the ear of government.

In short, we have managed to create an entire society that has been taught to introduce the conclusions of its reasoning with, “I feel . . . ”  We are instructed, and have been for generations, that what matters is the desire behind a policy, not its actual effect, overall, on a society of 300-plus million people.  It is relentlessly hammered into us that the appropriate frame of reference for judging whether Program X is working is not whether it produces more people who need Program X in order to survive, but rather that more people are surviving on Program X (in other words, the program’s own pernicious effects are treated as proof positive of its merits).  Is it then any surprise that we apply such reference frameworks to other areas of life?

I’ll note you needn’t ascribe the trend, as I do, to the dominance of leftism in particular in American society.  In point of fact both American mainstream political parties long ago conceded the central socialist premise.  The individual human is a building block to which is assigned a place in a structure designed by someone else, which will serve functions determined by someone else, and all for the greater glory of some abstract higher ideal determined by someone else.  In the late Middle Ages they built, all over Europe, magnificent stone cathedrals which reached higher into the sky than any other human hands had ever reached (in fact, for centuries they remained the tallest structures ever built by men), to the greater glory of God.  We now want to “build” “society” to the greater glory of whatever specific version of society it is that we favor.

I suppose you could trace the idea that each member of “society” is nothing more than a tool, a stone, in the structure back to the French levée en masse, which was at first a defensive mechanism but which rapidly morphed into an army of conquest for the “liberation” of Europe from the ancien régime wherever it was to be found.  But it found its first true application in Imperial Germany’s nationalistic militarism, and then — as Hayek pointed out in The Road to Serfdom — the passion for “planning” spread to the rest of Europe, then to Britain.  It first washed ashore here in the Wilson administration, receded during the 1920s, and took firm root with FDR.

What is the relevance of my thoughts to this author’s point about the talismanic use of “feelings”?  Well, if you’re going to use a man — and socialism is about nothing other than using men — for your own purposes rather than his own, it sure does help if he doesn’t think too carefully about what it is that’s happening to him.  How do you keep him from thinking, though?  Well, ever since the Romans hit on the notion of bread and circuses, it’s been recognized that what you need to do, and most all that you need to do, is to occupy with sensations — with feelings — the psychic space that might otherwise be taken up with thought.  After all, I can control your sensations much more readily than I can your thoughts.  I can underwrite your housing, I can subsidize your trip to the grocery store, I can just hand you $X per month to piss away as you choose, I can take your children off your hands, tell you that it’s now the responsibility of my employees (we’ll call them “teachers”) to make sure Junior doesn’t turn out to be a homicidal boor, assure you that he and everyone else in his class is unique and uniquely above average, and so forth.  I can plunder the Forgotten Man of his last thread of garment to do this; it’s why it’s so easy for you to forget him.

The article’s author includes what the cynic in me wants to characterize as the “inevitable” lament about music instruction’s demise in public schools.  He may have something of a point, but then I really have to question how much of a point it is that he has.  I mean, so much of what we recognize as the towering great music of Western culture took form in an era before massive public education in the first place, and when formal education was commonly broken off at ages we would now consider abhorrently young, and large portions of such primary and secondary education as did exist were conducted in circumstances in which the only music being made was from the human voice (and maybe an out-of-tune piano).  How many of the giants of early 20th Century America — the men (and a few women) who jerked entire new musical universes from the very earth — even got to high school in the first place, let alone finished?  Plainly music in the schoolroom is not necessary for the creation; you can easily falsify that proposition.

Is it necessary for the valuing of the music being created, though?  I’m not sure our author is on any firmer ground there.  For whom were these musicians playing?  Who made up their bread-and-butter audience?  Again, until after World War II a huge portion of the American population, even in cities, who actually went to the venues where the new musical forms were being hammered out (and by the way, those venues weren’t the great urban concert halls . . . they were the jook joints, the church socials, school halls, and so forth) would not have received more than bare-bones schooling.

If not the live audiences, who were the people who listened remotely, to the very first radio stations?  In the early 1990s there came out a documentary history of bluegrass music, High Lonesome, which I’m proud to say I’ve got on DVD somewhere.  There is a segment in which they talk of the explosive impact that radio had on these remote settlements.  You could rig your car’s battery to a home-made radio, run a wire out to an old bed frame outside for an antenna, and pick up stations as far away as WLS in Chicago (I still recall the Wow! of tuning into their AM station back in the early 1970s, all the way down here, late at night).  Radio and the music you could hear on it were . . . exotic.  There you had, right there in your living room where you could put your hands on it, this box which would reach out and pull from the thin air sounds from hundreds of miles away, sounds which could take you anywhere, anywhere at all in the entire world.  For people who’d been born, grown up, and grown old in a circle of 20 miles (or even narrower than that, for the mass of city dwellers in large cities like New York . . . hundreds of thousands of them would seldom have strayed off Manhattan Island, or out of Brooklyn or the Bronx, or the South End, or wherever their grandparents had fetched up off the boat, during their entire lives) it must have been nothing short of intoxicating.  And that which intoxicates us seizes our souls, as the religious objection to alcohol and drugs has long recognized.

So what changed?  World War I changed; millions of American men in fact didn’t stay down on the farm, after they’d “seen Paree.”  Harry Truman was only the most famous of them.  Movies changed.  The physical dislocations of the Great Depression changed.  The demise of gang labor in the South changed.  [Among the least studied mass migrations in history is of American blacks from the South into the rest of the country, beginning in the years just before the Great War, and becoming a flood during and afterwards; Rising Tide: The Great Mississippi Flood of 1927 and How It Changed America is a very good introduction to a small slice of that trend.]  And then World War II came along and burst the American universe into what Forrest Gump called “a go-zillion” pieces.

So what? Gentle Reader asks.  What does all this recitation have to do with leeching an appreciation for music from American culture?  Well, what is the common theme of all of the things I’ve pointed out?  It is this:  The atomization of control over one’s immediate physical circumstances.  From tenement to townhouse to tract house to suburb.  From grain field to grunting shift work to mindless repetition on the assembly line to what’s becoming known as the gig economy.  From hearing no music but what you and your family could sing to the scraping of a fiddle, to cramming into a stuffy venue on uncomfortable seats, to barreling down the highway in your car with the radio going, to rolling up the car windows and popping in a different cassette, to punching a button to change CDs, to telling your MP3 player to shuffle among all 1750 songs on your playlist.  From seeing a play maybe once or twice a year, put on by some down-at-the-heels troupe of faded actors, to watching a movie once a month on a huge screen stretched across Main Street (how my mother used to see movies in the 1930s in small-town Indiana), to air-conditioned movie palaces, to multi-screen megaplexes where every member of the family can watch whatever blows his skirt up, to punching up Netflix on each of the four screens in your house and everybody gets to choose from 750 different movies.

And here I circle around to rejoin our article’s author.  Why has America forgot how to value music?  Because music has lost its preciousness to us.  Once upon a time music was the only entertainment the bulk of the population had.  There is a reason, after all, that almost all dirt-poor, oppressed, or traumatized groups developed incredibly rich musical traditions:  the Irish, the Germans during the Thirty Years War, the Scots Irish both at home and here, the Eastern European Jews, American blacks, the rural South, Hungarian peasants.  Music was the one thing that the landlord couldn’t rack-rent you on; the church couldn’t tithe it out of your hands; the lord couldn’t force-labor it away from you; the slave driver couldn’t lash it out of your back; you could take it with you when you were expelled from the umpteenth country in succession; you could jam it into the hold of an immigrant ship.  The factory owner couldn’t shut it off from you in a lock-out.  The tax collector couldn’t padlock it or seize it.  Music was the one pleasure you could make yourself, that you could enjoy without having to worry about one more mouth to feed or losing that week’s rent money.

So of course people appreciated music more.

What has changed?  What has changed is human liberation from massive and profound privation, privation which modern Americans born after, say, 1960, cannot even imagine.  Granted, the enslavement of privation has been replaced in popular culture with a poor simulacrum of true human freedom (see my above comments about socialism’s modern substitute for Rome’s bread and circuses), but the fact remains that we — even the poorest among us — are surrounded with pleasures (or what pass for pleasures) undreamt-of to even our parents’ generation.

And now I will diverge from our author, once again.  If what is necessary to restore the uniquely precious significance of music to the broad mass of the American population is to return to the physical circumstances of the centuries in which it possessed that significance, then I cannot follow our author.  I am willing to do without the music.  What right do I have to demand the impoverishment of hundreds of millions of my fellow humans so that I may enjoy the pleasures of a new musical experience?

In bemoaning the demise of music’s place in the American soul, and in glossing over the contrast between the world in which it maintained that place and the America in which it struggles to keep it, our author betrays — perhaps inadvertently (remember I know zilch about his politics) — how profoundly the socialist premise has soaked into our collective understanding.  You should suffer so that Music (or “social justice” or “diversity” or “the environment” or the “dictatorship of the proletariat” or whatever) may flourish.  Or more pointedly:  You should toil in drudgery so that I may relish the satisfaction of Society as I conceive it should be.

The Five Year Plan demands it, after all.


The Quartet: Fascinating, With a Caveat

I just finished reading Joseph J. Ellis’s The Quartet: Orchestrating the Second American Revolution, his history of the — and there is no other word for it — scheming which attended the process by which the United States under the Articles of Confederation was transformed into the United States under the Constitution.  I’ve also read Ellis’s His Excellency: George Washington, a very useful biography and one which sheds some interesting light on the man Ellis (in The Quartet) calls the “Foundingest” of all the Founding Fathers; his Passionate Sage: The Character and Legacy of John Adams; and, if memory doesn’t fail me, his Founding Brothers: The Revolutionary Generation.

I have to say I enjoyed all of them, particularly the Washington biography and The Quartet.  He has an easy, very accessible style and he’s not afraid to make editorial comments.  They are, after all, his books, and a biographer or historian who has nothing to come right out and say beyond the bare factual narrative isn’t much of a writer.  Of course, what facts the writer chooses to include or omit also says something about him, but bald statements of characterization aren’t out of place either.  Just don’t try to hide them, is all I ask.

The Washington book I found interesting because Ellis spends a great deal of time addressing the Great White Elephant in the Room, namely Washington’s Auseinandersetzung (show me a better English word for it and I’ll use it) with the institution of slavery and the relations between the races.  Hadn’t known, just for example, that up to a full 20-25% of the Continental Army was at any given time what they’d refer to as “dark green” soldiers (all soldiers being green, you see; in the navy all sailors are blue, and some are light blue and some are dark blue) in today’s army.  This experience with blacks as fighting men changed Washington profoundly, much as it did so many of the Union soldiers in the Civil War.  You simply can’t watch a man stand up to artillery pounding or gales of small arms fire and be immune to the idea that he’s just as good as you are.  [Aside:  This is why it is so historically significant that it was the U.S. armed forces which, first among all public institutions and voluntarily, de-segregated.]

It was during the war that Washington stopped selling slaves.  By the time he died a large (comparatively) number of his slaves were well past working age.  I can’t recall off the top of my head if Ellis actually uses the expression “retirement home” or an equivalent, but it’s certainly the impression that emerges from the book.  Martha Washington, notably, never changed her own attitudes about slavery or slaves.  And Ellis highlights the fact that a significant number of what we think of as “Washington’s” slaves were actually Martha’s, inherited under her first husband’s estate.  Washington, as I recall, administered that estate, and as Martha’s husband was legally charged with the safe-keeping of her property . . . including her slaves.  This conundrum played itself out in Washington’s final act on the subject:  As is well known, he freed his own slaves at his death (nearly alone among the Founding Fathers who were slave owners), but he did not have the legal authority to free Martha’s, and so didn’t.

But on to The Quartet.  Gentle Reader will recall that I have previously written here and here about Washington’s Farewell address, his (written) valedictory to the nation he had done so much to establish.  In both previous posts I’ve mentioned the curious fact that Washington spends something like eight paragraphs addressing the calamity of disunion and the need to resist all who would insidiously suggest fracturing of the union as being the way to go . . . but nowhere breathes so much as a word to the effect that the Constitution itself simply does not permit secession.  In beginning The Quartet I’d been very keen to see what light Ellis threw on the subject, whether it would have come up in the Convention debates or in the ratification process.  [Aside:  Ellis does answer a question for me, namely whether anyone has actually studied in detail the ratification debates in all the states.  There in fact has been someone — one person — who has done so, and unfortunately I can’t call his name from memory.]  But Ellis is silent on the point, so we can’t tell from his book whether the issue was discussed or not.  He does attach, as an appendix, the full text of the Articles of Confederation, which the Constitution replaced.  Interestingly, that document does, in Article XIII, expressly provide, “And the Articles of this Confederation shall be inviolably observed by every State, and the Union shall be perpetual[.]”

There it is, in plain Anglo-Saxon; in fact, the statement that “the Union shall be perpetual” is in there not once, but twice, just a few lines apart.  Search as you may, but no similar statement is to be found in the Constitution or any amendment to it.  Lest Gentle Reader be tempted to read the provisions of the Articles of Confederation by implication into the Constitution, Ellis makes it very plain that the Constitution did not amend or supplement the Articles, but replaced them in toto.  It represented, as Ellis clearly demonstrates, not merely a change in text but a fundamental re-ordering of the very nature of the union from a confederacy of equals, in which “Each state retains its sovereignty, freedom, and independence, and every power, jurisdiction, and right, which is not by this Confederation expressly delegated to the United States, in Congress assembled” (Article II, in its entirety), to a nation-state in which the states are specifically subordinate entities, although not as fully subordinate as James Madison originally desired them to be.  He had in fact, in the Virginia Plan for the Convention, specifically proposed that the federal legislature be given an express veto over state statutes and other laws.

All of which only heightens the interest in the omission.  It certainly goes a long way towards under-cutting the argument that the secessionists of 1861 were not only morally abhorrent for their defense of chattel slavery, but also legally and indisputably traitors to their country.  I suppose one might say the omission of 1787 was supplied at bayonet point from 1861-65.  In all events, the nature of the union has now and forever been resolved, and I for one am happy at the outcome, however good-faith the argument on the point may have been at the time.

Back to the book.  The actual “quartet” Ellis refers to are Washington, Madison, Hamilton, and John Jay.  The first three are of course well-known.  The fourth, Jay, is known as the third member of the triumvirate who wrote the essays now known as The Federalist, the most cogent arguments for ratification of the Constitution (although as Ellis points out, they were targeted specifically at New York’s ratification convention and in fact do not seem at the time to have garnered much if any attention beyond that state), and among lawyers as the first Chief Justice.  History wonks will also remember him as the negotiator of the Jay Treaty of 1794 with Great Britain (which finally removed the British from the frontier forts they’d kept occupying, the 1783 Treaty of Paris notwithstanding), and the principal negotiator, with Franklin, of the 1783 treaty itself.  Ellis shares the vignette of Jay in conference with the Spanish envoy (it must be remembered that Spain and France were allied at the time against Great Britain); the Spaniard drew a line with his finger on a map, from the Great Lakes more or less due south to Florida (Spanish at the time), to indicate that as the western boundary of the United States, everything to the west presumably going to Spain.  The Americans had been given explicit instructions by the Continental Congress to conduct all negotiations in consultation with France, which in practice meant subject to Spanish veto.  Jay then took his own finger and traced the Mississippi River.  That evening he went to Franklin’s lodgings, awoke him, and convinced him to disregard their instructions in respect of France, and to make a separate peace with Britain.  Had Jay not succeeded in convincing Franklin, or had they knuckled under to Spain’s demands, the history of the entire world for the last 225-plus years would have been not just different, but radically different.

In any event, Ellis recounts how each of the four, by his own route, arrived at the conviction that the Articles of Confederation just were not going to do, and in fact that they were so hopeless as to be beyond salvage by mere amendment.  Washington and Hamilton of course had personal knowledge of the system’s failure to support the army in the field.  Jay got to experience the futility of the system as foreign minister, when the Europeans, who could read the Articles just as well as anyone else, more or less laughed in his face when he purported to represent a “United States of America” that they could see did not in fact exist.  Indeed, it not only did not exist de jure, but as Ellis also shows, it likewise had no place in the sentiments of the ordinary people.  Folks simply did not think of themselves as being “Americans” in the sense of belonging to any greater polity than their own state, if their vision extended even that far.

I won’t recount in detail either the machinations of the Constitutional Convention itself, or the ratification process.  In fact, Ellis doesn’t spend any terribly great amount of time on the ratification process, except in respect of Madison’s stage-managing (or trying to) the order of ratification among the states.  Short version:  By deferring votes in the large, questionable states until near the end of the process, the likelihood was increased that those states would be presented with an accomplished political fact of ratification, and they’d vote to join so as not to be left out.  And that’s pretty much how it worked in practice.  To reiterate, I’d have appreciated much more exploration of the extent, if any, to which issues like potential secession got aired out.

My caveats?  Well, Ellis displays his good leftish credentials in two places in the book.  The first (p. 172) comes at the tail-end of his discussion of what he describes as an “ambiguity” about where the balance of sovereignty was located by the document eventually submitted for ratification.  Key statement:

“The multiple compromises reached in the Constitutional Convention over where to locate sovereignty accurately reflected the deep divisions in the American populace at large.  There was a strong consensus that the state-based system under the Articles had proven ineffectual, but an equally strong apprehension about the political danger posed by any national government that rode roughshod over local, state, and regional interests . . . .”

From the above statement, the truth of which I think Ellis does an excellent job demonstrating, he then hikes his leg and lets a glaring non sequitur in church:  “In the long run — and this was probably Madison’s most creative insight — the multiple ambiguities embedded in the Constitution made it an inherently ‘living’ document.”

Very respectfully, Prof. Ellis, it is nothing of the kind.  For starters, the truly revolutionary nature of the Constitution was precisely that it was written.  Ellis correctly demonstrates the core nature of the Articles as being a treaty among equals.  The Constitution was something different; it established, to a limited extent, a hierarchical relationship between the states and this new animal, the United States of America.  But most importantly, the states’ relations among each other and with the new national state were spelled out in writing.  There was a reason, after all, why monarchs violently resisted granting written constitutions, all the way down to 1905 in Russia:  A written document pins the sovereign down.  With a written document you can point to a specific clause or word or phrase and say to the government, “Look here, Buster; it says right here you cannot do that.”

The notion of a “living document” — in the sense that Ellis is using it — is very, very much a 20th Century phenomenon, and it is specifically a judicial creation from whole cloth.  The Founding Generation would have looked at you as if you were speaking Tagalog if you had suggested that what they’d come up with was a “living document” in which judges got to make things up as they went along (“evolving standards of decency”), and under which a president such as Dear Leader claims an inherent executive authority to act to impose law for no better reason than he cannot get Congress to act as he sees fit on issues which are important to him (“I’ve got a pen, and I’ve got a phone”), and Congress can prescribe how much water your toilet uses (1.0 gal/flush, anyone?).  I’ll go so far as to state that had you tried to sell the Constitution as a “living document” in 1787-88, you’d never have got nine states to ratify; in fact, I question whether the populace of any state would have been so daft.

Secondly, the mere fact that the Constitution abandoned the state-centered structure of the Articles but rejected the All-Powerful National State which Madison had gone into the Convention advocating emphatically does not mean that the answer to the question, “Where does sovereignty lie?” is a forever mutable response.  It is perfectly possible for the answers (and there can be many) to that question to lie at multiple points between those poles, depending on which issue or question you’re asking.  Just for example, the states are prohibited from making war or peace, or coining money.  That’s specifically reserved to the federal government.  On the other hand, the regulation of “Commerce with foreign Nations, and among the several States, and with the Indian Tribes,” while extremely broad, is not, and cannot with honesty be read to constitute, a grant of authority to Congress (to say nothing of the executive) to prohibit a man from feeding his own family with the produce of his own land.  And yet that’s precisely what the Supreme Court said the Commerce Clause does.  I’m still waiting to hear anyone make a convincing case that, had you told the farmers of any of the 13 states that they were ceding authority to Congress to dictate what they could and could not grow on their own land to feed their own children, the Constitution would have stood a ghost of a chance of ratification.  The fact that a group of sophists on the bench can articulate a rationale which, as long as you don’t actually press on it with any force, supports such an outcome does not mean that outcome was contemplated by the men who drafted or voted on the Constitution as among the permissible.  
The argument that everything is both necessary and proper to accomplish some hypothetical purpose to which it is allegedly connected by some remote chain of causation (think: the schoolbook example of the butterfly flapping its wings off the coast of Africa, which results in a Category 5 hurricane coming ashore at Gulfport, Mississippi) is an argument which renders superfluous the entire text of Article I Section 8.  If that argument has any validity then Section 8 could have been written simply as, “Congress shall have all Powers to enact such Legislation as it shall deem expedient.”

As if to emphasize the extent to which Ellis doesn’t Get It, he offers us this:  “Madison’s ‘original intention’ was to make all ‘original intentions’ infinitely negotiable in the future.”  Got that?  Just because it says you can’t be president unless you’re 35, it doesn’t really mean that.  Just because it says each state gets to elect two senators, a state — let’s say, Alabama — can go ahead and elect three, and have them seated.  Just because it says, “No Tax or Duty shall be laid on Articles exported from any State,” and just because Article I Section 8 gives Congress the authority to “lay Taxes, Duties, Imposts and Excises,” (and requires that such be “uniform throughout the United States”), that wouldn’t stop Dear Leader from levying a tax on tobacco shipped from North Carolina to Amsterdam, but excusing tobacco grown in northern California from that tax.  Can private property be taken for public use without “just compensation”?  According to Ellis, the answer is yes, if you can get either a majority in Congress, or the president acting without Congress, to decide to do it.  Because “infinitely negotiable.”  Right now there is a lawsuit pending in which the House of Representatives is suing Dear Leader over the “Affordable” Care Act’s spending of money.  Remember this one:  “No Money shall be drawn from the Treasury, but in Consequence of Appropriations made by Law”?  Well, it seems that at least some provisions of the ACA produce just that outcome: expenditures not authorized by law.  According to Ellis, that prohibition is “infinitely negotiable” for all time.  Why, one wants to ask Ellis, did the drafters include a provision (Article V) for the document’s amendment, if nothing in it had any now-and-forevermore meaning anyway?  “Living documents” require no amendment; all they require is a consensus that it doesn’t mean that anymore.  Like Brown v. Board of Education, presumably.  
What exactly, under the leftish framework, would prohibit Congress and the president from deciding that Brown was decided entirely wrong and well, gosh darn it, we’re going back to “separate but equal”?

Bless the dear professor’s heart.  He puts in a good word for collectivism/corporatism/fascism, but really can’t bring it off.  Not to an intelligent audience, in any event.

The second place where Ellis goes to bat for the leftists occurs beginning on page 211.  He gives Madison’s original draft of what became the Second Amendment.  The two clauses of the text we know (“A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.”) were inverted in the original draft, with the “necessary to the security” starting out as “being the best security of a free country.”  Madison’s draft also included a specific clause excusing what we would know as conscientious objectors from “render[ing] military service in person.”  Ellis just refers to “some editing in the Senate,” and laconically observes that it became the Second Amendment.  He provides no clue as to what the substance of that “some editing” might have been.

According to Ellis, Madison’s draft was merely “to assure those skeptical souls that the defense of the United States would depend on state militias rather than a professional, federal army.”  According to Ellis, Madison’s draft makes clear that the right to keep and bear arms was “not inherent but derivative, depending on service in the militia.”  Good leftist talking point.  He’s got some problems, of course, starting with the simple text itself.  The amendment, even in its original draft, does not speak of the states being free to arm their militias; nor does it provide that the right of militia members to keep and bear arms shall not be subject to unreasonable restriction; nor grant the states the right to compel militia service.

If you look at Madison’s first draft, it consists of two independent clauses separated by a subordinate clause.  Let’s try this as a catechism.

Q:  What “shall not be infringed”?

A:  A right.

Q:  What right?

A:  To keep and bear arms.

Q:  Whose right?

A:  The right “of the people.”

Simple enough.  But perhaps Madison (and more importantly, the rest of Congress) really meant “the states” when writing “the people”?  Plausible, until you consider that in four other instances in the Bill of Rights the expression “the people” is used.  The First Amendment protects “the right of the people peaceably to assemble.”  Now read that to substitute “states” for “the people” and what result do you get?  The Fourth Amendment protects the “right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.”  Same exercise:  Are the states to be secure against unreasonable searches and seizures?  Say it with a straight face, Prof. Ellis.  The Ninth Amendment provides that the enumeration of “certain rights” shall not be construed to “deny or disparage others retained by the people.”  I guess you could read that to mean “the states,” but then what to make of the Tenth Amendment, which of course provides for the reservation of all rights neither granted to the U.S. nor prohibited to the states “to the States respectively, or to the people.”  If the leftish reading of the Second Amendment is correct, then the Tenth Amendment can mean “to the States respectively, or to the states.”  You just cannot get around the fact that in every other instance where the Bill of Rights refers to a right “of the people,” either its preservation or its reservation, the reference is plainly to individual humans.

Well, maybe “shall not be infringed” really means “shall not be subject to unreasonable restriction”?  Why, then, does that “unreasonable” qualifier appear in the Fourth Amendment but not the Second?  But what of the subordinate clause about well-regulated militias?  That’s very nice, but that phrase has neither subject nor verb.  Structurally it bears the same relationship to the grammatically operative portion of the text that the Preamble bears to the overall document.  Actually, that’s not quite true:  The Preamble does contain a subject, verb, and direct object:  “We the People . . . do ordain and establish this Constitution for the United States of America.”  This is in marked contrast to the prefatory clause of the Second Amendment.

So far as I am aware there has never been serious suggestion that the language of the Preamble operates to qualify or limit the scope or operation of any substantive provision of the document.  Does Congress only have authority to regulate commerce among the several states if and to the extent reasonably necessary to “form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity”?  Of course not; it has all authority “necessary and proper” to regulate that commerce for any purpose not prohibited by the balance of the Constitution.  Any at all.  Or read the Preamble as a qualifier to the judicial power granted to the Supreme Court and such subordinate courts as Congress may establish.  How is that going to work?

[Purely as an aside, I’d note that — except for those boobs on the bench, of course — no one makes an argument that the Free Exercise Clause, or the right of peaceable assembly, or the freedom of the press are subject to any purpose-based restriction, as is argued by the leftists about the Second Amendment.  Nor is the “unreasonable searches and seizures” clause of the Fourth Amendment so read as to provide that hiding one’s criminal activity is not a legitimate object of that protection.  In fact, the Second Amendment is the subject of its very own interpretive scheme under the leftish project.  Curious, isn’t that?]

I’d also observe that what Ellis is arguing for is not only the “original intent” which he just 39 pages before disparaged in favor of a “living document,” but he’s arguing for the “original intent” as contained in a draft that never made it into the document.  Priceless; but, it illustrates rather well the leftish principle that all means are permissible to the Party, because what the Party line is at the moment is by definition the Truth.

Again, dear Prof. Ellis takes a mighty swing at the bat for his Party, but comes up with air.  I was a bit disappointed that he didn’t work in something about Global Climate Change or how Citizens United is just such a horrible decision because Koch Brothers.  Or something like that.

Notwithstanding his gratuitous introduction of 20th Century political theory into 18th Century politics — and let me allow that I think Ellis is entirely correct in his portrayal of the Convention and ratification process as being at least as much about practical politics as about the implementation of a theory — I still highly recommend this book.  It grates to have to read a book like this with one’s bullshit filters at high alert, but nowadays when there’s no such thing as a politics-free zone, I guess we’ll just have to learn to live with writing like this.

The Quartet does a marvelous job of showing just how unlikely a prospect was the transformation of the United States from a maelstrom of co-equal sovereigns to a multi-polar entity almost serendipitously adapted to the task of subduing and populating the better part of an entire continent.

Read it for the story of a political miracle, not for its legal analysis.

The Things You Learn

One of my favorite books is William Manchester’s The Arms of Krupp.  I have it in paperback and it’s been read enough that my copy is falling apart.  One day I suppose I’ll hunt up a hardcover copy on Amazon, but that’s a priority that’s going to have to wait.  I have a few of Manchester’s other books, including his now-completed (posthumously, by his hand-picked editor) biography of Churchill — The Last Lion — and the last book, I think, that he ever wrote himself, A World Lit Only by Fire, a book about the world and plane of human understanding shattered by Magellan’s voyage.

At the risk of understatement, in the Krupp history Manchester avoids the pitfall of falling in love with his subject.  Rather the opposite; in fact, at least some contemporaneous reviews — here, for example — took him to task for erring too far in the other direction.  A few years ago, a Harold James published a new history of the family and its company, Krupp: A History of the Legendary German Firm (here I am violating one of my informal rules (hey, it’s my blog, right?), namely that of not linking to books that I have not read), which has been favorably contrasted — here and here, for example — to what is now perceived as Manchester’s lop-sided portrayal of the family and its doings.

All that is as it may be, as the English say.

I wanted to focus on a person who figures prominently in the latter part of Manchester’s book, a boy name of Berthold Beitz.  Beitz was brought in as the front-man of the firm in the 1950s.  He’d been head of an insurance company after the war.  Here it is helpful to understand the outsized role that insurance companies play in the German economy and in society.  Let’s just say that insurance occupies a much more honored niche in both than is the case here.  Manchester portrays Beitz as being almost a cartoonish wanna-be American.  Using first names.  Glad-handing.  Everything big, loud, and overdone.  Very much contrary to how the family and firm had done business before.

The family and firm had need just at that time (1953) of a front-man.  Alfried Krupp, the last sole proprietor, was then still somewhat in bad odor, he having been caught with a large number of dead slave laborers about his person.  Manchester’s book is in fact dedicated to the nameless dead children in the cemetery at Buschmannshof, in Voerde-bei-Dinslaken, who were born to Krupp’s slave laborers, died, and were buried there.  His father, Gustav Krupp von Bohlen und Halbach — who was not even a born Krupp; the Kaiser himself gave Gustav the Krupp name upon his marriage to Bertha (for whom the Big Bertha siege gun of the Great War was nicknamed) — was to have been one of the defendants at the first Nuremberg trials, sitting in the dock with Goering, Sauckel, and the rest of them.  That’s how egregious their behavior was.  But by the end of the war Gustav was a drooling imbecile and in fact had in 1943 given the entire firm to his son Alfried.  For whatever reason the Allies never tumbled to that fact, and so Alfried, under whom the worst of the firm’s wartime atrocities occurred (Manchester even cites to an occasion on which the S.S. complained of how Krupp was treating its slave laborers), escaped a hanging court.

So Beitz was brought in as the first outsider to have a decisive voice in the firm’s running.  Manchester portrays him as more or less running it into a ditch, over-extending it with questionable dealings with Third World countries and Warsaw Pact countries, the abilities and willingness to pay of which were all dicey at the time and proved to be the firm’s undoing.  Again, according to Manchester (it’s been several years since I re-read the book), the firm began doing an ever-greater percentage of its business in places where a prudent vendor would have given serious thought to the merits of up-front payment.  And then of course those same “developing” (a misnomer: they didn’t “develop”; the West developed them, and paid through the nose for the privilege) countries welshed on enormous contracts, which drove the firm from private ownership.  It ended up going public, a step which the Founder, Alfred (his parents gave him the English spelling of the name), had vehemently opposed.  Of course, to complete the irony, Krupp and Thyssen have now merged (look at the next elevator Gentle Reader rides in).  Thyssen was Alfred Krupp’s arch-enemy back in the day.

The merger, by the way, was Beitz’s doing.  He stayed with the firm for 60 years, and died July 30, 2013, just shy of his 100th birthday.

What I didn’t know until I read his obituary in the Frankfurter Allgemeine Zeitung (sorry, their archives are pay-walled) was that he had been recognized by Yad Vashem for his actions in saving Jews during the war.  He’d been in charge of a large petroleum facility in the Ukraine, sufficiently high up that he had the power to designate workers as critical war workers.  He also was sufficiently lofty to receive advance notice of proposed round-ups and liquidations.  And so he began using his critical-worker designation powers willy-nilly.  In favor of all manner of people, including children.  He and his wife also hid Jews in their home.  According to the Wikipedia write-up here, he was eventually credited with saving on the order of 800 Jews from extermination, for which he was honored by Yad Vashem as Righteous Among the Nations.  It is, I understand, the highest accolade that the children of Abraham can bestow upon a Gentile.

I can think of no higher recognition than to be recognized in one’s own lifetime as Righteous Among the Nations.  Has a biblical ring to it which sort of chokes one up, upon reflection.  I think what impresses as significant is the mental image of the individual standing on his own, alone, among the nations of all the earth, all acknowledging his virtue and courage (part of the selection criteria for Yad Vashem is that the person must have acted as he did at peril of his own life, and for the purpose of saving the lives of Jews).

I don’t know whether Beitz’s war-time rescue activities were widely known when Manchester was writing (his book dates to the late 1960s, which means it would have been researched and written towards the middle of the decade).  Would knowledge of that have altered how he was portrayed in the book?  I’d sure hope so, given how negatively he is shown.

The take-away from all this is that it’s going to be a long, long time before the last is written or spoken upon any of us.

Farewell and rest in peace, Berthold Beitz, Righteous Among the Nations.

It’s Why You Don’t Paint in Primary Colors Only

The world does not come and never has come in exclusively primary colors.  Fact.  If you try to paint the world, either as it now exists, as it used to exist, or as it may in the future exist, solely in primary colors, you’re simply not going to produce a useful depiction of reality.

Thinking in a manner similar to painting in primary colors likewise does not permit you to form a usefully accurate understanding of the world.  I say “usefully accurate” because the world is just too complicated a place for anyone fully to comprehend everything important about it.  Not going to happen, not in terms of the present, the past, or the future.  Fact.  Every level of cognitive engagement with the world is a simplification.  Pretty much every last one of us uses — whether consciously or not — sorting mechanisms, decisional algorithms, categories of perception that are both under- and over-inclusive.  You can easily recognize the guy who doesn’t use those mental tools to navigate reality:  He’s the guy standing on the street corner who doesn’t know whether to shit or go blind, because every last impression he takes in, every last decision he makes, requires him to start from scratch.

So much for my daily statement of the obvious.  I’m pretty good at it, wouldn’t you say?

Race.  It’s like sniffing glue for the thoughtful and law-abiding.  We know that the preoccupation with race, the endless agonizing and hashing over its meaning, its history, its sociological, economic, and political implications, is little more than poisonous to both our society and our polity, no matter what group the person contemplating or yammering on about it happens to be from.  And yet — that street thug, gun-running, perjurious criminal Eric Holder to the contrary notwithstanding — we can’t stop talking about it.  You’d think that race, either in the abstract or in its concrete setting here in the U.S., where the public discussion dates at least to the 1780s, when the Quakers were presenting petitions to the Confederation Congress and that Congress was outlawing slavery in the Northwest Ordinance, is something about which there is bugger all new left to say.  For myself, I cannot recall the last time I heard anything said about race that was both interesting and true that I hadn’t heard countless times before.

This article strikes me as just another installment.  “What a Truly Honest Discussion of Race Would Look Like,” over at Townhall.com, is a good reminder that the subject of human bondage is much greater than the story of sub-Saharan Africans who got scooped up and carted off (so to speak) to the English colonies in North America.  Those who would pretend that it is are painting in primary colors.

I ought not disparage the article’s author for pointing out what most any person with the least understanding of world history already long since knows.  I shouldn’t do it because there are so few people who have any curiosity to acquire the least understanding of that history.  So when the author points out that the very word “slave” derives from precisely the same word as “Slav,” and that that’s no accident because for so many centuries that’s what Slavs were viewed as, it might enlighten no small number of people.  I wish he’d mentioned that the slave markets of Constantinople were very much going concerns as late as 1867, when Mark Twain visited the city.  He cites to several studies (presumably scholarly) about the institution of slavery in North Africa.  There the slave-masters were not sub-Saharan Africans but the mish-mash of Arabs and other ethnic groups spread along the littoral, all of them having more or less two things in common: (i) they were fanatical Muslims, and (ii) they made their living from piracy and plunder.  I’m not sure, though, that slavery in that area of the world has much to teach simply because you could escape slavery by turning Muslim.  The status of slave was not an inherited condition; in fact, I’m not sure that slaves there were even really permitted to reproduce to any marked extent (I’d be fascinated to see more on that subject).

The article’s author cites to that tiresome professor of grievance studies, Henry Louis Gates, for the observation that most of the actual enslavement — that is, the forcible conversion of free men and women into permanent captives held to involuntary labor — was the work of sub-Saharan Africans.  The pitiful survivors of the Middle Passage, in other words, were slaves well before they ever reached the coast and saw their first slave ship.  Our author also quotes the figure of 388,000 who “were shipped to America.”  Wait a minute:  Is he talking about the colonies that later became the U.S.?  If so then I can perhaps accept that 388,000 number.  But I mean, really, what does it matter whether it was 388,000 or 388,000,000?  They and their descendants were in fact held in bondage and that bondage was in fact in the form of chattel slavery (as opposed to serfdom; the African slaves were never glebae adscripti).  I’m not aware of any context in which the Meaning of African Slavery in North America can be a function of the precise or even imprecise number of Africans shipped here.  By like token what can it possibly matter that free Africans voluntarily came to North America as early as 1513?  Or that, in Central America and Florida, at least, thousands of slaves escaped to become Cimarrons?  If the point is that not all of black experience is captured in the arc of chattel slavery, then . . . well, not all of British experience during World War II is captured during the weeks of the London Blitz.  So what’s your point?

More interesting, because it undercuts the primary-color palette of white-people-bad-black-people-good (the sort of horse shit trafficked in by that charlatan Leonard Jeffries), is the mention of the black slave owners of the American South.  Yes, there were some.  In that connection, however, it’s important to bear in mind that a large number — if not nearly all — of them would have been free blacks who bought their wives and children out of bondage (and if the particular state’s laws forbade manumission, then the wife’s and children’s legal status as slave would not have changed).  There were some very large-scale black slave-owners, however, mostly in South Carolina and New Orleans.  Way back in college I wrote a term paper on, among others, a biography of one of them, a William Ellison, who started life as a slave, learned the trade of cotton gin manufacture and repair, bought his own freedom, and by his death was in the 95th percentile of all slave owners.  Black Masters: A Free Family of Color in the Old South is a very interesting read, not only for the main story, but also as a cross-bearing on the rest of the slave system.

The article also talks about the unfree white laborers who until the later 1600s formed the bulk of the unfree population of Virginia (South Carolina wasn’t settled until the late 1660s-70s; Charleston was founded in 1670 and Boone Hall, the famous avenue-of-oaks joint, dates only to 1682).  As related in Edmund S. Morgan’s American Slavery, American Freedom, the transition from predominantly white to eventually-exclusively black unfree labor was gradual and had a great deal to do with economics, health, and land settlement laws.  Not to put too fine a point on it, but until a newly-arrived unfree laborer could be expected to survive what they euphemistically called “seasoning,” there was no reason to pay fee simple prices for a slave when you could take a seven-year lease on an Irish girl who’d be dead long before you had to give her her freedom and enough goods to set up housekeeping.  You also got “headrights” — 50 acres of land — for each indentured servant you brought over (it was your land, though, and not the servant’s).  Until 1699 in Virginia you also got headrights for slaves imported; but by that time slavery had thoroughly established itself as the overwhelmingly dominant labor system.

Indentured servants were in fact subject to many if not most of the awful conditions the slaves experienced.  You could in most colonies legally maim an indentured servant — chop off a toe or a finger — for minor transgressions.  I’m not aware that you could legally kill an indentured servant, while on the other hand there was little if any practical limitation on killing a slave.  I’m sure that technically killing a slave was illegal homicide, but I’d be surprised to find out it was enforced in any but the most sickeningly egregious cases.

All in all, this article reminds me more than a little of the discussion of the history of slavery in North America set out in Jim Goad’s The Redneck Manifesto, a book that would be a great deal more interesting if the author understood some very basic facts about economics.  His early chapters on the joint experience of poor whites and black slaves in 17th Century Virginia are worth a read (even though his later unhinged rants about fiscal and economic policy and law suggest a grain of salt be taken with those earlier chapters as well).  In Goad’s telling, it was Bacon’s Rebellion (1676), pitting the unfree and downtrodden against the planter elite, which awoke that elite to the necessity of dividing the blacks and the whites from each other.  According to him, the laws penalizing what we can generically describe as “fraternization” between the groups date from the aftermath of the rebellion, and the history of race relations since has been the systematic and basically fraudulent effort to prevent poor whites and poor blacks from combining, either economically or politically, to threaten the elites’ hegemony.  That may be the case; it’s been 30 years since I last read Morgan in detail, and the better part of 15 years since I read Goad.  And certainly more than one author has described very well how one of the side effects of slavery was the creation and perpetuation of an entire class of absolutely dirt-poor, un-landed, prospect-less whites (the expression “white trash” originated in the slave quarters to describe them).  But on one point Goad is entirely correct:  The plantation elite had every intention of dominating Virginia’s society and economy, and they had no intention at all of sharing that power with anyone of any color or condition of servitude.

But for all the tu quoque in this article, what is the point?  You just can’t get around the fact that the experience of sub-Saharan Africans and their descendants in North America has been qualitatively different from that of any other group, and that the implications of that history are still playing themselves out.  I disagree with most of the left-extremists on just how those implications are playing out.  But just as the experiences of aboriginal Americans today would be unthinkable without the history of the reservation system, so also the present-day experiences of the Africans’ descendants would be unthinkable had their ancestors come here and lived here as free men.  Wherever else we would be, it wouldn’t be where we are.

So while it’s good to remind people occasionally that you can’t paint in primary colors, what does that tell me about how to understand a painting?

The Long Tail Lashes Again

In statistics there is an observable distribution phenomenon known as the “long tail.”  I’ve seen different definitions of it as an economic proposition, and its implications for business and marketing have been the subject of a book, The Long Tail: Why the Future of Business is Selling Less of More (note: this link violates one of my informal rules on this humble blog, viz. I do not link to books I have not read).  But very briefly stated, the “long tail” phenomenon as a matter of economics is the pattern whereby the total market (measured by income, or turnover, or whatever other measure of “success” you choose) is concentrated among a very small number of the population at the top, while by far the greatest portion of the population exists at much, much lower levels of whatever you’re measuring.  It’s called a “long tail” because that’s what it looks like if you graph it out.

The intriguing aspect of the long tail is that it is observable across nearly every avenue of economic activity you can name.  It’s highly visible in professional sports, where for every Peyton Manning or Tom Brady you’ll have dozens upon dozens of third-string tackles who maybe see a play or two a game and whose careers are over in three to five years, their knees shot and their brains addled from all the hits.  And those sods will never make a tenth annually of what the “franchise players” make.  Factor in the endorsement income that a Peyton Manning makes and compare that to Sidney Schmo whose job in life is to be more or less a live blocking dummy for the starting offensive line, and ol’ Sid will not make in his life what Peyton makes in a year.

Or take a look at income distribution among lawyers.  Over at MarginalRevolution there are actually two graphs, one showing the 2010 distribution and the other showing the 1991 distribution.  Even in 1991 there was an observable tail, but by 2010 you had a tiny number with massive income, nearly no one in the middle, and then a huge gob way down at the bottom of the scale.  This specific pattern is not new at all.  When Daniel Webster announced an intention to quit teaching school and become a lawyer, he was warned off because the field was too crowded (too crowded?  back in the early 1800s?  seriously?) and he’d never make any money.  Webster’s reply has remained famous:  “There is always room at the top.”  Which is true enough, I suppose.

And now, from Britain, we discover that even writing is not exempt from the long tail phenomenon.  In Britain, according to a recently-released study, the top 5% of authors (measured by income) scooped up 42.3% of all income earned by all authors.  The median income — the amount separating the top 50% from the bottom 50% — was £10,432, which is apparently below minimum wage for Britain.  That bottom 50%, by the way, earned a whacking total of 7% of all the income earned.  Put differently, the top 5% of earners raked in right at six times the amount the bottom half did.  The commenters to the report of the study seem to break into two groups: (i) those who decry someone like J. K. Rowling making all that money while “artists” starve in their holes, and (ii) those who tell the first group to shut up and write something that someone wants to read.
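For the curious, the arithmetic above is easy enough to check, and the shape of the thing easy enough to simulate.  The following is a toy sketch only — it draws from an assumed Pareto (power-law) distribution, which is my illustrative stand-in, not the study’s actual data or methodology:

```python
import random

random.seed(42)

# Simulate a toy population of 100,000 "author incomes" drawn from a
# Pareto distribution -- a classic long-tail shape.  The alpha parameter
# of 1.2 is an assumption chosen purely for illustration.
incomes = sorted(random.paretovariate(1.2) for _ in range(100_000))

total = sum(incomes)
bottom_half = sum(incomes[: len(incomes) // 2])        # bottom 50% of earners
top_5pct = sum(incomes[int(len(incomes) * 0.95):])     # top 5% of earners

print(f"top 5% share of all income:     {top_5pct / total:.1%}")
print(f"bottom 50% share of all income: {bottom_half / total:.1%}")
print(f"top 5% vs. bottom 50%:          {top_5pct / bottom_half:.1f}x")

# And the reported UK figures themselves: 42.3% against 7% works out
# to right at six times, as stated above.
print(f"reported ratio: {42.3 / 7:.2f}")
```

Run it and you’ll see the same qualitative picture the study reports: a small sliver at the top engrossing a wildly disproportionate share, and half the population splitting a single-digit percentage among themselves.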

I can see genuine merit in both viewpoints.  Much of what gets published these days really is tripe and nothing more, made to be “consumed” and tossed out to the next church fund-raiser.  It is justly galling to know oneself to be a finer craftsman than those one sees enjoying a degree of success one strongly suspects — with reason — one will never enjoy oneself.  On the other hand I really have no patience for the crowd that fancies itself “transgressive” or “engaged” or just simply cranks out thinly-veiled identity “narrative” crap, thinks itself artistic, and damns the world if we don’t agree.  If you really think that being a “creative” artist means your job is forever to épater la bourgeoisie, don’t be surprised when la bourgeoisie shows no interest at all in plonking down its hard-earned for your output.  If you want to write collections of short stories about women behaving poorly to the men in their lives and acting proud of it (this one conforms to my rule; I actually read this book many years ago . . . it was . . . well, it was precisely what you would have expected from its title), then I’ll remind you:  You just kissed off 49% of the human population as potential readers of your book.  And so forth.  Even good books, fascinating books — by which I mean to say the sort of books I link to in the course of this li’l ol’ blog — just generally don’t sell all that many copies, and the authors correspondingly tend to have what we can call “day jobs,” unless and until they hit that magic level where the writing fuels itself.

Writing — and the other creative/performing arts as well — is by no means the only self-congratulatory occupation to experience the ugly side of the long tail.  At one end, we have a tiny, tiny group of professors like Paul Krugman, who euchred the City University of New York into paying him well into six figures for doing not much at all other than pour forth his bile about conservatives in general or Republicans in particular.  And at the other end you have thousands upon thousands of part-time “adjunct” faculty who will never have tenure, will never have any employment benefits, will never have any hope of teaching a truly interesting course, or being offered a job more permanent than next year’s contract renewal.  People like Krugman make a handsome living decrying “income inequality.”

The long tail pattern holds true even in larger contexts.  Consider, if you will, how much of the aggregate wealth of the world is engrossed by the populations of the West, versus how much of the world’s population that works out to be.  Here’s a map dividing, just for illustrative purposes, the world into seven separate areas, in each of which are contained one billion people.  Notice that both American continents and Australia only make one billion, and to get the Europeans (inclusive of European Russia) up to the one billion mark you have to lump them in with all of the Middle East.  When you consider that “the West” is usually a short-hand reference to Western Europe, North America, and Australia, and then look at that linked map, you realize that “the West” accounts for maybe one-seventh — 14.3% — of the world’s entire population.  I’d have to say, just guessing, that we 14.3% of the population probably enjoy — create, in fact — something along the lines of 70% of the world’s aggregate wealth.  Now look at where that wealth is concentrated within those Western societies, and you see what the long tail looks like with spikes on it.

All of which gives, or should give, us pause when we hear politicians undertaking somehow to reverse a nearly universally observable statistical pattern.  Sure you may do something about “income inequality,” and you may also invent an anti-gravity belt.  You’ll just have to pardon me if I don’t buy a lot of shares on margin with you.

Bang the Tin Drum Slowly

Günter Grass has died, at the age of 87.

Not quite 30 years ago I read The Tin Drum (in the original).  Haven’t read it since, but the ol’ boy’s death suggests I might ought to re-read it.  I also saw the film version a number of years ago, but in all honesty I can’t say I recall much about the movie.

The Tin Drum is set in and around Danzig (as it then was), a city whose 20th Century past was, to put it mildly, troublous.  That part of Europe — where what had been Poland for centuries was finally partitioned out of existence in 1795 — had long been a mish-mash of ethnicities, and Danzig was no exception.  The novel begins before the war and ends after the war, in an insane asylum in what had by that time become West Germany.

Grass’ own life arc mirrored the turbulent history of his home town.  Born too late to serve in the Wehrmacht during its triumphant years, by the time he was subject to compulsory service the war had irretrievably turned against Germany.  His first, unsuccessful brush with military service came when he attempted to volunteer for the U-boat service in 1944.  He was turned down, most likely because of his age (he’d just turned 17), thereby setting himself up to survive the war.  Had he been accepted for U-boat service there is a strong likelihood he would not have lived; of the 40,000-odd men who served aboard the boats, almost exactly 30,000 never came home.  By 1943 Germany had lost the Battle of the Atlantic.  In May, 1943, the Allies sank over 40 U-boats in a single month.  Doenitz withdrew them from the North Atlantic patrol after that, and from then through the end they were hunted beasts; many boats didn’t even complete a single patrol before their destruction.

Shortly after being turned down for the U-boat service he was drafted into the Waffen-SS, where he served in an armored unit from February, 1945 until his wounding on April 20.  He was captured by the Americans (again a fortuitous circumstance: most of the Germans captured by the Soviets were sent to their deaths in the Gulag) and eventually released a year or so after the war.  By then Danzig had become Gdansk and the Poles, to whom it was turned over, had ejected all ethnic Germans (in fairness, the Soviets had ejected the Poles from the 150 or so miles of Poland that Stalin took as part of the post-war Great Carve-Up).  Grass fetched up in the Ruhr district, where for a time he worked in a mine and later served an apprenticeship as a stonemason.  He began writing in the 1950s; The Tin Drum was published in 1959.

For years he was a reliably left-wing voice, although he did speak against the most radical elements, at least in terms of their aim of immediate socialist revolution.

In 2006 the facts about his service in the Waffen-SS came to light.  In all his prior and very public statements he’d never mentioned it.  Not a few people took him to task for it, precisely because he had been such a prominent critic of Germany’s engagement with its Nazi past.  In truth he ought to have known better than to let something like that lie fallow for so long.  If he actually was drafted, and unless he did things in uniform he’d just as leave we didn’t know about, then there was no reason to have buried his past.  If anything you’d think it would have made him a more credible, more effective advocate for his public positions.

Was Grass a volunteer or a draftee?  I have no way of knowing whether any draft papers or other illuminating documents would have survived this long.  What did his unit do while he was on active service with it?  If it was on the Eastern Front it most likely spent most of its time getting shot to pieces by overwhelming Soviet forces.  But was it involved in massacring a few civilians on its way out of town?  I haven’t seen anything one way or the other.  You’d think that, given how Grass suppressed a biographical phase that the ordinary viewer would see as highly significant — one way or the other — someone would have taken the time to dig up the facts.  That is, after all, how Kurt Waldheim came to grief.  His unit was known to have been in the Balkans during his service and it was easily discovered what it had been up to during that period.  It didn’t bear the light of day very well.  [Aside: I still remember seeing Waldheim’s campaign posters from 1986 in Vienna, when he was running for president:  “An Austrian the World Trusts”.  Cue Inspector Clouseau:  Not any more.]  I may be entirely wrong:  That investigation may already have been undertaken and discovered that there’s a whole lot of absolutely nothing at all to see.  If that’s the case, however, then why did he bury his past so long?

Grass expressed some trepidation about German reunification, a sentiment in which he was hardly alone, either in the world at large or even within Germany itself.  Konrad Adenauer was far from the last German not entirely to trust his countrymen with their own power.  Among Americans, I still recall a professor of mine, who’d fought in the U.S. Army during the war, laconically observing that he got “a very peaceful feeling” when he contemplated the existence of a forcibly divided Germany.

Nonetheless, the collapse of the international communist experiment and the unwinding even of large aspects of the European social democracy model left Grass, like many on the left, casting about for some point of relevance.  In the U.S. we see the left-extremists clustering around two overall approaches to the problem:  The first is to embrace the descent into irrelevance, as with the “social justice,” “micro-aggression” would-be thought police.  The other is doubling down on the 1930s-vintage neo-communist expansion of the state, as with the EPA’s nascent attempt to regulate your back-yard hamburger grill.  In Europe it’s taken, and is taking, the form of collaborating in the Islamization of the continent, and its hand-maiden, hatred of Israel.

In April, 2012, Grass published “Was gesagt werden muss” (“What must be said”), a so-called “prose poem” in which he takes issue with Germany’s delivery of a nuclear-capable submarine to Israel.  He claims to fear that Israel may assert a right to an alpha strike on Iran, in order to prevent its development of nuclear capability.  He asserts that a nuclear-capable Israel endangers a fragile world peace.  He claims to speak now, because he is tired of the hypocrisy of the West.  And so forth.  The piece is short; here’s a translation of it in The Guardian.  Read it all.

Left unsaid by Grass is any mention that of the two states he specifically names, one — Iran — has adopted for its formal policy the extermination of the other, its “wiping from the map,” and the killing of as many of its citizens as possible; the other — Israel —  for whom Iran has such sanguinary and explicit intentions, has adopted no such policy in respect of any other nation or people.  One of the two nations — Iran — at that time was, and remains today, a known sponsor of some of the most bloodthirsty islamo-fascist terror groups in the world, almost all of whom expressly address their violence against the United States and its interests.  The other is not a sponsor of international terrorist groups.  One of the two nations — Iran — hangs homosexuals from construction cranes, stones adulteresses to death, and regularly practices torture on its own population.  The other — Israel — does not.  One of the two nations — Iran — sentences Christians to prison or death for practicing or preaching their faith.  The other — Israel — has in its parliament political parties representing its minority ethnic populations.  One of the two states Grass mentions gives every reason to fear its possession of any weapon of mass destruction.  The other has never.  One state — Iran — has never been the object of an attack by its united neighbors with the intent of eradicating it.  The other — Israel — has repeatedly weathered these attacks.

There is no other way to characterize Grass’ point:  Iran and Israel are morally equivalent quantities.  The attack of either on the other would be equally worthy of condemnation.  The attack of either on the other is equally to be feared (although, you know, Israel has, you know, never actually, you know . . . attacked Iran).  The world, presumably, would be equally injured by the extinction of either.  The attack on Iran by an Israel fearful that the mullahs mean precisely what they say about wiping Israel from the map, and Germany’s having enabled any of that attack, would splash a further taint of guilt on an already guilt-ridden land which could never be washed clean.

At the risk of understatement:  I am profoundly uninterested in any person, in any ideology, in any theology which cannot tell any material difference between the Iran of the mullahs and Israel, the only functioning democracy in that entire area of the globe.

Maybe his poem was nothing more than a desperate grasp for relevance in a world in which his chosen politics has been refuted pretty thoroughly by the march of time.  Certainly his later bleat in favor of Greece, and how awful it is that the rest of Europe, and Germany in particular, are just being such meanie-pokers to decline to shovel sand down a rat hole indefinitely, argues in favor of that hypothesis.  Or maybe it could be something more sinister.  Maybe it has something to do with why Grass chose for some 60 years to cover up his service in the SS.

In any event, we have lost another anti-Western voice from the world’s babble.  Whatever his talents as a writer may have once been, he won’t be missed.