Verdun

One hundred years ago this past Sunday, in the early morning hours, hundreds of German artillery pieces, ranging in size from field guns to enormous siege guns, cut loose on the forts protecting the French town of Verdun.

The objective of the German army was, in the words of the chief of the General Staff, to “bleed France white.” In other words, as originally conceived, it wasn’t really so much designed to capture the town of Verdun – the Germans really had no pressing need for it – as to draw as many French soldiers as possible into a massive killing zone. Because Verdun was much more important to the French not to lose than it was to the Germans to take, it introduced a fundamental asymmetry into each side’s calculations. At least that was Falkenhayn’s plan originally.

Without boring Gentle Reader with a recitation of all the back-and-forth which reduced the landscape around Verdun to a pock-marked wasteland where the very soil itself was poisoned by the chemical residue of all the explosives, to say nothing of being an enormous bone yard, let us just say that Falkenhayn lost sight of his initial strategic insight, which was to break one of his two Western Front opponents by inflicting on it casualties it was unable to bear, enabling him then to defeat the other. Had he stuck to his original concept of the battle he might well have accomplished just that. The French were willing to squander any amount of their soldiers’ lives to hold that place, and had the Germans sat back and shelled them into oblivion while bringing just enough ground pressure to bear to make sure the French remained engaged, they might well have inflicted the kind of grossly disproportionate casualties necessary to make it all work. Recall that while Germany outnumbered the British or the French separately, it never, between fall 1914 and March 1918, had overall numeric superiority over both together. Hence the idea of crushing one and then the other (this wasn’t especially original; Napoleon tried the same gambit in the Waterloo campaign, Jackson illustrated it masterfully in the Valley Campaign in 1862, and Ludendorff tried it in the spring, 1918 offensives).

But Falkenhayn, encouraged by the amount of ground and the number of forts his troops in fact did capture in the battle’s early phases, changed his objective. Instead of contenting himself with slaughtering Frenchmen at a highly disproportionate rate, he decided he’d grasp the territory. He of course managed to kill enough Frenchmen that, by the time the battle was over in late 1916, the French army had only one offensive left in it (the Nivelle offensive of 1917), after which time it mutinied and was more or less finished as an offensive force. But he also managed to slaughter a vast number of his own troops trying to take a place he’d initially had enough sense to realize he didn’t need to take. And in doing so he finished the German army in the west as an offensive weapon until it was reinforced with the troops from the Eastern front released by the Soviet surrender in 1918. The difference, as we now know, was that the horrific French losses, and the terrible British losses on the Somme in 1916 (which offensive was launched in no small measure precisely to take the pressure off of Verdun) were to be made good by hordes of American doughboys. Germany’s every loss was a soldier who wasn’t going to get replaced.

Put a bit metaphorically, Falkenhayn originally conceived the notion of tossing a hand grenade between his enemy’s legs from a distance, but then decided he’d just as well hand-carry the same to its target. With predictable results.  The battle blunted the offensive power of the western German armies and cost Falkenhayn his job.  As his replacement the kaiser installed the team of Ludendorff and Hindenburg at the top of the German command structure.  Once there they dug themselves in, so to speak, and so consolidated their control over Germany and its war effort that by the end of the war the kaiser was no more than a cipher, rubber-stamping decisions handed to him, passing out medals to the survivors, and going for rides in the countryside around headquarters.

With Hindenburg and Ludendorff in place, the last chance for a conclusion of the war other than one through collapse (by one side or the other) vanished.  Those two were true believers in ultimate victory; they believed their army could do anything.  It was the army which assured the kaiser that it could win the war before American troops arrived in large enough numbers to make a difference, leading directly to the approval for resumption of unrestricted submarine warfare in February, 1917.  It was the army which propped up Austria-Hungary as that nation collapsed in on itself after the Brusilov offensive in summer 1916. And in the end it was Hindenburg’s statement that he could no longer guarantee the loyalty of the army which induced the kaiser to slip across the border to the Netherlands on November 10, 1918.

On the French side, the “victory” at Verdun became one of the — I’m tempted to say “founding myths,” but really it wasn’t a myth — loci of inter-war French politics and society.  It’s no accident that it was the victor of Verdun, Marshal Pétain, who was dragged out of retirement to head the Vichy government.  For a good treatment of the battle and what it meant to the France of 1916 and the France that survived the war, you can do much worse than this.

So the failure of Germany at Verdun has a claim to be among the most momentous results of the Great War, not so much for the tactical decision obtained (the French kept the town and what was left of the surrounding forts) as for the changes it wrought in the overall complexion of the war.

Verdun is now firmly established as part of that infamous group of battles in which the commanders blindly fed men into a meat grinder on the supposition that if they stuffed enough in, fast enough, eventually they’d jam the works and bring it to a stop. Loos, the Somme, Passchendaele, Verdun, Gallipoli, the Nivelle offensive: Their very names have become bywords for callous disregard of the human lives entrusted to one. The commanders who kept those offensives going for weeks and months after it was abundantly clear that there was no prospect of victory on any basis justifying the slaughter have rightly been damned by posterity.

And this gets me to a quibble about what I suggest is historical revisionism regarding General Grant’s talents as a strategist and/or a tactician. Once upon a time Grant was viewed as a plodding butcher, a Douglas Haig with a cigar sticking out of his face. That view arose chiefly as a result of his campaign in Virginia from 1864 through the end of the war. That’s not the fashionable view of him, these days. More recent books tend towards a much more hagiographical approach to his conduct of the war in the East. I freely concede his resolution of the Vicksburg campaign was every bit as audaciously brilliant as it has ever been made out to be. But of his signal victories other than Vicksburg – Ft. Donelson, Shiloh, and Chattanooga – the first was a siege where he was ferried to within a few miles of his objective by the navy; the second was a “victory” only in the sense that he didn’t get his ass run backwards into the Tennessee River by the end of the first day, and then, once reinforced overnight, he outnumbered the Confederates by a sufficient margin that they weren’t able to remain on the field; the third wasn’t really his doing anyway (or any other general’s, for that matter), but rather that of the private soldiers in the Army of the Cumberland (which hadn’t been Grant’s army in any event) who, at Missionary Ridge, decided they’d had enough of looking at the Rebels on that damned ridge and took matters into their own hands, driving them from the field in disarray. [N.b. The only two times in the entire war that a Confederate army was driven from the field in disorder – Chattanooga and Nashville – it was the Army of the Cumberland both times, under the command of General George Thomas, whose talents Grant apparently went out of his way to disparage, unfairly in the opinion of at least one biographer of Thomas.]

I will also give Grant full credit for understanding that the war was not so much about conquering territory as it was about destroying the South’s ability to continue resistance. This was an insight which seems largely to have escaped the powers running the war in the East. I won’t excoriate the commanding generals alone, because they were working under the intrusive gaze of the entire Washington power establishment. It might well be that, although one or another of them may have had it figured out, the geo-political reality of the relative situations of Richmond and Washington effectively prevented any such general from transforming that strategic insight into operational plans. Or maybe not. Sherman in the West also understood the war in that sense, but he then took that comprehension to the next level after Atlanta. Neutralizing Atlanta as a transportation, supply, and communications hub effectively destroyed the Confederacy as a going concern west of the mountains. Sherman’s sequel, the March to the Sea, was nothing else than a conclusive demonstration to the people of the South, in the most immediate manner possible, that their country had lost the ability to keep a massive army from strolling across an entire state, taking its sweet time to do so, and burning and plundering everything in its path. Any Southerner who did not, by the time Sherman reached Savannah at Christmas 1864, understand the war was lost had to have been singularly obtuse.

So how did Grant go about realizing his strategic insight on the battlefield? Well, at some point during the year, more or less, that Grant was in the East, someone pointed out to him that his army and Lee’s were like the Kilkenny Cats, who fought so viciously that each ate the other up. Grant famously observed, “My cat has the longer tail,” meaning, of course, that he could consume Lee’s army and still have some of his at the end of the day. And that is exactly what he set out to do. He was entirely willing to accept horrendous casualties (by spring, 1865 the Army of the Potomac had suffered a casualty rate of over 100%) on the sole condition that his soldiers, in dying, kept killing Confederates. Which they did.

And in the end, the longer tail of Grant’s army won out.

Were there other ways to have accomplished the destruction of Lee’s army without grinding his own into a bloody pulp? There must have been; there almost always are. Grant’s preferred method to keep Lee too busy hemorrhaging soldiers to get up to mischief was to keep him engaged, day after day, week after week. Sure, that works. But you can also tear an army to pieces by keeping it on the move and denying it any opportunity of rest and re-supply.

Much of Lee’s strategic maneuvering in 1862 and 1863 had been a direct response to his inability to supply his army without continual fresh territory to plunder. He couldn’t stay in one place more than a brief spell because his men would strip the countryside bare, and the Confederate government had no way to provide him the supplies which would have permitted him to live other than off the immediate vicinity. By 1864 nearly all of Virginia had been stripped bare. Grant had the men to move on sufficiently broad and widely dispersed fronts that there was no way Lee could have responded to all of them and protected his supply base at Richmond (which was, in addition to being the capital, a major center of what little industrial capacity the South enjoyed). Grant also had the massive transport and supply systems of the North at his back. Is it so unthinkable that by launching offensives on enough different fronts he could have leveraged Lee out, away from the Richmond-Petersburg line, and forced him so to divide his forces that, even with Lee’s famous defensive capacities, his army would have collapsed piecemeal, all the while trying to live off land that had repeatedly been plucked clean during the war up until then?

Such a strategy of maneuver would have taken a great deal of shoe-leather, and not a few trains and wagons. But by that time the North was cranking out such things in quantities never before seen in human history. No. Grant chose the simple method of making Lee out-bleed him. In contrast, after Kennesaw Mountain (for a good working description of what that fight was like, read Ambrose Bierce’s account), Sherman never again launched his men in a frontal assault.

All of which is to say that I don’t buy the recent praise for Grant’s abilities as a field commander. As a strategist, yes. As an organizer of the movement of some of the largest field forces in history (I think Napoleon’s Grand Army that invaded Russia in 1812 may have been larger than the U.S. Army in the East, but not by much), certainly. But as a commander who understood how to accomplish his purposes by other means than drowning his opponent in his own men’s blood, not so much.

A good friend of mine went, a number of years ago, to the battlefield at Verdun. Large areas of the countryside are still pock-marked by interlocking shell craters. It’s grown up now, but there are still places where the soil is too contaminated to till. My friend went to the ossuary they built. As you might imagine in a battle in which so much of the action consisted of massive artillery bombardment, huge numbers of the dead were so blown to shreds that there is no way ever to sort them out. In many cases, of course, there wouldn’t even have been enough for a burial. So they brought all the miscellaneous bones together and built a large hall over them. There are windows, through which you can peer at the remains of some 130,000 unidentified French and German soldiers.  As more bones are discovered each year, they are added to the pile.

My friend described the ossuary at Verdun as being the most eerie place he’d ever been. I can imagine; it is not in many places in the world, or at many times, that such massive collective evidence is presented of the horrors of which mankind is capable. The liberated Nazi concentration camps would have been such places. The killing fields in Cambodia might have been. Verdun is another.

And so we pass another grim anniversary date.

Neptune’s Inferno; or, “If You Get Hit, Where Are You?”

I finished reading this morning, while camped out in front of the (closed) Turkish Airlines counter at Dulles (they have one single flight out of here, at 11:10 p.m., and they don’t open their counter for check-in until 7:20 p.m., and you can’t get through security without a boarding pass, which you can’t get without check-in, and did I mention that all the restaurants in Dulles are on the far side of security and I’ve been here since 4:00 a.m.?), a book given to me for Christmas, Neptune’s Inferno, by James D. Hornfischer, a history of the naval battle for Guadalcanal, from early August through the end of 1942.

This is the third book of Hornfischer’s which I’ve read. I have his Ship of Ghosts, about the survivors of USS Houston. She was part of the ABDA fleet which was annihilated in the opening weeks of the war. She survived the first few battles only to come to grief in the Sunda Strait. She, in company with HMAS Perth, stumbled across the entire Japanese invasion fleet coming ashore in Java, including a destroyer force and a squadron of heavy cruisers covering the transports. Both Allied ships were sunk, each taking roughly half her crew with her. Both captains were killed in the action, Houston’s by taking a shell splinter that just about eviscerated him. Houston’s survivors ended up in no small part working on the Burma-Siam railroad line the construction of which forms the setting for Bridge on the River Kwai. The battle was so sudden – the Allied ships hadn’t expected to come across hostiles – and came so deep in the middle of the night, that Houston and her consort effectively just disappeared, as far as Allied high command could tell. It wasn’t until the end of the war that it was known anyone had survived, and who.

A couple of vignettes from that book.

One of the eventual survivors from Houston had his battle station in the mast top, manning a heavy machine gun with a Marine sergeant. As the ship was heeling over, on her death ride and with the order to abandon ship having been given, the sailor was getting ready to drop into the water (by that point the top was well out over the water), and he noticed the Marine wasn’t. Come on, let’s go, was the thrust of his observations. The Marine just pointed out that he couldn’t swim. So over the sailor goes, striking out with might and main to avoid the suction when the ship went down. He later recalled that among his last glimpses of Houston was the sight of tracers still pouring forth from the mast top, as the Marine fought his station to the very last. You can’t teach that kind of tough.

The other vignette speaks volumes about how the Dutch (who owned Java as of the war’s beginning) were viewed by the locals, and how the Japanese were viewed (at least as of that time). Houston sank so close to the beach that many of the sailors who got off in time were able without too much trouble to swim ashore. The current in Sunda Strait is pretty ferocious, but since the swimmers were swimming perpendicular to it, those who weren’t swept out into the open ocean were able to make shore. To a man they were turned in to the invaders by the local villagers who found them hiding in the woods, and it wasn’t out of fear of the Japanese. The Dutch had behaved in the East Indies much as the Belgians had in the Congo, and with very similar results, in terms of how the native population reacted when they had the chance for regime change. In short, the Japanese Greater East Asia Co-Prosperity Sphere was very much not looked upon as being a cynical euphemism by its purported beneficiaries.

The third book of Hornfischer’s I have is The Last Stand of the Tin-Can Sailors, the story of the destroyers and destroyer escorts screening the escort carriers whose job it was, during the Battle of Leyte Gulf, to cover the landing forces and provide in-shore close air support.  Admiral Halsey having been snookered into taking all his fleet carriers and all his heavy screening forces (he flew his flag in New Jersey, sporting nine 16″/50-cal guns) to chase — far to the north, well away from the critical focus of Halsey’s actual mission — Japanese carriers that weren’t carrying any planes – in other words, they were suicide decoys – all that was left to guard the San Bernardino Strait was a group of escort carriers, whose magazines were full of anti-personnel and other “soft” (in other words, not armor-piercing) ordnance, along with a squadron of destroyers and one of even smaller destroyer escorts. And here comes Admiral Kurita with the Center Force, consisting of the bulk of the remaining Imperial Japanese Navy heavies. Battleships and heavy cruisers. It actually took them two tries to get through the strait; it was on the first effort that Musashi was sunk (her sister, Yamato, didn’t go on her own death ride until later). Kurita had turned back but then reversed course after all and on the morning of October 25, 1944 (metaphor alert: this was the anniversary of Agincourt in 1415, when a badly outnumbered Henry V opened a can of whip-ass and flat smeared it all over the French – we few, we happy few, we band of brothers, anyone?) all that stood between him and the helpless American invasion fleet at anchor, frantically unloading the invasion force, were a dozen or so tin cans, with the escort carriers several miles further off.

Hornfischer uses the story of USS Johnston (DD-557), commanded by Commander Ernest E. Evans, to construct the narrative framework of the story. He was from Oklahoma, half-Indian (and so of course his Academy nickname was “Chief”). When he took command of Johnston, he offered any man in the crew who wanted off a transfer, no questions asked.

On that October morning, by chance Evans’s ship happened to be the closest in the formation to the Japanese battle line as it came out of the strait. Without waiting for orders, he turned his destroyer to engage a line of battleships and cruisers. Maneuvering at flank speed, he engaged with such of his 5″ mounts as could be brought to bear, chasing the Japanese shell splashes (on the theory that your enemy will have corrected his fire control solution away from that spot so he won’t hit there again) and trying to get close enough to launch his torpedoes. Chasing shell splashes only works if your enemy doesn’t figure out what you’re doing, and if there are enough people shooting at you, then you’re out of luck in any event; there’s no place to dodge to where someone’s not likely to drop a 14″ round onto your unarmored deck. Which is what happened to Johnston. She started taking large-caliber shell hits.

Evans gave the order to launch the torpedoes and then turned away to open the range. By that time all Johnston’s 5″ mounts were out of commission, the ship had been badly holed, was on fire, and was losing speed. As she steamed away from the Japanese, she came upon the other small boys, likewise riding hell-for-leather to engage the enemy battleships with their destroyers and destroyer escorts. Notwithstanding that he had nothing left to fire at the Japanese, Evans turned Johnston around and went back into the fight. After all, Kurita had no way of telling she was a sitting duck; every turret that fired at Johnston was a turret not firing at a ship still capable of action. When last seen, Evans was standing on Johnston’s fantail, severely wounded (as I recall, among other things, he had a hand shot off by that point), shouting rudder orders down a hatch into the rudder room where crewmembers were manhandling the rudder, all other steering control having been shot away.

Evans received a posthumous Medal of Honor. And the small-ship Navy acquired an immortal example of gallantry.

They’re called “tin cans,” by the way, because that’s how easily they open up. When I was on an Adams-class guided missile destroyer back in the day, we had an A-6 that was supposedly bombing our wake for practice put a practice bomb onto us instead. The idea is they drop these dummy bomblets that have a saltwater-activated smoke flare in the nose into your wake, 500 yards or so astern of you. They’re aiming at the centerline of your wake and it’s easy to see how good their aim is. Well, this ass-hat, in the words of the JAGMAN investigating officer’s report – which I saw – “released his bomb with a friendly ship filling his windscreen.” This practice bomb weighed less than 10 pounds and, except for the smoke flare in the nose, was completely inert. A chunk of metal, no more and no less. It went completely through our ship. It penetrated a bulkhead on the O-2 level, blew up the Mk-51 fire control radar’s power panel, penetrated the O-2 level deck in that space, crossed the small office space beneath that and went through the far bulkhead out into the open air, penetrated the O-1 level deck, went across the main passageway (almost taking out our chief boatswain’s mate), penetrated the inboard bulkhead of the chief petty officers’ mess, ripped up their refrigerator, penetrated the far bulkhead back into open air, and would have kept right on going over the side except it hit the inboard side of one of the davits for the captain’s gig, and bounced back into the scuppers.

Neptune’s Inferno, as mentioned, deals with the specifically naval engagements of the Guadalcanal campaign. The Marines ashore make an appearance only to the extent of their interaction with the navy, consisting mostly of their outrage when, two days after the Marines splashed ashore, Vice Admiral Frank Jack Fletcher (most recently seen relinquishing command of the American carriers to Raymond A. Spruance half-way through the Battle of Midway back in early June, 1942, when his flagship, Yorktown, was put out of action and eventually sunk) took the carriers, which were pretty much all the flat tops the Navy had in August, 1942, away from the battle in order not to risk them against Japanese aircraft. It was a decision Admiral Ernest King never forgave him for (and for which he was relieved). From a strategic perspective it was the right choice. If those carriers had been put out of action at that point, the Navy’s operations in that entire portion of the Pacific would have been crippled. You can always get some reinforcements ashore, get some more supplies ashore. In fact the Japanese did more or less exactly that with the night-time runs of the “Tokyo Express”; because of the Marines’ Henderson Field on the island, and the back-up of the American carriers just out of reach of their land-based aircraft flying from Rabaul, they couldn’t make day-time landings or even use slower transport ships because they couldn’t get in, un-load, and be gone from the danger zone before the American aircraft would be back in the air the next morning. So they used destroyers . . . and managed to put well over 20,000 troops ashore, together with artillery and related supplies.

The Marines came to forgive the Navy, more or less, when the light surface forces (destroyers and the new anti-aircraft cruisers, bristling with 5″ rapid-firing guns) showed a gleeful willingness to plow up great swathes of Japanese-bearing tropical jungle. They’d literally hose out corridors through the undergrowth with their gunfire. No less than Lt. Col. Lewis B. “Chesty” Puller expressed his gratitude after having observed the fun from one of the firing ships. The sub-title of this post is his reply to his host’s reaction when, just prior to going back ashore, he observed to the captain that he, Puller, wouldn’t have the captain’s job for anything.  The captain was amazed; surely wouldn’t he prefer to have a shower and a bed when the day’s work was done?  Puller asked him where he would be when he got hit, and then pointed out, “When I get hit, I know where I am.”

And then after the night-time surface actions all the bodies would wash ashore.

In the end, for every Marine who died defending Guadalcanal dirt, three sailors died defending its waters. USS Juneau, her keel already broken by a torpedo strike and shot all to hell, was limping away the morning after the Night Cruiser Action, on November 13, 1942, when a submarine found her. She literally disappeared in a single flash of explosion. Out of her crew of almost exactly 700, all of ten men survived. Among the dead were the five Sullivan brothers, of Waterloo, Iowa.

For all the valor of the surface navy – and the naval fight was overwhelmingly a surface fight; the airplanes were mostly consumed (and they were consumed, as well) defending Henderson Field – the senior leadership really comes across as bumbling, in Hornfischer’s telling. Most of the action went down at night, an environment the Japanese had spent years aggressively training to own. And they did, even without the benefit of search or fire-control radar, both of which the Americans had in abundance, and which all but one of the OTCs (officer in tactical command: the guy out on the water who’s actually ordering the formation, steaming directions, and controlling – supposedly – the action) studiously ignored. It started with the Battle of Savo Island (a gob of island several miles to the northwest of Guadalcanal proper), when a fast-moving Japanese cruiser squadron got the jump on not one, but two American formations of cruisers and destroyers, and sent four out of five Allied cruisers (USS Quincy, USS Vincennes, USS Astoria, and HMAS Canberra) to the bottom in a maelstrom of fire lasting barely an hour from start to finish.

The eventual verdict on Savo Island (the waters between it and Guadalcanal acquired the nickname “Ironbottom Sound” by the time it was all over) was that the Americans simply had not been ready for combat, eight months after Pearl Harbor. They just didn’t know their craft. The Americans got a little of their own back off Cape Esperance when Rear Admiral Norman Scott was put in charge of a scraped-together force to challenge the night-time deliveries of the Tokyo Express. But for all of his drilling his ships in gunnery exercises (including off-set firing at each other, where two ships would shoot at each other’s wakes, or at target sleds towed by each other, much like that A-6 pilot was supposed to have done to my ship 40-odd years later), and all his aggressive instincts, even he couldn’t quite get it all in one sock, when it came to a real, live, shoot-em-up night action. He bungled some maneuvering signals, put his flag in a ship which did not have the 10-cm search radar (a vast improvement over its predecessor; it was actually useful for running a naval fight, as was later demonstrated), and before anyone knew it, what should have been a smoothly unfolding fight turned into a chaotic slug-fest, with individual commanders more or less picking their targets of opportunity and seeing how many rounds they could pump into them. Scott’s forces did manage to cripple the sole Japanese battleship sufficiently that she was scuttled. But it was otherwise an opportunity mostly lost.

Then the mistakes got worse. Rear Admiral Dan Callaghan, a real swell guy but a desk admiral, was put in charge of the cruisers, over Norman Scott, who – even if he’d stumbled a bit his first time out of the gate – at least had spent countless hours pondering the dynamics of modern naval action. There is not much indication that Callaghan did. He owed much of his advancement to senior rank to his connections, not least with FDR himself. During the Night Cruiser Action of November 13, 1942, he made an absolute pig’s breakfast of his formation, his handling of it, and his conduct of the battle. But he did have the decency to get killed that night, along with all but one of his staff and his flag captain (Cassin Young, who had won his own Medal of Honor at Pearl Harbor). Norman Scott was also killed that night. But the Americans bagged one of the two battleships the Japanese had sent.

In the aftermath of the Night Cruiser Action, the Americans had so few heavy surface forces left that Halsey finally decided to pull his two battleships – Washington and South Dakota – away from escorting carriers and transports, and shove them into Ironbottom Sound. And not a moment too soon. Admiral Yamamoto had decided to try one final all-out push to destroy Henderson Field through naval gunfire (they’d made a pretty good run at it back in September). This time the American commander, Rear Admiral Willis Lee, was a radar geek who knew exactly what his radar could do. The Americans shot them all to hell and gone, saving Henderson Field and thereby guaranteeing that the Japanese simply could not maintain their forces on the island.

By the time the Japanese evacuated, many of their units had only a handful of men left who were not so starved or sick or both as to be completely out of action.

What I found interesting about the book, other than the very well-written narration, is the portrayal of William Halsey. In Last Stand, Halsey comes off as a blustering buffoon, who was so gung-ho to Get Him Some Carrier Scalp that he abandoned what was actually his principal strategic function – safeguarding the Leyte Gulf invasion – and but for the courage of the small boys could have cost the Americans an enormous loss. Gentle Reader will also not overlook that it was during this time and shortly thereafter that Halsey came within an ace of losing not one but two battle groups to typhoons, by reason of his mismanagement of refueling. In Neptune’s Inferno he comes across as something of a naval cross between Nathan Bedford Forrest and Omar Bradley. Perhaps it’s the difference between 1942 and 1944. By the time of Leyte Halsey had worn four stars for almost two years and was a fleet commander. Perhaps with Leyte he had risen to his level of incompetence.

In any event, Neptune’s Inferno is a tremendous read. Hornfischer does an excellent job of narrating surface naval action. This is more complicated than it sounds, I suggest. If you’re describing the Battle of Shiloh, for example, or the First Marne, you can hook your narrative onto place names that can easily be shown on a map in geographic relationship to each other. Not every author has this talent. The first time I tried to read August 1914 I gave up because Solzhenitsyn’s description of the run-up to Tannenberg is nearly unintelligible without a map to refer to (and then some time later I discovered that – in the very back of the book, exactly where you would not look for it – his publisher had put just such a map; made all the difference in the world). In describing a naval surface action, however, all you’re left with is “port” and “starboard,” and it’s very difficult even to draw it out on a map because the relative positions of the ships to each other at any specific moment are of such critical importance. I think Hornfischer does as good a job of conveying the actual movements of the ships over the trackless water as anyone I’ve ever run across.

Can’t recommend too highly, in round numbers.

Carousel of History?

We may hope not.

Over at Instapundit, a link, via Ed Driscoll, to a piece by one of my favorite linkees (is that a word, even?), viz. Victor Davis Hanson, “A Tale of Two Shootings”.

[N.b.  Hanson, whom I’m mostly familiar with via the internet, is a very accomplished classical historian, with a heavy sideline in military history.  I recently read — it was borrowed, so I had to return it, much to my chagrin — his The Soul of Battle: From Ancient Times to the Present Day, How Three Great Liberators Vanquished Tyranny, a comparative history of Epaminondas’s conquest of Sparta, Sherman’s march through Georgia, and Patton’s march through France in 1944.  Fascinating stuff.]

Be all that as it may, Hanson looks at two shootings:  the first, in 2014, of the violent criminal Michael Brown in Ferguson, Missouri, and the second of Kathryn Steinle, in San Francisco.  Brown was black; Steinle was white.  Brown had just committed a robbery; Steinle was walking down a pier with her father.  Brown had just attacked and attempted to seize the weapon of the police officer who had matched him to a minutes-old radio alert of the robbery, and was shot dead in his tracks, from the front, while charging the officer.  Steinle was shot dead in the back while . . . well, while walking with her father, minding her own business.  Brown was shot by a police officer; Steinle was shot by a multiple-convicted felon whose very presence in the United States constituted a crime.  The police officer who shot Brown was white; the convicted felon who shot Steinle was Mexican, an illegal alien.

After Brown was killed in the midst of his attempted third felony of that day (first: robbery; second: attacking and attempting to steal weapon from law enforcement officer; third: second attempt to attack and steal weapon from same), Dear Leader’s administration and his political allies very carefully stoked the fires of racial hatred, and Ferguson burned.  After Steinle was shot dead by the felon who was very intentionally released by the City of San Francisco in spite of a request by federal authorities that they hold him until he could be deported (this would have been his sixth deportation), there were . . . crickets.

Hanson has the temerity once more to point out the very different treatment of the two killings, one indisputably justified (Brown’s), and the other (Steinle’s) indisputably an abomination, all but engineered by the left-extremists in the San Francisco city government.

Maybe VDH didn’t want to violate Godwin’s Law, which holds that the longer an internet discussion goes on, the closer the probability approaches 1.0 that someone will make an explicit comparison to the Nazi era.  But since Hanson put up his post yesterday, and today is November 9, I’m going to do the belly-flop for him.

On November 9, 1938, Germany exploded.  Well, to be more precise, a segment of Germany exploded.  That segment was the segment represented by synagogues and Jewish businesses.  They were torched, their owners and congregants beaten, in many cases beaten to death.  There was so much broken glass in the streets from smashed windows that the Germans knew it as “Kristallnacht,” or “crystal night.”  Here’s the Wikipedia entry, for those curious.

Why did Victor Davis Hanson’s post on the political reaction, and the carefully orchestrated violence, in response to Michael Brown’s death put me in mind of November 9, 1938?  Because Kristallnacht too was a highly orchestrated orgy of violence in response to a single killing.  Ernst vom Rath was a German diplomat stationed in Paris.  On the morning of November 7, 1938, Herschel Grynszpan, a teenaged Polish Jew then living in Paris (he had fled Germany in 1936; after his arrest he stated that he acted to avenge the news that his parents were being deported from Germany back to Poland), fired five shots at him.  Vom Rath died on November 9, by which time the Nazi powers had had time to organize “spontaneous” demonstrations of outrage inside Germany.

The destruction of November 9, 1938, was no less “spontaneous” than the observances surrounding the announcement that Darren Wilson, the police officer who successfully defended himself from Michael Brown, would not be indicted for any criminal offense.

Carousels are circular.  Stand in one place long enough and everything you’ve seen before you’ll see again.  Sort of makes you wonder, doesn’t it, what else from the 1930s and 40s we’re going to see again in the coming years?  Holodomor?  Molotov-Ribbentrop?  Munich? (Dear Leader sure made a run at that last by handing the Iranian mullahs a green light for nuclear weaponry.)  Greater East Asia Co-Prosperity Sphere?

Sobering thinking, it is.

[N.b.  I don’t know whether I’ve pointed it out before on this ‘umble blog, but November 9 is a date pregnant with significance in German history.  In 1918, the German republic was proclaimed and the Kaiser abdicated; in 1923, the Beer Hall Putsch failed; in 1938, they put on Kristallnacht; in 1940, Neville Chamberlain, the man who more than any other enabled Hitler to become the continental-scale monster he did, finally died; and, in 1989, the Berlin Wall, the physical embodiment of the war’s outcome, came down.  Can’t make this stuff up.]

 

Indictment or Lament?

A very dear friend of mine, whom I met years ago in New York City, is an Artsy Person.  By that I mean he has overwhelmingly made his living in and around the visual and aural arts.  Back in the day his day job was as an animator, and he played drums in a band at night (jazz and swing, mostly).  I’d met him through the Navy Reserve.  I went to see his band once, and among my favorite memories is the sight of him sitting behind his drum set, slinging sticks into the air and flailing away (he’d cringe to hear me use that verb), wearing a USS Guadalcanal ball cap and a black t-shirt with a huge Bugs Bunny head on it.  Wrap your mind around those two organizing principles and you’re well on your way to knowing and loving this buddy of mine.  He’s since transferred to the National Guard where he plays in the 42nd Division concert and parade band.

I haven’t heard him mention working in animation for years now, from which I deduce that the trend he commented on all those years ago — a combination of computer animation and out-sourcing any residual drawing to scut-work hack-shops overseas — finally killed enough of the industry here that he couldn’t make a go of it any more.  For years he kept up his band; now that he and his wife have moved upstate he doesn’t play in that particular band any more, either.  But he’s still very much engaged with the State of the Art (pun intended), and so he puts stuff up on his Facebook page from time to time on the subject.  His most recent post is of this article:  “The Devaluation of Music: It’s Worse Than You Think,” from a blog called Medium.

The overall thrust of this article is that American society at least (the foreign market is not addressed) has forgotten how to value music, and not just in a purely monetary sense.  The upper and nether millstones of paltry royalties from streaming services and digital piracy get a look-in, of course.  The article’s main point, though, is that we as a society simply no longer put forth the effort to integrate what the author calls “the sonic art form” into the fabric of who we are individually.

Which is to say, the author paints and protests the elision of music as an art from our culture.

My buddy’s Facebook post was, “…and THIS, folks, is one reason western civilization is doomed. The suits run EVERYTHING these days. No wonder I am a culture snob…”  I think that, with one exception, he trivializes the article’s point.  [Here I should note that at some point during the past couple of decades, my buddy went from being fairly conservative economically and politically, and tolerant socially, to being a pretty flaming quasi-Marxist and sucker for PC demagoguery.  That “the suits” are running and ruining everything is a steady background theme of much of his discourse.  He of course has a point, to some degree, but then it’s not an invalid point that the bills have to be paid by someone, and no one is in anything for free, and it’s the job of “the suits” to figure that part out.  I’ve never explored in depth with him the waystations on his journey, but the contrast between the friend I made and the friend I have is about as stark as you can imagine.  Emblematic:  About the first conversation with him that I can recall, all those years ago, he was ranting about how “the Masons” were controlling the world and everything was a Masonic conspiracy to X, Y, and Z.  He’s now a very committed Mason.]

The one exception mentioned is the pernicious influence of commercial radio.  From the article, in full, the relevant passage:

“It’s an easy target, but one can’t overstate how profoundly radio changed between the explosion of popular music in the mid 20th century and the corporate model of the last 30 years. An ethos of musicality and discovery has been replaced wholesale by a cynical manipulation of demographics and the blandest common denominator. Playlists are much shorter, with a handful of singles repeated incessantly until focus groups say quit. DJs no longer choose music based on their expertise and no longer weave a narrative around the records. As with liner notes, this makes for more passive listening and shrinks the musical diet of most Americans down to a handful of heavily produced, industrial-scale hits.”

Can’t argue with the author’s description of what happened, but I would suggest a more depressing take than his on why it happened.  The author seems to imply that the change in commercial radio was the product of conscious choice, which implies, of course, that a conscious choice could be made to return to the Good Old Days.

I don’t think the author has given due consideration to the realities of the world that gave rise to those Good Old Days, and how those realities have changed since then.  Consider:  Until the rise of the 8-track tape in the mid-1970s, the radio was your only source of third-party entertainment in a car.  Around the house, unless you wanted to pop for a great big bulky CRT television or expensive vinyl record player (the el-cheapo ones produced crappy sound that made anything other than The Archies absolutely unbearable) in every room, if you wanted entertainment or even just background noise in any room outside your living room, your choice came down to . . . radio.  Because more people listened to radio, any given radio station could afford to specialize, or experiment, or really be what it felt like being, and still make a go of it while attracting only a small share of the total listening market.

What started to change in the late 1970s and early 80s?  The 8-track player and, even more importantly, the automobile cassette tape deck, for starters.  Now you had a highly portable, large-capacity (90-minute cassette tapes, anyone?) medium for the music you wanted, without commercials or other interruptions, that you could start, stop, pause, and replay at will.  Tired of Miles Davis and want to get your Mozart on?  Push the eject button, flip open a cassette case, shove in the new cassette, and in a matter of seconds you’ve gone from 20th Century jazz to 18th Century classical.  Radio just can’t keep up with that.  Beginning in the early 1980s you had fairly economical high-quality portable stereos that you could strew around the house, with one in the kitchen, one in the laundry room, one in each bedroom, in the basement, in the garage, in the shop building.  I’ve never seen actual numbers, but I’d bet someone else’s monthly income that the proportion of the U.S. population that regularly listened to radio began to plummet.

Nowadays you have inexpensive flat-screen televisions, iPods and similar devices, most of which you can now plug into your car if they’re not already built in as standard equipment on even low-end vehicles, high-quality sound coming out of your laptop or desktop, etc. etc. etc.  And of course you can access hours upon hours upon hours of music, organized to be heard however you choose (listen straight through albums in sequence, or shuffle among albums, or shuffle among individual tracks, and of course with the ability to start, stop, pause, and replay at the touch of a button), and all in a highly portable format.  I’d be surprised if the proportion of radio-listeners hasn’t dropped even further.  And all we’re talking about is music alternatives to broadcast music radio; how about talk radio, after all?  Or subscription satellite radio, with its hundreds of channels?

So what’s a radio station to do, which has to meet its bills?  You’ve got to capture a greater share of a smaller audience.  And how do you capture a greater share?  You go after what most people like most of, most of the time — what our author describes as “cynical manipulation of demographics and the blandest common denominator,” to use the cacophemism.  That of course produces a feedback loop.  If you provide lowest-common-denominator fare, then the overall population’s preferences migrate toward that denominator, which means that there’s less to be gained from aiming outside that target area, which means that what’s provided gets even more relentlessly uniform.  And so forth.

Recognizing the truth of the article’s point that the proletarianization of broadcast radio is every bit as disastrous as presented, there’s a reason that enormous chunks of people quit listening:  Even a top-flight radio station simply cannot compete in control, quality, and choice with low-cost music storage and reproduction.  In my car’s CD player right now, I have Brahms, The Who, Don McLean, Jim Croce, Dietrich Buxtehude, and Mozart.  If I want to go back and listen to the Variations on a Theme by Haydn three times in a row, straight through, just because it almost moves me to tears, and then jump right on over to “Everybody Loves Me, Baby” because it makes me, a child of the 70s and 80s, chuckle, to be followed by “Gelobet seiest du, Herr Jesu Christ,” which was played at my wedding, and “Won’t Get Fooled Again,” which you can describe as the theme song of the Dear Leader Administration, I can do that, and there has never been and never will be any third-party provider/selector who can keep up with me.  The dynamic the author’s describing cannot be stopped or undone without going back to the days of the captive audience.  Very respectfully, I decline to endorse that proposal.

So much for the commercial radio angle, as to which my buddy’s complaint about “the suits” ruining everything is by and large valid.  Of course, whenever you complain about So-and-So Doing X, you must, if you are honest, describe what So-and-So ought to be doing other than X, and how So-and-So can make the house payment by doing Other-Than-X.  I’m not hearing that alternative universe outlined with any convincing detail.

The linked article’s author then goes on to describe several other trends that he identifies as contributing to the de-valuing of music, and as to which I think he’s on very firm ground, but as to which I think the conclusions to be drawn are even more pessimistic than his own.  The author uses “conflation” to describe the trend of lumping music in with other aural or video entertainment.  Music is not presented as something precious in its own right, but rather as just one more item on an ever-lengthening menu of Stuff to Pay Attention To, More or Less.  Gentle Reader is reading this blog at the moment, no?  Gentle Reader could be watching a favorite movie streamed or on DVD, or be playing a video game either alone or live with other players around the globe, or be working on his/her own blog . . . or be listening to the sonic art form.  And all those options are just a click away from each other.

The article’s author decries the lack of what he calls “context,” or more prosaically, the absence of intelligent, useful, or thought-provoking liner notes to the music.  If Bach’s C minor Passacaglia is reduced to an icon on a screen, then without some extra programming there’s no way to pop open the liner notes (and this was a massive advantage of the CD format over others; you could get 20 pages or more of liner notes into the jewel case) and read as you listen.  Of course, this problem is actually among the most curable the author describes.  Computer memory is cheap, and with devices getting ever-more-closely linked to each other, both locally and over the internet, what would prevent me from writing the code to tap or right-click that icon on my screen to access not 20 pages, but an entire menu of “context”?  It could easily range all the way from scholarly treatment to comparative reviews (this performer’s interpretation of a classical piece, or a comparison of Miles Davis’s rendition of the piece on this recording relative to some other recording of the same piece) to fan-based reviews to suggestions for further listening, and so forth.  Every piece a portal, in other words.
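By way of illustration only — nothing in what follows is tied to any real player or service, and every name in it is invented for the purpose — the “portal” amounts to little more than a lookup table keyed to the recording.  A minimal sketch, in Python:

```python
# Hypothetical sketch only: each recording carries a menu of "context" resources
# (liner notes, comparative reviews, suggestions for further listening), and the
# player pops that menu up when you tap or right-click the track's icon.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContextEntry:
    kind: str      # e.g. "liner notes", "comparative review", "further listening"
    title: str
    location: str  # a file path or URL where the material lives

@dataclass
class Recording:
    composer: str
    work: str
    performer: str
    context: List[ContextEntry] = field(default_factory=list)

    def context_menu(self) -> List[str]:
        """The entries a tap or right-click on the track's icon might display."""
        return [f"{entry.kind}: {entry.title}" for entry in self.context]

# Example: the C minor Passacaglia, with a few invented context items.
passacaglia = Recording(
    composer="J. S. Bach",
    work="Passacaglia and Fugue in C minor, BWV 582",
    performer="(some organist or other)",
    context=[
        ContextEntry("liner notes", "Notes from the original release", "notes/bwv582.txt"),
        ContextEntry("comparative review", "This recording against two others", "reviews/bwv582.html"),
        ContextEntry("further listening", "Buxtehude's ostinato works", "lists/buxtehude.txt"),
    ],
)

for item in passacaglia.context_menu():
    print(item)
```

The hard part, in other words, is assembling the “context” in the first place, not wiring it to the icon.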

Another trend the author identifies is what he characterizes as “anti-intellectualism,” which he treats thusly:

“Music has for decades been promoted and explained to us almost exclusively as a talisman of emotion. The overwhelming issue is how it makes you feel. Whereas the art music of the West transcended because of its dazzling dance of emotion and intellect. Art music relates to mathematics, architecture, symbolism and philosophy. And as such topics have been belittled in the general press or cable television, our collective ability to relate to music through a humanities lens has atrophied. Those of us who had music explained and demonstrated to us as a game for the brain as well as the heart had it really lucky. Why so many are satisfied to engage with music at only the level of feeling is a vast, impoverishing mystery.”

I do like his phrase “dance of emotion and intellect.”  Jacques Barzun’s magisterial From Dawn to Decadence: 500 Years of Western Cultural Life, 1500 to the Present has an extensive discussion of the emergence of this dance in the late 18th and early 19th Centuries.  I think the author’s spot-on with his observation about music being presented as a talisman of emotion, and how that presentation has adversely affected the intellectual component of the experience.  I disagree with him, however, that it’s a mystery why this is satisfying to so many people.

I know nothing of the author’s politics, of course, but unless he’s really, really an outlier in the arts world, he’s probably several standard deviations to the left of the bulk of the U.S. population.  The elevation of feeling and emotion — what makes me feel good about myself — is at the core of leftist politics.  From third-wave feminism to environmentalism to the “war on poverty” to social justice warriors, “micro-aggressions,” “safe spaces,” and so forth, the common denominator in all is that the political policies which grow out of these movements invariably do two things: (i) they make the actual problems worse, but (ii) they allow the proponent to feel good about himself for supporting them, and to trumpet his membership among the Saved.  Leftism today is simply no longer about results on the ground, but rather a quasi-religious series of rites of purification and sanctification the design of which is to signal the proponent’s moral superiority.

Like it or not, American politics and public discourse are well to the left of where they were before the FDR administration.  William Graham Sumner’s lecture, “The Forgotten Man,” was mainstream political discourse back in the day.  Find me anyone widely regarded in the public sphere since 1932 who could, or would, pen the following:

“When you see a drunkard in the gutter, you are disgusted, but you pity him. When a policeman comes and picks him up you are satisfied. You say that ‘society’ has interfered to save the drunkard from perishing. Society is a fine word, and it saves us the trouble of thinking to say that society acts. The truth is that the policeman is paid by somebody, and when we talk about society we forget who it is that pays. It is the Forgotten Man again. It is the industrious workman going home from a hard day’s work, whom you pass without noticing, who is mulcted of a percentage of his day’s earnings to hire a policeman to save the drunkard from himself. All the public expenditure to prevent vice has the same effect. Vice is its own curse. If we let nature alone, she cures vice by the most frightful penalties. It may shock you to hear me say it, but when you get over the shock, it will do you good to think of it: a drunkard in the gutter is just where he ought to be. Nature is working away at him to get him out of the way, just as she sets up her processes of dissolution to remove whatever is a failure in its line. Gambling and less mentionable vices all cure themselves by the ruin and dissolution of their victims. Nine-tenths of our measures for preventing vice are really protective towards it, because they ward off the penalty.”

Modern political discourse would categorically declare itself “horrified” (which is to say, its emotions would be excited) at the proposition that we should leave the drunkard in his gutter, the gambler in his den.  And from that “horror” it then proceeds immediately to the conclusion that we have an affirmative obligation to mulct that Forgotten Man (or someone, anyone other than the person demanding we “rescue” the drunk) to “save” the drunk or the gambler.  This is government by emotion, not intellect.  It requires an intellectual effort to confront the truth and implications of Sumner’s moral point that the actual, measurable effect of much of what government does to “prevent” the consequences of private misfortune — all too often the results of years, and in many cases generations, of bad private decision-making — is actually to protect and perpetuate it by enabling the people making those bad decisions to keep on as usual.  It requires a moral effort to ask who pays the price, and in what form, and what portion of that payer’s prospects and future are taken from him because we have forced him to pay.  And of course, it’s not just the drunkard or the guy shooting craps behind the gas station, nowadays.  Now it’s everybody and his cousin, and the more zeroes come with the bad decisions, the more likely it is that the people being protected will have the ear of government.

In short, we have managed to create an entire society that has been taught to introduce the conclusions of its reasoning with, “I feel . . . ”  We are instructed, and have been for generations, that what matters is the desire behind a policy, not its actual effect, overall, on a society of 300-plus million people.  It is relentlessly hammered into us that the appropriate frame of reference for judging whether Program X is working is not whether it produces more people who need Program X in order to survive, but rather that more people are surviving on Program X (in other words, the program’s own pernicious effects are treated as proof positive of its merits).  Is it then any surprise that we apply such reference frameworks to other areas of life?

I’ll note you needn’t ascribe the trend, as I do, to the dominance of leftism in particular in American society.  In point of fact both American mainstream political parties long ago conceded the central socialist premise.  The individual human is a building block to which is assigned a place in a structure designed by someone else, which will serve functions determined by someone else, and all for the greater glory of some abstract higher ideal determined by someone else.  In the late Middle Ages they built, all over Europe, magnificent stone cathedrals which reached higher into the sky than human hands had ever reached before (in fact, for centuries they remained the tallest structures ever built by men), to the greater glory of God.  We now want to “build” “society” to the greater glory of whatever specific version of society it is that we favor.

I suppose you could trace the idea that each member of “society” is nothing more than a tool, a stone, in the structure back to the French levée en masse, which was at first a defensive measure but which rapidly produced an army of conquest for the “liberation” of Europe from the ancien régime wherever it was to be found.  But it found its first true application in Imperial Germany’s nationalistic militarism, and then — as Hayek pointed out in The Road to Serfdom — the passion for “planning” spread to the rest of Europe, then to Britain.  It first washed ashore here in the Wilson administration, receded during the 1920s, and took firm root with FDR.

What is the relevance of my thoughts to this author’s point about the talismanic use of “feelings”?  Well, if you’re going to use a man — and socialism is about nothing other than using men — for your own purposes rather than his own, it sure does help if he doesn’t think too carefully about what it is that’s happening to him.  How do you keep him from thinking, though?  Well, ever since the Romans hit on the notion of bread and circuses, it’s been recognized that what you need to do, and most all that you need to do, is to occupy with sensations — with feelings — the psychic space that might otherwise be taken up with thought.  After all, I can control your sensations much more readily than I can your thoughts.  I can underwrite your housing, I can subsidize your trip to the grocery store, I can just hand you $X per month to piss away as you choose, I can take your children off your hands, tell you that it’s now the responsibility of my employees (we’ll call them “teachers”) to make sure Junior doesn’t turn out to be a homicidal boor, assure you that he and everyone else in his class is unique and uniquely above average, and so forth.  I can plunder the Forgotten Man of his last thread of garment to do this; it’s why it’s so easy for you to forget him.

The article’s author includes what the cynic in me wants to characterize as the “inevitable” lament about music instruction’s demise in public schools.  He may have something of a point, but then I really have to question how much of a point it is that he has.  I mean, so much of what we recognize as the towering great music of Western culture took form in an era before massive public education in the first place, and when formal education was commonly broken off at ages we would now consider abhorrently young, and large portions of such primary and secondary education as did exist were conducted in circumstances in which the only music being made was from the human voice (and maybe an out-of-tune piano).  How many of the giants of early 20th Century America — the men (and a few women) who jerked entire new musical universes from the very earth — even got to high school in the first place, let alone finished?  Plainly music in the schoolroom is not necessary for the creation of great music; you can easily falsify that proposition.

Is it necessary for the valuing of the music being created, though?  I’m not sure our author is on any firmer ground there.  For whom were these musicians playing?  Who made up their bread-and-butter audience?  Again, until after World War II, the huge portion of the American population, even in cities, that actually went to the venues where the new musical forms were being hammered out (and by the way, those venues weren’t the great urban concert halls . . . they were the jook joints, the church socials, school halls, and so forth) would not have received more than bare-bones schooling.

If not the live audiences, who were the people who listened remotely, to the very first radio stations?  In the early 1990s there came out a documentary history of bluegrass music, High Lonesome, which I’m proud to say I’ve got on DVD somewhere.  There is a segment in which they talk of the explosive impact that radio had on these remote settlements.  You could rig your car’s battery to a home-made radio, run a wire out to an old bed frame outside for an antenna, and pick up stations as far away as WLS in Chicago (I still recall the Wow! of tuning into their AM station back in the early 1970s, all the way down here, late at night).  Radio and the music you could hear on it were . . . exotic.  There you had, right there in your living room where you could put your hands on it, this box which would reach out and pull from the thin air sounds from hundreds of miles away, sounds which could take you anywhere, anywhere at all in the entire world.  For people who’d been born, grown up, and grown old in a circle of 20 miles (or even narrower than that, for the mass of city dwellers in large cities like New York . . . hundreds of thousands of them would seldom have strayed off Manhattan Island, or out of Brooklyn or the Bronx, or the South End, or wherever their grandparents had fetched up off the boat, during their entire lives) it must have been nothing short of intoxicating.  And that which intoxicates us seizes our souls, as the religious objection to alcohol and drugs has long recognized.

So what changed?  World War I changed; millions of American men in fact didn’t stay down on the farm after they’d “seen Paree.”  Harry Truman was only the most famous of them.  Movies changed.  The physical dislocations of the Great Depression changed.  The demise of gang labor in the South changed.  [Among the least studied mass migrations in history is that of American blacks from the South into the rest of the country, beginning in the years just before the Great War and becoming a flood during and afterwards; Rising Tide: The Great Mississippi Flood of 1927 and How It Changed America is a very good introduction to a small slice of that trend.]  And then World War II came along and burst the American universe into what Forrest Gump called “a go-zillion” pieces.

So what? Gentle Reader asks.  What does all this recitation have to do with leeching an appreciation for music from American culture?  Well, what is the common theme of all of the things I’ve pointed out?  It is this:  The atomization of control over one’s immediate physical circumstances.  From tenement to townhouse to tract house to suburb.  From grain field to grunting shift work to mindless repetition on the assembly line to what’s becoming known as the gig economy.  From hearing no music but what you and your family could sing to the scraping of a fiddle, to cramming into a stuffy venue on uncomfortable seats to barreling down the highway in your car with the radio going, to rolling up the car windows and popping in a different cassette to punching a button to change CDs to telling your MP3 player to shuffle among all 1750 songs on your playlist.  From maybe once or twice a year seeing a play put on by some down-at-the-heels, faded hack actors to watching a movie once a month on a huge screen stretched across Main Street (how my mother used to see movies in the 1930s in small-town Indiana), to air conditioned movie palaces to multi-screen megaplexes where every member of the family can watch what blows his skirt up to punching up Netflix on each of the four screens in your house and everybody gets to choose from 750 different movies.

And here I circle around to rejoin our article’s author.  Why has America forgotten how to value music?  Because music has lost its preciousness to us.  Once upon a time music was the only entertainment the bulk of the population had.  There is a reason, after all, that almost all dirt-poor, oppressed, or traumatized groups developed incredibly rich musical traditions:  the Irish, the Germans during the Thirty Years’ War, the Scots Irish both at home and here, the Eastern European Jews, American blacks, the rural South, Hungarian peasants.  Music was the one thing that the landlord couldn’t rack-rent you on; the church couldn’t tithe it out of your hands; the lord couldn’t force-labor it away from you; the slave driver couldn’t lash it out of your back; you could take it with you when you were expelled from the umpteenth country in succession; you could jam it into the hold of an immigrant ship.  The factory owner couldn’t shut it off from you in a lock-out.  The tax collector couldn’t padlock it or seize it.  Music was the one pleasure you could make yourself, that you could enjoy without having to worry about one more mouth to feed or losing that week’s rent money.

So of course people appreciated music more.

What has changed?  What has changed is human liberation from massive and profound privation, privation which modern Americans born after, say, 1960, cannot even imagine.  Granted, the enslavement of privation has been replaced in popular culture with a poor simulacrum of true human freedom (see my above comments about socialism’s modern substitute for Rome’s bread and circuses), but the fact remains that we — even the poorest among us — are surrounded with pleasures (or what pass for pleasures) undreamt-of by even our parents’ generation.

And now I will diverge from our author, once again.  If what is necessary to restore the uniquely precious significance of music to the broad mass of the American population is to return to the physical circumstances of the centuries in which it possessed that significance, then I cannot follow our author.  I am willing to do without the music.  What right do I have to demand the impoverishment of hundreds of millions of my fellow humans so that I may enjoy the pleasures of a new musical experience?

In bemoaning the demise of music’s place in the American soul, and in glossing over the contrast between the world in which it maintained that place and the America in which it struggles to keep it, our author betrays — perhaps inadvertently (remember I know zilch about his politics) — how profoundly the socialist premise has soaked into our collective understanding.  You should suffer so that Music (or “social justice” or “diversity” or “the environment” or the “dictatorship of the proletariat” or whatever) may flourish.  Or more pointedly:  You should toil in drudgery so that I may relish the satisfaction of Society as I conceive it should be.

The Five Year Plan demands it, after all.

 

Happy Birthday, Trofim Lysenko

Today is Trofim Lysenko’s birthday; he was born on this date in 1898.

Never heard of him?  Don’t worry, most in the West haven’t.

He was Stalin’s pet scientist.  Decided Mendel was wrong about inherited traits.  According to Lysenko, you could alter heredity itself through environmental influence.  Very handy, that, when you’re trying to convince Stalin that you can grow grain in climates and seasons in which it won’t grow.  In Ithaca, New York, in 1932, one of his fellow Soviet scientists reported, with a straight face:  “The remarkable discovery recently made by T D Lysenko of Odessa opens enormous new possibilities to plant breeders and plant geneticists of mastering individual variation. He found simple physiological methods of shortening the period of growth, of transforming winter varieties into spring ones and late varieties into early ones by inducing processes of fermentation in seeds before sowing them.”

The fellow from whom that last quotation comes, Nikolai Vavilov, paid with his life for his subsequent disagreement with Lysenko.  He was arrested in 1940, sentenced to death in 1941, and died — apparently of starvation — in the GuLAG in 1943.

Lysenko came up with all manner of whack-job pseudo-scientific claptrap, and rammed it down the throat of Russian science with a bayonet.  According to the Wikipedia write-up, dissent from his theories was formally outlawed in 1948.  Solzhenitsyn wrote of several scientists — Vavilov among them — who similarly paid with their hides for the sin of crossing the politically decreed “scientific” orthodoxy of Trofim Lysenko.

Lysenko’s ascendancy lasted into the mid-1960s; he was not finally shoved aside until after Khrushchev himself fell in 1964.

Why is it important that we recall Trofim Lysenko today?  When we have mainstream politicians and widely-regarded pundits openly calling for the criminalization of disagreement with the theory of anthropogenic global warming — or “climate change” or whatever it’s called this month — we must remember that we are listening to the intellectual and moral heirs of Lysenko.  This is all the more so when someone points out that, from analysis of U.S. climate data from 1880 to the present, over 90% of the U.S. data which is presented to “prove” AGW has been monkeyed with, and is not, in fact, the raw data.  It’s been estimated, modeled, or just made up.  From the linked article’s conclusion:

“The US accounts for 6.62% of the land area on Earth, but accounts for 39% of the data in the GHCN network. Overall, from 1880 to the present, approximately 99% of the temperature data in the USHCN homogenized output has been estimated (differs from the original raw data). Approximately 92% of the temperature data in the USHCN TOB output has been estimated. The GHCN adjustment models estimate approximately 92% of the US temperatures, but those estimates do not match either the USHCN TOB or homogenized estimates.”

From the e-mails and documents released as part of what’s come to be called “ClimateGate” (I wonder if Liddy et al. are tortured in their sleep by this plague of -gate nonsense terms visited on us year in and year out), Gentle Reader will perhaps recall that the University of East Anglia’s Climatic Research Unit brought someone in to try to reproduce the raw historical data on which it — and most of the rest of the climate science world — relies.  The problem, it seems, is that they’ve so thoroughly corrupted their data, and were so careless in preserving their original data, that it’s impossible to replicate their results.  That’s probably an over-simplification, but the key bit is that after two or so years of trying, their own numbers guy threw up his hands in despair and quit.  Said it couldn’t be done.

Thus what we’re left with is a mountain of corrupt historical data, current data that is likewise manipulated to match models’ predictions, contradictory real-world observations (shrinking ice cover in the northern latitudes and record increases in the southern, shrinking glaciers, a 17-year non-warming period when all the models tell us that, with carbon dioxide levels relentlessly increasing, we should be absolutely cooking) . . . and scientists and politicians carping on how we just need to turn over more money and more power to them, and all will be made well.  Oh, and did I forget to mention that we also got to see, as part of ClimateGate, numerous climate scientists scheming behind the curtains to stack journals’ editorial boards and peer review processes to suppress publication of scientific literature skeptical of their conclusions?

And we’re supposed to use the coercion of the criminal law system to punish anyone who dares to question the politically established orthodoxy?  Remind me again how that worked out for Soviet science.

Trofim Lysenko is dead and in his grave, but his ghost stalks the halls of climate science to this day.

[Update 05 Oct 2015]:  As if on cue, two European research foundations, one French and the other German, recently released a study on the production of isoprene in the uppermost film of the ocean surface.  I’m no chemist, nor of course a climatologist, but isoprene, it seems, has a strong effect on cloud formation, and cloud formation is intimately connected with a cooling effect on the global climate.  (Yes, that’s grossly simplified, but then if you want to read the full study, here’s the link).  The study, by the way, was funded by a grant from the European Research Council, not the Koch brothers.

Up until now, the assumption has been that isoprene is formed by plankton in sea water.  But let’s get it from the horse’s mouth:  “Previously it was assumed that isoprene is primarily caused by biological processes from plankton in the sea water. The atmospheric chemists from France and Germany, however, could now show that isoprene could also be formed without biological sources in surface film of the oceans by sunlight and so explain the large discrepancy between field measurements and models. The new identified photochemical reaction is therefore important to improve the climate models.”

How big a discrepancy?  “So far, however, local measurements indicated levels of about 0.3 megatonnes per year, global simulations of around 1.9 megatons per year. But the team of Lyon and Leipzig estimates that the newly discovered photochemical pathway alone contribute 0.2 to 3.5 megatons per year additionally and could explain the recent disagreements.”

In other words, a newly-discovered photochemical, abiotic source of an important aerosol precursor looks as though it may be contributing up to almost 200% more isoprene globally than current climate models assume.  Note the low end of the estimate, by the way.  I wouldn’t suppose that the global output of this newly-discovered source would remain stable year-on-year.  But when you need to update your climate models (which still cannot explain the 17-year “hiatus” in observed global warming) to account for up to triple the previously-assumed amount of an input that counteracts the principal effect of your model’s core variable (carbon dioxide in the atmosphere), I suggest two thoughts for the curious-minded:  1.  What else do the models inaccurately assume or simply not account for at all, and is the failure attributable to scientific malfeasance or garden variety ignorance of a phenomenally complex process?  2.  Does not the climate alarmists’ dancing around these models as if they were sacred totems have more than a slight whiff of the Israelites’ worship of the golden calf?
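Just so Gentle Reader can check my arithmetic, here’s a quick back-of-the-envelope sketch in Python, using nothing but the figures quoted above from the press release (the 1.9 megatonnes per year of isoprene the global simulations assume, and the 0.2 to 3.5 megatonnes per year attributed to the newly identified photochemical pathway); it is my own illustration, not anything taken from the study itself:

# Back-of-the-envelope check of the isoprene figures quoted above.
# Every number comes from the press-release excerpt; nothing here is measured data of my own.
assumed_in_models = 1.9                          # megatonnes/year, biological source assumed in global simulations
new_pathway_low, new_pathway_high = 0.2, 3.5     # megatonnes/year, newly described abiotic photochemical source

extra_fraction = new_pathway_high / assumed_in_models   # about 1.84, i.e. "almost 200% more"
new_total = assumed_in_models + new_pathway_high        # about 5.4, i.e. "up to triple" the old 1.9

print(f"Upper-end addition: {extra_fraction:.0%} of what the models already assume")
print(f"Upper-end total: {new_total:.1f} Mt/yr versus {assumed_in_models} Mt/yr assumed")

Which is where the “almost 200% more” and “up to triple” figures above come from.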

But remember, class:  There are public figures in the United States who want to use the physical coercive power of the criminal law system to suppress disagreement with these climate models.

Trofim Lysenko rides again.

[Update: 11 March 2016]:  Didn’t believe me, did you, Gentle Reader?

Turns out the U.S. Attorney General has taken a serious look at prosecution of oil companies for daring to disagree on the very unsettled state of whether and to what extent fossil fuels cause “climate change,” at least to the extent that the climate is in fact changing in ways and at speeds that cannot be explained by reference to the earth’s climatological history.   Video at the link.  With bonus for invocation of dear ol’ Trofim.

Did I Miss the Coverage?

You know, the big news story where a supertanker ran aground somewhere in the Great Lakes and spilled 230,000 barrels of dumb-ass into the water?

From Chicago we have WGN television wishing everyone a Happy Yom Kippur by displaying a yellow star with the German word “Jude” (“Jew”) on it.  The star was the same star which Nazi Germany made the Jews wear up until their extermination.  Mind you, this isn’t some crappy little community access channel in some backwoods hamlet.  This is The Television Station run by one of the country’s largest broadcasters.  Layers of editors and fact-checkers, dontcha know.  And their excuse?  From their general manager and news director:  “WGN General Manager Greg Easterly and News Director Jennifer Lyons said the picture came from its image bank, and they ‘failed to recognize that the image was an offensive Nazi symbol.'”

No, guys:  the swastika you can call “an offensive Nazi symbol” (you mean there’s such a thing as a Nazi symbol that’s not offensive?); the star they made little Jewish children wear sewn to their clothes, so the SA thugs would know whom to beat to death in the streets in broad daylight, is a specific reminder of the most determined effort — “so far,” we now have to add, in light of Dear Leader’s handing the keys to the nuclear arsenal to Iran — made to exterminate an entire people.  The swastika was worn by millions — the Nazis shoved it onto everything — who had nothing to do with murdering the Jews or anyone else.  If you served in any public office, or if you were drafted into the armed services, you would have worn that symbol somewhere on your person.  Pope Benedict XVI would have worn it, however briefly, when he got roped into the fray in 1945.  You have, in other words, to read something else into the swastika’s symbolism to get to “offensive” (I mean, no one views the symbolism of Imperial Germany as “offensive,” and they fought against us and lost a war just like the Nazis did).  But that yellow Star of David, with “Jude” blazoned across it, meant and means exactly and only one thing.  It was the device by which a murderous regime publicly marked its victims.  Do not, please, degrade its meaning by calling it “an offensive Nazi symbol.”

And across the little water, so to speak, we have from Ontario a member — vice-chair, in fact — of a school board (!!!) and a candidate for parliament, who a number of years ago made a joke about a photograph taken at Auschwitz.  She likened whatever it was to phallic symbols and . . . well, honestly, the point of her joke, which she made on a friend’s Facebook wall, escapes me.  I’m not sure if she was trying to send up pretentious artistic gobbledy-gook, parody the sexualization of every-damned-thing in daily life by the folks we now know as “social justice warriors,” or lampoon “the patriarchy” or whatever.  Someone doing oppo research discovered the post, got it out there, and the predictable shit-storm ensued.

I’m going to reserve judgment on the propriety of her joke.  Yes, you can make the point that some things simply should not figure in humor.  Ever.  And if such is the case, then Auschwitz is certainly on that list.  Even if you’re the sort who’s not willing to go that far, then unless you’re clearly using that imagery to attack something contemptible (see my list of possible explanations of what she might have been getting at, above), and unless you make it not even remotely debatable that you’re laughing not at or about Auschwitz and what it symbolizes but at your true target, the joke still has to be considered in pretty bad taste.  And maybe even then you ought to come up with something else equally outlandish to use — you know, something that doesn’t have the stench of six million murder victims to it — to talk about phallic micro-aggressions of the patriarchy.  Or something like that.

But no:  What really made my head explode was this statement coming from the vice-chair of a school board:  “‘Well, I didn’t know what Auschwitz was, or I didn’t up until today,’ she said in an interview Tuesday night. Johnstone, who appears to be in her thirties, said she had ‘heard about concentration camps.’”

Jesusmaryjoseph, as the Irish would exclaim.

This depth of ignorance is just about beyond words.  What, I mean, can you say in response to someone who’s managed to get past elementary school without knowing at least what Auschwitz was and what occurred there?  It’s not like you have to know everything about the Holocaust, its causes and course.  Just like you don’t need to know everything written about the GuLAG in order to appreciate the Soviets’ starving and working to death tens of millions of their fellow citizens on trumped-up charges.  But how do you, in the 21st Century, construct a moral framework for your existence without tying the abstract “I’ve heard about concentration camps” to the concrete physical “and this is the most notorious surviving example; it really happened”?

Compare, by the way, the ignorance of the vice-chair of her local school board with the degree of engagement exhibited by this teen-aged girl from Alabama.

Depressing Predictability

From The New York Times, via Urgent Agenda, we have “What Happened to South African Democracy?” a depressing look at the reality of life in ANC-dominated South Africa.

First, some props to the ANC as it was run by a post-release Nelson Mandela.  I’m sure that “mistakes were made,” as the usual phraseology will have it, in the transition from apartheid to democracy; unless the Second Coming in Something Other Than Wrath happens, you cannot up-end the fundamental structure of any society without someone, somewhere, in some official capacity making some degree of a pig’s breakfast out of something.  So perfection is not the standard by which to judge how South Africa transformed itself.  Think only of the smooth, error-free process by which the U.S. transformed its formerly-slave-owning society to one in which slavery was, overnight (on an historical time horizon) outlawed, and you get sort of a notion of how sobering was the challenge for South Africans of all ethnicities.

But this is the Big Thing to keep in mind in thinking about how they responded to their challenges:  In South Africa they resisted the temptation to exact government-sanctioned vengeance.  Names were named, and deeds called by their correct labels, but there were no Soviet-style Revtribs or Cheka troikas doling out “revolutionary justice” in execution cellars.  I cannot recall which book has the picture, but in a history of the Soviet Union that I have somewhere, there is a picture of a Polish officer in the Russian Army, surrounded by his troops.  He’s hanging by one ankle from a tree branch, naked, and from his anus there protrudes a very long shaft of what is probably a lance of some description.  Being an officer he would of course have been some sort of nobleman, and his troops peasants.  His troops stand around, some looking at him hanging there, others at the camera.  Yes, the ANC had (and has) an ugly underside —  “necklacing,” for example, in which a car tire is put about a bound victim’s neck, filled with gasoline, and set alight — but in point of fact once the ANC came to power it chose a path other than that chosen by the communist states from whose doctrines its leaders had initially taken their inspiration (Mandela as of his arrest was a Marxist).  And for that they deserve a large measure of respect.

But it’s one thing for the dog to catch the car, and something entirely different what he does with the car once caught.  And he must be judged on both.

In this latter respect the ANC has squandered much, it seems, of its moral capital.  A good deal of that frittering has occurred as the fallout from governmental encroachment on individual liberties, usually as the result of the dynamics of patronage and the distortions it brings to policy.  To take but one example, there arises the question of leadership in villages which are still by and large tribal enclaves.  Should that leadership be elective (democratic) or vested in traditional chiefs (ethnic)?  For the central government the question is not just one of local sensibility.  You see, an established leadership can be corrupted much more easily from the center than an elective one can.  And so we see the spectacle in South Africa of the attempt to foist non-elected leadership on the tribal areas.  From the NYT article:

“While sections of the political elite have tried to manipulate the politics of ethnicity to bypass democracy, many at the grass-roots level have opposed these moves. Popular opposition killed the Traditional Courts Bill. Last month, a community in the Eastern Cape won a court battle to elect its own leaders, rather than have them imposed. It cannot be right, the court agreed, that the people of the Transkei region ‘enjoyed greater democratic rights’ under apartheid ‘than they do under a democratically elected government.’”

The “Traditional Courts Bill” was an effort, sponsored by the government, to create a separate legal system for what the article refers to as South Africa’s “Bantustans.”  Under that jolly little piece of legislation, unelected tribal chiefs would have been vested with authority as “judges, prosecutors and mediators, with no legal representation and no right of appeal.”  Hey! that’s why Nelson Mandela rotted all those years in prison, right?  So what’s going on?  This is what’s going on:

“Corruption expresses the way that state patronage has come to define politics. Politics in South Africa today ‘is devoid of political content,’ in the words of a former A.N.C. activist, Raymond Suttner. Instead, ‘it relates to who is rising or falling, as part of ongoing efforts to secure positions of power and authority.’ Using corrupt resources to win favors from different social groups and factions has helped entrench a dangerous cronyism in national politics.”

Gee whiz, who could have seen that coming?  I’ll tell you.  A British doctor who writes under the name Theodore Dalrymple.  I have a couple of his books, the first one I bought being Our Culture, What’s Left of It: The Mandarins and the Masses, a collection of essays.  Among them is “After Empire,” his description of his experiences as a newbie doctor in what was then Ian Smith’s Rhodesia.  As the Blogfather would say, by all means Read the Whole Thing, but here’s the guts of one of the article’s less encouraging observations:

“Unlike in South Africa, where salaries were paid according to a racial hierarchy (whites first, Indians and coloured second, Africans last), salaries in Rhodesia were equal for blacks and whites doing the same job, so that a black junior doctor received the same salary as mine. But there remained a vast gulf in our standards of living, the significance of which at first escaped me; but it was crucial in explaining the disasters that befell the newly independent countries that enjoyed what Byron called, and eagerly anticipated as, the first dance of freedom.

The young black doctors who earned the same salary as we whites could not achieve the same standard of living for a very simple reason: they had an immense number of social obligations to fulfill. They were expected to provide for an ever expanding circle of family members (some of whom may have invested in their education) and people from their village, tribe, and province. An income that allowed a white to live like a lord because of a lack of such obligations scarcely raised a black above the level of his family. Mere equality of salary, therefore, was quite insufficient to procure for them the standard of living that they saw the whites had and that it was only human nature for them to desire—and believe themselves entitled to, on account of the superior talent that had allowed them to raise themselves above their fellows. In fact, a salary a thousand times as great would hardly have been sufficient to procure it: for their social obligations increased pari passu with their incomes.”

And the same dynamic played out among the political classes after independence:

“It is easy to see why a civil service, controlled and manned in its upper reaches by whites, could remain efficient and uncorrupt but could not long do so when manned by Africans who were supposed to follow the same rules and procedures. The same is true, of course, for every other administrative activity, public or private. The thick network of social obligations explains why, while it would have been out of the question to bribe most Rhodesian bureaucrats, yet in only a few years it would have been out of the question not to try to bribe most Zimbabwean ones, whose relatives would have condemned them for failing to obtain on their behalf all the advantages their official opportunities might provide. Thus do the very same tasks in the very same offices carried out by people of different cultural and social backgrounds result in very different outcomes.”

I’m going to state that what that NYT article is describing is not much more than the playing out, on the South African stage, of the social dynamic Dalrymple observed all those years ago in Rhodesia.

Lest Gentle Reader get the impression that Dalrymple is just another White Man’s Burden sort of neo-colonialist who’s demonstrating for the Xth time that the wogs simply are incapable of self-government, I really encourage Gentle Reader to read the entire article.  Dalrymple’s very up-front in pointing out that the social dynamics which render the African nation-states peculiarly susceptible of political and economic corruption serve a very positive function in enabling the peasants — who still form the overwhelming majority of the populace — to survive in an environment that is hostile on any number of levels, all the way from its climate to its economic policy.  “Of course, the solidarity and inescapable social obligations that corrupted public and private administration in Africa also gave a unique charm and humanity to life there and served to protect people from the worst consequences of the misfortunes that buffeted them.”

And so what is Dalrymple’s “solution”?  Well, he doesn’t really offer one.  He does point out that the crux of the tragedy — and you cannot read that article and come away without the sensation that he perceives what he’s describing as a tragedy in its classical meaning — was the imposition of the nation-state model on a continent whose social systems were not, and remain not, suited for that framework.

“In fact, it was the imposition of the European model of the nation-state upon Africa, for which it was peculiarly unsuited, that caused so many disasters. With no loyalty to the nation, but only to the tribe or family, those who control the state can see it only as an object and instrument of exploitation.”

This does not bode well for South Africa.  And it does not bode well for Africa in general.  As Thomas Sowell has pointed out in any number of books and essays, the history of the human species is a history of the exploitation of the lesser-organized groups by the greater-organized, whether it was 12th Century England swallowing 12th Century Ireland, or the 19th Century United States scattering to the winds the aboriginal populations (Gentle Reader will recall that Tecumseh’s coalition was well-nigh the only one of its kind, and it was only that coalition that was able, until he was killed at the Battle of the Thames in 1813, to stave off the white tide . . . although on numbers alone the outcome was inevitable), or the 19th Century colonial powers gobbling up Africa itself.  Even a numerically smaller group can successfully challenge a larger, established group, if the disparities in political organizing capacity are there.  Think of how Rome became mistress of the entire Mediterranean world.

Now think what happened to the peoples of the former Austro-Hungarian empire, a state which fractured into constituent, mutually-hostile ethnic groupings.  Franz Joseph it was, I think, who allowed that upon dissolution of his empire all that would happen would be that all these groups so clamorous for independence would merely become the playthings of greater powers.  And so it occurred.  Unless Africa can find a way either to move from its present social structures to a set more suitable for the maintenance of a nation-state, or alternatively to find some Golden Mean to straddle the two worlds, what is likely to happen to its people when these nation-states implode?

[As an aside, and as perhaps a post topic for another day, I’ll toss this question out to Gentle Reader:  to what extent would any of the dynamics observed by Dalrymple in Rhodesia and elsewhere in Africa, and by the NYT’s man-on-the-ground in South Africa today, have had any play if the U.S. had permitted its aboriginal tribes to remain as they were pre-Trail of Tears, living in a parallel legal universe, but otherwise among the majority population?  Extra-territoriality, in other words, the same system which the Western powers rammed down Imperial China’s throat.  No state which is in fact sovereign concedes extra-territoriality to any group; it is simply inconsistent with the assertion of sovereignty.  That’s a point I seldom see made in discussions about Jackson’s decision not to concede that to the Cherokee, and Supreme Court opinion be damned.  For that matter I’m not sure how you can square the 14th Amendment with the assertion that the Cherokee ought to have been allowed to remain as they were.  Either there is One Law for all, or you’re just pretending at Equal Protection.  And either there is a Supremacy Clause or there is not.  Imponderables.]

Birds of a Feather

The British Labour Party has just elected a new leader, Jeremy Corbyn.  He is, to put it mildly, not a mainstream politician.  A self-avowed socialist, he’s about as far-left as you can be in Britain and still find a constituency loony enough to send you to Westminster.  He’s so far to the left that even The Economist isn’t having him.  It describes him as “a politician who would exist, as he has in Westminster for the past decades, as a hard-line oddball on the fringes of any Western political arena,” and is so impolite as to ask, “Will Mr Corbyn, a man with links to unsavoury governments and international groups (he calls Hamas “friends”, presented a programme for Iran’s state television and recommends Russia Today, Vladimir Putin’s international propaganda network) be made privy to sensitive information about national security, as was his predecessor as leader of the opposition, Ed Miliband?”

What is truly alarming is that Corbyn won with 59% of the votes, on the first ballot.

Well, now ol’ Jeremy has done gone and farted in chapel, loudly.  At a memorial service for the RAF fighter pilots who quite literally saved Britain in 1940 from the Luftwaffe air superiority which would have enabled Hitler to move forward with Operation Sea Lion — the invasion of Britain — Jeremy Corbyn stood there, with loosened necktie and visibly unbuttoned collar, silent, while everyone else present sang “God Save the Queen.”  Here’s the picture at the Frankfurter Allgemeine Zeitung’s report on the fiasco.  He had announced ahead of time his intention to stand in what he called “respectful silence.”  He is, you see, an anti-monarchist (yeah . . . that’ll win your party elections in England), and didn’t want to taint himself by singing what is, after all, the lawfully established national anthem.

Here’s a bit of news, Jeremy.  This memorial wasn’t about you and your doctrinal purity.  It was about a group of terrifyingly young men, outnumbered and out-gunned, who were thrown into the scales in a last-ditch effort to keep some flicker of liberty alive in Europe.  They were all there was left, their governments — in thrall to pacifists like you, Jeremy — having ignored and in fact suppressed and lied about the activities of the Nazis for years.  The army was naked of arms; those had been left on the beach at Dunkirk.  The navy was ill-equipped for anti-air warfare and had to be kept intact to attack the invasion fleet if the air battle failed.  Bomber Command was without the means of attacking the Luftwaffe’s bases in France and the Low Countries.  Fighter Command was all that was left on the ranch.  You, Jeremy, are among the “so many” who owed “so much” to “so few.”

It shouldn’t surprise anyone, really, that Corbyn shows such contempt for the men who fought and died so that people like Corbyn can moon around Westminster, instead of pacing the yard at Dachau.  It’s what leftists do; it’s who they are.  With respect, The Economist is dead wrong about one thing:  So far from being “on the fringe” at 1600 Pennsylvania Avenue, Corbyn isn’t so much as one inch to the left of the current U.S. president.  In fact, I’m wondering when he will get his first invitation to the White House, and am eager to see the pomp and honors with which he is received and embraced, in contrast to, say, Prime Minister Netanyahu of Israel.

Corbyn and Dear Leader should get along famously.  Back in 2007 a then-unknown senator from Illinois stood and pointedly folded his hands below his waist while the national anthem was sung.  Our national mainstream media (party operatives with bylines, as the saying goes) quickly buried the incident.  Anyone want to bet whether a Republican candidate would have got a free pass out of that?

I would express the pious hope that, having chosen someone so obviously inappropriate to lead them, Labour has consigned itself to electoral irrelevance for the time being.  But then, having just watched the U.S. Congress approve a plan to permit Iran to obtain nuclear weapons for the avowed purpose of exterminating our one ally in that entire Godforsaken corner of the globe, I cannot be so confident.

Something Upbeat, for a Change

I suppose it’s embarrassing that I have to use that post title.  Yeah, yeah, I know:  If you’re not outraged you’re not paying attention.  Nonetheless in looking back at many, if not most, of the posts I’ve put up over the years, I have to acknowledge that levity and good feelings are comparative strangers around here.

Today I make an exception.

Over Labor Day weekend the family and I hied us to the National Museum of the U.S. Air Force, located at Wright-Patterson Air Force Base in Dayton, Ohio.  It’s easy to get to, it has free parking and admission, and oh by the way, did I mention it rocks?  We spent Saturday afternoon, most of Sunday, and Monday morning there.  Even if the museum is all you plan to do, you should still budget at least two full days if you want to see everything they have, and read all the explanatory material, and actually spend some time contemplating the exhibits, rather than just rushing on towards the next one.

The museum is set up in gigantic (I mean, like, really enormous . . . like multiple-football-fields big) hangar-like buildings, each connected to the next via covered (and mercifully air-conditioned) passageways.  The most interesting exhibits are at floor level, although they also have many suspended above you, chiefly the (for me, at least) less interesting ones, like drones, air-launched missiles, small trainer and transport aircraft, and so forth.  The exception to the pattern is the exhibit hall for the ICBMs, which is set up to remind the visitor of a missile silo (round and very, very tall).  Obviously, most of the exhibits are United States warbirds, although they do have quite a number of German, several Japanese, and a few Soviet exhibits.  They’ve got a V-1 and a V-2, a Bf-109, a MiG-15, a MiG-29, etc.  Interesting stuff.

What they don’t have very much of is — with the exception of one specific exhibit hall, on which more later — individually historical aircraft, by which I mean specific airplanes that in and of themselves are historically significant.  By way of counter-example, the National Naval Aviation Museum, located at NAS Pensacola (and itself likewise worth the trip from wherever Gentle Reader might be) has the only known survivor of both Pearl Harbor and Midway; it has the NC-4 (the first airplane to fly the Atlantic); it has quite a bit of the bridge equipment from USS Enterprise (trivia note: the chap who founded Enterprise Rent-a-Car served in her during the war, and named his company after his ship), and so forth.  I can understand that:  Most of the historically significant land aircraft are going to be found at the Smithsonian, so the Air Force Museum is going to have to take second pick.  Illustrating that literally is the fact that Enola Gay, the airplane that dropped the first atomic bomb, on Hiroshima, is in the Smithsonian.  Bockscar, which dropped the second bomb, on Nagasaki, is in Dayton, viz:

[Photo:  Bockscar on display at the museum]

The major exception to the above pattern is the presidential gallery, in which they have a fistful of airplanes which served different presidents.  They have, for example, the Sacred Cow, the airplane which ferried Roosevelt to the Yalta Conference.  It was a built-out C-54 and featured an elevator mounted in its tail to hoist FDR aboard in his wheelchair.

[Photo:  the Sacred Cow in the presidential gallery]

They also have the airplane which brought Kennedy’s body back from Dallas.  Not wanting to shove his casket into the cargo hold, they sawed out a chunk of an aft bulkhead and wedged him in that way, with his widow making the trip sitting in a seat opposite.  They’ve got Truman’s plane, the Independence, as well as several smaller airplanes which served in different roles.  What they don’t have is a Marine One, which is understandable, it being Navy (you can see one at the Naval Aviation Museum if so inclined).  The presidential exhibit shares an off-site (for the time being; starting at the end of the month they’re going to move both to the main facility) hangar with their collection of experimental aircraft.  They’ve got the only surviving XB-70 (the other one crashed during a test flight), one of the YF-23 prototypes (the competitor which lost out to what became the F-22 Raptor), and a raft of other things some of which you have a hard time imagining in the air.

What struck me — and here I am perhaps betraying an ignorance born of sloth — is the sheer variety of aircraft the U.S. has put into the air over the years.  Sure, everyone’s heard of the B-52, the B-24, the B-1, the P-47, the F-4 Phantom, and so forth.  But how about the RB-47, or the A-20?  Or the B-58 Hustler?  To say nothing of the inter-war aircraft?  The Air Force Museum has got ’em all.  Among the most impressive for me was the B-36 strategic bomber.  Again, although I’d heard of this one, I’d never really paid attention to it, considering it to be one of those stop-gap planes that we just shoved onto the flight line until we could get the B-52 in the air.  Well, it was our principal strategic weapons platform for most of the 1950s, and man alive! is it huge.  It’s got ten — count ’em — engines: six pusher propellers and four jet engines mounted in twin pods outboard of the props.  And did I mention the thing’s ginormous?

As with any exhibit of historical artifacts, you get a sensation of times which were in important ways profoundly different from our own.  F’rinstance, you kind of get a notion that the concept of “micro-aggression” hadn’t made it into the lexicon of the U.S. Army Air Forces when you take a look at the nose art on their B-24 Liberator:

[Photo:  nose art on the museum’s B-24 Liberator]

Around the walls of the exhibition halls, as well as interspersed among the airplanes, they’ve got thematic exhibits of documents, artifacts, and so forth.  Some of them are personal to specific aviators, who either died in combat or who otherwise were of significance.  There are POW exhibits for both World War II and Vietnam.  There are exhibits on the strategic bombing campaigns over both Europe and Japan.  One omission I found interesting is the complete absence of any mention of Dresden.  They do have a small mention of the fire-bombing raid on Tokyo, which actually killed quite a number more than the raid on Dresden did.  I wonder if that’s because Dresden was principally an RAF Bomber Command show, with the 8th Air Force showing up the following morning to make the (burning) rubble bounce.

The museum also has an IMAX movie theater.  I didn’t go to see either of the two movies they were showing (one on D-Day and the other I forget what).  If you were to do that you’d need to budget additional time accordingly.

The passageways between the main exhibit halls are not wasted, either.  In one they have an exhibit on the Holocaust, including a listing of people in the Dayton area who either were survivors, or liberators, or who have been recognized by Yad Vashem as Righteous Among the Nations.  In another passageway there’s a really well-done exhibit on the Berlin Airlift, and in a third a collection of bomber jacket art.  In another area they have a really cool exhibit, complete with video, on Bob Hope and his 50-plus years of touring to take the troops’ minds off their troubles, even if only for a few moments.

All in all, it’s a wonderful time and I can’t recommend it too highly.

The Quartet: Fascinating, With a Caveat

I just finished reading Joseph J. Ellis’s The Quartet: Orchestrating the Second American Revolution, his history of the — and there is no other word for it — scheming which attended the process by which the United States under the Articles of Confederation was transformed into the United States under the Constitution.  I’ve also read Ellis’s His Excellency: George Washington, a very useful biography and one which sheds some interesting light on the man Ellis (in The Quartet) calls the “Foundingest” of all the Founding Fathers; his Passionate Sage: The Character and Legacy of John Adams; and, if memory doesn’t fail me, his Founding Brothers: The Revolutionary Generation.

I have to say I enjoyed all of them, particularly the Washington biography and The Quartet.  He has an easy, very accessible style and he’s not afraid to make editorial comments.  They are, after all, his books, and a biographer or historian who has nothing to come right out and say beyond the bare factual narrative isn’t much of a writer.  Of course, what facts the writer chooses to include or omit also says something about him, but bald statements of characterization aren’t out of place either.  Just don’t try to hide them, is all I ask.

The Washington book I found interesting because Ellis spends a great deal of time addressing the Great White Elephant in the Room, namely Washington’s Auseinandersetzung (roughly, his reckoning; show me a better English word for it and I’ll use it) with the institution of slavery and the relations between the races.  Hadn’t known, just for example, that up to a full 20-25% of the Continental Army was at any given time what they’d refer to as “dark green” soldiers (all soldiers being green, you see; in the navy all sailors are blue, and some are light blue and some are dark blue) in today’s army.  This experience with blacks as fighting men changed Washington profoundly, much as it did so many of the Union soldiers in the Civil War.  You simply can’t watch a man stand up to artillery pounding or gales of small arms fire and be immune to the idea that he’s just as good as you are.  [Aside:  This is why it is so historically significant that it was the U.S. armed forces which, first among all public institutions and voluntarily, de-segregated.]

It was during the war that Washington stopped selling slaves.  By the time he died a large (comparatively) number of his slaves were well past working age.  I can’t recall off the top of my head if Ellis actually uses the expression “retirement home” or an equivalent, but it’s certainly the impression that emerges from the book.  Martha Washington, notably, never changed her own attitudes about slavery or slaves.  And Ellis highlights the fact that a significant number of what we think of as “Washington’s” slaves were actually Martha’s, dower slaves from her first husband’s estate.  Washington, as I recall, administered that estate, and as Martha’s husband was legally charged with the safe-keeping of her property . . . including her slaves.  This conundrum played itself out in Washington’s final act on the subject:  As is well known, his will freed his own slaves (nearly alone among the Founding Fathers who were slave owners), but he did not have the legal authority to free Martha’s, and so didn’t.

But on to The Quartet.  Gentle Reader will recall that I have previously written here and here about Washington’s Farewell address, his (written) valedictory to the nation he had done so much to establish.  In both previous posts I’ve mentioned the curious fact that Washington spends something like eight paragraphs addressing the calamity of disunion and the need to resist all who would insidiously suggest fracturing of the union as being the way to go . . . but nowhere breathes so much as a word to the effect that the Constitution itself simply does not permit secession.  In beginning The Quartet I’d been very keen to see what light Ellis threw on the subject, whether it would have come up in the Convention debates or in the ratification process.  [Aside:  Ellis does answer a question for me, namely whether anyone has actually studied in detail the ratification debates in all the states.  There in fact has been someone — one person — who has done so, and unfortunately I can’t call his name from memory.]  But Ellis is silent on the point, so we can’t tell from his book whether the issue was discussed or not.  He does attach, as an appendix, the full text of the Articles of Confederation, which the Constitution replaced.  Interestingly, that document does, in Article XIII, expressly provide, “And the Articles of this Confederation shall be inviolably observed by every State, and the Union shall be perpetual[.]”

There it is, in plain Anglo-Saxon; in fact, the statement that “the Union shall be perpetual” is in there not once, but twice, just a few lines apart.  Search as you may, but no similar statement is to be found in the Constitution or any amendment to it.  Lest Gentle Reader be tempted to read the provisions of the Articles of Confederation by implication into the Constitution, Ellis makes it very plain that the Constitution did not amend or supplement the Articles, but replaced them in toto.  It represented, as Ellis clearly demonstrates, not merely a change in text but a fundamental re-ordering of the very nature of the union from a confederacy of equals, in which “Each state retains its sovereignty, freedom, and independence, and every power, jurisdiction, and right, which is not by this Confederation expressly delegated to the United States, in Congress assembled” (Article II, in its entirety), to a nation-state in which the states are specifically subordinate entities, although not as fully subordinate as James Madison originally desired them to be.  He had in fact, in the Virginia Plan for the Convention, specifically proposed that the national legislature be given an express veto over state statutes and other laws.

All of which only heightens the interest in the omission.  It certainly goes a long way towards under-cutting the argument that the secessionists of 1861 were not only morally abhorrent for their defense of chattel slavery, but also legally and indisputably traitors to their country.  I suppose one might say the omission of 1787 was supplied at bayonet point from 1861-65.  In all events, the nature of the union has now and forever been resolved, and I for one am happy at the outcome, however good-faith the argument on the point may have been at the time.

Back to the book.  The actual “quartet” Ellis refers to are Washington, Madison, Hamilton, and John Jay.  The first three are of course well-known.  The fourth, Jay, is known as the third member of the triumvirate who wrote the essays now known as The Federalist, the most cogent arguments for ratification of the Constitution (although as Ellis points out, they were targeted specifically at New York’s ratification convention and in fact do not seem at the time to have garnered much if any attention beyond that state), and among lawyers as the first Chief Justice.  History wonks will also remember him as the negotiator of the Jay Treaty of 1794 with Great Britain (which finally removed the British from the frontier forts they’d kept occupying, the 1783 Treaty of Paris notwithstanding), and the principal negotiator, with Franklin, of the 1783 treaty itself.  Ellis shares the vignette of Jay in conference with the Spanish envoy (it must be remembered that Spain and France were allied at the time against Great Britain); the Spaniard drew a line with his finger on a map, from the Great Lakes more or less due south to Florida (Spanish at the time), to indicate that as the western boundary of the United States, everything to the west presumably going to Spain.  The Americans had been given explicit instructions by the Continental Congress to conduct all negotiations in consultation with France, which thus meant subject to Spanish veto.  Jay then took his own finger and traced the Mississippi River.  That evening he went to Franklin’s lodgings, awoke him, and convinced him to disregard their instructions in respect of France, and to make a separate peace with Britain.  Had Jay not succeeded in convincing Franklin, or had they knuckled under to Spain’s demands, the history of the entire world for the last 225-plus years would have been not just different, but radically different.

In any event, Ellis recounts how each of the four, by his own route, arrived at the conviction that the Articles of Confederation just were not going to do, and in fact that they were so hopeless as to be beyond salvage by mere amendment.  Washington and Hamilton of course had personal knowledge of the system’s failure to support the army in the field.  Jay got to experience the futility of the system as foreign minister, when the Europeans, who could read the Articles just as well as anyone else, more or less laughed in his face when he purported to represent a “United States of America” that they could see did not in fact exist.  Indeed, it not only did not exist de jure, but as Ellis also shows, it likewise had no place in the sentiments of the ordinary people.  Folks simply did not think of themselves as being “Americans” in the sense of belonging to any greater polity than their own state, if their vision extended even that far.

I won’t recount in detail either the machinations of the Constitutional Convention itself, or the ratification process.  In fact, Ellis doesn’t spend any terribly great amount of time on the ratification process, except in respect of Madison’s stage-managing (or trying to) the order of ratification among the states.  Short version:  By deferring votes in the large, questionable states until near the end of the process, the likelihood was increased that those states would be presented with an accomplished political fact of ratification, and they’d vote to join so as not to be left out.  And that’s pretty much how it worked in practice.  To reiterate, I’d have appreciated much more exploration of the extent, if any, to which issues like potential secession got aired out.

My caveats?  Well, Ellis displays his good leftish credentials in two places in the book.  The first (p. 172) comes at the tail-end of his discussion of what he describes as an “ambiguity” about where the balance of sovereignty was located by the document eventually submitted for ratification.  Key statement:

“The multiple compromises reached in the Constitutional Convention over where to locate sovereignty accurately reflected the deep divisions in the American populace at large.  There was a strong consensus that the state-based system under the Articles had proven ineffectual, but an equally strong apprehension about the political danger posed by any national government that rode roughshod over local, state, and regional interests . . . .”

From the above statement, the truth of which I think Ellis does an excellent job demonstrating, he then hikes his leg and lets fly a glaring non sequitur in church:  “In the long run — and this was probably Madison’s most creative insight — the multiple ambiguities embedded in the Constitution made it an inherently ‘living’ document.”

Very respectfully, Prof. Ellis, it is nothing of the kind.  For starters, the truly revolutionary nature of the Constitution was precisely that it was written.  Ellis correctly demonstrates the core nature of the Articles as being a treaty among equals.  The Constitution was something different; it established, to a limited extent, a hierarchical relationship between the states and this new animal, the United States of America.  But most importantly, the states’ relations among each other and with the new national state were spelled out in writing.  There was a reason, after all, why monarchs violently resisted granting written constitutions, all the way down to 1905 in Russia:  A written document pins the sovereign down.  With a written document you can point to a specific clause or word or phrase and say to the government, “Look here, Buster; it says right here you cannot do that.”

The notion of a “living document” — in the sense that Ellis is using it — is very, very much a 20th Century phenomenon, and it is specifically a judicial creation from whole cloth.  The Founding Generation would have looked at you as if you were speaking Tagalog if you had suggested that what they’d come up with was a “living document” in which judges got to make things up as they went along (“evolving standards of decency”), and under which a president such as Dear Leader claims an inherent executive authority to act to impose law for no better reason than he cannot get Congress to act as he sees fit on issues which are important to him (“I’ve got a pen, and I’ve got a phone”), and Congress can prescribe how much water your toilet uses (1.0 gal/flush, anyone?).  I’ll go so far as to state that had you tried to sell the Constitution as a “living document” in 1787-88, you’d never have got nine states to ratify; in fact, I question whether the populace of any state would have been so daft.

Secondly, the mere fact that the Constitution abandoned the state-centered structure of the Articles but rejected the All-Powerful National State which Madison had gone into the Convention advocating emphatically does not mean that the answer to the question, “Where does sovereignty lie?” is forever mutable.  It is perfectly possible for the answers (and there can be many) to that question to lie at multiple points between those poles, depending on which issue or question you’re asking.  Just for example, the states are prohibited from making war or peace, or coining money.  That’s specifically reserved to the federal government.  On the other hand, the regulation of “Commerce with foreign Nations, and among the several States, and with the Indian Tribes,” while extremely broad, is not, and cannot with honesty be read to constitute, a grant of authority to Congress (to say nothing of the executive) to prohibit a man from feeding his own family with the produce of his own land.  And yet that’s precisely what the Supreme Court said the Commerce Clause does.  I’m still waiting to hear anyone make a convincing case that, had you told the farmers of any of the 13 states that they were ceding authority to Congress to dictate what they could and could not grow on their own land to feed their own children, the Constitution would have stood a ghost of a chance of ratification.  The fact that a group of sophists on the bench can articulate a rationale which, as long as you don’t actually press on it with any force, supports such an outcome does not mean that outcome was contemplated by the men who drafted or voted on the Constitution as among the permissible.  The argument that everything is both necessary and proper to accomplish some hypothetical purpose which allegedly bears, by some remote chain of causation (think: the schoolbook example of the butterfly flapping its wings off the coast of Africa, which results in a Category 5 hurricane coming ashore at Gulfport, Mississippi), on some enumerated power is an argument which renders superfluous the entire text of Article I Section 8.  If that argument has any validity then Section 8 could have been written simply as, “Congress shall have all Powers to enact such Legislation as it shall deem expedient.”

As if to emphasize the extent to which Ellis doesn’t Get It, he offers us this:  “Madison’s ‘original intention’ was to make all ‘original intentions’ infinitely negotiable in the future.”  Got that?  Just because it says you can’t be president unless you’re 35, it doesn’t really mean that.  Just because it says each state gets to elect two senators, a state — let’s say, Alabama — can go ahead and elect three, and have them seated.  Just because it says, “No Tax or Duty shall be laid on Articles exported from any State,” and just because Article I Section 8 gives Congress the authority to “lay Taxes, Duties, Imposts and Excises,” (and requires that such be “uniform throughout the United States”), that wouldn’t stop Dear Leader from levying a tax on tobacco shipped from North Carolina to Amsterdam, but excusing tobacco grown in northern California from that tax.  Can private property be taken for public use without “just compensation”?  According to Ellis, the answer is yes, if you can get either a majority in Congress, or the president acting without Congress, to decide to do it.  Because “infinitely negotiable.”  Right now there is a lawsuit pending in which the House of Representatives is suing Dear Leader over the “Affordable” Care Act’s spending of money.  Remember this one:  “No Money shall be drawn from the Treasury, but in Consequence of Appropriations made by Law”?  Well, it seems that at least some provisions of the ACA produce just that outcome: expenditures not authorized by law.  According to Ellis, that prohibition is “infinitely negotiable” for all time.  Why, one wants to ask Ellis, did the drafters include a provision (Article V) for the document’s amendment, if nothing in it had any now-and-forevermore meaning anyway?  “Living documents” require no amendment; all they require is a consensus that it doesn’t mean that anymore.  Like Brown v. Board of Education, presumably.  What exactly, under the leftish framework, would prohibit Congress and the president from deciding that Brown was decided entirely wrong and well, gosh darn it, we’re going back to “separate but equal”?

Bless the dear professor’s heart.  He puts in a good word for collectivism/corporatism/fascism, but really can’t bring it off.  Not to an intelligent audience, in any event.

The second place where Ellis goes to bat for the leftists occurs beginning on page 211.  He gives Madison’s original draft of what became the Second Amendment.  The two clauses of the text we know (“A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.”) were inverted in the original draft, with the “necessary to the security” starting out as “being the best security of a free country.”  Madison’s draft also included a specific clause excusing what we would know as conscientious objectors from “render[ing] military service in person.”  Ellis just refers to “some editing in the Senate,” and laconically observes that it became the Second Amendment.  He provides no clue as to what the substance of that “some editing” might have been.

According to Ellis, Madison’s draft was merely “to assure those skeptical souls that the defense of the United States would depend on state militias rather than a professional, federal army.”  According to Ellis, Madison’s draft makes clear that the right to keep and bear arms was “not inherent but derivative, depending on service in the militia.”  Good leftist talking point.  He’s got some problems, of course, starting with the simple text itself.  The amendment, even in its original draft, does not speak of the states being free to arm their militias; nor does it provide that the right of militia members to keep and bear arms shall not be subject to unreasonable restriction; nor does it grant the states the right to compel militia service.

If you look at Madison’s first draft, it consists of two independent clauses separated by a subordinate clause.  Let’s try this as a catechism.

Q:  What “shall not be infringed”?

A:  A right.

Q:  What right?

A:  To keep and bear arms.

Q:  Whose right?

A:  The right “of the people.”

Simple enough.  But perhaps Madison (and more importantly, the rest of Congress) really meant “the states” when writing “the people”?  Plausible, until you consider that in four other instances in the Bill of Rights the expression “the people” is used.  The First Amendment protects “the right of the people peaceably to assemble.”  Now read that to substitute “states” for “the people” and what result do you get?  The Fourth Amendment protects the “right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.”  Same exercise:  Are the states to be secure against unreasonable searches and seizures?  Say it with a straight face, Prof. Ellis.  The Ninth Amendment provides that the enumeration of “certain rights” shall not be construed to “deny or disparage others retained by the people.”  I guess you could read that to mean “the states,” but then what to make of the Tenth Amendment, which of course provides for the reservation of all powers neither delegated to the U.S. nor prohibited to the states “to the States respectively, or to the people.”  If the leftish reading of the Second Amendment is correct, then the Tenth Amendment can mean “to the States respectively, or to the states.”  You just cannot get around the fact that in every other instance where the Bill of Rights refers to a right “of the people,” either its preservation or its reservation, the reference is plainly to individual humans.

Well, maybe “shall not be infringed” really means “shall not be subject to unreasonable restriction”?  Why, then, does that “unreasonable” qualifier appear in the Fourth Amendment but not the Second?  But what of the subordinate clause about well-regulated militias?  That’s very nice, but that clause contains no finite verb.  Structurally it bears the same relationship to the grammatically operative portion of the text that the Preamble bears to the overall document.  Actually, that’s not quite true:  The Preamble does contain a subject, verb, and direct object:  “We the People . . . do ordain and establish this Constitution for the United States of America.”  This is in marked contrast to the prefatory clause of the Second Amendment.

So far as I am aware there has never been a serious suggestion that the language of the Preamble operates to qualify or limit the scope or operation of any substantive provision of the document.  Does Congress only have authority to regulate commerce among the several states if and to the extent reasonably necessary to “form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity”?  Of course not; it has all authority “necessary and proper” to regulate that commerce for any purpose not prohibited by the balance of the Constitution.  Any at all.  Or read the Preamble as a qualifier to the judicial power granted to the Supreme Court and such subordinate courts as Congress may establish.  How is that going to work?

[Purely as an aside, I’d note that — except for those boobs on the bench, of course — no one makes an argument that the Free Exercise Clause, or the right of peaceable assembly, or the freedom of the press is subject to any purpose-based restriction, as is argued by the leftists about the Second Amendment.  Nor is the “unreasonable searches and seizures” clause of the Fourth Amendment so read as to provide that hiding one’s criminal activity is not a legitimate object of that protection.  In fact, the Second Amendment is the subject of its very own interpretive scheme under the leftish project.  Curious, isn’t it?]

I’d also observe that what Ellis is arguing for is not only the “original intent” which, just 39 pages before, he disparaged in favor of a “living document,” but the “original intent” as contained in a draft that never made it into the document.  Priceless; but it illustrates rather well the leftish principle that all means are permissible to the Party, because what the Party line is at the moment is by definition the Truth.

Again, dear Prof. Ellis takes a mighty swing of the bat for his Party, but comes up with nothing but air.  I was a bit disappointed that he didn’t work in something about Global Climate Change or how Citizens United is just such a horrible decision because Koch Brothers.  Or something like that.

Notwithstanding his gratuitous introduction of 20th Century political theory into 18th Century politics — and let me allow that I think Ellis is entirely correct in his portrayal of the Convention and ratification process as being as much about practical politics as about the implementation of a theory, if not more — I still highly recommend this book.  It grates to have to read a book like this with one’s bullshit filters at high alert, but nowadays, when there’s no such thing as a politics-free zone, I guess we’ll just have to learn to live with writing like this.

The Quartet does a marvelous job of showing just how unlikely a prospect was the transformation of the United States from a maelstrom of co-equal sovereigns to a multi-polar entity almost serendipitously adapted to the task of subduing and populating the better part of an entire continent.

Read it for the story of a political miracle, not for its legal analysis.