INTRODUCTION: 120 MILLION CHILDREN IN THE EYE OF THE HURRICANE
The division of labor among nations is that some specialize in winning and others in losing. Our part of the world, known today as Latin America, was precocious: it has specialized in losing ever since those remote times when Renaissance Europeans ventured across the ocean and buried their teeth in the throats of the Indian civilizations. Centuries passed, and Latin America perfected its role. We are no longer in the era of marvels when fact surpassed fable and imagination was shamed by the trophies of conquest--the lodes of gold, the mountains of silver. But our region still works as a menial. It continues to exist at the service of others' needs, as a source and reserve of oil and iron, of copper and meat, of fruit and coffee, the raw materials and foods destined for rich countries which profit more from consuming them than Latin America does from producing them. The taxes collected by the buyers are much higher than the prices received by the sellers; and after all, as Alliance for Progress coordinator Covey T. Oliver said in July 1968, to speak of fair prices is a "medieval" concept, for we are in the era of free trade.
The more freedom is extended to business, the more prisons have to be built for those who suffer from that business. Our inquisitor-hangman systems function not only for the dominating external markets; they also provide gushers of profit from foreign loans and investments in the dominated internal markets. Back in 1913, President Woodrow Wilson observed: "You hear of 'concessions' to foreign capitalists in Latin America. You do not hear of concessions to foreign capitalists in the United States. They are not granted concessions." He was confident: "States that are obliged . . . to grant concessions are in this condition, that foreign interests are apt to dominate their domestic affairs . . ." he said, and he was right.
Along the way we have even lost the right to call ourselves Americans, although the Haitians and the Cubans appeared in history as new people a century before the Mayflower pilgrims settled on the Plymouth coast. For the world today, America is just the United States, the region we inhabit is a sub-America, a second-class America of nebulous identity.
Latin America is the region of open veins. Everything, from the discovery until our times, has always been transmuted into European--- or later United States --- capital, and as such has accumulated in distant centers of power. Everything: the soil, its fruits and its mineral-rich depths, the people and their capacity to work and to consume, natural resources and human resources. Production methods and class structure have been successively determined from outside for each area by meshing it into the universal gearbox of capitalism. To each area has been assigned a function, always for the benefit of the foreign metropolis of the moment, and the endless chain of dependency has been endlessly extended. The chain has many more than two links. In Latin America it also includes the oppression of small countries by their larger neighbors and, within each country's frontiers, the exploitation by big cities and ports of their internal sources of food and labor. (Four centuries ago sixteen of today's twenty biggest Latin American cities already existed.)
It ought to be generally known that the source of our pleasure, merriment, laughter, and amusement, as of our grief, pain, anxiety, and tears, is none other than the brain. It is especially the organ that enables us to think, see, and hear, and to distinguish the ugly and the beautiful, the bad and the good, pleasant and unpleasant. Sometimes we judge according to convention; at other times according to the perceptions of expediency. It is the brain, too, that is the seat of madness and delirium, of the fears and frights which assail us, often by night but sometimes even by day; it is there where lies the cause of insomnia and sleepwalking, of thoughts that will not come, forgotten duties and eccentricities. All such things result from an unhealthy condition of the brain; it may be warmer than it should be, or it may be colder, or moister, or drier, or in any other abnormal state.
For these reasons, I believe the brain to be the most potent organ in the body. So long as it is healthy, it is the interpreter of what is derived from the air. Consciousness is caused by air. The eyes, ears, tongue, hands, and feet perform actions that are planned by the brain, for there is a measure of conscious thought throughout the body proportionate to the amount of air which it receives. The brain is also the organ of comprehension, for when a man draws in a breath, it reaches the brain first, and thence is dispersed into the rest of the body, having left behind in the brain its vigor and whatever pertains to consciousness and intelligence. If the air went first to the body and subsequently to the brain, the power of understanding would be left to the flesh and to the blood vessels; it would only reach the brain hot and when it was no longer pure, owing to admixture with fluid from the flesh and from the blood, and this would blunt its keenness.
I therefore assert that the brain is the interpreter of comprehension. Some say that we think with our hearts, and it is the heart that suffers pain and feels anxiety. There is no truth in this; blood vessels from all parts of the body run to the heart, and these connections ensure that it can feel if any pain or strain occurs in the body. Moreover, the body cannot help giving a shudder and a contraction when subjected to pain, and the same effect is produced by an excess of joy, which heart and diaphragm feel most intensely. Neither of these organs takes any part in mental operations, which are completely undertaken by the brain.
Forty years ago, nearly all the major decisions that shape our lives--whether or not we are offered employment, a mortgage, insurance, credit, or a government service--were made by human beings. They often used actuarial processes that functioned more like computers than people, but human discretion still prevailed.
Today, we have ceded much of that decision-making power to machines. Automated eligibility systems, ranking algorithms, and predictive risk models control which neighborhoods get policed, which families attain needed resources, who is short-listed for employment, and who is investigated for fraud. Our world is crisscrossed by information sentinels, some obvious and visible: closed-circuit cameras, GPS on our cell phones, police drones. But much of our information is collected by inscrutable, invisible pieces of code embedded in social media interactions, applications for government services, and every product we buy. They are so deeply woven into the fabric of social life that, most of the time, we don't even notice that we are being watched and analyzed.
Even when we do notice, we rarely understand how these processes are taking place. There is no sunshine law to compel the government or private companies to release details on the inner workings of their digital decision-making systems. With the notable exception of credit reporting, we have remarkably limited access to the equations, algorithms, and models that shape our life chances.
We all live under this new regime of data analytics, but we don't all experience it in the same way. Most people are targeted for digital scrutiny as members of social groups, not as individuals. People of color, migrants, stigmatized religious groups, sexual minorities, the poor, and other oppressed and exploited populations bear a much heavier burden of monitoring, tracking, and social sorting than advantaged groups.
The most marginalized in our society face higher levels of data collection when they access public benefits, walk through heavily policed neighborhoods, enter the healthcare system, or cross national borders. That data reinforces their marginality when it is used to target them for extra scrutiny. Groups seen as undeserving of social support and political inclusion are singled out for punitive public policy and more intense surveillance, and the cycle begins again. It is a feedback loop of injustice.
Take the case of Maine. In 2014, under Republican governor Paul LePage, the state attacked families who were receiving cash benefits through a federal program called Temporary Assistance for Needy Families. TANF benefits are loaded onto EBT cards, which leave a digital record of when and where cash is withdrawn. LePage's administration mined data collected by federal and state agencies to compile a list of 3,650 transactions in which TANF recipients withdrew cash from ATMs in smoke shops, liquor stores, and out-of-state locations. The data was then released to the public.
The transactions that were flagged as suspicious represented only 0.3 percent of the 1.1 million cash withdrawals completed during that time period, and the data showed only where cash was withdrawn, not how it was spent. But the administration disclosed the data to suggest that TANF families were defrauding taxpayers by buying liquor, cigarettes, and lottery tickets. Lawmakers and the professional middle-class public eagerly embraced the misleading tale the administration spun.
The Maine legislature introduced a bill that would require TANF families to retain all cash receipts for twelve months, in order to facilitate state audits of their spending. Democratic legislators urged the state's attorney general to use LePage's list to investigate and prosecute fraud. The governor introduced a bill to ban TANF recipients from using their benefit cards at out-of-state ATMs. These proposed laws were patently unconstitutional and unenforceable, and would have been impossible to obey--but that was not the point. Such legislation is part of the performance politics governing poverty. It is not intended to work; it is intended to heap stigma on social programs and reinforce the misleading narrative that those who access public assistance are criminal, lazy, spendthrift addicts.
This has not been limited to Maine. Across the country, poor and working-class people are being targeted by new tools of digital poverty management, and face life-threatening consequences as a result. Vast networks of social services, law enforcement, and neighborhood surveillance technology make their every move visible and offer up their behavior for scrutiny by the government, corporations, and the public.
Automated eligibility systems in Medicaid, TANF, and the Supplemental Nutrition Assistance Program discourage families from claiming benefits that they are entitled to and deserve. Predictive models in child welfare deem struggling parents to be risky and problematic. Coordinated entry systems, which match the most vulnerable unhoused people to available resources, collect personal information without adequate safeguards in place for privacy or data security.
These systems are being integrated into human and social services at a breathtaking pace, with little or no discussion about their impacts. Technology boosters rationalize the automation of decision-making in public services---they say we will be able to do more with less and get help to those who really need it. But programs that serve the poor are as unpopular as they have ever been.
This is not coincidence: technologies of poverty management are not neutral. They are shaped by our nation's fear of economic insecurity and hatred of the poor. The new tools of poverty management hide economic inequality from the professional middle-class public and give the nation the ethical distance it needs to make inhuman choices about who gets food and who starves, who has housing and who remains homeless, whose family stays together and whose is broken up by the state. This is part of the long American tradition. We manage the poor so that we do not have to eradicate poverty.
America's poor and working-class people have long been subject to invasive surveillance, midnight raids, and punitive policies that increase the stigma and hardship of poverty. During the nineteenth century, they were quarantined in county poorhouses. In the twentieth century, they were investigated by caseworkers who treated them like criminals on trial. Today, we have forged a digital poorhouse. It promises to eclipse the reach of everything that came before.
The differences between the brick-and-mortar poorhouse of yesterday and the digital one of today are significant. Containment in a physical institution had the unintended result of creating class solidarity across the lines of race, gender, and national origin. If we sit at a common table to eat the same gruel, we might see similarities in our experiences. But now surveillance and digital social sorting are driving us apart, targeting smaller and smaller microgroups for different kinds of aggression and control. In an invisible poorhouse, we become ever more cut off from the people around us, even if they share our suffering.
In the 1820s, those who supported institutionalizing the indigent argued that there should be a poorhouse in every county in the United States. But it was expensive and time-consuming to build so many prisons for the poor---county poorhouses were difficult to scale (though we still ended up with more than a thousand of them). In the early twentieth century, the eugenicist Harry Laughlin proposed ending poverty by forcibly sterilizing the ''lowest one tenth'' of the nation's population, approximately 15 million people. But Laughlin's science fell out of favor after its use in Nazi Germany.
The digital poorhouse has a much lower barrier to expansion. Automated decision-making systems, matching algorithms, and predictive risk models have the potential to spread quickly. The state of Indiana denied more than a million public assistance applications in less than three years after switching to private call centers and automated document processing. In Los Angeles, a sorting survey to allocate housing for the homeless that started in a single neighborhood expanded to a countywide program in less than four years.
Models that identify children at risk of abuse and neglect are proliferating rapidly from New York City to Los Angeles and from Oklahoma to Oregon. Once they scale up, these digital systems will be remarkably hard to decommission.
Oscar Gandy, a communications scholar at the University of Pennsylvania, developed a concept called rational discrimination that is key to understanding how the digital poorhouse automates inequality. Rational discrimination does not require class or racial hatred, or even unconscious bias, to operate. It requires only ignoring bias that already exists. When automated decision-making tools are not built to explicitly dismantle structural inequalities, their increased speed and vast scale intensify them dramatically.
Removing human discretion from public services may seem like a compelling solution to discrimination. After all, a computer treats each case consistently and without prejudice. But this actually has the potential to compound racial injustice. In the Eighties and Nineties, a series of laws establishing mandatory minimum sentences took away discretion from individual judges. Thirty years later, we have made little progress in rectifying racial disparity in the criminal justice system, and the incarcerated population has exploded. Though automated decision-making can streamline the governing process, and tracking program data can help identify patterns of biased decision-making, justice sometimes requires an ability to bend the rules. By transferring discretion from frontline social servants and moving it instead to engineers and data analysts, the digital poorhouse may, in fact, supercharge discrimination.
Think of the digital poorhouse as an invisible web woven of fiber-optic threads. Each strand functions as a microphone, a camera, a fingerprint scanner, a GPS tracker, a trip wire, and a crystal ball. Some of the strands are sticky. Along the threads travel petabytes of data. Our activities vibrate the web, disclosing our location and direction. Each of these filaments can be switched on or off. They reach back into history and forward into the future. They connect us in networks of association to those we know and love. As you go down the socioeconomic scale, the strands are woven more densely and more of them are switched on.
When my family was erroneously red-flagged for a health care fraud investigation in 2015, we had to wrestle only one strand. We weren't also tangled in threads emerging from the criminal justice system, Medicaid, and child protective services. We weren't knotted up in the histories of our parents or the patterns of our neighbors. We challenged a single strand of the digital poorhouse and we prevailed.
Eventually, however, those of us in the professional middle class may very well end up in the stickier, denser part of the web. As the working class hollows out and the economic ladder gets more crowded at the top and the bottom, the middle class becomes more likely to fall into poverty. Even without crossing the official poverty line, two thirds of Americans between the ages of twenty and sixty-five will at some point rely on a means-tested program for support.
The programs we encounter will be shaped by the contempt we held for their initial targets: the chronically poor. We will endure invasive and complicated procedures meant to divert us from public resources. Our worthiness, behavior, and social relations will be investigated, our missteps criminalized.
Because the digital poorhouse is networked, whole areas of middle-class life might suddenly be subject to scrutiny. Because the digital poorhouse serves as a continuous record, a behavior that is perfectly legal today but becomes criminal in the future could be targeted for retroactive prosecution. It would stand us all in good stead to remember that an infatuation with high-tech social sorting emerges most aggressively in countries plagued by severe inequality and governed by totalitarians, and here, a national catastrophe or a political regime change might justify the deployment of the digital poorhouse's full surveillance capability across the class spectrum.
We have always lived in a world we built for the poor. We created a society that has no use for the disabled or the elderly, and so we are cast aside when we are hurt or grow old. We measure human worth by the ability to earn a wage, then suffer in a world that undervalues care, community, and mutual aid. We base our economy on exploiting the labor of racial and ethnic minorities and watch lasting inequalities snuff out human potential. We see the world as inevitably riven by bloody competition and are left unable to recognize the many ways in which we cooperate and lift one another up.
When a very efficient technology is deployed against a scorned out-group in the absence of strong human rights protections, there is enormous potential for atrocity. Currently, the digital poorhouse concentrates administrative power in the hands of a small elite. Its integrated data systems and digital surveillance infrastructure offer a degree of control unrivaled in history. Automated tools for classifying the poor, left on their own, will produce towering inequalities unless we make an explicit commitment to forge another path. And yet we act as if justice will take care of itself. If there is to be an alternative, we must build it purposefully, brick by brick and byte by byte.
James Madison called it America's "original sin": chattel slavery. Its horrors, Thomas Jefferson prophesied, would bring down a wrath of biblical proportions. "Indeed," Jefferson wrote, "I tremble for my country when I reflect that God is just: that his justice cannot sleep forever."
In 1861, the day of reckoning came. The Southern states' determination to establish "their independent slave republic" led to four years of war, 1.5 million casualties, including at least 620,000 deaths, and 20 percent of Southern white males wiped off the face of the earth.
In his second inaugural address, in 1865, Abraham Lincoln agonized that the carnage of this war was God's punishment for "all the wealth piled by the bondsman's 250 years of unrequited toil." Over time the road to atonement revealed itself: in addition to civil war, there would be the Emancipation Proclamation; three separate constitutional amendments, one that abolished slavery, another that defined citizenship, and a third that protected the right to vote; and, finally, the Freedmen's Bureau, with its mandate to provide land and education. Redemption for the country's "sin," therefore, would require not just the end of slavery but also the recognition of full citizenship for African Americans, the right to vote, an economic basis to ensure freedom, and high-quality schools to break the generational chains of enforced ignorance and subjugation.
America was at the crossroads between its slaveholding past and the possibility of a truly inclusive, vibrant democracy. The four-year war, played out on battlefields on an unimaginable scale, had left the United States reeling. Beyond the enormous loss of life to contend with, more than one million disabled ex-soldiers were adrift, not to mention the widows seeking support from a rickety and virtually nonexistent veterans' pension system. The mangled sinews of commerce only added to the despair, with railroad tracks torn apart; fields fallow, hardened, and barren; and bridges that had once defied the physics of uncrossable rivers now destroyed. And then this: millions of black people who had been treated as no more than mere property were now demanding their full rights of citizenship. To face these challenges and make this nation anew required a special brand of political leadership.
Could the slaughter of more than six hundred thousand men, the reduction of cities to smoldering rubble, and casualties totaling nearly 5 percent of the U.S. population provoke America's come-to-Jesus moment? Could white Americans override "the continuing repugnance, even dread" of living among black people as equals, as citizens and not property? In the process of rebuilding after the Civil War, would political leaders have the clarity, humanity, and resolve to move the United States away from the racialized policies that had brought the nation to the edge of apocalypse?
Initially, it appeared so. Even before the war ended, in late 1863 and early 1864, Rep. James M. Ashley (R-OH) and Senator John Henderson (D-MO) introduced in Congress a constitutional amendment abolishing slavery. The Thirteenth Amendment was, in important ways, revolutionary. Immediately, it moved responsibility for enforcement and protection of civil rights from the states to the federal government and sent a strong, powerful signal that citizens were first and foremost U.S. citizens. The Thirteenth Amendment was also a corrective and an antidote for a Constitution whose slave-owning drafters, like Thomas Jefferson, were overwhelmingly concerned with states' rights. Finally, the amendment sought to give real meaning to "we hold these truths to be self-evident" by banning not just government-sponsored but also private agreements that exposed blacks to extralegal violence and widespread discrimination in housing, education, and employment. As then-congressman James A. Garfield remarked, the Thirteenth Amendment was designed to do significantly more than "confer the bare privilege of not being chained."
That momentum toward real freedom and democracy, however, soon enough hit a wall, one that would prove more than any statesman was equipped to overcome. Indeed, for all the saintedness of his legacy as the Great Emancipator, Lincoln himself had neither the clarity, the humanity, nor the resolve necessary to fix what was so fundamentally broken. Nor did his successor. And as Reconstruction wore on, the U.S. Supreme Court also stepped in to halt the progress that so many had hoped and worked for.
Lincoln had shown his hand early in the war. Heavily influenced by two of his intellectual heroes--Thomas Jefferson, who advocated expulsion of blacks from the United States in order to save the nation, and Kentuckian Henry Clay, who had established the American Colonization Society, which had moved thousands of free blacks to what is now Liberia--Lincoln soon laid out his own resettlement plans. He had selected Chiriqui, a resource-poor area in what is now Panama, to be the new home for millions of African Americans. Lincoln just had to convince them to leave. In August 1862, he lectured five black leaders whom he had summoned to the White House that it was their duty, given what their people had done to the United States, to accept the exodus to South America, telling them, "But for your race among us there could not be war." Just how and why "your race" came to be "among us," Lincoln conveniently ignored.
His framing of the issue not only absolved plantation owners and their political allies of responsibility for launching this war, but it also signaled the power of racism over patriotism. Lincoln's anger in 1862 was directed at blacks who fully supported the Union and did not want to leave the United States of America. Many, indeed, would exclaim that, despite slavery and enforced poverty, "We will work, pray, live, and, if need be, die for the Union." Nevertheless, he cast them as the enemy for wickedly dividing "us" instead of defining as traitors those who had fired on Fort Sumter and worked feverishly to get the British and French to join in the attack to destroy the United States.
From this perspective flowed Lincoln's lack of clarity about the purpose and cause of the war. While the president, and then his successor, Andrew Johnson, insisted that the past four years had been all about preserving the Union, the Confederacy operated under no such illusions. Confederate States of America (CSA) vice president Alexander H. Stephens remarked, "What did we go to war for, but to protect our property?" This was a war about slavery. About a region's determination to keep millions of black people in bondage from generation to generation. Mississippi's Articles of Secession stated unequivocally, "Our position is thoroughly identified with the institution of slavery . . . Its labor supplies the product which constitutes by far the largest and most important portions of commerce of the earth." In fact, two thirds of the wealthiest Americans at the time "lived in the slaveholding South." Eighty-one percent of South Carolina's wealth was directly tied to owning human beings. It is no wonder, then, that South Carolina was willing to do whatever it took, including firing the first shot in the bloodiest war in U.S. history, to be free from Washington, which had stopped the spread of slavery to the West, refused to enforce the Fugitive Slave Act, and, with the admission of new free-soil states to the Union prior to 1861, set up the numerical domination of the South in Congress. When the Confederacy declared that the "first duty of the Southern states" was "self-preservation," what it meant was the preservation of slavery.
To cast the war as something else, as Lincoln did, to shroud that hard, cold reality under the cloak of "preserving the Union," would not and could not address the root causes of the war and the toll that centuries of slavery had wrought. And that failure of clarity led to a failure of humanity. Frederick Douglass later charged that in "the hurry and confusion of the hour, and the eagerness to have the Union restored, there was more care for the sublime superstructure of the republic than for the solid foundation upon which it alone could be upheld": the full rights of the formerly enslaved people.
Millions of enslaved people and their ancestors had built the enormous wealth of the United States; indeed, in 1860, 80 percent of the nation's gross national product was tied to slavery. Yet, in return for nearly 250 years of toil, African Americans had received nothing but rape, whippings, murder, the dismemberment of families, forced subjugation, illiteracy, and abject poverty. The quest to break the chains was clear. As black residents in Tennessee explained in January 1865:
We claim freedom, as our natural right, and ask that in harmony and co-operation with the nation at large, you should cut up by the roots the system of slavery, which is not only a wrong to us, but the source of all the evil which at present afflicts the State. For slavery, corrupt itself, corrupted nearly all, also, around it, so that it has influenced nearly all the slave States to rebel against the Federal Government, in order to set up a government of pirates under which slavery might be perpetuated.
The drive to be free meant that 179,000 soldiers, 10 percent of the Union Army (and an additional 19,000 sailors in the Navy), were African Americans. Humanity, therefore, cried out to honor the sacrifice and heroism of the tens of thousands of black men who had gallantly fought the nation's enemy. That military service had to carry with it, they believed, citizenship rights and the dignity that comes from no longer being defined as property or legally inferior.
To be truly reborn this way, the United States would have had to overcome not just a Southern but also a national disdain for African Americans. In New York City, for example, during the 1863 Draft Riots:
Black men and women were attacked, but the rioters singled out the men for special violence. On the waterfront, they hanged William Jones and then burned his body. White dock workers also beat and nearly drowned Charles Jackson, and they beat Jeremiah Robinson to death and threw his body in the river. Rioters also made a sport of mutilating the black men's bodies, sometimes sexually. A group of white men and boys mortally attacked black sailor William Williams--jumping on his chest, plunging a knife into him, smashing his body with stones--while a crowd of men, women, and children watched. None intervened, and when the mob was done with Williams, they cheered, pledging "vengeance on every nigger in New York."
This violence was simply the most overt, virulent expression of a stream of anti-black sentiment that circumscribed the lives of both the free and the enslaved. Every state admitted to the Union since 1819, starting with Maine, had embedded in its constitution discrimination against blacks, especially the denial of the right to vote. In addition, only Massachusetts did not exclude African Americans from juries; and many states, from California to Ohio, prohibited blacks from testifying in court against someone who was white.
The glint of promise that had come as the war ended required an absolute resolve to do what it would take to recognize four million newly emancipated people as people, as citizens. A key element was ensuring that the rebels would not and could not assume power in the newly reconstructed United States of America. Yet, as the Confederacy's defeat loomed near, Lincoln had already signaled he would go easy on the rebel leaders. His plan for rebuilding the nation required only that the secessionist states adopt the Thirteenth Amendment and have 10 percent of eligible voters (white propertied males) swear loyalty to the United States. That was it. Under Lincoln's plan, 90 percent of the power in the state could still openly dream of full-blown insurrection and consider themselves anything but loyal to the United States of America.
As one South Carolinian explained in 1865, the Yankees had left him "one inestimable privilege . . . and that was to hate 'em." "I get up at half past four in the morning," he said, "and sit up till twelve midnight, to hate 'em." The Liberator reported that in South Carolina, "there are very many who . . . do not disguise the . . . undiminished hatred of the Union." The visceral contempt, however, extended far beyond the Yankees to encompass the formerly enslaved. One official stationed in the now-defeated South noted, "Wherever I go--the street, the shop, the house, or the steamboat--I hear the people talk in such a way as to indicate that they are yet unable to conceive of the Negro as possessing any rights at all." He further explained how murder, rape, and robbery, in this Kafkaesque world, were not seen as crimes at all so long as whites were the perpetrators and blacks the victims. Given this poisonous atmosphere, he warned, "The people boast that when they get freedmen affairs in their own hands, to use their own classic expression, 'the niggers will catch hell.'"
To stop this descent into the cauldrons of racial hate, African Americans had to have access to the ballot box. The reasoning was simple. As long as blacks were disenfranchised, white politicians could continue to ignore or, even worse, trample on African Americans and suffer absolutely no electoral consequences for doing so. The moment that blacks had the vote, however, elected officials risked being ousted for spewing anti-black rhetoric and promoting racially discriminatory policies. But, in 1865, that was not to be. Suffrage was a glaring, fatal omission in the president's vision for Reconstruction--although one that was consistent with the position Lincoln had taken early in his political career when he ''insist[ed] that he did not favor Negroes voting, or,'' for that matter, ''Negroes serving on juries, or holding public office, or intermarrying with whites.''
What a splendid era this was going to be, with one remaining superpower spreading capitalism and liberal democracy around the world. Instead, democracy and capitalism seem increasingly incompatible. Global capitalism has escaped the bounds of the postwar mixed economy that had reconciled dynamism with security through the regulation of finance, the empowerment of labor, a welfare state, and elements of public ownership. Wealth has crowded out citizenship, producing greater concentration of both income and influence, as well as loss of faith in democracy. The result is an economy of extreme inequality and instability, organized less for the many than for the few.
Not surprisingly, the many have reacted. To the chagrin of those who look to the democratic left to restrain markets, the reaction is mostly right-wing populist. And “populist” understates the nature of this reaction, whose nationalist rhetoric, principles, and practices border on neofascism. An increased flow of migrants, another feature of globalism, has compounded the anger of economically stressed locals who want to Make America (France, Norway, Hungary, Finland…) Great Again. This is occurring not just in weakly democratic nations such as Poland and Turkey, but in the established democracies—Britain, America, France, even social-democratic Scandinavia.
We have been here before. During the period between the two world wars, free-market liberals governing Britain, France, and the US tried to restore the pre–World War I laissez-faire system. They resurrected the gold standard and put war debts and reparations ahead of economic recovery. It was an era of free trade and rampant speculation, with no controls on private capital. The result was a decade of economic insecurity ending in depression, a weakening of parliamentary democracy, and fascist backlash. Right up until the German election of July 1932, when the Nazis became the largest party in the Reichstag, the pre-Hitler governing coalition was practicing the economic austerity commended by Germany’s creditors.
The great prophet of how market forces taken to an extreme destroy both democracy and a functioning economy was not Karl Marx but Karl Polanyi. Marx expected the crisis of capitalism to end in universal worker revolt and communism. Polanyi, with nearly a century more history to draw on, appreciated that the greater likelihood was fascism.
As Polanyi demonstrated in his masterwork The Great Transformation (1944), when markets become “dis-embedded” from their societies and create severe social dislocations, people eventually revolt. Polanyi saw the catastrophe of World War I, the interwar period, the Great Depression, fascism, and World War II as the logical culmination of market forces overwhelming society—“the utopian endeavor of economic liberalism to set up a self-regulating market system” that began in nineteenth-century England. This was a deliberate choice, he insisted, not a reversion to a natural economic state. Market society, Polanyi persuasively demonstrated, could only exist because of deliberate government action defining property rights, terms of labor, trade, and finance. “Laissez faire,” he impishly wrote, “was planned.”
Polanyi believed that the only way politically to temper the destructive influence of organized capital and its ultra-market ideology was with highly mobilized, shrewd, and sophisticated worker movements. He concluded this not from Marxist economic theory but from close observation of interwar Europe’s most successful experiment in municipal socialism: Red Vienna, where he worked as an economic journalist in the 1920s. And for a time in the post–World War II era, the entire West had an egalitarian form of capitalism built on the strength of the democratic state and underpinned by strong labor movements. But since the era of Thatcher and Reagan that countervailing power has been crushed, with predictable results.
In The Great Transformation, Polanyi emphasized that the core imperatives of nineteenth-century classical liberalism were free trade, the idea that labor had to “find its price on the market,” and enforcement of the gold standard. Today’s equivalents are uncannily similar. We have an ever more intense push for deregulated trade, the better to destroy the remnants of managed capitalism; and the dismantling of what remains of labor market safeguards to increase profits for multinational corporations. In place of the gold standard—whose nineteenth-century function was to force nations to put “sound money” and the interests of bondholders ahead of real economic well-being—we have austerity policies enforced by the European Commission, the International Monetary Fund, and German Chancellor Angela Merkel, with the American Federal Reserve tightening credit at the first signs of inflation.
This unholy trinity of economic policies that Polanyi identified is not working any better now than it did in the 1920s. These policies are practical failures, as economics, as social policy, and as politics. Polanyi’s historical analysis, in both earlier writings and The Great Transformation, has been vindicated three times, first by the events that culminated in World War II, then by the temporary containment of laissez-faire with resurgent democratic prosperity during the postwar boom, and now again by the restoration of primal economic liberalism and neofascist reaction to it. This should be the right sort of Polanyi moment; instead it is the wrong sort.
Gareth Dale’s intellectual biography, Karl Polanyi: A Life on the Left, does a fine job of exploring the man, his work, and the political and intellectual setting in which he developed. This is not the first Polanyi biography, but it is the most comprehensive. Dale, a political scientist who teaches at Brunel University in London, also wrote an earlier book, Karl Polanyi: The Limits of the Market (2010), on his economics.
Polanyi was born in 1886 in Vienna to an illustrious Jewish family. His father, Mihály Pollacsek, came from the Carpathian region of the Hapsburg Empire and acquired a Swiss engineering degree. He was a contractor for the empire’s growing rail system. In the late 1880s, Mihály moved the family to Budapest, according to the Polanyi Archive. He magyarized the children’s family name to Polanyi in 1904, the same year Karl began studies at the University of Budapest, though he kept his own surname. Karl’s mother, Cecile, the well-educated daughter of a Vilna rabbi, was a pioneering feminist. She founded a women’s college in 1912, wrote for German-language periodicals in Budapest and Berlin, and presided over one of Budapest’s literary salons.
At home, German and Hungarian were spoken (along with French “at table”), and English was learned, Dale reports. The five Polanyi children also studied Greek and Latin. In the quarter-century before World War I, Budapest was an oasis of liberal tolerance. As in Vienna, Berlin, and Prague, a large proportion of the professional and cultural elite consisted of assimilated Jews. In the mid-1890s, Dale notes, “the Jewish faith was accorded the same privileges as the Christian denominations, and Jewish representatives were accorded seats in the upper house of parliament.”
Drawing on interviews and correspondence as well as published writings, Dale vividly evokes the era. Polanyi’s milieu in Budapest, known as the Great Generation, included activists and social theorists such as his mentor, Oscar Jaszi; Karl Mannheim; the Marxist Georg Lukács; Karl’s younger brother and ideological sparring partner, the libertarian Michael Polanyi; the physicists Leo Szilard and Edward Teller; the mathematician John von Neumann; and the composers Béla Bartók and Zoltán Kodály, among many others. In this hothouse Polanyi thrived, attending the Minta Gymnasium, one of the city’s best, and then the University of Budapest. He was expelled in 1907 following a shoving match in which anti-Semitic right-wingers disrupted a lecture by a popular leftist professor, Gyula Pikler. He had to finish his doctor of law degree in 1908 at the provincial University of Kolozsvár (today Cluj in Romania). There, he was a founder of the left-humanist Galilei Circle and later served on the editorial board of its journal.
Polanyi became a leading member of Jaszi’s political party, the Radicals, and was named its general secretary in 1918. He was drawn to the Christian socialism of Robert Owen and Richard Tawney and the guild socialism of G.D.H. Cole. He mused about a fusion of Marxism and Christianity. Polanyi is best classified as a left-wing social democrat—but a lifelong skeptic of the possibility that a capitalist society would ever tolerate a hybrid economic system.
After World War I broke out, Polanyi enlisted as a cavalry officer. When he came home in late 1917, suffering from malnutrition, depression, and typhus, Budapest was in the throes of a chaotic conflict between the left and the right. In 1918 the Hungarian government made a separate peace with the Allies, breaking with Vienna and hoping to create a liberal republic. Events in the streets overtook parliamentary jockeying, and the Communist leader Béla Kun proclaimed what turned out to be a short-lived Hungarian Soviet Republic.
Polanyi decamped for Vienna, both to recover his health and to get off the political front lines. There he found his calling as a high-level economics journalist and the love of his life, Ilona Duczynska, a Polish-born radical well to his left. Their daughter, Kari, born in 1923, recalls, as a preteen, clipping marked-up newspaper articles in three languages for her father. At age ninety-four, she continues to help direct the Polanyi Archive in Montreal.
Central Europe’s equivalent of The Economist, the weekly Österreichische Volkswirt, hired Polanyi in 1924 as a writer on international affairs. He continued his quest for a feasible socialism, engaging with others on the left and challenging the right in ongoing arguments with the free-market theorist Ludwig von Mises. The debates, published in agonizing detail, turned on whether a socialist economy was capable of efficient pricing. Mises insisted it was not. Polanyi argued that a decentralized form of worker-led socialism could price necessities with good-enough accuracy. He ultimately concluded, Dale recounts, that these abstruse technical arguments had been a waste of his time.1
A practical answer to the debate with Mises was playing out in Red Vienna. Well-mobilized workers kept socialist municipal governments in power for nearly sixteen years after World War I. Gas, water, and electricity were provided by the government, which also built working-class housing financed by taxes on the rich—including a tax on servants. There were family allowances for parents and municipal unemployment insurance for the trade unions. None of this undermined the efficiency of Austria’s private economy, which was far more endangered by the hapless policies of economic austerity that were criticized by Polanyi. After 1927, unemployment relentlessly increased and wages fell, which helped bring to power in 1932–1933 an Austrofascist government.
To Polanyi, Red Vienna was as important for its politics as for its economics. The perverse policies of Dickensian England reflected the political weakness of its working class, but Red Vienna was an emblem of the strength of its working class. “While [English poor-law reform] caused a veritable disaster of the common people,” he wrote, “Vienna achieved one of the most spectacular triumphs of Western history.” But as Polanyi appreciated, an island of municipal socialism could not survive larger market turbulence and rising fascism.
In 1933, with homegrown fascists running the government, Polanyi left Vienna for London. There, with the help of Cole and Tawney, he eventually found work in an extension program sponsored by Oxford University, known as the Workers’ Educational Association. He taught, among other subjects, English industrial history. His original research for these lectures formed the first drafts of The Great Transformation.
His mentor Oscar Jaszi was also now in exile and teaching at Oberlin. To supplement his meager adjunct pay, Polanyi was able to put together lecture tours to colleges in the United States. He found Roosevelt’s America a hopeful counterpoint to Europe. After war broke out, one of those lecture trips evolved into a three-year appointment at Bennington College, where he completed his book.
The timing of publication was auspicious. The year 1944 included the Bretton Woods Agreement, Roosevelt’s call for an Economic Bill of Rights, and Lord Beveridge’s epic blueprint Full Employment in a Free Society. What these had in common with Polanyi’s work was a conviction that an excessively free market should never again lead to human misery ending in fascism.
Yet Polanyi’s book was initially met with resounding silence. This, I think, was the result of two factors. First, Polanyi belonged to no academic discipline and was essentially self-taught. Dale writes that when he was finally offered a job teaching economic history at Columbia in 1947, “the sociologists saw him as an economist, while the economists thought the reverse.” Midcentury America was also a period when political economy, institutionalism, the history of economic thought, and economic history were going into eclipse, in favor of formalistic modeling. Polanyi’s was not a hypothesis that could be tested.
Second and more important, Polanyi’s ideological adversaries enjoyed subsidy and promotion while he had only the power of his ideas. Mises, like Polanyi, had no academic credentials. But he conducted an influential private seminar from his post as secretary of the Austrian Chamber of Commerce. The seminar developed the ultra-laissez-faire Austrian school of economics. Mises’s prime student was Friedrich Hayek. As a laissez-faire theorist financed by organized business, Mises anticipated the Heritage Foundation by half a century.
Hayek later contended in The Road to Serfdom that well-intentioned state efforts to temper markets would end in despotism. But there is no case of social democracy drifting into dictatorship. History sided with Polanyi, demonstrating that an unrestrained free market leads to democratic breakdown. Yet Hayek ended up with a chair at the London School of Economics, which was founded by Fabians; the “Austrian School” got dignified as a formal school of libertarian economics; and Hayek later won the Nobel Memorial Prize in Economic Sciences. The Road to Serfdom, also published in 1944, was a best seller, serialized in Reader’s Digest. Polanyi’s Great Transformation sold just 1,701 copies in 1944 and 1945.
When The Great Transformation appeared in 1944, the review in The New York Times was withering. The reviewer, John Chamberlain, wrote, “This beautifully written essay in the revaluation of a hundred and fifty years of history adds up to a subtle appeal for a new feudalism, a new slavery, a new status of economy that will tie men to their places of abode and their jobs.” If that sounds curiously like Hayek, the same Chamberlain had just written the effusive foreword to The Road to Serfdom. Such is the political economy of influence.
Yet Polanyi’s book refused to fade away. In 1982, his concepts were the centerpiece of an influential article by the international relations scholar John Gerard Ruggie, who termed the postwar economic order of 1944 “embedded liberalism.” The Bretton Woods system, Ruggie wrote, reconciled state with market by “re-embedding” the liberal economy in society via democratic politics.2 The Danish sociologist Gøsta Esping-Andersen, a major historian of social democracy, used the Polanyian concept “decommodification” in an important book, The Three Worlds of Welfare Capitalism (1990), to describe how social democrats contained and complemented the market.3
Other scholars who have valued Polanyi’s insights include the political historians Ira Katznelson, Jacob Hacker, and Richard Valelly, the late sociologist Daniel Bell, and the economists Joseph Stiglitz, Dani Rodrik, and Herman Daly. On the other hand, thinkers who seem quintessentially Polanyian in their concern about markets invading nonmarket realms, such as Michael Walzer, John Kenneth Galbraith, Albert Hirschman, and the Nobel laureate Elinor Ostrom, don’t invoke him at all. This is the price one pays for being, in Hirschman’s self-description, a trespasser.
Having been exiled three times—from Budapest to Vienna, from Vienna to London, and later to New York—Polanyi had to move yet again when the US authorities would not grant Ilona a visa, citing her onetime membership in the Communist Party in the 1920s. They ended up in a suburb of Toronto, from which Polanyi commuted to Columbia until his retirement in the mid-1950s.
Though his enthusiasts tend to focus only on The Great Transformation, Dale’s book is valuable for his discussion of Polanyi after 1944. He lived for another twenty years, working on what was then known as primitive economic systems, which gave him yet another basis to demonstrate that the free market is no natural condition, and that markets in fact do not have to overwhelm the rest of society. On the contrary, many early cultures effectively blended market and nonmarket forms of exchange. His subjects included the slave trade of Dahomey and the economy of ancient Athens, which “demonstrated that elements of redistribution, reciprocity, and market exchange could be effectively fused into ‘an organic whole.’” Dale writes, “For Polanyi, democratic Athens was truly antiquity’s forerunner to Red Vienna.” Athens, of course, was far from socialist, but its precapitalist economy did blend market and nonmarket forms of income.
Dale also addresses Polanyi’s views on the escalating cold war and on the mixed economy of the postwar era that many now view as a golden age. The trente glorieuses, combining egalitarian capitalism and restored democracy, should have felt to him like an affirmation. But Polanyi, having lived through two wars, the destruction of socialist Vienna, the loss of close family members to the Nazis, four separate exiles, and long separations from Ilona, was not so easily convinced. While he admired Roosevelt, he considered the British Labour government of 1945 a sellout—a welfare state atop a still capitalist system.
Half a century later, that concern proved all too accurate. Others saw the Bretton Woods system as an elegant way of restarting trade while creating shelter for each member nation to run full-employment economies, but Polanyi viewed it as an extension of the sway of capital. That may also have been prescient. By the 1980s, the IMF and the World Bank had been turned into enforcers of austerity, the opposite of what was intended by their architect, John Maynard Keynes. Polanyi, for his part, blamed the cold war mostly on the Allies, praising Henry Wallace’s view that the West could have reached an accommodation with Stalin.
Dale makes no excuses for Polanyi’s blind spot about the Soviet Union. At various points in the 1920s and 1930s, he notes, Polanyi gave Stalin something of a pass, even blaming the 1940 Molotov–Ribbentrop pact on Whitehall’s anti-Sovietism. And he was sanguine about the intentions of the Russians in the immediate postwar period. As a member of the émigré Hungarian Council in London, he broke with its other leaders over whether the Red Army should be welcomed as a harbinger of democratic socialism. The Soviet liberation of Eastern Europe, Polanyi insisted, would bring “a form of representative government based on political parties.”
Having been proven badly wrong, Polanyi cheered the abortive Hungarian revolution of 1956, yet after it was crushed by Soviet tanks he also found reasons for hope in the mildly reformist “goulash communism” that followed. This was naive, yet not totally misplaced. Though Polanyi was no Marxist, there was enough openness in Hungary that in 1963, a year before his death and well before the Berlin Wall came down, he was invited to lecture at the University of Budapest, his first visit home in four decades.
On the centennial of his birth in 1986, Kari Polanyi-Levitt organized a symposium in his honor in Budapest. The conference volume makes a superb companion to the Dale biography.4 The twenty-five short articles are written by a mix of writers based in the West and several from what was still Communist Hungary—where Polanyi was widely read. The writing is surprisingly exploratory and nondogmatic. Even so, when her turn came to speak, Polanyi-Levitt took a moment to plead: “If I may be permitted one more request to the Hungarian Academy of Sciences…it is that The Great Transformation be made available to Hungarian readers in the Hungarian language.” This was finally done in 1990. Like many in the West, the Communist regime in Budapest was not quite sure what to do with Polanyi.
Today, after a democratic interlude, Hungary is a center of ultra-nationalist autocracy. Misguided policies of financial license played their usual part. After the 2008 financial collapse, Hungarian unemployment steadily rose, from under 8 percent before the crash to almost 12 percent by early 2010. And in the 2010 election, the far-right Fidesz Party swept a left-wing government out of power, winning more than two thirds of the parliamentary seats, which made possible the “illiberal democracy” of Prime Minister Viktor Orbán. It was one more echo, and one more vindication, that Polanyi didn’t need.
What, finally, are we to make of Karl Polanyi? And what lessons might he offer for the present moment? As even his champions admit, some of his details were off. Earlier friendly critics, Fred Block and Margaret Somers, point out that his account of late-eighteenth-century Britain exaggerates the ubiquity of poor relief. His famous case of the poor law of Speenhamland of 1795, whose public assistance protected the poor from the early perturbations of capitalism, overstated its application in England as a whole. Yet his account of the liberal reform of the poor laws in the 1830s was spot on. The intent and effect were to push people off of relief and force workers to take jobs at the lowest going wage.
One might also argue that the failure of liberal democracy to take hold in Central Europe in the nineteenth century, which paved the way for right-wing nationalism, had more complex causes than the spread of economic liberalism. Yet Polanyi was correct to observe that it was the failed attempt to universalize market liberalism after World War I that left the democracies weak, divided, and incapable of resisting fascism until the outbreak of war. Neville Chamberlain is best remembered for his capitulation to Hitler at Munich in 1938. But at the nadir of the Great Depression in April 1933, when Hitler was consolidating power in Berlin and Chamberlain was serving as Tory chancellor of the exchequer in London, he said this: “We are free from that fear which besets so many less fortunately placed, the fear that things are going to get worse. We owe our freedom from that fear to the fact that we have balanced our budget.” Such was the perverse conventional wisdom, then and now. That line should be chiseled on some monument to Polanyi.
A recent article by three Danish political scientists in the Journal of Democracy questions whether it was reasonable to attribute the surge of fascism in the 1920s and 1930s to the long arc of laissez-faire and economic collapse.5 They reported that the well-established democracies of northwest Europe and the former British colonies Canada, the US, Australia, and New Zealand “were virtually immune to the repeated crises of the interwar period,” while the newer and more fragile democracies of southern, central, and eastern Europe succumbed. Indeed, fascists briefly assumed power in northwest Europe only through invasion and occupation. Yet that observation makes Polanyi a more prophetic and ominous voice for our own time. Today in much of Europe, far-right parties are now the second or third largest.
In sum, Polanyi got some details wrong, but he got the big picture right. Democracy cannot survive an excessively free market; and containing the market is the task of politics. To ignore that is to court fascism. Polanyi wrote that fascism solved the problem of the rampant market by destroying democracy. But unlike the fascists of the interwar period, today’s far-right leaders are not even bothering to contain market turbulence or to provide decent jobs through public works. Brexit, a spasm of anger by the dispossessed, will do nothing positive for the British working class; and Donald Trump’s program is a mash-up of nationalist rhetoric and even deeper government alliance with predatory capitalism. Discontent may yet go elsewhere. Assuming democracy holds, there could be a countermobilization more in the spirit of Polanyi’s feasible socialism. The pessimistic Polanyi would say that capitalism has won and democracy has lost. The optimist in him would look to resurgent popular politics.
To The Sounds of Marching Feet
I must go back, these years have held
Great lives and final words, all quelled
Beat desperately, the snow, her sheet
All lost to the sounds of marching feet
In progress, all their proper ends
Unto memories, your lifelong friends
Those caught within the heavy net
Or destined days, so bravely met
With this, then, I might change it all
The seas must rise, the skies must fall
In one, may lone survivor find
I leave no taken dead behind
Familiar loss, more frequent ache
Their visits in a life may take
So clench your fists and beat closed doors
I speak, surviving son, of yours
So tear, my life to stir, replace
Those caught within a mortal chase
To Destiny, as sets things right
Throw singeing flame and blinding light
The lone, of forests ravaged weak,
Now stands, a hope of beacons, bleak
Upon the remnants rests the woe,
The only thing which here, might grow
I've lost, I've made a grave mistake!
The horror holds me, faint, awake
Here all I ask, just take me back,
My faith, my past and all I lack
The marks, the scars, the broken glass
They cannot fade, they mustn't pass
Leave bare upon the timeless slate,
Each ridge and groove, a road to fate
Then grim, for not a hope remains
The slate of names is bled with stains
I see their disillusioned gaze,
Those trapped of time, deprived of days
So goes, as it has always been
Each breath to bring, each beat begin
And intertwine, to end and meet
All lost to the sound of marching feet
I was going to die, sooner or later,
whether or not I had even spoken myself.
My silences had not protected me.
Your silences will not protect you….
What are the words you do not yet have?
What are the tyrannies you swallow day by day
and attempt to make your own,
until you will sicken and die of them, still in silence?
We have been socialized to respect fear
more than our own need for language.
Next time, ask: What’s the worst that will happen?
Then push yourself a little further than you dare.
Once you start to speak, people will yell at you.
They will interrupt you, put you down and suggest it’s personal.
And the world won’t end. And the speaking will get easier and easier.
And you will find you have fallen in love with your own vision,
which you may never have realized you had.
And you will lose some friends and lovers,
and realize you don’t miss them.
And new ones will find you and cherish you.
And you will still flirt and paint your nails,
dress up and party, because, as I think Emma Goldman said,
“If I can’t dance, I don’t want to be part of your revolution.”
And at last you’ll know with surpassing certainty
that only one thing is more frightening than speaking your truth.
And that is not speaking.
SUPER GOETHE BY FERDINAND MOUNT
Goethe: Life as a Work of Art by Rüdiger Safranski, translated from the German by David Dollenmayer. Liveright, 651 pp., $35.00
Herr Glaser of Stutzerbach was proud of the life-sized oil portrait of himself that hung above his dining table. The corpulent merchant was even prouder to show it off to the young Duke of Saxe-Weimar and his new privy councilor, Johann Wolfgang Goethe. While Glaser was out of the room, the privy councilor took a knife, cut the face out of the canvas, and stuck his own head through the hole. With his powdered wig, his burning black eyes, his bulbous forehead, and his cheeks pitted with smallpox, Goethe must have been a terrifying spectacle. While he was cutting up his host's portrait, the duke's other hangers-on were taking Glaser's precious barrels of wine and tobacco from his cellar and rolling them down the mountain outside. Goethe wrote in his diary: ''Teased Glaser shamefully. Fantastic fun till 1 am. Slept well.''
Goethe's company could be exhausting. One minute he would be reciting Scottish ballads, quoting long snatches from Voltaire, or declaiming a love poem he had just made up; the next, he would be smashing the crockery or climbing the Brocken mountain through the fog. Only in old age, and more so in the afterglow of posterity, did he take on the mantle of the dignified sage. Yet even late in life, he remained frightening. His daughter-in-law, Ottilie, whom he insisted on marrying to his son August, though they were not in love and got on badly, admitted that she was terrified of him.
He alarmed people as much as he charmed them, not only by his impatience, his sudden flare-ups, and his unpredictable antics, but by his foul language. In moments of exasperation he would denounce as a shithead any of the great men who had assembled at Weimar: Wieland, Herder, Schiller. The best-remembered line from his first play, Götz von Berlichingen, is the robber baron Götz shouting through the window to the emperor's messenger: ''Tell his Imperial Majesty that he can lick my arse''--otherwise known as the Swabian salute. Goethe's Venetian Epigrams cheerfully skitter through masturbation, sodomy, and oral sex, with sideswipes at coffee shops and yo-yos (one of the first mentions of the toy). Here's a sample couplet:
Hättest du Mädchen wie deine Kanäle, Venedig, und Fotzen
Wie die Gäßchen in dir, wärst du die herrlichste Stadt.
(If only, Venice, you had girls as charming as your canals and
As narrow as your alleys, you would be the world's finest city.)
This Goethe had to be cleaned up quite a bit to become the national poet of the resurgent Germany of the later nineteenth century. Even the architects of that fearsome renaissance were not 100 percent sure of his iconic status. Bismarck said that he could do very well with no more than one-seventh of the forty-two volumes of Goethe's collected works. The centenary of his birth in 1849 passed with relatively little notice. It was the British who led the way in revaluing Goethe as the genius for the new serious age. Thomas Carlyle advised: ''Close thy Byron; open thy Goethe.'' G.H. Lewes's enthusiastic but not unduly reverent biography of 1855 predates anything comparable in German. George Eliot was even more enthusiastic than her husband, regarding Goethe as having raised the human mind to an eminence from which it could more clearly see the world as it really was.
In his slashing attack on Prussian culture, When Blood Is Their Argument, published at the height of the Great War, Ford Madox Ford mocks the cult of ''Goethe as Superman.'' Yet there still persists a notion of Goethe's life as exemplary, a phenomenon above and beyond his works. This tradition lingers on in the subtitle of Rüdiger Safranski's new biography, Goethe: Life as a Work of Art. Goethe is not the only artist to have seen his life like this. Oscar Wilde famously said to Andre Gide, ''I have put all my genius into my life; I have put only my talent into my works''; Marcel Duchamp had the same fancy. But only Goethe, I think, has succeeded so well in persuading posterity to take the same view.
After the collapse of Germany in 1945, only the figure of Goethe was still visibly upright amid the ruins as a source of national moral authority. All over the world, German Academies were rebranded as Goethe Institutes - there are presently more than 150 of them in operation. Yet in recent years, interest abroad in Goethe (and more generally in German language and literature) has sadly declined. Safranski's book is advertised by his publishers as ''the first definitive biography in a generation.'' This overlooks Nicholas Boyle's mighty undertaking, which has already occupied two volumes (1991 and 2000), each slightly longer than Safranski's, with another twenty-nine years of Goethe's life still to go. Safranski does not begin to measure up to the depth and subtlety of Boyle's analysis. On the other hand, he says certain things that Boyle tends to blur or omit altogether - like the story of Herr Glaser. After reading Safranski, we are enlightened, amused, and impressed but rather less inclined to take Goethe's life as nonpareil, while still regarding him as a wonderful writer. By his honesty, the biographer undermines his own subtitle.
Johann Wolfgang Goethe (he earned the ''von'' after seven years in the duke's service) was born to the plush if not the purple. His maternal grandfather was mayor of Frankfurt, his paternal grandfather the city's principal couturier who also married the wealthy widow of the proprietor of the Weidenhof Inn. Goethe's indulgent father, Johann Caspar, spent much of his large inheritance on the education of his only surviving son (of Goethe's five siblings, only his sister Cornelia survived into adulthood), providing a dozen tutors for every subject from Yiddish to cello.
The boy was spoiled and self-confident from the start. At the age of seven, he wrote, ''I cannot reconcile myself to what is satisfactory for other people.'' He insisted that his mother lay out three different outfits for him to choose from every morning. His father gave him an allowance twice anyone else's and never seriously interfered with his son's plans. Nevertheless, a lung illness forced Goethe to drop out of Leipzig University, and his dissertation at Strasbourg was rejected because it was critical of state control over religion.
Then, quite suddenly, he was famous all over the German-speaking lands and, a couple of years later, all over Europe. Götz von Berlichingen (1773) instantly became the trailblazer for the movement half-mockingly dubbed Sturm und Drang. This is normally translated as ''storm and stress,'' which seems to be a surrender to alliteration. Drang denotes, more properly, an active force, as in Drang nach Osten: ''push'' or ''thrust,'' rather than a passive undergoing of pressure. And Götz is a thrusting play. In Goethe's view, the sixteenth-century robber baron with the prosthetic iron hand was one of the ''noblest Germans.'' Götz's former friend Weislingen says admiringly, ''You alone are free, you whose great soul is sufficient unto itself and has no need either to obey or to rule in order to be something.'' Götz is certainly brave, and he is steadfast in defense of his traditional rights, but he is also a thug and a bully who robs innocent merchants, partly for the hell of it. At the end of the play, we are told that it's an unhappy age - i.e., the overcivilized eighteenth century - that has no room for a Götz.
The throb of nationalism is unmistakable. Herder, the inventor of German nationalism, perhaps of all modern nationalism, told his wife that she would enjoy the play, because ''there's an uncommon amount of German strength, depth, and truth to it, although now and then only the thought is there.'' In later life, Goethe protested when nationalists deployed Götz in their cause, but he had used the phrase ''Deutschheit emergierend'' (German national feeling emergent) about this phase in his work. In 1943, Hitler named the 17th Panzer Grenadier Division the Götz von Berlichingen Division. Its badge was an iron fist. In the early 1770s, too, Goethe collaborated with Herder in collecting German folk songs. His own delightful ''Heidenröslein'' was included in the collection as if it were a traditional folk song. For Herder's book of essays Of German Culture and Art, he wrote an article on German architecture, ''Von deutscher Baukunst,'' identifying Strasbourg Cathedral as the quintessential masterpiece of the German style. In fact, the cathedral is more usually described as a triumph of high French Gothic, although its principal architect, Erwin von Steinbach, certainly was German.
As a young man, Goethe shared in the widespread longing for a revival of the German peoples and a recovery from the devastation of the Thirty Years' War. In later life, he would talk quizzically, almost patronizingly, of ''my dear Germans,'' but in the rough, down-to-earth language of Götz, its pace and movement -- all borrowed from Shakespeare but simplified and coarsened -- he had given his fellow dreamers something to work with. This plainness of speech he never lost. It is just as apparent in his ''classical'' dramas, Iphigenia in Tauris and Torquato Tasso, as it is in his rougher ''Germanic'' pieces, Götz and Part One of Faust. The same is true of most of his lyric poetry. In all his variousness, he remained a highly accessible sage.
It was a different sort of dream that animated his second and even more amazing success only a year later. While Götz was for the German public, the novel The Sorrows of Young Werther (1774) was for young people everywhere. In tone and technique, it owes a lot to Rousseau's La Nouvelle Heloise and Richardson's Clarissa, both also international best sellers. The copious weeping, the unbridled privileging of personal feeling, the letter format - all these are characteristic of the eighteenth-century novel of sensibility. Werther differs only in two respects. The letters all come from one person, young Werther, and the novel is drenched in the possibility of suicide.
Werther is the ''I'' whose hankerings, recollections, and opinions fill the hundred-odd pages of this novella, which overwhelmed European readers because they already thought as Werther does. He worships Nature as they do, loves the simple life as everyone up to Marie Antoinette claimed to do; he is happiest picking peas in the inn's garden and shelling them while reading his Homer. And of course he simply adores Ossian. He reads six pages of his translations of Ossian to Lotte, with whom he has fallen in love; she bursts into tears, the only possible reaction. The latter part of the story, rather awkwardly, has to be told by ''the Editor to the Reader,'' because Werther has already shot himself with pistols belonging to Lotte's husband, Albert. Werther's mind has been on killing himself for much of the story, for the situation is hopeless from the start as Lotte is already engaged to Albert.
Copycat suicides have been dubbed the Werther Effect, but Safranski dismisses as only a persistent rumor the claim that young men actually killed themselves in droves after reading the book. There was, however, one attested case of suicide painfully close to Goethe. On January 16, 1778, Christel von Lassberg, the daughter of a court official in Weimar, who was embroiled in an unhappy love affair, jumped from a bridge into the icy waters of the River Ilm and drowned. A copy of Werther was found in her pocket -- or was that only a rumor too? Neither Safranski nor Boyle seems quite sure. At all events, Goethe was summoned from a nearby pond where he was skating with the duke (how cold the river must have been), and he immediately ordered a grotto to be dug in memory of the unlucky girl. He wielded a pickaxe and shovel himself and told his platonic lover Charlotte von Stein that they worked deep into the night:
In the end I continued alone until the hour when she had died; that's the kind of evening it was. Orion stood so beautifully in the sky...There is something dangerously attractive and inviting about this grief, like the water itself, and the reflection of the stars of heaven that shines from both.
This is pure Goethe: he alone digs on, watching the heavens, watching himself, appropriating Christel's feelings if not her fate. The grotto came to nothing, but out of that night came his lovely poem ''To the Moon,'' which is also addressed to the River Ilm and to Charlotte. All the same, he was undeniably under pressure. Had he inspired a terrible example? Only a couple of weeks before Christel's death, he had put on at the Weimar court theater a farce he had written, The Triumph of Sentimentalism, with himself playing a king who has gone mad with the craze for Nature and Sentiment and who fills the arbor in his garden with soppy books like La Nouvelle Heloise and Werther. This mediocre piece made some viewers uncomfortable. Wasn't Goethe heartless to make fun of the silly folk whom he himself had made dizzy? Safranski is inclined to acquit his subject: ''Goethe's ridicule of Werther-like sentimentalism could surprise only those who hadn't read Werther closely. For the novel presents Werther as a young man who has read too much of such literature, and whose feelings come more from books than from life.''
But will this let-off do? There is so much of young Goethe in Young Werther. He admitted, ''I myself was in this case and know best what anguish I suffered in it and what exertion it cost me to escape from it.'' We are reminded of Flaubert saying ''Madame Bovary, c'est moi.'' But Flaubert was merciless about Emma's pretensions. Goethe wasn't.
Not everyone was blown away by the book. Georg Christoph Lichtenberg, the physicist-satirist of Göttingen, was no more enthusiastic about the novella than he was to be about Goethe's scientific efforts: ''I think the smell of a pancake is a better motive for staying in this world than all young Werther's ponderous reasons for leaving it.'' I must myself confess a congenital antipathy to Goethe's novels. The characters seem to swim about in a glaucous haze like electronically controlled fish. Goethe claimed that Elective Affinities was his best book and that it needed to be read three times to be properly appreciated. I have done just that but remain baffled by its implausibilities: the extraordinarily stilted talk between the husband and wife, Eduard and Charlotte, the failure of anyone to notice that beautiful Ottilie is starving herself to death, the immunity of her lovely corpse to the normal processes of decomposition. The central conceit that the characters are attracted to one another by a quasi-chemical process seems to me to lack any shock value, since they are such inert substances to begin with.
Like other biographers, Safranski portrays Goethe as a genius who is constantly reinventing himself. This is a natural tendency in dealing with a subject who lived so long and did so much. But certain cautionary notes need to be sounded. While his career as a lyric poet lasted his entire life and he was as fresh at the end as at the beginning, his career as a dramatic demiurge was a blaze as brief as it was brilliant, with no more than seven years between Götz (1773) and Tasso (1780), and it was over by the time he was thirty.
In many ways, he was fully formed as a young man, and his subsequent turns, toward classicism, toward the erotic, often appear as no more than pirouettes on the ice. He bursts upon our attention with his marvelous poetic facility, from “Welcome and Departure” when he was twenty to “The Bridegroom” and “To the Rising Moon” when he was nearly eighty. As with other old creative artists—Hardy, Yeats, Elgar—he found in his late-life infatuations with young women “the throbbings of noontide,” but this was reviving an old self, not inventing a new one. In almost all his verse, there is an extraordinary combination of movement and musicality, the best of Byron with the best of Tennyson. He is the easiest of poets to remember.
Safranski’s translator, David Dollenmayer, has produced an excellent English version. He has chosen to translate the verse himself, and his versions have a modest grace which often stands up well against acknowledged masters such as Michael Hamburger and David Luke. I prefer, for example, Dollenmayer’s Fifth Roman Elegy to Hamburger’s:
All the night long, however, it’s
Amor who keeps me busy.
If I only learn half, I am
doubly amused and
Do I not learn, after all, by
tracing the lovely breasts’
Forms, by running my hand
down the beautiful hips?
Only then do I grasp the marble
aright, I think and compare,
See with a feeling eye, feel
with a seeing hand.
What astonishes, almost as powerfully as his lyric fertility, is the ferocity of Goethe’s self-assertion, his determination not to bow to any god. He tells us that he had begun early on to develop his own religion, far from any church or liturgy. He revolted against the austere Lutheranism of his boyhood, and when he came in contact with the Pietist communities around Strasbourg, they bored him rigid, though he had a close friendship with the devout Susanna von Klettenberg, a cousin of his mother’s. He describes himself as a Pelagian, without any belief in original sin. He told Susanna that he didn’t know what he needed to ask God’s forgiveness for. He was a stranger to guilt. Despite intermittent gloom, his basic outlook was sunny. He signs off “The Bridegroom” with a line that might wind up the spiel of any corporate motivator: “Let life be as it will, yet it is good.”
Curiously, biographers of Goethe, while acknowledging that he was a self-confessed pagan, tend to present him as a rather chaste character. Safranski and Boyle recount his flirtations with Lotte and Lili and the rest and sigh over his testing amitié amoureuse with Charlotte von Stein. Yet they both leave the impression that for Goethe sex began in 1788, in Rome, when he was nearly forty. He is conceded only two sexual relationships in his whole long life, with the insecurely identified Faustina in Rome (for whom he seems to have left behind a payoff of four hundred scudi) and with the sweet-natured and loyal Christiane Vulpius when he got back to Weimar, later to become his wife and the mother of his children. Goethe’s bragging to the duke about Faustina, in Boyle’s view, suggests a certain sexual innocence that “makes it unlikely there were many predecessors.” Safranski mentions none.
I wonder. Goethe was a boundless, energetic, uninhibited character who happened to be the most famous author in Germany. In his early twenties he had boasted to his friend Kestner: “Between you and me I know something about girls.” His first letters from Weimar record that “I’m leading a pretty wild life here.” It was common gossip that almost as part of his duties, he was constantly out with the duke sharing the local girls. In his early farce Hanswurst’s Wedding, the bumpkin Hanswurst only wants to take Ursula up to the hayloft, and when he is told that all the posh people are coming to his wedding, says, “Sie mögen fressen und ich will vögeln” (“They want to eat, I want to fuck”). To present Goethe as a stranger to the hayloft is to collude in the sort of prudish sanctification he abhorred. What he believed above all was that Nature must take its course. Certainly he had no shred of reverence for Christian chastity.
His revulsion against Christianity extends to a loathing of its iconography and of the personality of Jesus. Here he differs from his nineteenth-century English admirers, who still warmed to the ethos of Christianity while doubting whether any of it was true. Goethe did write a couple of religious poems in his early youth but burned them and never used biblical imagery again. “I for my part could not be persuaded by an audible voice from heaven that a woman has given birth without a man or that a dead man has risen again; on the contrary, I regard these as blasphemies against the great God and His revelation in Nature.”
When he finally made his long-dreamed-of trip to Italy, he remained impervious to the Christian art he saw. He was disappointed even by the classical monuments he saw in Rome, most of them at this date overgrown tumbles of stone. For all his endless fertility, Goethe's imagination, or lack of it, has a forbidding quality. He made an exception for Mantegna's frescoes in the Eremitani Chapel in Padua, which seemed to have a blunt, pure presence. "Presence" is the key word here. He condemned what he saw as the poverty of Christian mythology, always longing for something absent, dwelling, in a way that he regarded as unholy, on deprivation, suffering, and expectation rather than empowerment and possession.
The only true divinity was Nature. “Gott sive Natur”—“sive” here meaning “which is only another way of saying.” This did not lead him into a woozy pantheism, seeing divine purpose in every blade of grass and benevolence in every puff of wind. In his early poem “Divinity,” he spells out the message: Nature is unfeeling; the sun shines on the evil and the good.
From his early thirties onward Goethe plunged into almost every branch of the natural sciences—mineralogy, geology, botany, anatomy, chemistry, optics—with a zest that was thorough without ever quite ceasing to be amateurish (he was reluctant to use the latest instruments, preferring to rely on the evidence of his own senses). On his mountain ambles, he was never without his geologist’s hammer and his samples satchel. He had, too, the amateur’s umbrage when the professionals refused to take him seriously. Emil du Bois-Reymond, the founder of neurophysiology, called his theory of color, which attempted to disprove Newton, “the stillborn bagatelle of a dilettantish autodidact.”
Goethe’s first great claim, in 1784, was to have discovered an intermaxillary bone in the human skull, which would have established a hitherto missing link with other animal species. The savants, led by the foremost osteologist of the day, Petrus Camper, pooh-poohed this. Yet Goethe turned out to have been right: there are vestiges of a bone between the human maxillae, and it is known today as Goethe’s Bone. I am surprised by how little the biographers make of this: Safranski records the episode in a few sentences, and Boyle remarks, rather patronizingly, that “Goethe lacked a methodical theoretical framework by which to interpret his observation.” Yet in a way Goethe’s achievement is more remarkable than if he had possessed such a framework. He had caught a glimpse of the evolutionary process without looking for it.
The Prometheus of Weimar had cleared his own mind not only of religion in any sense but of any notion of a purposeful Providence. Now Goethe takes the sixteenth-century Puppenspiel vom Dr Faustus, a puppet show about a magician, and transforms it into an epic drama of Man versus the Universe. Goethe’s Faust is worth comparing with Marlowe’s Dr. Faustus, another masterpiece based on the same sources. Marlowe was much denounced as an atheist, if mostly by unreliable and hostile witnesses, but Dr. Faustus follows a more or less orthodox theological trajectory. Faustus makes a bargain with the devil, enjoys all the pleasures of the world, repents too late, and is carried off to hell.
Goethe’s first version, the Urfaust, not discovered until 1886, sticks more closely to this schema. His introduction of the story of Faust’s seduction of poor Gretchen only intensifies and humanizes what is clearly a tragedy. But Goethe confessed himself unsuited or antipathetic to the tragic genre, and he proved this by repeatedly fiddling with the play to detragedize it. Part Two, which contains no whisper of the tragic, was finished only six months before his death.
Chaotic and interminable as it may seem on the page, on stage Faust usually works, whether performed as Urfaust, as Part One, or as the full thing. After his years as director of the Weimar theatre, Goethe was a master of all the tricks of surprise, quickfire changes of pace and scene, and a colorful cast scurrying across the stage. The play accommodates the small-scale tragedy of Gretchen with her beautiful songs, the lurid antics of Walpurgisnacht, the backchat of pre-Socratic philosophers, the birth of Homunculus the artificial human being, and much else besides. Part Two wanders through classical pillars and along Homeric seashores into scenes that satirize the bustle and corruption of contemporary society. In the closing scenes, Faust becomes a greedy property developer who burns an elderly couple, Philemon and Baucis, out of their cottage on the dunes because he lusts to own the entire coastline. Rereading this scene, I could not help thinking of Donald Trump winkling recalcitrant homeowners off the dunes of Aberdeenshire to make way for his golf course. But there are no rewards and no punishments. At the end of Part One, a voice from heaven tells us that Gretchen is saved. At the end of Part Two, Faust’s soul is carried off to heaven by a chorus of angels.
Many of Goethe’s plays deal with great men. In none of them do we find much concern with democracy. In his own life, he accepts as given the assortment of absolutist duchies that made up the Germany of his day. He eagerly sought out a position at one of these courts, quickly settling on the duchy of Saxe-Weimar, whose young duke, Karl August, was delighted to have such a celebrity author on the payroll. It was not a big place, then or now, with an overall population of 80,000, only 6,000 of them in Weimar itself, a quarter of those dependent on the court. The mail coach didn’t even call at Weimar; the tracks on the roads were so rutted that carriages had to detour through the fields. Yet no other town in Germany could boast Wieland, Herder, Lenz, Schiller, and Goethe among its residents.
Walking today down the pretty streets the short distance to the little river and the cottage that the duke gave Goethe, one still feels the thrill of treading on sacred ground. But at the same time it was a backbiting, introverted, jockeying, petty sort of place. In the perpetual sunshine of Italy, with its uninhibited outdoor life, Goethe reflected, “How I feel what wretched lonely people we are forced to be in the little sovereign [German] states, because, especially in my position, one can speak with scarcely anyone who does not want or desire something.”
So why did he stay in Weimar fifty years? Why did he return twice from those Italian sojourns, where he could sketch and drink and sightsee and munch grapes and figs to his heart’s content? Boyle devotes seventeen pages to a careful examination of the reasons why Goethe never left Weimar (he is buried in the ducal cemetery there). They seem to boil down to “big fish in small pond.” In Berlin or Dresden, Goethe would have been a minor functionary. At Weimar, provided he kept on the right side of the duke, a good-natured chap, he was a lion.
But his long stay at court did not come without a moral cost. After his death, Goethe was often denounced as a prince’s toady, and a selfish escapist, until he was rescued by the new Germany’s need for a national bard. Weimar certainly was a refuge. Goethe was able to sit out the French Revolution, burying himself in his scientific work without offering a single political comment for six months after the fall of the Bastille. During the years he spent as chief minister, he reduced Karl August’s little army from 500 to 136 men and reduced the tax burden, which had been one of Germany’s highest; the eventual failure of his great silver mining project at Ilmenau looks like bad luck rather than bad management.
He remained, though, steadfastly opposed to anything resembling a popular constitution. He supported Metternich’s Carlsbad Decrees, which introduced press censorship, the police investigation of dissidents, and state control of universities. Even after the great German defeat at nearby Jena in 1806, he refused to mourn the loss of liberty:
When people bewail an entity that has supposedly been lost, an entity that not a soul in Germany has ever seen in his life, much less bothered about it, then I have to conceal my impatience so as not to become impolite.
Goethe hero-worshiped Napoleon from first to last, referring to him as “my Emperor” and on every possible occasion wearing the cross of the Légion d’Honneur that the emperor had awarded him. He called Napoleon “the highest phenomenon possible in history.” He was enraptured to be given a private audience at the congress of European princes in Erfurt in October 1808, and treasured Napoleon’s greeting to him: “voilà un homme.” Others came to be horrified by the slaughter Napoleon unleashed on the world and to recognize his cynicism and theatricality. Goethe saw in him the impervious, almost mineral hardness that was indispensable to a great creator, and was particularly gratified that Napoleon said “that my character was in accord with his.”
The hard outlines of Goethe’s character might be blurred by his nineteenth-century admirers, but they were always there for those who cared to look. And no one looked more intently at Goethe than Friedrich Nietzsche. In Twilight of the Idols, he tells us that “Goethe is the last German before whom I feel reverence.” Only Goethe had treated the French Revolution and the doctrine of equality with the disgust they deserved. He was a convinced realist in an epoch disposed to the unreal:
Goethe conceived of a strong, highly cultured human being, skilled in all physical accomplishments, who holding himself in check and having reverence for himself, dares to allow himself the whole compass and wealth of naturalness, who is strong enough for this freedom…a man to whom nothing is forbidden, except it be weakness, whether that weakness be called vice or virtue.
This is such a resonant and exact description of how Goethe saw his mission that I am surprised that Safranski does not quote it (nor, as yet, does Boyle). Perhaps that is because the encomium is also somewhat off-putting. This Superman—for that is what Nietzsche is describing, though he does not apply the term directly to Goethe—is ultimately a frightening figure. He acknowledges no external limits on his will, his actions are self-validating, he is beyond scruples. Nothing forbidden except weakness? Give me a little weakness every time. Hardness only leads to hardness. I am not the first to note that included among the sights of Weimar in the Michelin Green Guide is Buchenwald.
Letters
Where the Lemon Trees Bloom
January 18, 2018
Donald Trump has threatened ''Little Rocket Man'' with ''fire and fury like the world has never seen'' - not even seen, presumably, at Hiroshima or Nagasaki. We possess, after all, many more and much better (that is, much worse) explosives than were used by President Truman in 1945, when he incinerated those cities without Congress or the American people knowing we even had them. The fact that President Trump (''old lunatic'') has a legally absolute power to destroy Kim Jong-un (''short and fat'') over dueling insults is so scary that Senator Edward Markey and Representative Ted Lieu are trying to restrict that absolute power, so that only Congress would have the authority to declare nuclear war. This seems not only reasonable but constitutionally necessary. The Constitution in fact denies the president the power to declare war and reserves it solely to Congress.
More than that, the framers clearly opposed the massing of power in the executive - lest it become the monarchy they had opposed with a revolution. They so feared one-man rule that they entertained the idea of a double executive (based on the ancient Roman consulship) or a legislative council. The single executive was adopted largely because James Wilson of Pennsylvania argued that it would make the president more impeachable (it would be hard to fix responsibility on members of a team or a council). They thought one man would be more accountable - not anticipating post-Constitution developments like ''executive privilege,'' the ''classification'' of secrets, and ''the unitary executive'' that would make him less accountable.
But now that we have traveled so far from constitutional government, what can we do? The atom bomb was born as a secret project of President Franklin Roosevelt, and then deployed by Truman without any but his own authority. Truman did not even know, as vice-president, that Roosevelt was developing this new weapon until he became the chief executive himself and was let in on the secret. Then, after the bombings of Japan were sprung as a surprise on the whole world, presidential authority to keep and use the ''Bomb'' (soon to be a vast arsenal of hydrogen explosives) was extended undiminished in the Atomic Energy Act of 1946.
Ever since, every president carries with him wherever he goes the ''football'' containing the codes for the immediate arming and launching of obliterative missiles. As Vice President Dick Cheney said of President George W. Bush's war power in 2008, ''He could launch the kind of devastating attack the world has never seen [that phrase again]. He doesn't have to check with anybody, he doesn't have to call the Congress, he doesn't have to check with the courts.''
The symbolism of that tremendous power has put the nation on a permanent war footing - so much so that we think and talk about the president as ''our commander in chief,'' though the Constitution does not give him that power over citizens but only command ''of the Army and Navy of the United States, and of the Militia of the several States, when called into the actual Service of the United States'' (Article 2, Section 2). That is: he is not even the commander in chief of the National Guard in its normal service in the separate states, only when it is nationalized for use in the country's wars.
The war footing of the presidency in 1946 was the setting of the Atomic Energy Act. President Truman did not know what conditions would prevail after World War II. He did not want to give up any of the vast powers the executive had accumulated in that conflict. He tried to impose universal military training on all young males. He tried to prevent strikes by drafting coal miners and steel companies into military service, since all sources of strength were to be at his disposal as our commander in chief. He did not ask Congress for approval of American intervention in the Korean War, since his secretary of state, Dean Acheson, said that might weaken his power to respond instantly to nuclear threats. There was a wartime edginess then not only over the Soviet threat from abroad but from inner subversion that had to be guarded against by ''classification'' of our many secret programs, loyalty oaths, and extensive monitoring and blacklisting of suspected leftists. (In the 1950s, Donald Trump's dogged defenses of the Russian leader and government would have made him unemployable on TV as a loyalty risk.)
War conditions, instead of fading after the defeat of the Axis, found new homes as fresh threats came. World War II melded into the cold war, which has melded into the ''war on terror.'' There was no reduction in arms expenditures between the ''end'' of one peril and preparation for the next. When Truman was given his authority over the Bomb, there was at first not a full deployment of nuclear production and delivery systems. The president thought he could preserve a nuclear monopoly.
As other nations have acquired the Bomb, we have had to develop strategies for containing them. The ''nuclear club'' now numbers nine, while other countries are working to develop their own nuclear weapons. As each one acquired the Bomb and different degrees of deliverability, there was a temptation to think the scourge of further spread could be eliminated by a preemptive strike; but the danger of again using any nuclear weapon was too terrible to be entertained, and the notion that a first blow would not be followed by a renewed nuclear program was seen to be chimerical. So attempts at treaties and other agreements restricting production or proliferation were explored. (Even when George W. Bush launched a false-alarm attack on Iraq's nonexistent ''weapons of mass destruction,'' it was at least not a nuclear attack.)
When the Constitution granted one person the executive power, there was an expectation that he would consult experts - scientific, military, and diplomatic - before making his decisions. He was given a power that he was expected to use:
He may require the opinion, in writing, of the principal officer in each of the executive departments, upon any subjects relating to the duties of their respective offices. (Article 2, Section 2)
This power of inquiry is in fact a duty, one that President Trump has neglected. Rather than consult the officers trained to advise the president, he has mocked the most experienced intelligence veterans (calling them political hacks), dismantled the government's scientific bodies (as promoting hoaxes), drained the diplomatic agencies (as useless bureaucrats), and reduced or eliminated national commitments to other countries. He says he does not need expertise; he knows more than experts; he has a very good brain, which is his greatest and often his only resource. This neglect of the necessary requirements for governing offers in itself grounds for impeachment, but he is hasty enough that during the long impeachment process he might be goaded into using the very nuclear power for whose responsible use he has not prepared himself.
The assumptions that Congress made about the conduct of President Eisenhower or President Reagan - that they could be counted on to act with humble precaution - no longer seem to apply. What can be done? There comes a time when, as Cicero put it, ''the preservation of the people should be the highest law,'' Salus populi suprema lex esto. A crisis sufficient to justify use of this maxim cannot be predicted. It could be any first nuclear strike the president may order. Only extreme peril can justify an extreme remedy. It is said (I don't know with what truth) that in 1974, Secretary of Defense James Schlesinger told the implementers that in the event of a nuclear order from President Nixon, who was in a massive drunken funk, they should clear it with him.
We can only hope that there are high-ranking patriots who might act like that if Big Rocket Man went after Little Rocket Man. Even a soldier in the field must disobey a truly disastrous order from a manifestly disabled officer. The commander in chief has to be held to the same standard as his subordinate commanders, for the preservation of the people. It is reassuring to know that the current commander of the US Strategic Command, Air Force General John Hyten, as well as a former commander, General Robert Kehler, recognize this as a rule of international law.*
*Kathryn Watson, ''Top General Says He Would Resist 'Illegal' Nuke Order from Trump,'' CBS News, November 18, 2017.
Just as Donald Trump was being inaugurated last January, the People's Daily, the mouthpiece of the Chinese Communist Party, declared: ''Western-style democracy used to be a recognized power in history to drive social development. But now it has reached its limits.'' Two years earlier, China's education minister, Yuan Guiren, told a conference of academics that they should ''by no means allow teaching materials that disseminate Western values in classrooms.''(1) These statements are just two examples of an ever more evident theme of Xi Jinping's tenure as China's paramount leader. Behind the strident rhetoric lies a longstanding fear that somehow the ''West'' will take over and destroy China's sense of itself.
The fear may be misplaced, but it is not surprising. The West and China have been intertwined for nearly two centuries, and the relationship has often been unhappy. What the Chinese call the ''century of humiliation,'' from the mid-1800s to the mid-1900s, lies at the heart of their political thinking about the wider world. The arrival of gunboats, missionaries, and the opium trade, which resulted in the Opium Wars of the mid-nineteenth century, convinced Chinese observers that all Westerners had to offer was violence and commercialism.
In the early twentieth century, the Chinese writers and intellectuals who championed a ''New Culture'' movement advocated adoption of Western political and cultural concepts such as Social Darwinism and anarchism while simultaneously rejecting the imperialist presence of Western nations. Today, too, the Chinese government officially speaks of the need for ''internationalization'' - through increasing its involvement with the UN and sending thousands of Chinese students overseas every year - while also warning its educators and students about the pernicious influence of the ''West,'' an ill-defined concept that apparently includes liberalism and constitutional reform but not Marxism or industrial capitalism.
During much of the period from the mid-nineteenth century through the mid-twentieth century, China ceded territory and sovereignty to Britain, France, America, Russia, and Austria-Hungary, as well as its Asian neighbor Japan. In his new book, Out of China, Robert Bickers stresses the importance of this history for Westerners who wish to understand Chinese attitudes toward the wider world, although he remains skeptical of the idea - characteristic of much contemporary Chinese scholarship - that the period amounted to an ''unrelenting Chinese nightmare.'' His thoughtful, engaging, and well-written analysis helps to separate fact from myth when it comes to understanding the nature of Chinese nationalism.
Out of China is a panoramic examination of the increasingly powerful articulation of China's national identity in the twentieth century and the country's painful encounter with Western imperialism. The book picks up from the end of Bickers's last major work, The Scramble for China (2011), which detailed the rise of Western influence in China up to the 1911 revolution that overthrew the last emperor, Puyi. This account starts in 1918, at the end of the Great War, with a victory parade through the streets of Beijing led by the British community of Shanghai and the Chinese government of the time, which had committed China to the Allied side in 1917. (The 96,000 Chinese who went to Europe were not given combat duties but worked at the front, digging trenches and doing manual labor.) The rest of the book is divided into two sections: the first looks at China in the early twentieth century, weak but seeking to make itself strong; the second examines it later in the century, objectively strong but acting as if it were still weak.
Bickers begins by describing the growing sense of anger in China's cities and rural areas over the influence of Western economic and political interests. The invasion of China in the mid-nineteenth century, first by the British but soon after by France, Russia, and Japan, among others, had deeply compromised the country's sovereignty. China was never fully colonized, but portions of territory, such as Hong Kong and Dalian in Manchuria, were captured as spoils of the Opium Wars and the Russo-Japanese War of 1904-1905, and a system of treaty ports across China gave the West preferential trading rights. Perhaps most insidious was the system of ''extraterritoriality,'' which meant that Westerners were partially immune from Chinese commercial and criminal law anywhere within China, with foreign-dominated courts arbitrating disputes instead.
By the early twentieth century, Chinese anger against these arrangements had peaked. Centuries-old mistrust of foreign interference combined with a more modern nationalism based on the idea that China should be a free and sovereign republic, equal to others in the world. This was not just a Chinese phenomenon. In his influential study The Wilsonian Moment (2007), Erez Manela argued that Woodrow Wilson's support for ''self-determination'' had inspired independence struggles across the colonized world, in places as far apart as India, Korea, and Egypt. Bickers shows that China's delegates had come to the Paris Peace Conference in 1919 seeking nothing less than ''the repudiation of Imperialism as a rule of action in the transactions of nations.''
Instead, China had to acquiesce to a dubious settlement in which former German colonial territory in China was handed over to Japan, which had entered the war in 1914 as one of the Allied powers. Chinese figures as different as the dapper diplomat V.K. Wellington Koo and the rural revolutionary Peng Pai agreed after Versailles to focus on the same task: to strengthen China and remove the ''unequal treaties'' by which it had been controlled ever since the 1842 Treaty of Nanjing handed over Hong Kong to Britain. The Nationalist (Guomindang) government of Chiang Kai-shek won a precarious hold on power in 1927, compromised by fiscal weakness and the need to cut deals with the warlord leaders who controlled much of China away from the prosperous east coast. Yet it used its new authority to renegotiate sovereignty, slowly regaining autonomy over tariffs in 1930 and setting unilateral dates by which it expected the Western powers to end extraterritoriality.
This advance toward sovereignty was accelerated by the Second Sino-Japanese War, which broke out in 1937 and lasted eight years. After Pearl Harbor, the war became global as the US and British Empire formally allied themselves with China. The war had the ironic effect of making China weaker than it had been before, while also giving it symbolic strength in the global order. In 1937, China was still a semi-colonized state. By 1945, it was one of the few fully sovereign states in Asia, with a permanent seat on the UN Security Council. But China's improved international standing came even as its government was burdened by inflation, corruption, and human rights abuses, all of which contributed to the collapse of Chiang's regime and his defeat by Mao Zedong's Communists in 1949.
Mao's China, by contrast, was freer to make its own choices than its predecessors, which had had to deal with a constant round of internal insurgencies and foreign invasions. Yet it was vulnerable in a different way. Pre-war China had been unable to keep foreigners out, even when it wanted to. Mao's China, on the other hand, was prevented from allowing many of them in, as major Western powers (notably the United States) refused to open diplomatic relations. Mao was more inclined to focus on China's relationship with the socialist bloc and to create new partnerships with the USSR and ''fraternal'' states such as Vietnam.
But by the 1960s, China had turned even further inward, and had begun to regard even old allies like the Soviets with suspicion. In the 1970s, after the opening to the US and the restoration of markets by Deng Xiaoping, China reversed course and made the journey toward political and economic strength that marks it today. But throughout that time, and even now, Beijing's policymakers have remained fearful that China is only a step or two away from once more becoming a victim of a world that wants to alter the ideas and identity that it has developed internally over decades.
One theme that distinguishes Out of China from much Chinese-language and Anglophone scholarship is its concentration on the part Britain played in shaping modern China. Broadly, the story of Sino-Western encounters in the twentieth century has been dominated by the tumultuous relationship between China and the US, with the principal participants on the American side being figures such as Henry Luce, a magazine magnate and Republican adviser, and Richard Nixon. Discussion of Europe's influence on China during the twentieth century has been increasingly confined to shorthand secondary cultural references, such as the love of croissants that Deng Xiaoping developed in France as a student in the 1920s.
Yet Britain in particular profoundly influenced modern China. When architectural historians think of great cities of British imperialism, Bombay and Cape Town tend to come to mind. They rarely mention Shanghai. However, a stroll down the Bund, the waterfront in the heart of the city, lined by buildings that combine imperial pomp with the art deco and modernism of the interwar years, reminds one of Britain's historical dominance. Halfway down the Bund, the entrance hall of the old Hong Kong and Shanghai Bank (now the Pudong Development Bank) features mosaic murals of eight cities where it previously had branches, among them Calcutta, Hong Kong, and London. The bank was just one of the institutions that tied China to British imperial interests.
Technically, the center of Shanghai was an ''International Settlement,'' a term devised by the British in the nineteenth century to describe a zone that was run by an autonomous Municipal Council of foreigners, rather than by a colonial governor. Americans and Japanese contributed to the Settlement as councilors and taxpayers, supporting its police force and bureaucracy, but its government and culture were markedly British - at best, in Bickers's phrase, a sort of ''Anglo-cosmopolitanism.'' More powerful, if less visible, was the Maritime Customs Service, established in 1854 to gather tariff revenue from imports. It was an agency of the Chinese government and lasted until 1950, but it, too, was shaped by Britain. All but one of its inspectors-general were British, and one of them, Sir Robert Hart, spent nearly half a century in charge.
Bickers analyzes the interaction between Britain and China, and divests it of any false romance or glamour. The sheer violence of colonialism echoes through the book. On May 30, 1925, police under British command, panicking as they were confronted by a demonstration against imperialism, shot at students and workers in central Shanghai; twelve died. Over the next few weeks, protests intensified across China. In Guangzhou (Canton) on June 23, a hot summer day, protesters gathered on Shamian Island. ''We do not know who opened fire,'' Bickers writes, ''but we do know that in the ensuing twenty-five-minute slaughter, as French and British machine guns raked the column at 30 yards range across the canal, at least fifty-two Chinese and one Frenchman lost their lives.'' This came just six years after the Amritsar massacre in India, in which British troops opened fire on unarmed civilian protesters, perhaps the archetypal example of how empires ultimately rely upon violence to maintain control.
Bickers argues that the killings in China resulted in part from the imperial powers' inability to understand that China had been moving, painfully but genuinely, toward becoming a modern society. Student nationalist movements, hygienic reform, and a strong Chinese presence at the League of Nations amounted to very little when the British were faced with a crowd of Chinese protesters. Indeed, ''in most foreign eyes, every gathering was a potential mob'' no better than the violent rebels who had besieged the foreign legations in Beijing as part of the Boxer Rebellion in the summer of 1900.
However, Britain began to lose its hold as China's more outward-looking leaders realized that their anti-imperialist aspirations were better aligned with the American self-image than with that of the British or French. Many Americans felt that if the archetypal British figure in China was a red-coated soldier sacking the Summer Palace after the Boxers were defeated, then the American equivalent was a missionary or a sympathetic writer such as Pearl S. Buck. This image was only partially true at best, as the US shared in the spoils of imperialism; American missionaries and businessmen alike were protected by the hated system of extraterritoriality. Still, some Chinese persuaded themselves that the flattering image of the US as a champion of anti-imperialism put that nation firmly on China's side.
Perhaps the most powerful Americanophile was Song Meiling (Soong Mayling), often known as Madame Chiang Kai-shek, the wife of the leader of China's Nationalist government from 1927 to 1975 (on the island of Taiwan after 1949). Song came from a wealthy Chinese diaspora family and was sent to Wellesley College to improve her knowledge of American customs and the English language. After war broke out between China and Japan in 1937, she lobbied ceaselessly in the US for American entry into the war in Asia (''China - first to fight!'' read the posters intended to shame the neutral American public), and she knew the power of the unexpected gesture. ''Two baby pandas arrived at the Bronx Zoo just after the Pacific War commenced,'' Bickers notes drily, ''heralded as 'furry emblems of China's gratitude' for the work of the United China Relief.''
Song Meiling has frequently been dismissed as a glamorous butterfly, and stories of her extravagance abounded during World War II. Bickers notes that in 1943, she supposedly reserved an entire floor of the Waldorf-Astoria hotel. Yet she was probably the single most prominent woman in global politics of the mid-twentieth century (rivaled only by Eleanor Roosevelt). Song and her husband, boosted by the Luce press, embodied the idea of China as a rising nation that deserved its own sovereignty. Chiang was the China insider, convinced that China needed authoritarian military rule to modernize and to expel the foreigners, a view that he expressed in his 1943 tract China's Destiny. Song's fluent, if florid, English and charming Westernized manners allowed her to convey to American leaders like Roosevelt and Wendell Willkie that Nationalist China was a nascent democracy not unlike the US. Liberal democracy was not, in the end, the destination of Chiang's government.
Still, the primary aim of Asia's first power couple was achieved: when the war ended, no power - not Britain, the US, or Japan - would encroach on China's rights. But it was their Communist successors who would reap the benefit.
Even as China was trying to assert its independence from Western influence during the interwar years, its political and intellectual leaders began a new campaign to shape the way it was perceived by the West. In the 1930s, Japan was regarded as the most advanced and modern Asian power, and the increasing encroachment of Japanese troops into China was seen by at least some Westerners as no less than China deserved. To reassert its identity and resist Japan's influence, the Nationalist government promoted China's ancient culture. On November 28, 1935, the ''International Exhibition of Chinese Art'' opened at the Royal Academy of Arts in London. Featuring a nineteen-foot-high, 1,300-year-old statue of the Maitreya Buddha, the exhibition of treasures from the Forbidden City served to demonstrate that China was not simply a supplier of curios but played a major part in a changing global story of art. Bickers argues that by making the case for China's cultural longevity, the Nationalist government hoped to provoke sympathy for China's political weakness.
In the US, the Chinese turned to the movie industry for their cultural promotion. Even in those early days, Hollywood's producers sought to attract as many Chinese viewers as possible, and a thumbs-down from the censors in Nanjing (Chiang's capital) could mean significant losses. Frank Capra's The Bitter Tea of General Yen (1933), despite its relatively progressive attitude toward racial ''miscegenation,'' was roundly condemned by Chinese diplomats, and Columbia Pictures eventually issued an apology. Astonishingly, the studios permitted a Chinese diplomat (the twenty-three-year-old Jiang Yisheng, who had never been to America before) to be stationed in Hollywood to approve plots as they were developed. When a film version of Pearl Buck's best-selling The Good Earth was proposed, the Nationalists made it clear that ''the film should present a truthful and pleasant picture of China and her people''; that ''the Chinese government can appoint its representative to supervise the picture in its making''; and that ''all shots taken by MGM staff in China must be passed by the Chinese censor for their export.'' Similar preoccupations can be seen today. In the past decade, Hollywood blockbusters have frequently been edited to gain access to the highly lucrative Chinese market.(2) But as far as we know, no diplomat from the Chinese consulate in Los Angeles has been placed on permanent censorship duty.
Bickers takes us through the turmoil of Mao's Cultural Revolution (1966-1976), years when a foreign presence in China was not only unusual but actively unwelcome. In the late 1970s, after Mao's death, his successor Deng Xiaoping realized that China required knowledge of the outside world once again if it were going to strengthen its military and industrial capacity. What emerged and continues today was a China willing to embrace the outside world when it comes to trade and technology, while trying hard to keep foreign influence out of politics.
Bickers's book ends with the transfer of Hong Kong from the UK to China in 1997. A song by the Chinese pop-folk singer Ai Jing, entitled ''My 1997,'' celebrated the event. The song (available on YouTube) has its own touches of nationalist anger. About pre-1997 Hong Kong, Ai sings: ''He can come to Shenyang, but I can't go to Hong Kong.'' But it ends with an upbeat sense that Hong Kong's return might open up new horizons for China's youth.
That was twenty years ago. Recent events suggest that the future may involve closing borders for both China and Hong Kong. In August, three young Hong Kong activists from the 2014 Occupy Movement were sent to prison for trespassing and disqualified from standing for Hong Kong's legislature. In the same month, Cambridge University Press blocked access in China to articles on topics including Xi Jinping, Taiwan, and the Cultural Revolution in the electronic version of its journal The China Quarterly (though the decision was quickly reversed after protests from scholars and human rights activists).
Nationalist voices in China appearing in such newspapers as the Global Times may sound hysterical. But a simple assertion of liberal values will not sound convincing to a Chinese elite and a public mindful of the history of imperialism that the ''liberal'' West visited upon their country within living memory. Out of China, underpinned by extensive research in archives and written in warm and often witty prose, seeks neither to condemn nor celebrate the Western presence in China. Instead, it is an important reminder that even when our shared history is forgotten in the West, it is very much remembered - and sometimes resented - in Beijing and Shanghai today.
Recommended book: Out of China: How the Chinese Ended an Era of Western Domination, by Robert Bickers. Harvard University Press, 532 pp., $35.00.
(1) Hannah Beech, ''China Campaigns Against 'Western Values,' but Does Beijing Really Think They're That Bad?,'' Time, April 29, 2016, and ''China Slams Western Democracy as Flawed,'' Bloomberg News, January 22, 2017.
(2) Charlie Lyne, ''The China-fication of Hollywood Blockbusters,'' The Guardian, May 4, 2013.