Friday, April 18, 2014

Voskamp's 1,000 gifts

From the last point of Sunday's sermon from Dave Ramsey and his daughter Rachel, we were reminded of the importance of contentment, which is related to gratitude, which is related to humility. If you're in good shape on that point (and his other two points: that God owns it all and you're a steward; and the value of work), your finances and your approach to finances will be sound.

A similar message resounds, for all of life, in Ann Voskamp's One Thousand Gifts. Her writing is a reminder about the importance of gratitude and understanding the blessings we've received: salvation (and "prevenient grace") available to all of us bozos; "common grace" available to all, whether or not we accept the gift of salvation; providential grace as God works through people and events to accomplish His will. And "isn't one grace enough?" (p. 93)

Voskamp is particularly persuasive in finding grace in everyday moments. The book stems from a friend's challenge to write down 1,000 gifts (and by extension, to quit complaining).

Voskamp writes poetically; she is a pleasure to read. And she's good on doctrine as well, working in lessons about the original Greek. Throughout, she notes that the Greek words eucharisteo, charis, and chara are all related-- grace, gift, thanksgiving, gratitude. In particular, I had never connected the term Eucharist to gift and thanksgiving. Beautiful.

On the story of the 10 lepers in Luke 17, she relates that the 10th leper (the only one who returns to Jesus) is told that his faith has made him "sozo"-- usually translated "saved" (p. 38). The term "saved", biblically, is much richer than mere justification or the usual narrow use of "salvation". It means to be saved to the point of wholeness and wellness; to be redeemed; and so on.

She points to Christ's response to "failure" in Mt 11:25-- to give thanks (p. 36-37)-- and to His giving thanks in the midst of difficult circumstances (p. 35). In both the case of the Eucharist and prior to the resurrection of Lazarus, Jesus gives thanks. "Eucharisteo-- thanksgiving-- always precedes the miracle."

Reading the book came at a good time for me-- a relatively dry period. Likewise, I strongly recommend the book if you're in a phase like that. But it's good stuff whenever! In particular, it contains a vital message for parents to pass along to their children-- negatively, that the world does not rotate around them; positively, that they have so many reasons to give thanks.

Thursday, April 17, 2014

advice on college: research and teaching; secular and Christian; on-line, hybrid and traditional

Of course, I should start with the caveat that the following comments are necessarily generalizations. And perhaps the generalizations are not as accurate as I imagine. But these are my observations and inferences, from my first-hand and second-hand experiences.

Let's start with a key distinction: smaller vs. larger colleges. Small schools typically provide better "customer service" and have smaller class sizes. (For example, at IU Southeast, we only have a handful of classes over 45 students-- with a max of 70, I think. Large schools typically have many sections with hundreds of students.) Therefore, students are less likely to "get lost" and it's easier to get around. Smaller schools tend to be more conservative and to have less variance in the student body.

On the flip side, larger schools will provide more diversity (of many sorts), more opportunities as a student, greater access to grad schools, and greater networking after graduation. I would imagine that the quality of smaller schools has more variance-- since key hires (for good or for ill) will make a bigger difference.

To me, this is a key and under-rated consideration for choosing a college. If you are (or have) a student who would thrive with a more personal environment, then a smaller college is closer to wisdom. If the student "won't get lost" in a less personal environment, then they may endure some bumps at a larger university as they adjust, but should be fine.

Related to school size, there are key distinctions between research schools vs. teaching schools vs. hybrids. At research schools, teaching will get at least lip service (hey, we teach at universities, right?)--but maybe not much beyond that. In some cases, good teaching may actually be a liability for a professor's career-- taken as a sign that one is not spending enough time on research. (I know of one example.)

Beyond the incentives put forward by administrators, there are presumably differences in the preferences of the teacher/researchers at research schools. They find teaching relatively less attractive-- if not absolutely so. (For one thing, they're only going to teach a class or two per semester-- and that, often, to a small set of grad students.)

Research schools will often use grad students as teachers. Their lack of experience may be trumped by passion and energy. (I'm confident that this was often the case when I was at Texas A&M.) Some schools do not allow this. But at least for native speakers, grad students would often be an improvement.

A hybrid school expects and supports research. But their research expectations will be lower: fewer articles, a broader sense of acceptable quality (including lower-tier peer-reviewed journals, editor-reviewed articles, practitioner-based research, books, etc.), and a greater range of acceptable topics. Teaching quality is vital: at least competence is required; and excellent teaching can offset modest/minimal research. The teaching load would generally be three sections per semester, allowing a lot of time in the classroom, but some time for research as well.

(Pure) teaching schools will have little or no role for research. The professors will teach four (or more) sections per semester. They're attracting profs who are more interested in teaching. Beyond that, these profs are not all that interested in research (unless they see this as a temporary job and are looking to "move up")-- or may not even be all that competent to do research. Teaching prowess will typically be required; classrooms will have fewer students; and faculty will generally be more focused on mentoring.

One caveat to add here: the increased role of lecturers in many schools. Lecturers have a heavier teaching load (e.g., 4 sections per semester), typically focus on lower-level classes, have few if any opportunities for research, love teaching and are good at it. Many research-oriented tenure-track faculty have been concerned about this trend. But it makes complete sense to hire specialized, better, cheaper faculty to cover teaching, leaving research and higher-end teaching to those more qualified for those tasks.

As for "Christian" schools, it's difficult to define what that means, really. You can find plenty of Christian students at any school, at least in the Midwest and South. If one is looking at full-time ministry, I would strongly consider a Christian school; otherwise, this narrows one's possibilities post-grad quite a bit. Even for "sheltering" a child from worldly influences (or concern about their academic performance or otherwise handling all that new freedom), I would encourage them to stay home for the first two years instead-- and then consider transferring after that.

Finally, a note about on-line and hybrid courses: I'll blog on this separately, later. But in a word:  I'd be really careful with these opportunities. First, they're new-- and I can tell you from experience that it requires tons of effort to work out the bugs and do this well. Second, beyond that, some profs will respond by reducing effort, "covering" material and testing in a convenient way that generally reduces student competence. Third, it takes a remarkable amount of discipline for students to complete these courses-- even if they learn nearly as much through the process. I would not enter into these (particularly the on-line courses) without a strong sense that one has the requisite self-motivation.

Thoughts? Questions? Your experiences and wisdom on this?

Tuesday, April 15, 2014

The 1040 Turns 100

As it appeared in the newspaper...

The 1040 Turns 100

Editors: Please note April 15 peg (731 words).

by Eric Schansberg, Ph.D.

The 16th Amendment to the U.S. Constitution brought us the federal income tax in 1913. A year later, the 1040 tax form was born.

The 1040 had a modest debut but has grown impressively since. The original was so compact it was published on the front page of the New York Times. Today, it has hundreds of supplemental forms and thousands of instruction pages. The supporting tax laws now total more than four million words on 74,000 pages.

The growth of the 1040 matches the spread of the income tax itself. The original Internal Revenue Service (IRS) had 4,000 employees; now, there are 90,000. Less than 1 percent of Americans filled out a tax form in its first year; now, there are about as many filed returns as there are workers.
The initial 1040 imposed a tax of 1 percent on taxable income above a standard deduction of $4,000 for married couples (almost $100,000 in today’s dollars). The 1 percent tax applied to income up to $20,000 ($470,000 today) and a top tax of 7 percent was applied to taxable income above $500,000 ($11.5 million today). The top tax bracket briefly reached 94 percent during World War II, before settling in at 91 percent after the war. JFK dropped the top rate to 70 percent (on income earned above $1.5 million in today’s dollars) before Reagan reduced the top rate to 28 percent (on income earned above $60,000 in today’s dollars).
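For readers who like to see the arithmetic, here is a minimal Python sketch of the original 1040's math, using only the figures the column cites: a $4,000 deduction for married couples, a 1 percent tax on taxable income up to $20,000, and a 7 percent top rate above $500,000. (The actual 1913 schedule also had intermediate surtax brackets between those two rates, omitted here, so this sketch understates the tax on high incomes.)

```python
def tax_1913(income, deduction=4000):
    """Illustrative 1913 income-tax calculation using only the brackets
    the column mentions; intermediate surtax brackets are omitted."""
    taxable = max(0.0, income - deduction)
    tax = 0.01 * min(taxable, 20000)      # 1% on taxable income up to $20,000
    if taxable > 500000:
        tax += 0.07 * (taxable - 500000)  # 7% top rate above $500,000
    return tax

# A married couple earning $24,000 had $20,000 of taxable income,
# owing 1% of it: $200.
print(tax_1913(24000))
```

Note how a household earning less than the $4,000 deduction owed nothing at all, which is why under 1 percent of Americans filed in the first year.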

Not surprisingly, the growth of the 1040 has matched the growth in the size and power of government. In its first year, the income tax raised about $10 billion (in today’s dollars) and now raises more than $1.3 trillion annually. Interest groups lobby for exemptions, deductions and credits — part of a lobbying industry that benefits politicians and “the organized” at the expense of the general public.

It turns out that federal “payroll” (FICA) taxes on income impose a larger burden on most workers since those taxes are applied to every dollar earned (no deductions, exemptions or credits). Amazingly, those in working-poor households at the poverty line pay no “income taxes” but lose $3,000 to payroll-FICA taxes each year.
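A quick back-of-the-envelope check of that $3,000 figure, assuming a combined FICA rate of 15.3 percent (employee plus employer shares) and a poverty-line household income of about $20,000 (my illustrative number, not the column's):

```python
FICA_RATE = 0.153   # combined employee + employer shares (assumption)
income = 20000      # assumed poverty-line household income
fica_burden = FICA_RATE * income
print(round(fica_burden))  # about $3,000, matching the column's figure
```

Since FICA applies from the first dollar, there is no deduction to subtract before multiplying, which is exactly the point of the comparison with the 1040.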

Part of FICA’s burden is hidden because it looks like employers pay half of it for their employees. And its burden is more subtle since it is simply withheld from our paychecks. In this way, the 1040 is far worse. It’s rough enough to have the government take so much money from the half of the population who pay “income taxes.” On top of that, though, Americans spend more than a billion dollars and more than six billion hours on filing their 1040’s. If they’re going to take our money, can’t they do it more efficiently?

In the recent minimum-wage debate, one of the more reasonable arguments was that the policy hasn’t changed recently. If states or the federal government are going to insist on having a minimum wage, it should be updated regularly. Or better yet, it should be adjusted annually (“indexed”) to deal with the effects of inflation.

One could easily make the same argument about income taxes. We haven't had substantive federal income-tax reforms since the 1980s under President Reagan and a bi-partisan Congress. The number of tax brackets was reduced from 16 to two; marginal tax rates were reduced for everyone; and the tax code was finally "indexed" for the effects of inflation. Since then, many of their improvements have been reversed: top marginal tax rates have increased (to nearly 40 percent); the number of tax brackets has crept back up to seven; and the tax code has become more complex.

A “flat tax” on income could replace current income taxes and the flat FICA tax with a single marginal tax rate on all income earned above the poverty line (with the possible exception of charitable contributions). It could raise the same amount of money with far less cronyism, inequity and inefficiency.
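The flat-tax idea reduces to a one-line formula: a single rate applied to all income above an exemption set at the poverty line. A sketch, with an assumed 20 percent rate and a $25,000 exemption (both hypothetical numbers for illustration, not figures from the column):

```python
def flat_tax(income, rate=0.20, exemption=25000):
    """One marginal rate on all income above a poverty-line exemption.
    The rate and exemption here are hypothetical, for illustration."""
    return rate * max(0.0, income - exemption)

print(flat_tax(20000))   # below the exemption: owes nothing
print(flat_tax(125000))  # 20% of the $100,000 above the exemption
```

The simplicity is the selling point: no brackets, no schedule of deductions and credits, and thus far less room for the lobbying the column describes.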

Unfortunately, few of our current national political leaders seemingly have the courage for anything so bold. But talk of hope and change can rise again. As we enter electoral cycles in 2014 and 2016, perhaps the public will make it a priority to insist on more efficiency and equity in our federal income tax code.

Eric Schansberg, Ph.D., an adjunct scholar of the Indiana Policy Review Foundation, is a professor of economics at Indiana University Southeast.

Monday, April 14, 2014

book review of "Unbroken"

My friend Buddy recommended Hillenbrand's latest book, based on some of the book's themes and how much I enjoyed the movie Life of Pi. Both books feature long periods of survival on the open sea, as well as human courage, surprising turns of events, and wrestling with the extent to which Providence is at work in trying circumstances. (I have not yet read Martel's book and the movie based on Hillenbrand's book is due out later this year.)

Her biography of Louis Zamperini largely focuses on his time in military service during World War II. Before that, Hillenbrand highlights a few parts of his childhood and devotes most of her time to his career as a world-class distance runner. He held records in the mile; was an early threat to break the four-minute barrier; and is still the youngest American Olympic qualifier for the 5,000 meters.

As a bombardier in the Army Air Forces, Zamperini survived an attack in which his plane took 594 bullets. One of his later flights would crash into the Pacific Ocean. He and two other men survived the impact and spent 47 days on a raft. (Two of the three survived the journey, breaking the old record of 34 days at sea.) They drifted to an island controlled by the Japanese. Zamperini credits God's providence and their efforts to remain mentally engaged, peppering each other with questions.

He spends the rest of the war in a number of brutal POW camps. My only complaint about the book is the amount of time spent on the torture he endured (30% of the text). The descriptions are gruesome-- but far worse, quite repetitive. (For an account of a much more relaxed POW camp, see King Rat.)

How they survived is somewhere between a mystery, Providence, and a sustained hope based on Allied victory, stealing and sabotaging the Japanese as possible, and living one day at a time. Along the lines of the last factor, Zamperini said he would kill himself if he knew he had to go through those experiences again (321).

It was also surprising that they did not rise up and kill their most vicious captor, Watanabe, "The Bird". I kept imagining that the foreshadowing would lead to his demise (or at least the attempt). But they only made one modest attempt at it, trying to poison him.

After the war, the book picks up considerable pace and seems like a breath of (very) fresh air: a quick marriage to a younger woman, a tumultuous marriage/life as he struggled with alcohol, and his conversion at the famous Billy Graham crusade in Los Angeles (369-376). His conversion story is paired with earnest wrestling about forgiveness (377-379, 389-397). Zamperini found that he had truly forgiven his captors-- testing it by seeing them in person. 

Hillenbrand sets Zamperini's forgiveness against the contrast between German and Japanese POW treatment-- in terms of both torture and murder. Hillenbrand documents the differences in outcomes between German and Japanese POW's: 37% deaths vs. 1% (314-315, 346); thousands dying in forced marches and labor camps; "kill orders" to murder POW's when their positions were being over-run (272-273); and much higher hospitalization rates for those who did return home.

In part, Hillenbrand portrays this as due to the Japanese belief that surrender was shameful (291-292). One sees this in the kamikaze missions and their treatment of POW's. Beyond that, it gives credence to the belief that Japan would not have surrendered without a full-scale invasion-- or the nuclear bombs that seemed to make resistance futile.

Two other thoughts here: 1.) Years ago, it struck me that many men would have died in an invasion of Japan. (I've read estimates of one million men.) And those men would not have been fathers for their children or their future children. It would have been a devastating loss for America. One of those men would have likely been my wife's grandpa. How sobering. 2.) On reading the book, it occurred to me why hatred of the Japanese would have been much greater than hatred of the Germans. In addition to racial differences, their prosecution of the war was despicable-- and, outside the supernatural forgiveness experienced by people like Zamperini, would naturally have resulted in hatred of the Japanese.

-The man who designed the Olympic Village for Berlin in 1936 was a Wehrmacht captain and Jewish (31, 37). He committed suicide after learning that he would be decommissioned once he had served his propaganda role.

-After the Japanese attacked Pearl Harbor, Americans worried about Japan advancing as far as Chicago (52-54). There were air-raid alerts in San Francisco; trenches were dug along the California coast; and schools in Oakland were closed. Moreover, Japan conquered a bunch of territory that day and the next, running into trouble only at Wake.

-Not surprisingly, battle deaths were dwarfed by deaths from friendly fire, weather, and especially accidents (61, 80).

-Again, as in other books on the war I've reviewed by Ambrose and Atkinson, soldiers were noted for their sexual immorality (63, 67, 317; see also: 218's fortune-teller).

-Hillenbrand details Paul Tibbets dropping the first atomic bomb (299). Tibbets carried cyanide in case he was captured. The bomb was 12 feet long and weighed 9,000 pounds. After dropping the bomb, Tibbets turned as hard as he could and dove to pick up more speed. They were not sure that 43 seconds was enough time to get far enough away from the explosion.

Thursday, February 20, 2014

economics vs. the EPI letter on the minimum wage

The EPI letter:

July will mark five years since the federal minimum wage was last raised. We urge you to act now and enact a three-step raise of 95 cents a year for three years—which would mean a minimum wage of $10.10 by 2016—and then index it to protect against inflation. Senator Tom Harkin and Representative George Miller have introduced legislation to accomplish this. The increase to $10.10 would mean that minimum-wage workers who work full time, full year would see a raise from their current salary of roughly $15,000 to roughly $21,000.

So far, so good. Indexing would be a nice move forward, assuming that we're going to have a minimum wage (MW). They go "normative" in recommending a policy with an interesting array of benefits and costs, but ok. They ignore far better policy choices in the same realm, allowing politics and pragmatism to get in the way of an opportunity to educate.

These proposals also usefully raise the tipped minimum wage to 70% of the regular minimum.

70% is arbitrary; why not 100%? "Useful" is an odd choice of terms. Servers would find their compensation shifted from tips to wages. It's useful that there would be less tax evasion among servers. But some would lose their jobs.

This policy would directly provide higher wages for close to 17 million workers by 2016. Furthermore, another 11 million workers whose wages are just above the new minimum would likely see a wage increase through “spillover” effects, as employers adjust their internal wage ladders.

The economists are correct to point to the spillover effects-- by the way, the most compelling reason why unions like this proposal. (See: South Africa and a connection between the MW and racism.) And the economists are assuming no job loss-- benefits only, so far. They could address/fix that below, but unfortunately, they won't.

The vast majority of employees who would benefit are adults in working families, disproportionately women, who work at least 20 hours a week and depend on these earnings to make ends meet. At a time when persistent high unemployment is putting enormous downward pressure on wages, such a minimum-wage increase would provide a much-needed boost to the earnings of low-wage workers.

Wrong. Yes, for those who keep their jobs, things would get better. But only a small percentage of MW workers are heads of household.

In recent years there have been important developments in the academic literature on the effect of increases in the minimum wage on employment, with the weight of evidence now showing that increases in the minimum wage have had little or no negative effect on the employment of minimum-wage workers, even during times of weakness in the labor market.

Wrong/over-stated. There is some debate in the literature on the size of the employment effects and the more subtle negative effects-- in the private sector. But there is little evidence about increases in the MW of this magnitude-- and the evidence available is quite sobering. With the initial MW in the 1930s, Puerto Rico was quickly given a waiver after the MW devastated its labor market. And in more recent history, other U.S. territories were given exemptions from the "federal" MW law, anticipating the same problems.

Research suggests that a minimum-wage increase could have a small stimulative effect on the economy as low-wage workers spend their additional earnings, raising demand and job growth, and providing some help on the jobs front.

Wrong, wrong, wrong-- and aggravatingly stupid. How can an artificial (price or) wage increase in competitive markets make things better overall? If this is good for an economy, why not increase the MW to $20 or $50 per hour? You can argue for a higher MW, but keep obviously-stupid stuff like this out of your mouth.

Tuesday, February 18, 2014

did you know that Jackie Robinson wrote a book?

Baseball Has Done It includes comments by Jackie Robinson (JR) as well as his narration accompanying a series of mini-interviews with players, coaches, and owners. The players range from stars to the relatively unknown. The interviewees describe a range of experiences-- on how things were growing up and how things were for them in baseball. The interviews depict a wide range of views-- on how to handle the difficult situations they encountered with newly-integrated baseball and the state of race relations in America at the time of the interview.

JR graciously notes that he was not the first "Negro" to play in the major leagues: Fleetwood Walker and his brother Weldon played for the Toledo MudHens in 1884 (30). And he notes that the same sort of discrimination happened in other fields of entertainment. He cites Florence Mills who had the #1 song in 1925 but was never allowed to sing in a white establishment. "The 1920s were called the Golden Age of sports, but no Negro was allowed to face Jack Dempsey in the ring, Bill Tilden on the tennis courts, Bobby Jones on the links; no Negro pitcher faced Babe Ruth." (40)

JR said regular season fans could be hostile, but Spring Training was a lot rougher, esp. in Florida. Ironically, Los Angelenos were "in certain respects...less understanding than Southerners and even more were openly hostile." (41) And his "Southern teammates were more reliable than some Northerners. I knew where I stood with them. After they knew me better, they were regular guys on the field. The Northerner might give you the glad hand, but after he discovers that you have as much ability...he's a different person altogether." (73) One hears this sort of thing off-and-on even today-- about overt vs. covert racism; honest dealings vs. condescension.

In response to all of this, JR famously chose to "turn the other cheek" for two years-- and then was given latitude by Branch Rickey in 1949 to fight/stand for his rights, etc. Even so, he clearly "picked his battles", not letting "small things" set him off. Along the way, JR's teammates were supportive. He gives details, including the famous moment where PeeWee Reese put his arm around him at second base, during a game (55, 80).

JR had strong ideas about the best way to proceed, but generally respected those who chose other paths. (In contrast, he noted that Roy Campanella had been critical of his approach [123]. And there were limits to JR's tolerance: he critiqued Willie Mays and Maury Wills for failing to speak in general and submitting to an interview for his book in particular [208-209].) 

JR spends a big chunk of the book on Branch Rickey: his initial interview with him (which included Rickey yelling racial taunts at him to see how he would respond); his strong words of encouragement to JR; his desire for his team to win; the history of his thought process that led up to the big decision; the wisdom of the decision to have JR play on the farm team in Montreal.

JR also talks about Bill Veeck-- who combined a quirky preference for novelty with a desire for profit and a winning team. Veeck was the first owner to integrate an American League team-- with Larry Doby (68-69). Next up were the Giants with Hank Thompson and Monte Irvin (chapter 8). Late in the book, JR describes the integration of Latin players, including Minnie Minoso and Vic Power.

Within a decade, integration had exploded and life in baseball was forever changed: "Colored players" on every team by 1959; in 1963, "51 Negroes and 24 colored Latins"-- 15% of the players-- including five batting champs, six Rookies of the Year, and six MVP's. Ten of the top eleven batters in the NL (and 3 of the top 10 in the AL) were non-whites.

Of course, as a labor economist, I find the economics of discrimination to be fascinating. For example, in competitive markets, it is costly to engage in personal discrimination: avoiding good workers, employees, tenants, etc. because one wants to indulge odd tastes and preferences about traits unrelated to "productivity". Moreover, in a competitive setting, if you discriminate and I don't, then I have a competitive advantage over you-- and will find it difficult to stay in the market.

This is one of many reasons why producers would seek restrictions against competition. JR fingers Cap Anson for leading a collusive arrangement against Negroes in baseball. In this, we find the common desire to lock out competitors, bolstered by government. Anson didn't want to compete as hard-- and his racism was correlated with that desire-- so he fought for segregation and used peer pressure and regulation to accomplish his goals. Likewise, some owners were bigots-- and didn't want to compete with non-bigots (or the less-bigoted)-- so they sought segregation regulation from the industry (and the government that condoned it).

JR points to some fascinating examples of this. In 1901, John McGraw tried to pass off a Black player as a Native American (32), but the plan didn't work. JR repeatedly claims that baseball led the way in integrating many restaurants (56), hotels (86, 100), movie theaters (100), and entire smaller Southern towns (111), clubs and neighborhoods (185). These establishments tried to retain segregation, but they started to lose customers and reversed course.

JR saw it happen quickly-- when the tide turned. But he imagined this, at least nationwide and in the hearts of people, as a slow and steady process (218). He didn't trash law, but thought it had an exaggerated importance, especially when it would not be enforced (149). I think he would also point to what might be called the "Spike Lee-- Do the Right Thing" effect. In that movie, the Italians are bigoted toward "Blacks", but their favorite entertainers were all Black.

As an aside, JR provides a sad commentary on discrimination and its impact on building human capital: "My brothers, their friends and acquaintances, all older than me, had studied hard and wound up as porters, elevator operators, taxi drivers, bellhops. I came to the conclusion that long hours over books were a waste of time. Considering my situation, I was not far wrong." (44)

Finally, JR's discussion of language is interesting and useful: "Nigger is offensive only when employed in a derogatory sense...We object to boy or girl in reference to adult Negroes...Girl, as applied to a woman who is a mother or even grandmother, is particularly insulting." (63-64) In this, JR points to the intent behind the language. While I understand why some people find it bothersome for African-Americans to use the term "nigger" and while their use of that term undermines their critique of it, they still have a point: it is different, depending on the context in which it's used. Along the same lines, JR's critique of "girl" is now (quite) dated-- as the term has become quite popular, even across racial lines.

A h/t to Matt Welch, whose excellent Reason essay put me onto the book.

The misleadingly titled Baseball Has Done It was not some kind of gee-whiz celebration of the sport’s integration. It was a forceful attempt to document the human struggles involved in that monumental project, through first-person accounts from black and white players and coaches ranging from Branch Rickey to eventual homerun champ Henry Aaron to accused racist Alvin Dark. Robinson’s explicit aim was to apply lessons learned from baseball to the raging civil rights debate of the day.

Reading the book in 2013 doesn’t just deliver a sharp slap of a reminder about how disgustingly racist much of this country still was within recent memory. (Black players still routinely faced “whites only” public accommodations in Florida during the 1960s, for example.) It also calls into question just why a contemporaneous history of great ballplayers discussing their struggles faded into immediate obscurity while Glory’s paean to segregation-era ball rocketed to instant fame...

Freezing Jackie Robinson in 1947 amber also lets baseball—and society—off the hook for all the governmental and private racism that was still actively poisoning the country two decades after Branch Rickey’s great experiment. Better to remember that one magical year than dwell on all the different southern minor leagues that were still being integrated well into the 1960s. When your face is unlovely, it’s always more fun to look at old photographs than the bathroom mirror.

Perhaps the most surprising part of Baseball Has Done It is Robinson’s report that during his Hall of Fame induction ceremony in 1962, “No one mentioned that I was the first Negro in the Hall of Fame, or that another bastion of prejudice had fallen. No one was thinking about such things that day.” He says this as a point of pride, that the quality of his performance—the content of his baseball character—was evaluated on its own merits and found victorious. Maybe one day that can again be true.  

Monday, February 17, 2014

Atkinson's WWII books

I've read the first two of Atkinson's "Liberation Trilogy" on World War II. (Two out of three ain't bad, right?) An Army at Dawn (AD) covers the war in North Africa (1942-43) and The Day of Battle (DB) covers the war in Sicily and Italy (1943-44). I was about as familiar with World War II as the average bear. In other words, I knew virtually nothing about the first two parts of the European campaign-- the warm-ups and tightening-the-noose that were the African and Southern European campaigns.

The African campaign was useful for clearing the Mediterranean and to provide a base of operations to attack Italy. The Italian campaign was useful for knocking out the Italians, diverting German resources, and providing us with a place nearer to Germany's doorstep. Atkinson makes clear that, as well, Africa and Italy were vital for revving up the American war machine (production), training soldiers, and working out (some of) the kinks in everything from supply processes to command structure.

A key theme is the role of Africa (and then Italy to a lesser extent) in the development of rookie soldiers and their leaders. At least initially, "war was fought by ignorant armies on a darkling plain" (AD, 116). "A callow, clumsy army had arrived in North Africa with little notion of how to act as a world power. The balance of the campaign-- indeed, the balance of the war-- would require learning not only how to fight but how to rule" (AD, 159). In a way, the early parts of the African campaign were deceiving since victory (over the French) was relatively easy (AD, 160). But that would change, soon enough. As such, Atkinson describes how the troops "matured" from naïve to hardened and cynical to "hating" the enemy (AD, 461-463).

Without these experiences, the European campaign (or an earlier European campaign)-- against the best German troops-- would have been a logistical, leadership and fighting disaster (AD, 377, 539-540). Along the way, the Allies had poor strategy-- a combo of both sins of commission and omission; foolhardy attacks and failures to pursue. Two surprising (and counterproductive) angles were that the leaders tried to combine troops of different nationalities and that the leaders (especially the Brits) valued personal glory over better corporate outcomes (AD, 276, 373, 403, 499).

But the Allies, particularly America, had overwhelming production (AD, 413-415; DB, 252). Much of this was the result of diverting effort from personal consumption (forced rationing) and peacetime industrial production (DB, 8-9, 450). Given the material advantages, in one sense, the war became a matter of competitive attrition: the Allies inevitably wearing down the Axis vs. the Allies tiring of trying to wear them down. (See: DB, 254, 582-583. See also: North vs. South in the American Civil War.) This led to unintended consequences, such as the development of the bikini (DB, 9)! And quite surprisingly, production began moving back toward normal in 1944 (DB, 313). In one sense, the war was "a struggle not between rival ideologies or opposing tacticians, but between systems-- the integration of political, economic, and military forces needed for sustained offensive power" (452). Of course, the "ideologies" included politics and economics-- impacting our ability to produce (vs. Germany), but I know what Atkinson means.

A series of small observations:

--There was much more back and forth than I had realized on what to do with Japan: short-term vs. long-term; and whether to focus on them vs. work with England to deal with Germany/Italy (10-18, 289-290).

--The initial landing in Africa (airborne and amphibious) is virtually unknown, but rivaled the D-Day landing in relative size (at the time), in its overall importance to the war (without it...), and in its ambition (31, 87, 88).

--The amazing (and often senseless) troop loss in World War II. (See also: World War I.)

--Two key battles: The Americans almost losing their beachhead at Salerno, including mixed decisions on whether to evacuate or not (DB, 226ff). And the French General Juin leading his troops to an important victory in a large-scale battle that turned the tide, late in the Italian campaign (DB, 511ff).

--Rome was more psychological than tactical per se. Not much was gained once the Allies had a substantial beachhead/occupation in Sicily and then Italy. But it was good for morale-- there and at home-- to conquer Rome.

Not surprisingly, Atkinson gives space to develop key characters:

Patton goes from back-burner to celebrated to seemingly buried and headed for Western Europe. A year later, he was dead. "He was a paradox and would always remain one...Well-read, fluent in French, and the wealthy child of privilege, he could be crude, rude, and plain foolish." (AD, 35-36) "But the caricature of a raging martinet failed to capture Patton's nuances. Few officers had studied the art of war with greater care." And "He had proposed riding his horse up the stairs and onto the terrace of her house." (DB, 45)

But Eisenhower is the most important figure-- particularly his strengths/weaknesses and his development into the general who would emerge to lead the Allies to victory in Western Europe. (AD; 59-60, 411-412) At the end of the African campaign, Atkinson writes "No soldier in Africa had changed more-- grown more-- than Eisenhower. He continued to pose as a small-town Kansan...retained the winning traits of authenticity, vigor and integrity...displayed admirable grace and character under crushing strain. But...naivete provided a convenient screen for a man who was complex, shrewd and sometimes Machiavellian." (AD, 533)

Others make smaller but noteworthy appearances: key military leaders such as Henry Hewitt, Terry Allen, Lucian Truscott, Mark Clark, and Omar Bradley. Teddy Roosevelt's son was a compelling figure. (See: AD, p. 86, for how little TR expected of him!) Audie Murphy's storied military career gets quite a bit of play. Newsman Ernie Pyle pops up over and over again. And I didn't know that FDR had been so adamant about aiming for the unconditional surrender of the Axis from the beginning (AD, 293-294, 298). In this, he sounded like Reagan with the USSR.

Others make cameo appearances: for example, Colonel Claus von Stauffenberg who tried to oust/assassinate Hitler (AD, 464-465); "Kilroy"-- who had always already been "here" (AD, 517); novelist Joseph Heller and actor Jimmy Stewart (497).

A few other topics of interest:

1.) The sexual immorality and drunkenness of many troops in WWII comes up quite a bit (p. 39, 195-196, 435, 462-463 in AD; p. 29, 30, 136, 175, 247, 321, 446-449 in DB; see also: AD's 5, 462-463 on rape, murder, etc.; and DB's 308, 375 on gluttony by leaders.) "The Arab soldier is interested in just three things: women, horses and guns. The American soldier is the same, except that he doesn't care anything about horses and guns." (DB, 529). I don't bring this up to denigrate our troops. People-- particularly the young, away from home, influenced by peer pressure-- can do rough and even nasty things. On top of that, war presumably brings out the worst (and best) in people. But I find it interesting in light of the claims about 1950s morality-- that the 1950s represent some high-water mark (or at least the end of that tide)-- and the implications of those claims. (Stephen Ambrose makes similar observations in his work.)

Atkinson also gives us a few glimpses into religion directly. Ike believed that "the Almighty would provide him with a decent set of cards. [But] he appeared not to share the metaphysical feeling that God owed him anything specific." (DB, 50). But Ike's lucky coins also point to superstition (246). (See also: Clark's 4-leaf clovers [DB, 183] and Truscott's Thomas Jefferson "Bible" [DB, 385].) Atkinson also provides a picture of a baptism (DB, 428; see also: Randall Harris' prayer [DB, 70]). Given so few mentions, I'd guess that this is not a point of emphasis/interest for the author.

2.) In DB, Atkinson explores a number of "Prisoner's Dilemmas". The classic reference is to two prisoners who are separated and incentivized to confess-- breaking an (implied) agreement that would otherwise be in their joint best interest. Both individuals would be better off cooperating with each other, but given the right pay-offs, each has a dominant incentive to cheat on the other-- so both end up worse off. The term is a big deal in "game theory" and economics-- in settings with imperfect information and less-than-competitive markets-- where collusion can be useful, at least on paper.
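For the record, the dilemma's payoff structure can be sketched in a few lines of Python. The sentence lengths here are the standard textbook illustration, not numbers from Atkinson:

```python
# The classic Prisoner's Dilemma (years in prison; lower is better).
# These payoffs are the usual textbook illustration, not from the book.

# payoffs[(my_move, their_move)] = my sentence in years
payoffs = {
    ("stay_silent", "stay_silent"): 1,   # both cooperate with each other
    ("stay_silent", "confess"):     10,  # I stay silent, they rat me out
    ("confess",     "stay_silent"): 0,   # I rat them out and go free
    ("confess",     "confess"):     5,   # both defect
}

def best_response(their_move):
    """Return my sentence-minimizing move, given the other prisoner's move."""
    return min(["stay_silent", "confess"],
               key=lambda my_move: payoffs[(my_move, their_move)])

# Confessing is a dominant strategy: it is my best response no matter
# what the other prisoner does...
assert best_response("stay_silent") == "confess"
assert best_response("confess") == "confess"

# ...yet mutual confession (5 years each) is worse for both than
# mutual silence (1 year each). That is the dilemma.
assert payoffs[("confess", "confess")] > payoffs[("stay_silent", "stay_silent")]
```

The same logic drives Atkinson's wartime examples: each side (or each soldier) can gain by defecting from an implicit arrangement, even though both sides are worse off once trust breaks down.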

Examples? Soldiers were told that the other side would "cut off the balls" of prisoners (116). Atkinson shares the story of a Lt. Colonel who "repeatedly lectured" that "a captive can't fight", exhorting his men to fight to the death-- before surrendering himself (116). News reporters agreed to kill a story that Patton had slapped a soldier (170), but the story broke a few months later (296). Soldiers were supposed to treat enemy soldiers well, but sometimes it was a little too inconvenient: "A soldier told to escort a captured German officer down the mountain soon reappeared. 'The son of a bitch died of pneumonia.'" (283) And apparently, the Germans had white-flag ruses (474), which led to a general distrust on both sides and a reduction in what would have been an optimal arrangement for the soldiers. Likewise, there was distrust about the use of (forbidden) mustard gas, which seems to have been caused by the U.S. (271-278).

3.) Miscellaneous topics: segregation (DB, 381ff); typhus and DDT (DB, 448); the mistreatment of Jews in Rome by the Germans (DB, 475-476); the eruption of Mt. Vesuvius in the middle of the war (DB, 483ff)!

CBO, ACA, Casey Mulligan, and empirical work

From the WSJ, a look behind the scenes at the CBO (on the ACA/ObamaCare): smart people who are generalists and apparently let science do what science does-- as they listen to researchers/specialists who necessarily know more than they do and self-correct...

"...the CBO—Congress's official fiscal scorekeeper, widely revered by Democrats and Republicans alike as the gold standard of economic analysis—reported that by 2024 the equivalent of 2.5 million Americans who were otherwise willing and able to work before ObamaCare will work less or not at all as a result of ObamaCare. As the CBO admits, that's a 'substantially larger' and 'considerably higher' subtraction to the labor force than the mere 800,000 the budget office estimated in 2010...Mr. Mulligan's empirical research puts the best estimate of the contraction at 3%. The CBO still has some of the economics wrong, he said in a phone interview Thursday, "but, boy, it's a lot better to be off by a factor of two than a factor of six."...

The CBO works in mysterious ways, but its commentary and a footnote suggest that two National Bureau of Economic Research papers Mr. Mulligan published last August were "roughly" the most important drivers of this revision to its model. In short, the CBO has pulled this economist's arguments and analysis from the fringes to the center of the health-care debate.

For his part, Mr. Mulligan declines to take too much credit. "I'm not an expert in that town, Washington," he says, "but I showed them my work and I know they listened, carefully."

And then a nice insight into how economists think (as economists vs. economists in their potential roles as public policy analysts, ideologues, etc.)

Mr. Mulligan reserves particular scorn for the economists making this "eliminated from the drudgery of labor market" argument..."it looks like they're trying to leverage the lack of economic education in their audience by making these sorts of points." A job, Mr. Mulligan explains, "is a transaction between buyers and sellers. When a transaction doesn't happen, it doesn't happen...I can understand something like cigarettes and people believe that there's too much smoking, so we put a tax on cigarettes, so people smoke less, and we say that's a good thing. OK. But are we saying we were working too much before? Is that the new argument? I mean make up your mind. We've been complaining for six years now that there's not enough work being done..."

The larger betrayal, Mr. Mulligan argues, is that the same economists now praising the great shrinking workforce used to claim that ObamaCare would expand the labor market. He points to a 2011 letter organized by Harvard's David Cutler and the University of Chicago's Harold Pollack, signed by dozens of left-leaning economists including Nobel laureates, stating "our strong conclusion" that ObamaCare will strengthen the economy and create 250,000 to 400,000 jobs annually. (Mr. Cutler has since qualified and walked back some of his claims.)

Mr. Mulligan is uncomfortable speculating about whether the benefits of this shift outweigh the costs. Perhaps the public was willing to trade market efficiency for more income security after the 2008 crisis. "As an economist I can't argue with that," he says. "The thing that I argue with is the denial that there is a trade-off. I argue with the denial that if you pay unemployed people you're going to get more unemployed people. There are consequences of that. That doesn't mean the consequences aren't worth paying. But you can't deny the consequences for the labor market."