The Might and Myth of Good To Great

I wrote this essay back in 2009 for 800-CEO-READ’s annual publication In The Books and realized recently the piece doesn’t exist anywhere online in its entirety. This post is a remedy to that problem.

2009 WAS A BIG YEAR for Jim Collins. The Boulder-based management guru came out with his new book How The Mighty Fall, his first new work in eight years, since the release of the four-million-copy bestseller Good To Great.

But that year also marked a turning point in the mood about Collins’ research-based methodology. A growing number of researchers began to question the ability to find “great” companies and assign reasons for their success. What follows is a chronicle of the debate.

We start with Drake Bennett, who wrote an outstanding article, “Luck Inc.: The 7 secrets of really, really lucky companies,” for The Boston Globe on April 12, 2009. The piece pulled together all the arguments and got the big players on both sides of the debate on the record.

NOTHING SUCCEEDS LIKE success – and few books succeed like books about success. Since the 1982 publication of “In Search of Excellence,” by the McKinsey consultants Tom Peters and Robert Waterman, the business-success book has grown from a genre into a juggernaut. The most influential among them, like “The Leadership Engine” and “Reengineering the Corporation” and “Built to Last” and “Good to Great,” sell millions of copies, are translated into dozens of languages, and shape the habits of managers from Detroit to Dubai. Their authors launch research firms and leadership education institutes, consult for Fortune 500 companies, and command $50,000 fees to speak at conferences and corporate retreats.

Even those people who haven’t read the books (or listened to the CDs or watched the DVDs) live in their shadow. Working at companies whose executives have embraced the wisdom of one or another success bestseller, many people have their daily lives shaped by its prescriptions, and grow dimly familiar with the catch phrases – like “stick to the knitting” and “level 5 leadership” – that the books have spawned.

While the particulars vary, the basic idea underlying the literature is the same: that the secrets of success can be divined by careful study of the institutional habits of the world’s business all-stars – companies that set the standard for their industries, that thrive in tough times, companies that win the war for talent, companies that are built to last. In the imperturbable focus on core values of Hewlett-Packard or the restless innovativeness of Google or the ruthless accountability of GE, there are lessons for us all.

At their most ambitious, these books purport to elevate the study of excellence to a science, its nuggets culled from exhaustive research and refined by painstaking analysis. Jim Collins, coauthor of “Built to Last” and author of “Good to Great,” likens what he does to physics. Readers of his books, he writes, have their eyes opened to the “immutable laws of organized human performance.”

But a few consultants and business school professors have begun to argue that much of this literature is, in fact, useless. Far from a science, they argue, the success literature is made up of little more than just-so stories in which authors use dramatic anecdotes – often drawn from previously published magazine profiles or interviews with the very executives whose performance is being examined – as evidence for “secrets” that amount to little more than warmed-over homilies. The critics accuse the success gurus of cherry-picking their evidence, of doing little to double-check their results, of circular reasoning, and of making elementary statistical errors.

“These books try to impress you with the massive amounts of data that they gather, but much of the data are not valid,” says Phil Rosenzweig, a professor at Switzerland’s International Institute for Management Development and author of “The Halo Effect,” a 2007 book that set out to debunk much of the business-success literature. “These sorts of data are seen through the lens of the company’s success. They don’t explain the company’s success, they are explained by it.” Along with Rosenzweig’s, the past few years have seen books by Robert Sutton and Jeffrey Pfeffer of Stanford arguing for a more truly evidence-based business-success literature.

And most recently, a team led by Michael Raynor, a researcher and consultant at Deloitte Consulting, has begun to argue that business-success books aren’t even particularly good at spotting success in the first place: When you take a closer look at the companies they study, the accomplishments of the vast majority are just as likely to be due to simple luck. It’s the equivalent of finding someone who flipped a coin seven times and happened to end up with seven heads and asking for her secret.

As a result, these critics argue, we may be learning the wrong lessons from the wrong companies. “When we look at the samples of great companies in most studies, by our measure, the companies that they call great by and large aren’t,” says Raynor. “The conclusions they come to are more a function of the researcher than the company.”

For their part, authors like Collins and Peters see such critiques as caricatures of their work, and defend their methodologies as the best one could hope for in distilling the endlessly complex contingencies of business into digestible lessons for working managers.

“It’s not the same kind of data that you would use to refute or confirm Einstein’s theory of relativity,” says Peters. “It’s exploratory social science research.”

Still, critics like Raynor believe we can do better, and they believe that their work will help pave the way to a more truly scientific study of success. But, at the same time, their critical insights also raise fundamental questions about how possible such a science is, and remind us just how much success – whether in business or in life – can be out of our hands.

It was perhaps inevitable that something like the contemporary business-success book would arise out of the early 1980s, an era defined both by a growing popular interest in the worlds of business and finance and, like today, a deep insecurity in American boardrooms.

“It’s around that time that business pages became more popular,” recalls Jacqueline Murphy, editorial director of Harvard Business School Press, which has published several business-success bestsellers. “People became more interested in investing, in finance, and in management as a topic.”

Running through “In Search of Excellence,” though, is also a note of reassurance for a powerful country that feared it had lost its edge. “There is good news from America. Good management practice today is not resident only in Japan,” Peters and Waterman write early in the book. To back this up, the two authors picked out 43 American companies with especially good reputations in the business world (they determined this, Peters later recounted, simply by asking around the offices of McKinsey) that had also delivered strong long-term financial growth. Among the book’s paragons are Hewlett-Packard, Boeing, Johnson & Johnson, IBM, and Caterpillar. Then Peters and Waterman, sifting newspaper and magazine articles about the companies and talking to the firms’ executives, set out to divine what set those companies apart from the mediocre and the merely adequate.

Their answer was a set of eight “attributes of excellence.” Number one was “a bias for action,” number two was “staying close to the customer,” number eight was “simultaneous loose-tight properties,” which is defined as “fostering a climate where there is dedication to the central values of the company combined with tolerance for all employees who accept those values.”

The book was an immediate sensation and went on to sell more than 6 million copies. But even at the time there were questions raised about its reliability: few of the companies Peters and Waterman showcased kept up their market-beating stock performance in the years after the publication of the book (though a good portion did later return to posting impressive gains). Many of them struggled, and a few, like Atari, Wang, and Data General, collapsed.

And the fact that Peters and Waterman had looked only at companies that they deemed successful – without comparing them with less excellent competitors – meant the authors were in little position to identify what factors mattered and which were irrelevant. They had no way of knowing, in other words, whether 43 utterly dysfunctional companies might be just as likely to be characterized by a “bias for action” and “loose-tight properties.”

Criticisms like these did little to dent the authors’ influence, especially that of Peters, who became the first of the high-profile management gurus and inspired a raft of imitators. The most famous of his heirs has been Collins, also a former McKinsey consultant. Collins’s own books – “Built to Last” was published in 1994, “Good to Great” in 2001 – address some of the criticisms leveled at “In Search of Excellence”; in particular he makes sure to include comparison companies to contrast to the exemplars he lauds in his books.

But he also makes bolder claims for the principles he unearths. In “Good to Great,” he lays out the road map for merely good companies to transform themselves into great ones, using as his case studies companies like Wells Fargo, Philip Morris, and Gillette. The process starts with having leaders with “a paradoxical blend of personal humility and professional will” who get “the right people on the bus,” instill a “culture of discipline,” and apply “technology accelerators.” Such elements, he writes, gain their power from “the enduring physics of great organizations.”

Recent years, however, have seen a proliferation of critiques of books like “Good to Great,” fueled in part by the same quantitative urge behind the push for evidence-based medical care. In last fall’s issue of The Academy of Management Perspectives, two separate articles took issue with “Good to Great.” One of them, by Bruce Resnick and Timothy Smunt, professors at Wake Forest University’s Babcock School of Management, found that if the 15-year window during which Collins looked at each of his “great” companies was moved by as little as a few months, the exceptional stock market performance that distinguished them all but disappeared.

Another line of criticism aims not at the math but at the sort of evidence that success books rely on. Much of that information, argues Rosenzweig, is biased. In many cases researchers rely on newspaper and business magazine articles and business school case studies, as well as internally produced documents and interviews with managers at the companies being evaluated, and the amount of data gathered can be impressive. These sources give the books their breezy, intimate, anecdotal feel.

But Rosenzweig argues that all of these sources can be easily tainted by whether or not a company is understood to be successful. The business press, for example, can hail a CEO as a genius when his stock price is up then turn around and assail him as a cretin when it drops. And even the perceptions of company insiders can be sensitive to how someone else tells them the company is doing. A host of psychological studies, for example, have shown the extent to which arbitrarily telling a person they have either excelled or failed at a task shapes their memory of the task: people told they succeeded are far more likely to remember their team as tightknit and their team leader as competent and inspiring. Those told they failed remember team squabbling and ineffectual leadership.

These tendencies, Rosenzweig argues, are only likely to be exacerbated by questions that ask them to explain the success of their company – one of Collins’s interview questions, for example, asks managers, “Can you think of one particularly powerful example or vignette from your experience or observation that, to you, exemplifies the essence of the shift from good to great at [good-to-great company]?”

The newest and perhaps most radical critique, though, comes from Raynor, of Deloitte, along with the Deloitte consultant Mumtaz Ahmed and the University of Texas business school professor Andrew Henderson. In a paper currently under review, the three argue that not only are business gurus bad at identifying the causes of success, they have no way of telling true greatness from mere luck – if enough people are flipping coins, someone is likely to string together an impressive run of heads. According to their analysis of 13 of the most influential business success books, three quarters of the purportedly great companies had track records that could just as easily have been explained by the vicissitudes of random chance – performances that looked impressive on first glance were simply akin to being the lucky person in a stadium full of coin-flippers. And if that’s your data, says Raynor, “you’re not inferring the underlying causes of great performance, you’re basically just imposing patterns on randomness.”

In response, Collins points to unpublished work he is doing that shows that luck does not, in fact, explain the difference between the winners and losers in his model. “There is, however, a significant difference in how the winners and losers view the role of luck – and therein will lie an absolutely fascinating chapter!” he writes in an e-mail. On a broader level, he objects to the characterization of his work as “success literature.” It is more concerned, he argues, with discovering why companies do not become great than with why they do.

Raynor and his coauthors, though, are not arguing that luck explains success, but that it masks it. And they’ve set themselves the tricky – and perhaps impossible – task of coming up with a set of metrics that avoid the pitfalls of subjectivity and self-fulfilling prophecy. Some of the factors they’re looking at are the timing of product launches and mergers and acquisitions and the degree of geographical diversity. These are, Raynor admits, measures that don’t yet add up to a rich portrait of an organization, and may not do much to anatomize its excellence or lack thereof.

Ultimately, argues Robert Sutton of Stanford, a lot of what people look for in advice books, whether in business or any other realm, isn’t so much advice as encouragement. And that can have value.

“There’s value in mastering the obvious,” he says. “If Jim Collins’s impact is to get people to do stuff that they know they should do already – facing the hard truths or being selfless or whatever – I certainly don’t think that’s a bad thing.”

There were a number of important responses to Bennett’s article. Tom Peters quickly responded on his blog in two separate posts (one and two); his primary message is captured in this excerpted paragraph:

The far more important point is—and this has apparently eluded 100% of our critics: Our readers are not idiots! They are pragmatic businesspeople or managers in the public sector, or pastors or priests or football coaches—the essence of the practice of management in all of these disciplines is indeed pragmatism! That is, our book (and others like it) do not appear in the “religion” section of the book store with the Bible on one side and the Koran on the other. Businesspeople, and police chiefs and fire chiefs and public works directors and elementary school principals, are neither looking for Biblical guidance nor full-blown academic theories of the Einsteinian or Darwinian or Newtonian sort. They are looking for … “a couple of good ideas” they can use now. They are far more capable than Bob Waterman or I or Gary Hamel or Warren Bennis or Rosabeth Moss Kanter of deciding what’s worth trying and what’s not in their peculiar context—and when to start trying whatever and when to stop.

Jim Collins talked a bit more about his unpublished turbulence research in the cover story for the 30th anniversary issue of Inc. magazine and in a profile in the New York Times. He said that there were differences between the successful and unsuccessful IPOs of the last forty years and the findings were “so intense”:

I’ve become a total paranoid, neurotic freak. It has shown me the importance of building in big shock absorbers. I keep a year’s operating budget in cash in the bank across the street all the time and run this place so that we could go an entire year without a penny of revenue. I learned that from reading about Bill Gates in the early days of Microsoft. I want to be able to say at any given time, “If we don’t get a penny for three years, we’ll be fine.” So, we can focus on our work.

Stanford Professor Bob Sutton also responded with more thoughts beyond his quote at the end of the article:

I had a pretty detailed conversation with Drake, and although most of it focused on the drawbacks of these kinds of books, he ended up quoting me as defending these books. I think he was completely fair, and in any case, I am on record many places raising concerns about the suspect methods used in both “In Search of Excellence” and “Good to Great”. But I have an especially ambivalent reaction to Good to Great — let me explain why. There are a lot of things that bother me about the book:

It is a very small and flawed sample. Most notably, we have 11 companies that used the practices that Collins celebrates, but the sampling strategy made it impossible to discover how many companies used these practices yet failed to make the leap to greatness.

The main method was retrospective — they would label a company as “great” and then look for articles and do interviews to determine why it happened. This is a quite biased method — if you ask someone to explain the secrets of their success, you get a certain kind of story that differs from if you ask them to explain why they failed (regardless of actual performance). Winners will report having better leaders, being more focused, and persistent — and trying to untangle what is part of the sensemaking process versus what really happened is tough.

“Good to Great” cites almost no prior research, even though there are literally thousands of more rigorous studies that are pertinent to claims in the book, especially studies of leadership. Indeed, knowledge accumulates one study at a time, and there are few if any definitive studies. So any author who claims or implies that he or she has done THE definitive study is immediately suspect — indeed, it is something that well-trained researchers never do, even Nobel Prize winners. I think that Collins needs to say — “this is just one study, we learned a lot from it, but it isn’t definitive…and it has flaws.”

Perhaps the biggest problem of all is that Collins makes bold and excessive claims based on the research; ironically, this book about the virtues of modest leaders reveals considerable hubris in its claims. Perhaps that is necessary to get a bestseller — but, as an example, Malcolm Gladwell would never make such claims. BUT despite all these concerns, what if Collins had actually reviewed and integrated rigorous research and had built a book based on that body of evidence? If he had done so, he could have found considerable support for his ideas in published peer-reviewed research. Although there is a good deal of randomness in the process, and Collins probably overstates the wallop packed by leaders, the fact remains that leaders do need help (it is a damn hard job), and the simple and compelling ideas in Collins’ book are probably mostly right and have probably helped a lot of leaders and managers. Spreading the message to leaders that they need to face facts and to be persistent and humble strikes me as a good thing, and also consistent with studies from diverse places.

So, although it is mediocre research, I think the message has done a lot of good. I just wish that Collins had shown more modesty…

As I think about this now, perhaps the most important standard for business books is that whatever basis is used to support the author’s advice should be stated clearly and not be overblown. I don’t expect a book by Jack Welch to be based on anything but his experience, and my favorite business book of all time, “Orbiting The Giant Hairball”, is based only on Gordon MacKenzie’s personal experience and opinions — but you know where the claims have come from.

Indeed, as I think of the books I have written, and things I am writing now, the lesson I take away is that my values and biases do affect what I write, but I also draw as heavily as I can on peer reviewed research, as that is so much of a part of my history and identity. So I am going to start making more clear that what I write is best seen as “evidence-based opinion.” I also think that is the most honest way to describe what Gladwell does so well and Dan and Chip Heath do too… management is a craft, requiring a complex mix of experience and evidence. Think of what great doctors do, it is much the same thing. If they ignore the evidence too much, they are making a big mistake, but they also need to take into consideration the particular case, what they and the patient want and value, and their clinical experience. To that end, as Pfeffer and I wrote in Hard Facts, we believe that management will always be a craft, but that evidence needs to play a bigger role in how the craft is practiced — so “evidence-based opinion” fits that perspective well.

Michael Raynor and his colleagues Mumtaz Ahmed and Andrew Henderson published the study “A Random Search for Excellence: Why ‘great company’ research delivers fables not facts” under the Deloitte banner. The 20-page report referred to in Bennett’s piece was published in April 2009. The white paper clearly lays out the methodology of the researchers and from multiple vantage points debunks previous techniques used to determine which companies were outliers from the rest. Raynor and his fellow researchers conclude:

These authors cannot be seen to have achieved what they set out to achieve because they were not studying what they said they were studying. Rather, just as patterns perceived in ink blots are seen by some to reveal underlying character traits, the secrets of success identified in what is in the end, at best, a randomly chosen sample from the right tail of a distribution almost certainly says more about the researcher than it does about the evidence.

This doesn’t mean, however, that you should necessarily dismiss the advice offered in existing success studies. The authors are savvy observers of the business world. Their recommendations can be useful, but more in the manner of fables than evidence-based advice. And we use fables very differently from science. For example, no one reads “The Tortoise and the Hare” and, faced with a chance to bet on such a race, chooses the tortoise. Rather, people take from this tale the idea that there is merit in perseverance while arrogance can lead to a downfall. Similarly, because the prescriptions of most success studies lack an empirical foundation, they should not be treated as how-to manuals, but as a source of inspiration and fuel for introspection. In short, their value is not what you read in them, but what you read into them.

It would be unfair to end this exchange without allowing a rebuttal from Mr. Collins, and a counter-argument does exist. Clearly aware of the growing criticism of both technique and result, Collins offers a series of research notes in the beginning pages of How The Mighty Fall. He mentions a lack of time to properly evaluate the meltdown of Fannie Mae and says that an evaluation should be done at a later date. He explains again his use of paired company sets, a hallmark of all of his research studies since Built to Last. He even splits hairs defending his use of historical research (a fatal flaw that Phil Rosenzweig points out in The Halo Effect) by saying he only uses evidence from the time of the event, before the outcome is known to be a success or failure. But the most significant clarification is this:

Correlations, Not Causes: The variables we identify in our research are correlated with the performance patterns we study, but we cannot claim a definitive causal relationship. If we could conduct double-blind, prospective, randomized, placebo-controlled trials, we would be able to create a predictive model of corporate performance. But such experiments simply do not exist in the real world of management, and therefore it’s impossible to claim cause and effect with 100-percent certainty. That said, our contrast method does give us greater confidence in our findings than if we studied only success, or only failure.

This admission is a significant departure from his widely quoted description of his past findings as “organizational physics.”

The bottom line is this: there is enough evidence now to force us to reconsider Good To Great as the pinnacle management book of this decade. As with most business books, we should look to it as a source of inspiration, not as a how-to manual. Good to Great is directionally correct, but it is hard to see the book as an organizational GPS for making your company great.
