In 1924, a young quality engineer named Joseph Juran went to work for Western Electric at their Hawthorne Works facility in Cicero, Illinois. If you were born after 1970, you likely won't recognize, or even know of, Juran's employer. Western Electric was the manufacturing arm of AT&T, responsible for supplying the entire technological platform that transmitted telephone calls in the United States for most of the 20th century.
Juran's hiring had to do with Western Electric's growing installed base. With much of the telephony infrastructure buried in the ground, the quality of that equipment took on great importance, as digging switches back out of the ground was expensive and time-intensive. Engineers devoted enormous effort to improving the reliability of the transmission infrastructure.
The Hawthorne Works turned out to be a leader in this regard, an incubator for some of the most important management insights of the twentieth century. Walter Shewhart introduced the concept of statistical process control the same year Juran joined the company. Up to that point, the standard practice was to test every unit at the end of assembly to be certain it worked before shipping it into the field. Shewhart introduced control charts that could monitor the process itself and identify which variables influenced the end quality of the product. (Shewhart's work also showed that when defects were detected, people often overreacted and made problems worse.)
Another set of studies started around the same time. Engineers decided to test how lighting levels affected worker productivity. Small changes, either up or down, created short-term increases in productivity, but nothing that ever lasted. Further experiments were conducted with changes to pay rates, break times, and work hours. It took thirty years for Henry Landsberger to correctly identify that the subjects of these experiments were responding favorably to the fact that they were being watched and paid attention to. We now know this as the Hawthorne Effect.
Juran himself came to a series of important conclusions in his nearly two decades with Western Electric. His most important and well-known observation is one we have all experienced at one time or another. When the root causes of a given problem are categorized and sorted, "a relative few account for the bulk of the defects." While some strongly associate this finding with manufacturing, Juran pointed to similar phenomena managers experienced in looking at the causes of employee absenteeism or shop floor accidents.
This insight has been given several names over the years: the 80/20 Principle, the Vital Few and the Trivial Many. But for the most widely used moniker, we need to wind back the clock twenty years and travel back across the Atlantic.
In 1896, an Italian economist named Vilfredo Pareto began publishing the lecture notes he was using to teach classes at Lausanne in Switzerland. It would take two years and three volumes for Pareto to fully release Cours d'économie politique, a precursor to his 1906 landmark work Manuel d'économie politique.
Economics as a field of study was just coming into its own, and the same basic concepts still taught in college economics courses were formed during this time. The thesis that people make rational choices was presented and accepted. The motivation for these actors crystallized with the theory that individuals maximize utility and firms maximize profits. Pareto himself contributed the idea that prices reach a point of equilibrium in the tug between supply and demand, but he made an even more important observation in Cours, one that moves him beyond his cohort of nineteenth-century neoclassical economists.
In his lectures, Pareto had been pointing out that land ownership was significantly skewed toward a small portion of the population. His research showed that 80% of the land was in the possession of just 20% of the people. Further research indicated similar patterns in other countries, such as England, and that the same skewed distribution held for income as well.
Pareto died in 1923, never aware of the impact that insight would have.
In the late 1930s, Juran was promoted to corporate industrial engineer for Western Electric. Among his duties was sharing best practices with other companies. One of his visits took him to General Motors, where he had a chance meeting with the manager in charge of executive compensation. The manager shared with Juran a model of salary distribution that matched the Italian economist's findings. This was Juran's first exposure to Pareto's work, and it would stick with him for a long time.
Juran, like many, spent the 1940s in support of the war effort, taking a job in the government as an administrator. The six-week "temporary" assignment lasted four years. After leaving government, Juran pursued teaching and speaking.
In writing his first book, Quality Control Handbook, he needed a shorthand description for a section titled "Maldistribution of Quality Loss." Juran showed a variety of graphs displaying the 80/20 phenomenon and under one of them credited Pareto. Without the quality movement of the 1980s and Juran's rise to become one of the period's gurus, it is unlikely we would be using terms like Pareto's Law or Paretian distributions. Pareto's Cours was never even translated into English from the original French.
Pareto and Juran always emphasized the importance of those Vital Few, but what if the Trivial Many weren’t trivial?
The standard assumption in the creation of most products and services is that demand will appear at random intervals, spread out evenly throughout the day. The web server is designed to handle an average number of hits each day. Cellphone towers are erected to provide coverage based on the average number of calls users make, given the demographics of the area. Albert-Laszlo Barabasi believes this thinking is flawed.
Barabasi is a scientist at Notre Dame who has been studying networks for the last 20 years. What first brought him to prominence was his research into nodes and their connections within a network. The classic view in network theory held that nodes (which you can think of as people or websites) in a given network were connected to roughly the same number of other nodes, and that the variation in connections was random in nature: connections clustered around some average, with some nodes getting a few more and others a few less. In other words, everyone had roughly the same number of friends they stayed in contact with, plus or minus a few.
Barabasi's insight was discovering that the number of connections was not roughly equal but the exact opposite: it varied widely. The true nature of networks is millions of nodes with a few connections each and a few super nodes with hundreds of millions of connections. The Internet, with its millions of websites and handful of megasites like Google and Yahoo, makes this conclusion seem obvious, but ten years ago it was not as clear. The same phenomenon has since been seen in areas ranging from the distribution of protein interactions in yeast to how drug adoption is affected by the relationships physicians have with one another. And again, the appearance of Paretian distributions with the many and the few.
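One mechanism Barabasi proposed for how such skewed networks arise is preferential attachment: each new node is more likely to link to nodes that are already well connected, so the rich get richer. A minimal Python sketch of that growth process (the node and link counts are arbitrary, chosen only for illustration):

```python
import random

def preferential_attachment(n, m, seed=42):
    """Grow a network of n nodes where each new node attaches m links
    to existing nodes chosen in proportion to their current degree."""
    rng = random.Random(seed)
    targets = list(range(m))   # small initial core
    repeated = []              # each node appears here once per link it holds
    degrees = [0] * n
    for new_node in range(m, n):
        for t in set(targets):
            degrees[new_node] += 1
            degrees[t] += 1
            repeated.extend([new_node, t])
        # sampling from 'repeated' biases the next targets toward high degree
        targets = [rng.choice(repeated) for _ in range(m)]
    return degrees

degrees = sorted(preferential_attachment(5000, 2), reverse=True)
print("largest hubs:", degrees[:5])
print("share of nodes with 4 or fewer links:",
      sum(1 for d in degrees if d <= 4) / len(degrees))
```

Running this shows exactly the pattern described above: a handful of hubs accumulate dozens or hundreds of links while the overwhelming majority of nodes keep only a few.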
The latest research from Barabasi takes an even more interesting step. Scientists have long wanted to create models of human activity, but asking a million people to log what they have done and where they have been over the last seven days is impractical. Now, however, the data collected by mobile telecommunications companies, credit card processors, and internet service providers is giving us exactly that view, and again we see the emergence of the many and the few.
Barabasi's analysis shows, again, that there is nothing random about what we do. In a world of averages, we would send our email evenly spaced throughout the day. Now think about how you really behave: seven rapid-fire email replies on your Blackberry ahead of your first morning meeting, twenty clicks navigating the New York Times website to catch the midday news, followed by lunch. Barabasi describes our pattern of activity as one of bursts and lulls, requiring a shift in emphasis from Juran's Vital Few to Barabasi's Vital Many.
Maybe the few and the many indicate something more?
Pareto, Juran, and Barabasi all uncovered a different kind of phenomenon in their pursuits. The more we look, the more we seem to find situations that display the few and the many. Nature is full of them, from the magnitudes of earthquakes to the intensity of solar flares. Many sociological trends form with the same distribution, whether the loss of life in armed conflicts or the number of sexual partners in social networks. In each of these examples, the Infinite Many and the Extreme Few are boldly evident. Yet we act as if they are unfamiliar.
The Extreme Few are always a surprise, whether it is a volcano erupting in Iceland or a Dow Jones decline of 40%. And the Infinite Many go unnoticed on the shelves of used bookstores or as small price movements in the companies of the Russell 3000. So why do we expect the world to be the same day in and day out? And what is the underlying model, one we may not even be aware of, driving that thinking? For the answer, we just need to watch the longest-running game show on network television.
On November 5th, 2009, The Price Is Right celebrated the airing of its 7,000th episode. The longest-running game show on television, now hosted by Drew Carey, marked the anniversary by playing three games from the first episode that aired in September 1972: Any Number, Bonus Game, and Double Prices. Sadly, missing from the festivities was the most popular game in the history of The Price Is Right.
Of course, I am talking about Plinko, the game that combines the best of the classic game show: knowledge of product prices and random chance. In the game, which debuted on January 3, 1983, each contestant has the opportunity to acquire up to five round discs by guessing which number is incorrect in the two-digit price of a product. Each correct guess earns the player another disc.
After the pricing portion of the game is complete, the player takes the discs they have won and proceeds up a small, seven-step staircase to the top of the Plinko board. Looking down, the player sees 13 rows of pins, each one offset from the next, and a series of chutes at the bottom, each labeled with a dollar amount. The player lays each disc flat against the board and lets go. With each pin hit, the disc falls to the left or right as it bounces down the board until it eventually lands in one of the chutes.
Those familiar with Japanese culture can't help but notice the similarity, in both name and construction, to the immensely popular game pachinko. Played in huge parlors, pachinko machines use steel balls that fall from the top of the machine and hit metal pins on the way down. Most balls fall to the bottom, while a select few drop into gates placed throughout the board, which in turn release more balls for the player to use.
The origin of both these games goes back even further, though, to an Englishman named Sir Francis Galton. A half-cousin of Charles Darwin, Galton was in his own right an incredible polymath, developing one of the first methods for classifying fingerprints, initiating some of the first scientific studies of meteorology, and coining the unanswerable notion of "nature versus nurture."
But Galton was also a teacher, and one of his more difficult tasks was teaching students the concepts he was developing around probability theory. These concepts were difficult to convey in equations and theory, so Galton created his own Plinko board. He called his invention the Quincunx, after the Roman coin with five markings arranged in the pattern of a four-point square with a fifth center point, the same pattern Galton used to arrange the pins on the board. With the Quincunx, Galton could physically show students how random behavior manifested itself.
As the beads fall through the pins of the Quincunx, they accumulate into piles at the bottom of the board. Most fall to the center, having bounced back and forth at each row. You can imagine the probabilities matching that of flipping a coin, with the equal likelihood of a head or tail resembling the chances of the bead falling to the right or left. And as you also know from flipping coins, you can get a run of heads or tails. This same mechanism means beads will from time to time end up at the edges of the board.
Piled up, the beads make a familiar shape: the bell curve. This picture is the visual signature of the random world at work, and there are important qualities to note. The centered peak marks both the average and the midway point, or median, of the distribution. The height of the piles falls off to the right and left, showing the quickly decreasing likelihood of beads landing in those outlying piles.
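The coin-flip mechanism is easy to simulate: a bead's final slot is simply the number of times it bounced right. A short Python sketch (the bead and row counts are arbitrary):

```python
import random
from collections import Counter

def quincunx(beads=10000, rows=12, seed=1):
    """Drop beads through rows of pins; at each pin the bead goes
    left or right with equal probability, like a coin flip.
    The final slot equals the count of rightward bounces."""
    rng = random.Random(seed)
    return Counter(sum(rng.random() < 0.5 for _ in range(rows))
                   for _ in range(beads))

piles = quincunx()
# crude text histogram: the piles trace out the bell shape
for slot in range(13):
    print(f"{slot:2d} {'#' * (piles[slot] // 100)}")
```

The center slots pile high while the edge slots stay nearly empty, exactly the shape Galton showed his students.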
Looking at the piles of beads, randomness starts to take on a different meaning. The majority of the beads fall close to the average. Human height is a good example of this. The average male in the U.S. is 5'10" and the average female is seven inches shorter at 5'3". Since height follows a bell curve, or Gaussian distribution, we know where the peak lies. The more interesting part is that 99% of the population falls within nine inches on either side of the average. As you move away from the average, the probability of occurrence drops quickly. The bell curve predicts that only 28 people in the U.S. would reach a height of 7'1", an altitude shared by basketball greats Shaquille O'Neal and Wilt Chamberlain.
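A prediction like that comes from the normal distribution's tail probability. In this Python sketch the mean, standard deviation, and population count are illustrative assumptions, and the resulting headcount is very sensitive to the standard deviation chosen:

```python
import math

def normal_tail(x, mean, sd):
    """P(value >= x) under a normal distribution with the given mean/sd."""
    z = (x - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# assumed figures: mean male height 70", standard deviation 3",
# roughly 150 million U.S. males -- all illustrative
p = normal_tail(85.0, 70.0, 3.0)      # 7'1" = 85 inches
expected = p * 150_000_000
print(f"P(7'1\" or taller) = {p:.2e}, expected people: {expected:.0f}")
```

The point is not the exact headcount but the shape of the curve: at five standard deviations out, the probability is so tiny that only a few dozen people in an entire country are predicted to qualify.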
So when we use the word random, the popular intent is to describe something unpredictable. Yet random in the real world is quite predictable, and the realm of possibilities is relatively narrow. This is the predictable reality that Walter Shewhart used to anticipate and improve the quality of product coming off the assembly lines at Western Electric.
The other adjective commonly used for these distributions is normal. They appear so often, and are so much a part of how we view the world, that their occurrence barely raises an eyebrow. And that leaves us blind to other forces at work.
Chris Anderson described the Infinite Many in his book The Long Tail, concentrating on how the Internet would bring the obscure to the masses with the infinite shelf space of the digital world. On the other end was Nassim Nicholas Taleb and his book The Black Swan. Taleb pointed to events that, when looked at through a lens of normal distributions, were nearly impossible, and that instead caught us painfully off guard with their more-common-than-predicted appearances. Power laws are where the Extreme Few meet the Infinite Many.
Power Law curves aren't "normal." They aren't ordinary. And they don't operate in a narrow range.
In the world of Power Laws, the Extreme Few aren't 8 feet tall. No, the wide ranges in which these curves operate would predict at least one person on the planet 800 feet tall. That claim sounds preposterous in the realm of human height, but it illustrates the point: the Extreme Few are far greater than anything we would expect in the world of normal distributions.
The experience curve is probably the best-known occurrence of a power law in business. The topic was made famous by the Boston Consulting Group in the 1970s, when it confirmed earlier findings that costs fall 15 to 30 percent for every doubling of output. Moore's Law represents the technology-fueled version of this effect. Take disk drives: the cost of a megabyte of storage has been falling at 5 percent per quarter for the last thirty years.
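The experience curve is itself a power law: unit cost as a function of cumulative output follows cost(n) = cost(1) × n^b, where the exponent b is fixed by the decline per doubling. A small Python sketch, using a hypothetical 20 percent decline (within the 15 to 30 percent range above):

```python
import math

def experience_curve(first_unit_cost, cumulative_units,
                     decline_per_doubling=0.20):
    """Unit cost when each doubling of cumulative output cuts cost
    by decline_per_doubling (a hypothetical 20% here)."""
    b = math.log2(1 - decline_per_doubling)   # about -0.322 for 20%
    return first_unit_cost * cumulative_units ** b

# three doublings (1 -> 2 -> 4 -> 8 units): 100 * 0.8**3 = 51.2
print(experience_curve(100.0, 2))   # 80.0
print(experience_curve(100.0, 8))   # 51.2
```

Note the signature of a power law: equal *ratios* of output (each doubling) produce equal *ratios* of cost, not equal subtractions, so the curve plots as a straight line on log-log axes.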
The human senses operate on a scale that follows power laws as well. Imagine the holiday season approaching and a new determination on your part to outdo the rest of the neighbors with this year's Christmas lights. While you'd think a trip to your discount retailer to double your inventory of string lights would solve the problem, the field of psychophysics has shown a more expensive solution is necessary. Getting twice as many lights would certainly make a difference, but it would not make your house twice as bright. The very idea of simply doubling the number of lights implies a linear relationship between wattage and perceived brightness, another prominent mental model in how we think about the world. Our sense of brightness instead works on a curve that says to light up the house twice as bright, we would actually need four times the number of lights. So save up your money; you can always do it next year.
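The relationship psychophysics describes here is a power law of sensation, where perceived intensity grows as the stimulus raised to an exponent. The 0.5 exponent below is chosen to match the four-times claim above and should be treated as an assumption; measured exponents for brightness vary with viewing conditions.

```python
def perceived_brightness(intensity, exponent=0.5):
    """Power law of sensation: perception grows as intensity**exponent.
    The 0.5 exponent is an assumption matching the text's 4x claim."""
    return intensity ** exponent

# doubling the lights raises perceived brightness by only ~41%
print(perceived_brightness(2) / perceived_brightness(1))   # ~1.414
# quadrupling the lights is what doubles perceived brightness
print(perceived_brightness(4) / perceived_brightness(1))   # 2.0
```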
There are two mental models that dominate business. The first is the model of averages. Think about the questions we ask: What's the average order size for our customers? What's the average wait time for customers who call? Albert-Laszlo Barabasi fought this thinking in his research to understand the frequency of human activity. Bell-shaped, normal distributions with their average-based peaks have become the de facto picture we use to see the world.
Linearity is the other crutch. The picture here is the X-Y chart with a line pointing upward at a 45-degree angle. Think of how often we assume that one more unit of input will get us one additional unit of output.
Power Laws are about unequal outcomes and unpredictable extremes, and using these new models in business can fundamentally change how we operate every function of the organization. Strategy, probably the function most familiar with these dynamics, changes from a discussion of "do more to get more" to one that respects the company's position in the market and leverages its unique advantages. Marketing becomes very interested in the perception of our senses and starts to realize that incremental shifts are barely noticeable. Innovation becomes a search that matches the foraging pattern of spider monkeys in the rain forest: many small moves punctuated by a few big moves to new areas.
Start looking around. Power laws are everywhere and they affect everything we do.