Last week we began with the history of decision-making; this concluding part completes it. It is important to establish some fundamentals of the subject before we delve into details. History always has something to tell, as Kofi Arhin put it – "Yesterday always has something to tell today".
Computer professionals eulogise Xerox PARC of the 1970s as a technological Eden where some of today’s indispensable tools sprouted – but comparable vitality and progress were evident two decades earlier at the Carnegie Institute of Technology in Pittsburgh. There, a group of distinguished researchers laid the conceptual—and in some cases the programming—foundation for computer-supported decision-making.
Future Nobel laureate Herbert Simon, Allen Newell, Harold Guetzkow, Richard M. Cyert, and James March were among the CIT scholars who shared a fascination with organizational behaviour and the workings of the human brain. The philosopher’s stone that alchemised their ideas was electronic computing.
By the mid-1950s, transistors had been around less than a decade, and IBM would not launch its groundbreaking 360 mainframe until 1965. But already scientists were envisioning how the new tools might improve human decision-making. The collaborations of these and other Carnegie scientists, together with research by Marvin Minsky at the Massachusetts Institute of Technology and John McCarthy of Stanford, produced early computer models of human cognition—the embryo of artificial intelligence.
AI was intended both to help researchers understand how the brain makes decisions and to augment the decision-making process for real people in real organisations. Decision support systems, which began appearing in large companies toward the end of the 1960s, served the latter goal – specifically targeting the practical needs of managers.
In a very early experiment with the technology, managers used computers to coordinate production planning for laundry equipment, relates Daniel Power – editor of the Web site DSSResources.com. Over the next decades, managers in many industries applied the technology to decisions about investments, pricing, advertising and logistics, among other functions.
But while technology was improving operational decisions, it was still largely a carthorse for hauling rather than a stallion for riding into battle. Then in 1979 John Rockart published the HBR article ‘Chief Executives Define Their Own Data Needs’, proposing that systems used by corporate leaders ought to give them data about key jobs the company must do well to succeed.
That article helped launch ‘executive information systems’, a breed of technology specifically geared toward improving strategic decision-making at the top. In the late 1980s, a Gartner Group consultant coined the term ‘business intelligence’ to describe systems that help decision-makers throughout the organisation understand the state of their company’s world. At the same time, a growing concern with risk led more companies to adopt complex simulation tools to assess vulnerabilities and opportunities.
In the 1990s, technology-aided decision-making found a new customer: customers themselves. The Internet, which companies hoped would give them more power to sell, instead gave consumers more power to choose from whom to buy. In February 2005, the shopping search service BizRate reports, 59% of online shoppers visited aggregator sites to compare prices and features from multiple vendors before making a purchase; and 87% used the Web to size up the merits of online retailers, catalogue merchants, and traditional retailers.
Unlike executives making strategic decisions, consumers don’t have to factor what Herbert Simon called “zillions of calculations” into their choices. Still, their newfound ability to make the best possible buying decisions may amount to technology’s most significant impact to date on corporate success—or failure.
The Romance of the Gut
“Gut”, according to the first definition in Merriam-Webster’s latest edition, means “bowels”. But when Jack Welch describes his “straight from the gut” leadership style, he’s not talking about the alimentary canal. Rather, Welch treats the word as a conflation of two slang terms: “gut” (meaning emotional response) and “guts” (meaning fortitude, nerve).
That semantic shift – from the human stomach to the lion's heart – helps explain the current fascination with gut decision-making. From our admiration for entrepreneurs and firefighters, to the popularity of books by Malcolm Gladwell and Gary Klein, to the outcomes of the last two U.S. presidential elections, instinct appears ascendant. Pragmatists act on evidence. Heroes act on guts. As Alden Hayashi writes in 'When to Trust Your Gut' (HBR February 2001): "Intuition is one of the X factors separating the men from the boys".
We don't admire gut decision-makers for the quality of their decisions so much as for their courage in making them. Gut decisions testify to the confidence of the decision-maker, an invaluable trait in a leader. They are made in moments of crisis, when there is no time to weigh arguments and calculate the probability of every outcome. They are made in situations where there is no precedent and consequently little evidence.
Sometimes they are made in defiance of the evidence, as when Howard Schultz bucked conventional wisdom about Americans' thirst for a US$3 cup of coffee and Robert Lutz let his emotions guide Chrysler's US$80 million investment in a US$50,000 muscle car. Financier George Soros claims that back pains have alerted him to discontinuities in the stock market that have made him fortunes. Such decisions are the stuff of business legend.
Decision-makers have good reasons to prefer instinct. In a survey of executives that Jagdish Parikh conducted when he was a student at Harvard Business School, respondents said they used their intuitive skills as much as they used their analytical abilities – but they credited 80% of their successes to instinct.
Henry Mintzberg explains that strategic thinking cries out for creativity and synthesis, and thus is better suited to intuition than to analysis. And a gut is a personal, nontransferable attribute, which increases the value of a good one. Readers can parse every word that Welch and Lutz and Rudolph Giuliani wrote. But they cannot replicate the experiences, thought patterns, and personality traits that inform those leaders' distinctive choices.
Although few dismiss outright the power of instinct, there are caveats aplenty. Behavioural economists such as Daniel Kahneman, Robert Shiller, and Richard Thaler have described the thousand natural mistakes our brains are heir to. And business examples are at least as persuasive as behavioural studies. Michael Eisner (Euro Disney), Fred Smith (ZapMail), and Soros (Russian securities) are among the many good businesspeople who have made bad guesses – as Eric Bonabeau points out in his article ‘Don’t Trust Your Gut’ (HBR May 2003).
Of course, the gut/brain dichotomy is largely false. Few decision-makers ignore good information when they can get it. And most accept that there will be times they can’t get it, and so will have to rely on instinct. Fortunately, the intellect informs both intuition and analysis, and research shows that people’s instincts are often quite good. Guts may even be trainable, suggest John Hammond, Ralph Keeney, Howard Raiffa, and Max Bazerman, among others.
In 'The Fifth Discipline', Peter Senge elegantly sums up the holistic approach: "People with high levels of personal mastery…cannot afford to choose between reason and intuition, or head and heart, any more than they would choose to walk on one leg or see with one eye". A blink, after all, is easier when you use both eyes. And so is a long, penetrating stare. This piece was culled from the Harvard Business Review (2006) article written by Leigh Buchanan and Andrew O'Connell.