In late June, the Committee for Economic Development issued the latest in a series of reports by business think tanks illustrating a growing change in the way American business thinks about value. “Built to Last: Focusing Corporations on Long-Term Performance” lays out a strong case against the destructive short-term focus that has infected American capitalism in recent years. It calls on companies to stop issuing quarterly earnings guidance and instead take the long view: “Decision making based primarily on short-term considerations damages the ability of public companies — and, therefore, of the U.S. economy — to sustain superior long-term performance.”
What interests me, as a CQG (Certified Quality Geek), is the link these think tanks assume exists between published measurements and decision making. It’s a critical idea in dashboard design, and it reminds me of conversations I’ve had over the years with business leaders in the organizations I’ve worked for.
Early in my career as a quality geek, I brought fresh eyes (read “naivety”) to the job. I argued with the senior vice presidents whose dashboards I was developing that a variety of output measures should be included for monthly review and discussion. I couldn’t understand what seemed to me to be their fear of information. What I couldn’t see then, thanks to my lack of real-world experience, was the causal relationship between the metrics that are published and the improvement actions that are implemented. Time and again I’ve borne witness to reactive “improvement” projects that spring from the pressures business managers receive from their key stakeholders: projects aimed at improving a single metric that make little or no sense in the context of the total data set. Projects that drive me crazy.
The fact is, in most organizations, the things that get done have little to do with long-term strategy. They have everything to do with the fears and insecurities created by the pressures felt by the primary decision makers. Those pressures are driven by poorly performing metrics; use that as your starting point and you have a wholly different insight into what makes a good dashboard.