Principle 5: Improved Decisions (Item 7)
Effective use of facts, data and knowledge leads to improved decisions.
Comparative information is important because:
- comparative and benchmarking information might alert you to competitive
threats and new practices
- you need to know "where you stand" relative to competitors
and to best practices
- comparative and benchmarking information often provides impetus
for significant ("breakthrough") improvement or changes
- you need to understand your own processes and the processes of others
before you compare performance levels
- benchmarking information may assist business analysis and decisions
relating to core competencies, alliances and outsourcing
Selecting exactly which comparisons and benchmark information you
will use is very important. Selection criteria should include a search
for the best (both within and outside your industry and markets).
You should show comparisons on KPI graphs so these can be used to help
drive performance improvement.
The search for the best is a crucial element. Just because the company
next door will share data with you does not make it a good benchmarking
partner: you might both be among the worst performers. You need to
constantly try to find the best performance or best practice in your
industry and in your market.
Keep looking outside your company and your industry. Many breakthrough
ideas come from adapting practices, products or service offerings from
another industry. It is often easier to benchmark outside your industry
than within it because
- companies outside your industry are not competitors and may share
more openly
- they have not been confined by your industry's conventions.
How have other industries reduced their response time? For example,
if response time means getting there quickly, all the following industries
would have much to learn from each other: police, fire, ambulance, taxis,
roadside service, couriers, fast food delivery, tow truck industry.
The search for the best never stops. New practices and new leaders
constantly emerge, industries and technology constantly change. Today's
best practice will probably look very ordinary next year.
Almost all traditional KPIs are lag indicators in that they tell you
about what has happened. You can use these indicators to project into
the future. However, that assumes that the future will be similar to
the past and we know that is seldom true. They can be extremely
useful for monitoring process performance and as measures of success
in reaching desired outcomes. However, as predictors of the future they
are of limited value. It is like trying to drive a car while looking
only in the rear-view mirror. We need a different type of indicator
to predict the future.
Lag indicators usually measure the output of a system or process. They
include all financial KPIs (including revenue, expenses, sales, quantities
sold, RONA, ROI, profit).
An example of a lead indicator is "storm clouds brewing, wind
getting up; it might rain". Squirrels and geese can predict winter
snow.
Useful lead indicators are measurements of your attempts to influence
your future by undertaking activities that are thought to lay the
foundation for future success: for example, investments in capital,
training, product development, innovation, knowledge or technology.
Notice that these investments are also amongst the first that companies
cut when they need to show better results to shareholders.
Shareholders should be very wary of such tactics. Investments in capital,
training, product development, innovation, knowledge and technology
are investments in the future of the company. Continuity of investment
is a very useful predictor of sustainability. Cuts to investment mean
exactly the opposite. Taking money out of the company now to maintain
the illusion of success is a smokescreen, and is increasingly seen through
by shareholders and analysts.
Data or information that is not accurate, not available, not valid,
not reliable or too late is useless to you when you are making your
decisions. So you need processes to make certain the data and information
is accurate, available, valid, reliable and timely. None of this will
happen by chance.
You saw above that asking the right questions is important. However,
even if you are asking the right questions, you might still be getting
useless data and information. You cannot assume that because you are
using a computer to gather it, the data will be accurate, available
etc. Your experience is probably exactly the opposite. Our experience
with databases is that they are usually full of `dirty data', ie, data
riddled with errors (eg, incorrect dates, wrong units, reversed numbers,
missing digits, wrong formats, missing values). Sometimes these errors
appear small or insignificant, but they all mean that the data is
unreliable and hard or impossible to use.
Cleaning up a database so that it can be used is often the single most
expensive and difficult part of any analysis. It is far better to put
the data through a set of good `data scrubbing' tools to trap any errors
during collection. For example, don't let the person collecting the
data move on if they have left out required data. Make estimates of
the data you are expecting, and reject data that is out of range.
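The checks described above can be sketched in code. The following is a minimal illustration, not a real data-scrubbing tool; the field names, ranges and record layout are all hypothetical, chosen only to show the pattern of trapping errors at the point of collection.

```python
from datetime import date

# Hypothetical record layout and expected range -- illustrative only.
REQUIRED_FIELDS = {"order_id", "order_date", "quantity"}
QUANTITY_RANGE = (1, 10_000)

def scrub(record):
    """Return a list of error messages; an empty list means the record passed."""
    errors = []
    # Don't let the collector move on if required data is missing.
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            errors.append(f"missing {field}")
    # Reject values outside the range you are expecting.
    qty = record.get("quantity")
    if isinstance(qty, int) and not QUANTITY_RANGE[0] <= qty <= QUANTITY_RANGE[1]:
        errors.append(f"quantity {qty} out of range")
    # Trap obviously impossible dates, eg, dates in the future.
    d = record.get("order_date")
    if isinstance(d, date) and d > date.today():
        errors.append(f"order_date {d} is in the future")
    return errors

clean = {"order_id": "A1", "order_date": date(2024, 1, 5), "quantity": 3}
dirty = {"order_id": "", "order_date": date(2024, 1, 5), "quantity": -2}
```

Here `scrub(clean)` returns no errors, while `scrub(dirty)` flags both the missing `order_id` and the out-of-range quantity, so the dirty record can be rejected at collection time rather than discovered during analysis.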
Accurate, available, valid, reliable and timely data is critical for
generating factual information and making decisions. The adage `garbage
in, garbage out' applies. Where the measurement is direct through
some form of mechanical or electronic device, it is important that Repeatability
and Reproducibility Studies are carried out to ensure confidence in
the method of measurement. Instruments should be regularly checked for
calibration.
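A full Repeatability and Reproducibility Study uses formal ANOVA methods; the sketch below is only a simplified illustration of the two ideas, with made-up measurements. Repeatability is the variation when the same operator repeats a measurement; reproducibility is the variation between different operators measuring the same thing.

```python
from statistics import mean, pstdev

# Illustrative data: two operators measure the same part three times each.
measurements = {
    "operator_a": [10.1, 10.2, 10.1],
    "operator_b": [10.4, 10.5, 10.4],
}

# Repeatability: spread when one operator repeats the same measurement.
repeatability = mean(pstdev(vals) for vals in measurements.values())

# Reproducibility: spread between the operators' average results.
reproducibility = pstdev([mean(vals) for vals in measurements.values()])
```

In this made-up data each operator is quite consistent (small repeatability), yet the two operators disagree with each other (larger reproducibility), which would point to a measurement-method or calibration problem rather than operator carelessness.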
You also must ensure that the measurement is measuring what you think
it is measuring, ie, that it is `valid'. Otherwise your measurements
are measuring something else entirely and any interpretation you make
will be misleading and meaningless.
You also need to be sure that your analysis is valid. For example,
people often make errors when analyzing surveys. It is incorrect to
assume that survey responses will balance themselves out when aggregated.
For example, you cannot add the 50% "very satisfied" to the
50% "very dissatisfied" to get "on average satisfied".
It would be like adding apples and bananas. You should delve into the
reasons for those answers. (Was the question inappropriately worded
for its audience?) Any one of those dissatisfied respondents may give
you real material to work with.
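The aggregation error above is easy to demonstrate. In this sketch (with a hypothetical 5-point scale and invented responses), the arithmetic mean of a completely polarised sample looks "neutral", while the distribution itself tells the real story.

```python
from statistics import mean
from collections import Counter

# Hypothetical 5-point scale: 1 = very dissatisfied, 5 = very satisfied.
# Invented sample: 50% very satisfied, 50% very dissatisfied.
responses = [5] * 50 + [1] * 50

avg = mean(responses)              # 3 -- misleadingly "on average satisfied"
distribution = Counter(responses)  # reveals half the respondents are very dissatisfied
```

The average of 3 hides the fact that half the respondents gave the worst possible answer; reporting the distribution (or delving into the reasons behind each extreme) is what actually supports a decision.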
It takes hard work to make certain the data is accurate; that you can
get it out of the computer; that it is about what you think it is about
(ie valid); that you would get the same result each time (ie reliable);
and that you can get it when you want it.