Suppose you had to get some work done on your bathroom. You meet the potential plumbers, discuss the job and get quotes. You have all the information you need; you just have to decide. Let’s say you want to be able to explain the decision to someone else, so you want it to be objective. You devise a neat table setting out the important criteria and then score the plumbers on each: cost, track record, customer service and so on, marks out of 10. You might place greater weight on cost. Whichever plumber gets the highest overall score should get the job, right? Wrong.

The weakness of the scoring matrix approach lies in the weightings given to the criteria. These should reflect the relative importance of the criteria. Unfortunately, they rarely do, because they are set too early. Objectivity requires that we should not know in advance what the scores are going to be. For example, the plumbers’ quotations could fall within a relatively narrow price range. If the price difference is minuscule, it is a mistake to give more weight to price than to the plumber’s track record: the risk of poor workmanship would outweigh the cost savings. Weightings should come after scoring. Only then are we in a position to judge the relative importance of the criteria, given the range under consideration.
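To see how the timing of the weightings can flip the outcome, here is a minimal sketch of a scoring matrix. The plumbers, scores and weights are all invented for illustration; the point is only that a heavy cost weighting fixed in advance can let a one-point cost gap override a much larger gap in track record.

```python
# Hypothetical scoring matrix: two plumbers scored out of 10 on three criteria.
# All names, scores and weights are illustrative assumptions, not real data.
scores = {
    "Plumber A": {"cost": 9, "track_record": 5, "service": 6},
    "Plumber B": {"cost": 8, "track_record": 9, "service": 8},
}

def weighted_total(criterion_scores, weights):
    """Sum each criterion score multiplied by its weight."""
    return sum(weights[c] * s for c, s in criterion_scores.items())

# Weights fixed in advance, before the quotes arrive: cost dominates.
early_weights = {"cost": 0.8, "track_record": 0.1, "service": 0.1}

# Weights set after scoring: the cost scores differ by only one point
# out of ten, so track record is judged to matter more.
late_weights = {"cost": 0.2, "track_record": 0.5, "service": 0.3}

for label, weights in [("early", early_weights), ("late", late_weights)]:
    winner = max(scores, key=lambda p: weighted_total(scores[p], weights))
    print(label, "weights ->", winner)
# early weights -> Plumber A  (8.3 vs 8.1)
# late weights  -> Plumber B  (6.1 vs 8.5)
```

With weights set early, the marginally cheaper Plumber A wins; revisiting the weights after seeing how narrow the cost gap actually is hands the job to the plumber with the far stronger track record.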
Using a “scientific” method makes a favourable impression, and this is one of the most widely used methods of decision making. It is prevalent in business and the public sector, in recruitment and supplier selection; some organisations make it mandatory. Unfortunately, it is grossly misunderstood, and faulty decisions are commonplace as a result. The next time you wonder how an unsuitable candidate managed to get a position in your organisation, do not be surprised to learn that they ticked all the right boxes. It is just that some boxes should have mattered more than others.