
A history and review of the European Quality Award Model

Tito A. Conti
Torino, Italy

Abstract

Purpose – Through the analysis of a crucial period of the history of quality in Europe – the creation of the European Foundation for Quality Management (EFQM) and the development of the European Quality Award – the author, who was a protagonist of the narrated events, aims to reveal some historical aspects that are generally ignored and that should explain some of the peculiarities of the award model. Taking stock of the present situation, some directions taken in the TQM/Excellence Model's development and use are questioned, and the author reasserts his views on the whole matter.

Design/methodology/approach – For the historical part the author has based his research on public documents, EFQM Newsletters, internal documentation and personal correspondence with the protagonists of the events that are mentioned. The author will be glad to share his personal records with students who want to conduct research in this area. The following discussion is mostly based on the author's findings and experiences, compared with the most common practices.

Findings – Since the purpose of the paper is to tell a story of which the author was a protagonist, and to derive from it some lessons that are important for the future, the first part of the paper is dedicated to narrating those aspects of the European Quality Award Model's development that are crucial to understanding why such a model, initially developed following the Malcolm Baldrige Award scheme, suddenly changed dramatically. In this part the author relates some personal anecdotes to make the story more alive and complete. The second part of the paper presents the author's views on organisational improvement models and self-assessment and explains why he believes that the present course should be changed, if the risk of negative impacts on quality development is to be avoided.

Originality/value – The paper tells the story of an out-of-the-box approach that strongly affected the development of the European Quality Award Model, now the EFQM Excellence Model, and explains why, in the author's view, further innovation is needed in quality management if we really want to pursue continuous organisational improvement.

Keywords Total quality management, European Foundation for Quality Management, Business excellence model, Self assessment, Quality awards

Paper type Conceptual paper

Introduction: the birth of the European Foundation for Quality Management (EFQM) and the European Quality Award
Guess where the European Quality Award was born? In a hotel lift ... in Boston! This claim may surprise the reader but is substantially true. It was on the 4th December 1990. Together with Kees van Ham, the first EFQM Secretary General, I was in Boston to present the newly born foundation to American colleagues. In the short trip from the hotel reception to the rooms I gave Kees a document describing my proposal for the award model. Coming back for dinner a couple of hours later, in the lift, Kees said:

This is a breakthrough! I will do what I can to make this the model for the European Quality Award.

Two years had passed since C. van der Klugt, President of Philips, sent C. De Benedetti, President of Olivetti, a proposal to put some initiatives together, aimed at giving large European corporations a "kick-start" in total quality management. The 1980s had seen the success of the Japanese quality-based offensive in the marketplace – and the first positive results of the prompt American reaction were already evident. Europe was clearly lagging. Something had to be done. The two presidents appointed their representatives: Kees van Ham on behalf of Philips, myself on behalf of Olivetti, with the mandate of presenting a proposal soon.

We spent part of the first year enlarging the group: Kees and I made a tour of the major European companies operating in the international arena, which had already experienced the new harsh quality-based competition; 14 of them reacted positively[1]. Together with the representatives of the presidents of those companies, we prepared the strategic plan for the new organisation, which was named the "European Foundation for Quality Management".

In Brussels, on the 15th September 1988, in the historical room of the Val Duchesse Castle, where the charter of the "Coal and Steel Community" was signed by the Founding Fathers of the future European Union, the 14 founding members' presidents signed the letter of intent and approved the strategic mission of the organisation (EFQM, 1988). On the 19th October 1989, in Montreux, the Foundation was officially established (EFQM, 1989).

Among the EFQM's strategic objectives, the creation of a European Quality Award, following the example of the American Malcolm Baldrige, had the highest priority[2]. In fact, the award was seen as a launching pad for TQM in Europe. The target was to make it public in 1991, so that the first awards could be handed over in 1992.

I happened to have fresh and deep experience with TQM models at the time; in 1988, as VP for quality at Olivetti, I had launched and directed a large self-assessment (the first of its kind and size in Europe), involving the corporate structure and the four largest national subsidiaries. I volunteered then to lead the EFQM Steering Committee that was created for developing the award. But Kees van Ham, who in the meantime had been appointed Secretary General of the Foundation, claimed that role for Philips, as the first mover of the idea. Mat Vermass, a brilliant and competent manager who had replaced van Ham as Quality Director at Philips, took responsibility for the committee, which was made up of quality managers from the EFQM member companies and consultants.

I agreed with the choice, since the request was reasonable and I held Mat Vermass in high esteem. I could not put aside, however, my experience with TQM models and self-assessment. Continuing to keep myself informed on the work of the award committee, I soon became concerned about the turn of events. In those years the Malcolm Baldrige Model was all the rage in the western world (Bush and Dooley, 1989; Labovitz and Chang, 1990; De Carlo and Sterett, 1990). Created in 1987, it presented itself as the first well-structured TQM model, complete with a detailed assessment procedure (the Japanese Deming Prize was little known at that time in the west). In Europe too the Malcolm Baldrige Model had soon become the undisputed reference for quality managers and consultants. The members of the EFQM Award Committee had no doubt that the logical architecture of the Malcolm Baldrige was the unavoidable choice also for the European Award Model.

When the first EFQM Forum took place – in October 1990, in London – an advanced draft of the model was circulated among the members of the executive committee (EFQM, 1990). It was considered almost final (Table I). It was organized into eight criteria, against the seven categories of the Malcolm Baldrige (Table II)[3], "societal recognition" being the new one added.

Table I. European Quality Award Criteria, draft 3, 21 December 1990 (weights expressed per-thousand of total)

1.0 Leadership  150
    1.1 Executives' role in directing, driving and renewing quality initiatives  50
    1.2 Executives' involvement etc.  50
    1.3 Participation etc.  50
2.0 Policy and strategy  100
    2.1 Development process  35
    2.2 Communication process  30
    2.3 Review
3.0 Management of processes  150
    3.1 Process planning  45
    3.2 Process control  30
    3.3 Process improvement  45
    3.4 Change management  30
4.0 Management of resources  100
    4.1 Management of human resources  40
    4.2 Management of financial resources  20
    4.3 Management of information resources  20
    4.4 Management of technical resources  20
5.0 Employee satisfaction  100
    5.1 Total environment  25
    5.2 People-related processes  25
    5.3 Training and development  25
    5.4 Recognition and reward  25
6.0 Customer satisfaction  200
    6.1 Understanding of customer requirements and expectations  40
    6.2 Commitment to customer service and delivery  30
    6.3 Complaint management  30
    6.4 Management of customer satisfaction results  100
7.0 Societal recognition  125
    7.1 Corporate citizenship  25
    7.2 Sustainable development  15
    7.3 Environment care  15
    7.4 Societal impact results  20
8.0 Business out-turn  125
    8.1 Product quality  25
    8.2 Service quality  25
    8.3 Business and support processes  25
    8.4 Manufacturing processes  25
    8.5 Business economic plan and results  25

On the eve of the London Forum I was the victim of a car accident that prevented me from attending. Every cloud has a silver lining: forced to stay in bed for more than a month, I took advantage of it to develop a proposal for the award model, recapitulating and rationalising my experience, in particular the recent one with the Olivetti self-assessment (one could then jokingly say that the model was born in a lift and conceived in a hospital bed ...). The resulting model brought forth a logical structure that was quite different from that of the Malcolm Baldrige and its close relative that was taking shape at EFQM.

Table II. Malcolm Baldrige National Quality Award, 1989 – categories/items (weights expressed per-thousand of total)

1. Leadership  120
    1.1 Senior management  30
    1.2 Quality values  20
    1.3 Management systems  50
    1.4 Public responsibility  20
2. Information and analysis  60
    2.1 Scope of data and information  25
    2.2 Data management  15
    2.3 Analysis and use of data for decision making  20
3. Strategic quality planning  80
    3.1 Planning process  30
    3.2 Plans for quality leadership  50
4. Human resources utilization  150
    4.1 Management  25
    4.2 Employee involvement  40
    4.3 Quality education and training  30
    4.4 Employee recognition  20
    4.5 Quality of worklife  35
5. Quality assurance of products and services  140
    5.1 Design and introduction of new products and services  25
    5.2 Operation of processes
    5.3 Measurements and standards for products/services/processes  15
    5.4 Audit  20
    5.5 Documentation  10
    5.6 Quality assurance of operations and processes  25
    5.7 Quality assurance, external  25
6. Quality results  150
    6.1 Quality of products and services  70
    6.2 Operational and business process quality improvement  60
    6.3 Quality improvement applications  20
7. Customer satisfaction  300
    7.1 Knowledge of customer requirements/expectations  40
    7.2 Customer relationship management  125
    7.3 Customer satisfaction measurement and results  135

While intellectually satisfied – because I had relieved myself of the brainchild that had weighed on my mind for months – I was suddenly concerned with the problem of how to present the outcome to my colleagues on the award committee. I did not want to hurt the feelings of those who, for more than a year, had worked on the model. Despite my convalescence, Kees van Ham had pressed me to accompany him to Boston to present EFQM and its initiatives and discuss the state of quality in Europe (that was a time when European enthusiasm for the new ISO 9000 standards and certification was arousing concern in the US, due to the suspicion that certification was part of a commercial policy aimed at protecting the new European unified market). In this context I left for Boston, bringing along with me the document describing my model, determined to discuss it with Kees van Ham. I have already described what happened in the lift from the hotel hall to the room floor and back. Kees, being convinced that introducing the European Quality Award with a substantial conceptual innovation was a unique opportunity to give visibility to EFQM worldwide and a shot in the arm for quality in Europe, started thinking of how to manage the change.

The problem was convincing the members of the Award Steering Committee, a difficult task indeed. In fact, for more than a year the committee had worked in a precise direction. Suddenly having to change direction, while keeping the deadline unchanged, looked like a shocking request. Probably such a request would not have appeared so upsetting if the advantages of the change had been evident and easily graspable. But this was not so. When the committee received the text of the proposal, the reaction was cool; time passed by and the proponent was not even asked to illustrate it personally. A tacit judgement of conceit, for challenging the myth of the moment, was in the air. At last, Kees van Ham took the only decision that could put an end to the stalemate. He called for a meeting of experts on 30 January 1991 in Brussels, asking for a final judgement[4]. It was a tough confrontation, since the "jurors", given the situation, were understandably not so well disposed. But, in the end, I came out of it ... acquitted. Actually, the jury moderator, Chris Hakes, in the minutes of 14 February sent to the participants[5], confirmed the final decision to immediately adopt my proposal:

There were many questions and much discussion which all helped to clarify the understanding of the concept being proposed by Tito ... All present agreed that Tito's concept of self-assessment would lead to a much better scheme. There was concern for those that had been involved in the consultation process so far. This would need to be handled with care so as to avoid very easily losing their interest and support at this stage ... (Hakes, 1991).

After that meeting the change in direction was official, and the EFQM also took care of the publication of my proposal (Conti, 1991). I have always appreciated van Ham's and Vermass' open-mindedness and their courage in taking such a big risk so close to the deadline. In fact, the award model, incorporating most of my proposals, was presented at the EFQM Forum in Paris on 28th October 1991 (EFQM, 1991).

The proposed changes and their conceptual motivations
Resistance to the proposed changes was justified by the fact that the underlying conceptual motivations were far from banal and clashed with deep-rooted beliefs. Let us recall them here, not just for historical reasons (to explain yesterday's resistance) but to be aware of the ambiguities and distortions in interpreting and using quality management models that still live on after so many years.

In the 1980s, the so-called "TQM models" were in fact "sui generis" models. They aimed at representing their authors' views of the new and complex concept of "total quality management", but in an indirect way, through a checklist of requirements. They were not organisational models, aimed at representing the organisation's dynamics; they were guides for organisational assessment, to evaluate the "TQM maturity level" of the organisation. As such, they echoed the traditional logic of the quality assurance checklists that for years had been used to assess the conformity of the "quality management system" to a given standard. Resistance to change, or lack of understanding of it, made the application of old logics to new situations a rather common attitude.

One of the most significant features brought about by TQM models was the re-discovery of the central role of customer/stakeholder perceived results. How such results had to be interpreted and used was the main divergence point between my view and the Malcolm Baldrige criteria. In the model that I had developed (and tested in the 1989 Olivetti self-assessment) customer/stakeholder perceived results had a central role. Within the framework of a true organisational model[6], both internal results and critical systemic factors were assessed against such external results.

By giving results a pivotal role, I did not want by any means to lessen the role of systemic factors and processes (which are fundamental, being the causes of such results). On the contrary, I wanted to underline that results (the true results, those perceived by the receivers) are the manifestation of the effectiveness of the processes and the system as a whole, and therefore the starting point for improvement. If, besides being a checklist for assessment, the model has to be a guide for managing the organisation in any stage of its activity (to meet the objectives and strive for improvement), then results must be adequately singled out and represented. My impression was that the Malcolm Baldrige – conditioned by a past where judgements on organisational quality were based more on the assessment of the system than of its outcome (and results were not so reliable, being mainly internal results) – had not adequately grasped the new role of results. Why did it happen? Let us recall some steps in the history of managing for quality that can provide some explanation.

At the dawn of quality, conformity to objectives was controlled at the end of the process, on results. Statistical process control first, then quality assurance, made the big leap possible: moving quality-related activities upstream, from the results to the processes and then to the whole organisational system. Among such activities, the internal quality assurance assessments (internal audits) – aimed at anticipating results, and at taking appropriate actions to adjust the course – became important. However, the prevention philosophy did not disdain results; on the contrary, it looked for them (for effectiveness checks) when available.

In fact, by prolonging the quality assurance logic from the product development phase to the entire lifecycle, the most advanced organisations extended their quality audits to the commercial production phase, to guarantee the system's fitness for purpose over time. In such cases results are available and, if used, allow for the extension of the audit concept from "conformity" to "conformity-and-effectiveness" assessment. The latter type of audit, besides guaranteeing that things are done according to the rules, looks for the adequacy of the rules themselves. The name "management audit" is often used for such verifications, which aim at assuring the ability to generate conforming products over time.

The emphasis of TQM on customer focus and continuous improvement made it clear that customer perceived results were the primary reference; internal results – output and key process indicators – were subordinate to the former, in the sense that their alignment with them should be pursued to assure the effectiveness of the initiatives taken to improve customer satisfaction. The "management dashboard" should always keep the two measures – "delivered quality" and "perceived quality" – in evidence, and closely control how they react to the improvement initiatives taken at process and system level.


From a list of critical factors to an organisational model for managing improvement
The rationale of my proposal in relation to the model is schematically represented in Figure 1, where customer-centricity and continuous improvement are founded on the knowledge of customer expectations and on systematic "delivered quality" and "perceived quality" measurement. They are pursued through a systematic analysis of the cause-effect relationship between organisational actions (left-hand part of the model) and customer perceived results (right-hand part of the model). Such concepts are further detailed in the lower part of Figure 1, which focuses on the elementary "process-product-customer" chain and highlights the key measurements that the supplier must put in place in order to guarantee customer satisfaction over time.
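To make the two key measurements along the elementary "process-product-customer" chain more concrete, here is a minimal illustrative sketch; it is not taken from the original model documentation, and all class, field and metric names are hypothetical, chosen only to show how "delivered quality" (measured by the supplier on the output of the process) and "perceived quality" (measured on the receiving side) can be kept side by side and compared.

```python
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class ChainMeasurements:
    """Key measurements along one process-product-customer chain (illustrative only)."""
    process_indicators: Dict[str, float] = field(default_factory=dict)  # internal process measures
    delivered_quality: float = 0.0   # quality of the product as measured by the supplier (0-100)
    perceived_quality: float = 0.0   # quality as perceived by the customer (0-100)

    def perception_gap(self) -> float:
        """Positive gap: the supplier believes it delivers more than the customer perceives."""
        return self.delivered_quality - self.perceived_quality


# Hypothetical usage: both measures belong on the "management dashboard" and should be
# re-checked after every improvement initiative taken at process or system level.
order_fulfilment = ChainMeasurements(
    process_indicators={"on-time shipment rate": 96.0, "order accuracy": 98.5},
    delivered_quality=95.0,
    perceived_quality=88.0,
)
print(f"Perception gap: {order_fulfilment.perception_gap():.1f} points")
```

The point of the sketch is simply that improvement initiatives should be judged on how both measures move, not on internal indicators alone.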

Figure 2 shows how the European Quality Award Model (at the highest level, that of the criteria) looked when the Award Committee adapted the previously defined criteria to the new concepts. The upper part, Figure 2(a), shows the model as presented in October 1991 in Paris (a rather strange "house-like" representation, held dear by some committee members). The lower part, Figure 2(b), represents the final scheme, published in 1992. Such a scheme is significant for the subdivision of the first-level criteria between "enablers" and "results", the major difference with respect to the Malcolm Baldrige. To fully understand the logic of the model, representation of the sub-criteria (second level) is necessary. But we do not need it here: for our purpose it is sufficient to say that all the organisational factors are grouped on the left part and results on the right.

Figure 1. The role of results in TQM models
Figure 2. The European Quality Award Model: (a) the scheme presented in Paris, October 1991; (b) the final scheme, published in 1992

When the model is used for a quality award, the customers' voice should have at least the same weight as the assessors' judgement
If, in the standard-based quality perspective, an organisation could enjoy a good rating just because the assessors found it conformed to the relevant standards, this was no longer so in the new TQM perspective, where the voice of the customer should enjoy first place. In this way, incidents like the one the Malcolm Baldrige Award incurred in its early stage could be avoided. In fact, the case of a car manufacturer who won the award while being low in customer ratings (published meanwhile by a much respected journal) aroused sensation and discussions among experts at the time (Crosby and Reimann, 1991; Quality Progress, 1991; Main, 1991; Garvin, 1991).

The problem was that the Malcolm Baldrige Model was still conditioned by the "quality assurance syndrome", where conformity to the model was the predominant judgement criterion. The customers' voice came a poor second. At a glance, looking at the model (Table II), one could say that customer satisfaction (as well as other quality perception measurements) enjoyed first position. In fact, the first reaction of an American colleague, a Malcolm Baldrige Award juror, to my comments was:

You are wrong: the Malcolm Baldrige Model assigns 30 per cent to Customer Satisfaction!

But how was that 30 per cent subdivided? Under the same title – customer satisfaction – were placed both the results, coming from the voice of the customers, and the actions put in place to achieve those results. From Table II, in fact, we can see that the "items" composing the "customer satisfaction" category comprised both "customer satisfaction results" and "knowledge of customer requirements and expectations" and "customer relationship management": that is, both the effects and the causes. Category scoring was obtained by making a weighted average of the items, thus combining heterogeneous variables, like organisational actions and customer perceived results. If, on the contrary, means are separated from results, only one third of the weight goes to the voice of the customers, and two thirds to the assessors' judgement. Figure 3 synthesizes the situation of the Malcolm Baldrige Model (1989 edition), showing that, considering all the categories, the customer/stakeholder voice weighed only 12 per cent, while 17 per cent went to internal results; about 70 per cent depended on the assessors' judgement. The situation of the European Quality Award draft (Table I) was similar, with about 80 per cent of the weighting assigned to the assessors' judgement.
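As a rough, hedged illustration of the arithmetic behind this argument, the sketch below uses the 1989 item weights reproduced in Table II for the customer satisfaction category alone. Treating items 7.1 and 7.2 as assessor-judged "causes" and item 7.3 as the customers' "voice" is my own reading for illustration; the full category-by-category classification behind the 12/17/70 per cent split of Figure 3 is not reproduced here, so the figures are indicative only.

```python
# Per-thousand weights of the 1989 Malcolm Baldrige "customer satisfaction" category,
# taken from Table II. The cause/effect split of the items is an illustrative reading.
customer_satisfaction_items = {
    "7.1 Knowledge of customer requirements/expectations": ("assessor judgement", 40),
    "7.2 Customer relationship management": ("assessor judgement", 125),
    "7.3 Customer satisfaction measurement and results": ("voice of the customer", 135),
}

category_total = sum(weight for _, weight in customer_satisfaction_items.values())
voice_of_customer = sum(weight for kind, weight in customer_satisfaction_items.values()
                        if kind == "voice of the customer")

print(f"Category weight:             {category_total / 10:.1f}% of the total score")    # 30.0%
print(f"  of which customers' voice: {voice_of_customer / 10:.1f}%")                    # 13.5%
print(f"  judged by the assessors:   {(category_total - voice_of_customer) / 10:.1f}%")  # 16.5%
```

Even under this generous reading, less than half of the 30 per cent nominally assigned to "customer satisfaction" actually reflects results perceived by customers; the rest measures the conformity of the actions taken, as judged by the assessors, which is exactly the mixing of causes and effects criticised above.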

The systematic assignment of higher weights to the "enablers" (the factors that depend on the assessors' judgement) in comparison with the "results" (which depend on the customer/stakeholder judgement) reveals dependence on the previous logic, mostly based on judgements of conformity of the organisation with the relevant model. Transferring that logic to TQM assessment was very risky, also because most assessors did not have the professional profile and experience needed to assess the often intangible organisational and management-related factors.

Apart from weight distribution, it is averaging non-homogeneous and asynchronous criteria – causes and effects – under one heading that makes little sense. Such averages are conventionally accepted when the need to come to one single number overcomes other considerations: such is the case of awards, where the average should be considered only as the result of an arithmetic operation, with no physical or organisational meaning. That is why, apart from the necessity for awards to come to a conventional single figure, weights should be forgotten in all other cases. They can be misleading and dangerous. Rather than making averages, it makes sense to verify consistency between enablers and results, and to identify the reasons for possible inconsistencies (for example the physiological delay between organisational actions and the related effects).

Figure 3. Malcolm Baldrige, 1989: categories and weights, and subdivision of the latter among "enablers", "internal results" and "external results" (voice of customers and stakeholders)


If the scheme of Figure 1(a) is adopted (where causes and effects are separate, effects/results on the right-hand side, enablers on the left-hand side), the choices about the award weighting philosophy become clear and immediate. For example, my proposal to EFQM was simple: give 50 per cent of the weight to enablers, 50 per cent to results. The initial reaction was harsh: I was accused of returning to the old vision, when quality was assessed on results. My answer was that the emphasis on processes and systemic factors is fundamental when managing for improvement. When a quality award is the issue, it makes sense to recognise in a balanced way past performance (manifested by results, as perceived by their receivers) and present/future potential performance (which can be assessed by experts, through accurate analysis/diagnosis of the processes and systemic factors).
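Schematically – a sketch of the weighting philosophy only, not the official EFQM scoring formula – the 50/50 proposal can be written as

\[
S_{\mathrm{award}} \;=\; 0.5 \sum_{i \in \mathrm{enablers}} w_i\, s_i \;+\; 0.5 \sum_{j \in \mathrm{results}} w_j\, s_j,
\qquad
\sum_{i \in \mathrm{enablers}} w_i \;=\; \sum_{j \in \mathrm{results}} w_j \;=\; 1,
\]

where the s are the scores assigned to the individual criteria (enabler scores coming from the assessors' analysis, result scores from measured customer/stakeholder perceptions) and the w are the relative weights within each block. Past performance and present/future potential thus enter the total with equal weight.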

Self-assessment: from an award's by-product to a cornerstone of any improvement strategy
The problem was that I saw the award as an important but nevertheless contingent goal. In my mind the main concern was to provide managers with a model that could help them to govern their organisations from a continuous improvement perspective (in any phase of the Plan-Do-Check-Act (PDCA) cycle). Using a metaphor, in the above-mentioned proposal I said that the award should be considered – in relation to self-assessment – as "the tail that wags the dog" ... but "that is acceptable as long as the dog learns to move correctly by itself" (Conti, 1991). In plain terms (and having in mind what was happening in the US, where a multitude of companies had started using the Malcolm Baldrige Model autonomously to assess themselves against the award criteria), I meant that the award, with all its emphasis on external assessment, had to be seen as the primer of a maturation process, at the end of which self-assessment should become the main instrument for improvement (and the model the "compass" that guides management in all the phases of the business cycle).

The third key point of my proposal (after the separation of enablers from results in the context of a true organisational model, and the assignment of a higher weight to the voice of the customer) was, in fact, to make self-assessment a pre-requisite for award participation. The "application report" should then be derived from an internal self-assessment report, consultable by the award assessors. From colleagues working with American companies that had applied for the Malcolm Baldrige Award, I had in fact heard that the application report had rapidly become an "image" document (to impress the assessors). Many applicants were spending more money to enhance the quality of the application than to improve the quality of the company. External consultants were hired for that purpose; a kind of "application report business" was emerging, similar to the "quality manual business" that had developed in the area of ISO 9000 certification. Since the aim of the award was to recognise achievements obtained through the use of TQM strategies, it was simply common sense to ask for proof of that through a history of running a number of PDCA cycles. The report of the last "check" phase, the self-assessment, should then be the basis for the preparation of the application report (clearly, to facilitate the external assessors' job, it was reasonable to ask that the internal self-assessment report be transformed into a standard document, the application report).

Formally, this proposal too was accepted and introduced as an admission criterion in the application guidelines. In reality, the application reports of the participants in the European Award followed the logic of the Malcolm Baldrige Award (that is, ad hoc reports on the status of the organisation vis-a-vis the criteria set by the award guidelines) more than the original EFQM rules (a by-product of the organisation's own self-assessments, actively involving the whole organisation). Moreover, the policy followed by the EFQM since 1992 seemed to encourage the view that self-assessment is a by-product of the award (the tail that wags the dog), not vice-versa. Suggesting that self-assessment use the same model and the same process as the award is clear proof of that. As a consequence of that policy, consultants who have grown up in the shadow of the award models turn out to be experts more in compiling application reports, or in scoring, than in organisational diagnosis and therapy. My opinion is therefore that the battle "to make the dog able to wag its tail" has been far from won (at least in quantitative terms).

The positioning of "processes" in TQM models
Among my 1990s proposals, one was not accepted. It was about the criterion "processes". My request was to divide the so-called "enablers" into two parts: the "systemic factors" (the true enablers: how the organisation is) and the "processes" (what the organisation does and how it does it). Obviously processes are part of the system, but their peculiar characteristic is to be tangible and measurable – and to display a direct relation with results (see Figure 4). Processes in fact generate results.

Figure 4. Systemic factors, processes and goals/results


They generate "products" (in the broadest sense, comprising services) whose conformance to the objectives can be checked by the producer (internal measurements of the delivered quality, or internal results). On the receiving side, the experience of the user with the product determines the perceived quality and thus the degree of satisfaction. Systemic factors, on the other hand, do not normally have a direct connection with results. They usually display a broad spectrum of relations with processes and then with results (in other words, each systemic factor may influence many processes and many results). In that perspective I considered it more than natural to include the "products of the processes" – the internal results – in the process category[7]. Consequently, the right-hand part of the model became the "external results" category, where the judge of quality is the receiver of the product. That created a clear-cut interface (in the model) between "processes", the central criterion, and "results", the right-hand criterion (Figure 4).

The rejection of that proposal led to what I consider a flaw in the European Award Model: internal results (the products of the processes) and even process indicators are placed under criterion 9: business results (now key performance results). That creates confusion in assessment and deprives the process category of its most significant part, measurement, leaving there only the pure enabling features. Even more: the model can no longer be considered a systemic model where the left-hand side represents the organisation and what it delivers and measures, and the right-hand side represents the mission and goals (in the planning phase) or the results (in the implementation and checking phases) – that is, the outcome of the system. The clear-cut interface of Figures 1 and 4 allows a clear distinction to be made between perceived quality and delivered quality, which is key to meeting objectives and improving performance. Notice that diagnostic self-assessment becomes more effective when it starts from results – in particular performance gaps (in the receiver's perspective) – then enters the organisation by moving leftwards to the processes that generate such results and then, if the root causes seem to be upstream, moves further left to the systemic factors.
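The right-left diagnostic path just described – start from a performance gap as seen by the receiver, move left to the processes that generate that result, and only then, if the root causes appear to lie upstream, to the systemic factors – can be sketched as a simple traversal. Everything below (the link tables, names and structure) is a hypothetical illustration of the idea, not a procedure defined by EFQM or by the award documentation.

```python
from typing import Dict, List

# Hypothetical cause-effect links, for illustration only: each externally perceived
# result is generated by one or more processes; each process is influenced by one or
# more systemic factors (and each factor may, of course, influence many processes).
RESULT_TO_PROCESSES: Dict[str, List[str]] = {
    "perceived delivery reliability": ["order fulfilment", "logistics planning"],
}
PROCESS_TO_SYSTEMIC_FACTORS: Dict[str, List[str]] = {
    "order fulfilment": ["front-line training", "IT system integration"],
    "logistics planning": ["leadership priorities", "resource allocation"],
}


def right_left_diagnosis(performance_gap: str) -> None:
    """Trace a gap from the result (right) back through processes to systemic factors (left)."""
    print(f"Performance gap: {performance_gap}")
    for process in RESULT_TO_PROCESSES.get(performance_gap, []):
        print(f"  generated by process: {process}")
        for factor in PROCESS_TO_SYSTEMIC_FACTORS.get(process, []):
            print(f"    possible upstream systemic factor: {factor}")


right_left_diagnosis("perceived delivery reliability")
```

The value of the exercise lies in the direction of the search (from perceived results back into the organisation), not in the particular data structure used to record it.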

Conclusion: from a historical perspective to a judgement on the present and a glimpse into the future
After my brief involvement in building up the European Quality Award (a contingent goal for the dawning European Union), my interest returned to organisational improvement (a permanent goal for all organisations). As a consequence, my subsequent focus has been primarily on improving the effectiveness of the model and the self-assessment process. A significant step was the introduction of the concept of "right-left self-assessment" (Conti, 1994), the diagnostic approach that, I believe, should be considered a must when performance improvement is the goal. In recent years, merging quality thinking with systems thinking, to further improve the model, has become my main area of interest. From that perspective, I will conclude this paper with some considerations about the present status and the future of TQM/Excellence models and their applications.

The EFQM Model has shown, in these 15 years, its validity in relation to the aims for which it was conceived: recognising excellence (that is, the highest levels of organisational quality). More generally, it is effective as a "standard" for estimating organisational quality and thus allowing comparison among different organisations. Any measurement standard is conventional, and so is the EFQM's. But for those who follow the standard (and therefore accept its limits), consistency and comparability of measurements over time are of paramount importance. That is why EFQM Model users, a few years ago, rejected the proposal of substantial changes aimed at updating the model. When a model becomes a standard (even an informal one, as in this case), stability over time is in fact more important than being at the cutting edge of evolution (international standards, like ISO 9000, are an extreme case in point, where intervals between updates should be kept as long as possible, since change creates problems for users).

The Malcolm Baldrige follows a different strategy, which may be defined as incremental improvement. A small update is introduced every year, which does not affect comparability over periods shorter than, roughly, three to four years. The only substantial change came in 1997, when the EFQM model's concept of separating enablers from results was incorporated, together with the attribution of about 50 per cent of the weight to results. As a consequence, since 1997 the difference between the two models has been appreciably reduced.

Where both models – EFQM and Baldrige – are lacking is in self-assessment. But that is an intrinsic lack that cannot be removed until the award organisations decide to waive the claim of covering both the award and self-assessment with the same model and process. One cannot have one's cake and eat it. One cannot manage a model as a recognised standard for measuring (or better, estimating) organisational quality and at the same time promote it as an organisational improvement model, where flexibility and customisation are the name of the game. That is not simply an internal problem of the award organisations. Due to the public image and visibility of such organisations in their respective areas of influence, many model users blindly follow their suggestions. The diffusion of more effective approaches to self-assessment therefore depends largely on the policies followed by the award organisations. They should then be aware of the responsibility that they have in freezing or fostering evolution in the area of performance improvement, self-assessment in particular. The extent to which "dogs will learn how to wag their tails" depends very much on them. In fact they are expected to do not more but less, leaving self-assessment to free competition and encouraging differentiation in the area of organisational improvement models and diagnostic self-assessment.

Figures 5 and 6 summarize the differences between the two basic types of quality assessments:

(1) Assessments aimed at estimating the maturity level of the organisation in quality management, its fitness for purpose (organisational quality).

(2) Assessments aimed at performance improvement through the identification of the causes of existing performance gaps, or of the obstacles to improving on existing goals.

The first, where the assessors should be independent of the assessed organisation, can be either an external assessment (like award assessments) or an internal one (management audits). Its expected output, in addition to being descriptive, is numerical, with partial and global scorings. The second type of assessment involves the entire organisation in the search for all opportunities for improvement. Its approach is bound to be diagnostic.

Figure 5. Award-derived assessments vs diagnostic self-assessment
Figure 6. Synopsis of differences between "award-like" assessments and "diagnostic" self-assessment

If we call self-assessments those assessments that are made by organisations autonomously, for their own purposes and following their own rules, two kinds of self-assessment emerge: management audits (belonging to type 1 above) and diagnostic self-assessment (type 2 above). In reality a third, very peculiar, kind exists: the self-assessment that organisations wanting to participate in an award (or to be assessed by an external organisation according to the award rules) carry out in order to prepare their application report. For that purpose it looks logical to use the award rules, in relation to both the model and the process. Except in this latter case, self-assessment should never be enslaved to the award rules.

As far as the assessment process is concerned, the right-left approach could fit all situations, but it would not add much value to award-like assessments, since diagnosis is outside their scope (even if assessors would better understand the status of the assessed organisation if the results of a diagnostic process were displayed to them). What would be important is the extension of diagnostic processes to all self-assessments made for improvement purposes.

As far as the model is concerned, apart from those cases where it is used as a "measurement standard" (awards and the like), all the constraints should be removed. Excellence requires differentiation and competition, also in the area of models. Even if starting with a "standard" model, the contingency view (that is, adaptation of the model to the characteristics of the organisation) should always be pursued. In line with that concept, I have introduced some examples of generic models that can be completed and customised by the user (Conti, 1997); Figure 7 gives an example of such generic, customisable models. Customisation, besides making self-assessment more effective, helps in "selling" the model to managers, who are always reluctant to adopt models developed elsewhere, in different contexts. If top managers are involved in the adaptation of the model to their own organisation, they will more easily accept and metabolise it – and use it as a normal business tool.

Freeing organisational improvement models from the constraints imposed by specific applications, which in some way freeze their development, will certainly help the development of quality management.

Figure 7. A general systemic model for improvement management


Notes

1. The 14 companies were: Robert Bosch GmbH; British Telecommunications plc; Bull SA; Ciba-Geigy AG; Avions Marcel Dassault-Breguet Aviation; AB Electrolux; Fiat Auto SpA; Koninklijke Luchtvaart Maatschappij N.V.; Nestlé SA; Ing. C. Olivetti & C. SpA; NV Philips' Gloeilampenfabrieken; Régie Nationale des Usines Renault; Gebr. Sulzer AG; Volkswagen AG.

2. To enrich the scarce anecdotes about the European award, it may be interesting to mention that the name of the award was an important subject of discussion in both the executive and the governing committee. The prevailing idea seemed to be to give the award the name of one of the founding fathers of the European Union: Adenauer, De Gasperi, Schuman. Due to the difficulty of choosing one among them, the rather anonymous name "The European Quality Award" was chosen.

3. As can be seen from the figures, the Malcolm Baldrige Model had the form of a checklist, subdivided into a hierarchy of elements in a tree-like structure, with seven main branches (the "categories") and two levels of sub-branches, called respectively "items" and "areas to address". EFQM named the three levels of branches "criteria", "sub-criteria" and "areas to address". The elements of the verification list were chosen among those that were considered by the authors the most critical factors, both from the organisation/management perspective (like leadership, human resources, planning, process management) and from the cultural change perspective (like customer focus, continuous improvement, fact-based management).

4. The jury was composed of Chris Hakes (Bristol Quality Centre), Brian Codling (ICI), Kees van Ham (EFQM), Felix Hes (KDI) and Roy Peacock (EFQM). Mat Vermass was a member but could not attend the meeting.

5. A six-page fax with minutes and schemes, sent by Chris Hakes to the participants (T. Conti's personal records).

6. More precisely, the model we refer to can be defined as an "organisational model for performance improvement", intending by that expression a model of the organisation aimed at describing the cause-effect dynamics that lead to performance improvement. From a systemic perspective it represents both the organisation as an open system and its environment (or supra-system), where customers and stakeholders are located.

7. The rationale for incorporating the measurement of internal results into the "process" category looks particularly evident in the case of service, where the product is often closely interlinked with the process.

References

Bush, D. and Dooley, K. (1989), "The Deming Prize and Baldrige Award: how they compare", Quality Progress, January, pp. 28-30.

Conti, T. (1991), “Company quality assessments”, The TQM Journal, Vol. 3 No. 3.

Conti, T. (1994), "Time for a critical review of quality self-assessment", paper presented at the 1st EOQ Forum on Self-Assessment, Milan.

Conti, T. (1997), Organisational Self-assessment, Chapman & Hall, London.

Crosby, P.B. and Reimann, C. (1991), "Criticism and support for the Baldrige Award", Quality Progress, May, pp. 41-4.

De Carlo, J. and Sterett, W.K. (1990), "History of the Malcolm Baldrige National Quality Award", Quality Progress, March.

EFQM (1988), Letter of Intent, Brussels, 15 September 1988, European Foundation for Quality Management, Brussels, available at: www.efqm.org


EFQM (1989), Policy Document, Montreux, 19 October 1989, European Foundation for Quality Management, Brussels, available at: www.efqm.org

EFQM (1990), "Draft of the model", internal EFQM communication, January, T. Conti's personal records.

EFQM (1991), European Quality Management Forum, Paris, 28/29 October 1991, European Foundation for Quality Management, Brussels, available at: www.efqm.org

Garvin, D. (1991), "How the Baldrige Award really works", Harvard Business Review, Vol. 69 No. 6, pp. 80-93.

Hakes, C. (1991), "Minutes of EFQM meeting", Brussels, T. Conti's personal records.

Labovitz, H. and Chang, Y.S. (1990), “Learn from the best”, Quality Progress, May, pp. 81-5.

Main, J. (1991), “Is the Baldrige overblown?”, Fortune, Vol. 124 No. 1, pp. 62-5.

Quality Progress (1991), "More criticism and support for the Baldrige Award", Quality Progress, August.

About the author
Tito A. Conti is a Consultant in Organisational Diagnosis and Improvement based in Torino, Italy. He can be contacted at: [email protected]
