LETTER

Clarifying the debate on selection methods for engineering: Arrow's impossibility theorem, design performances, and information basis

Johannes F. Jacobs · Ibo van de Poel · Patricia Osseweijer

Received: 10 June 2012 / Revised: 5 July 2013 / Accepted: 8 July 2013
© Springer-Verlag London 2013
Abstract To demystify the debate about the validity of selection methods that utilize aggregation procedures, it is necessary that contributors to the debate are explicit about (a) their personal goals and (b) their methodological aims. We introduce three additional points of clarification: (1) the need to differentiate between the aggregation of preferences and of performances, (2) the application of Arrow's theorem to performance measures rather than to preferences, and (3) the assumptions made about the information that is available in applying selection methods. The debate about decision methods in engineering design would be improved if all contributors were more explicit about these issues.
1 Introduction
In his editorial "My method is better!", Reich (2010) has pointed out that implicit presuppositions and arguments need to be laid out in the open to enable a serious and reflective debate on the validity of methods for dealing with multi-criteria problems in engineering design. In his analysis of the debate, Reich suggested that misinterpretations could be avoided if authors clearly state which kind of goals they strive for when presenting certain arguments. He makes a distinction between the goal to improve design practice and the goal to obtain theoretical rigor. Katsikopoulos (2009, 2012) also analyzed the debate and indicates an important distinction between methodological aims in order to achieve coherence or correspondence.1 He argues that the debate can be improved if authors acknowledge this distinction.

Several authors have thus investigated the debate, and their suggestions will contribute to the development of a more reflective debate. However, we want to point out some further unclarities and implicit assumptions that also need to be addressed to engage in a fully open debate on matrix-based selection methods.
This paper is structured as follows. In section two, we describe the selection methods with a matrix-based structure that are used in engineering design practice to deal with multi-criteria problems. We argue that Arrow's impossibility theorem affects decision-making in engineering design in two distinct ways, namely through the aggregation of preferences and of performances. In the following section, we modify the theorem to apply to design performances as utilized in engineering design rather than to preferences as in Arrow's original formulation. In section four, the presumed amount of information that is available for the multi-criteria selection problem is discussed in terms of uncertainty, comparability, and measurability. In the final section, we draw conclusions by
J. F. Jacobs (✉)
Biotechnology and Society, Department of Biotechnology, Delft University of Technology, Julianalaan 67, 2628 BC Delft, The Netherlands
e-mail: [email protected]

I. van de Poel
Department of Philosophy, School of Technology, Policy and Management, Delft University of Technology, Jaffalaan 5, 2628 BX Delft, The Netherlands

P. Osseweijer
Kluyver Centre for Genomics of Industrial Fermentation, PO Box 5057, 2600 GA Delft, The Netherlands
1 Coherence refers to the internal consistency of the decision-making process (e.g., the method contains no logical contradictions), while correspondence refers to the external consistency (e.g., the success of the decision made in the real world).
Journal: 163 · Article No.: 160 · MS Code: RIED-D-12-00057 · Dispatch: 16-7-2013 · Pages: 8
Res Eng Design
DOI 10.1007/s00163-013-0160-6
suggesting several ways in which the current debate on the validity of matrix selection methods can be improved.
2 Preferences and design performances
Selecting a single or a few promising design concepts from a larger set is an important part of engineering design. The selection of the "best" design concept(s) for further design and development is a decision-making activity that is based on various criteria, which are, in turn, based on the design specification and requirements. Various methodologies have been developed to support this type of multi-criteria selection. Methods that use a matrix structure are frequently applied in engineering design practices, for example, the analytic hierarchy process, the Pahl and Beitz method, Pugh's concept selection,2 quality function deployment, and the weighted-product method (Akao 2004; Hauser and Clausing 1988; Pahl and Beitz 2005; Saaty 1980). These matrix methods visually indicate how the alternative designs are scored on various criteria by tabulating the performances of the design concepts on each criterion in a chart (see Table 1 for an example). A global performance structure (e.g., the scores on the three alternatives in Table 1) is generated by means of an aggregation procedure based on the performances on the various criteria. The global performance structure is used to guide the selection of the best alternative design for further development.
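The aggregation step of such matrix methods can be sketched in a few lines of code. The following minimal example, using the measures from Table 1 with all criterion weights equal to 1, computes the global performance score of each concept as the weighted sum of its per-criterion measures; the data layout is our own illustrative choice, not part of any published method:

```python
# Weighted-sum aggregation as used by decision matrix methods.
# Data from Table 1 of this letter; all criterion weights are 1.

WEIGHTS = {"Yield": 1, "Safety": 1, "Controllability": 1, "Revenues": 1}

# Five-point performance measures (1 = inadequate ... 5 = excellent).
PERFORMANCE = {
    "Concept A": {"Yield": 4, "Safety": 5, "Controllability": 4, "Revenues": 1},
    "Concept B": {"Yield": 5, "Safety": 1, "Controllability": 5, "Revenues": 2},
    "Concept C": {"Yield": 2, "Safety": 3, "Controllability": 2, "Revenues": 4},
}

def weighted_sum(perf, weights):
    """Aggregate per-criterion measures into one global score per concept."""
    return {
        concept: sum(weights[c] * measures[c] for c in measures)
        for concept, measures in perf.items()
    }

scores = weighted_sum(PERFORMANCE, WEIGHTS)   # A: 14, B: 13, C: 11
best = max(scores, key=scores.get)            # "Concept A"
```

The global scores reproduce the "Score" row of Table 1 and point to concept A as the selection.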
This kind of decision method is very similar to a multi-criteria decision analysis in which the diverse performances are aggregated into one overall performance score. The aggregation of diverse measures into a global measure is, however, not straightforward. A similar aggregation problem has been studied within the fields of voting theory and welfare economics, which later formed the field of social choice theory. A huge literature was initiated, most importantly by the works of Arrow (1950) and May (1952), in which the aggregation of individual preferences into a collective preference of a group is analyzed. Various authors have claimed that the results of these theories are applicable to multi-criteria decision analysis, because the problems of social choice and multi-criteria decision analysis are structurally identical (Franssen 2005; Bouyssou et al. 2009).
One of the best-known results from social choice theory is the famous impossibility theorem that was proved by the economist Kenneth Arrow in 1950. Arrow's impossibility theorem states that if there are a finite number of individuals and there are at least three options to choose from,3 no aggregation method can simultaneously satisfy five general conditions [see Sen (1995) for a short argument proof or Blackorby et al. (1984) for a geometric proof]. These conditions are as follows:

• Collective rationality: The collective preference ordering4 must be complete and transitive. The preference ordering is complete if it accounts for all considered options. Transitivity demands that if option A is preferred to B and B to C, A is also preferred to C.
• Independence of irrelevant alternatives: The collective preference ordering has the same ranking of preferences among a subset of options as it would for a complete set of options.
• Non-dictatorship: The collective preference ordering may not be determined by a single preference order.
• Unrestricted domain: The profile of single preference orders is only restricted with respect to transitivity and completeness.
• Weak Pareto principle: Provided that one option is preferred to another option in all single preference orders, it must also be preferred in the resulting collective preference order.
The theorem prevents the construction of a generally acceptable aggregation method that combines the single preferences into a collective preference structure, such that it represents the preferences of the group as a whole and satisfies the five conditions mentioned. Arrow's impossibility theorem only bars general procedures, because aggregation may still be possible in specific cases in which the single preferences align in very specific ways; for example, when all agents prefer the same outcome over all
Table 1 Weighted-sum method with the performance measure that is based on five-point ranks

Criteria         Weight  Concept A  Concept B  Concept C
Yield            1       4          5          2
Safety           1       5          1          3
Controllability  1       4          5          2
Revenues         1       1          2          4
Score            –       14         13         11

Performance measure: 1 = inadequate, 2 = weak, 3 = satisfactory, 4 = good, 5 = excellent
2 The earlier Pugh's controlled convergence method (Pugh 1991, 1996) is mainly a tool to enhance design creativity. Several others have adapted the method into a selection tool by calculating net scores (Ulrich and Eppinger 2008) or overall ratings (Eggert 2005). These Pugh-like methods are also known as Pugh's concept selection methods.

3 The impossibility result does not hold in the situation that there are only two alternatives to choose from; see, for example, May's theorem.

4 A single preference ordering refers to the set of ordinal measures between different options, while the collective preference ordering refers to the aggregated single preference orderings over these same options.
others or when all agents have single-peaked preferences over the considered outcomes.
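The general impossibility can be made concrete with the classic Condorcet cycle: pairwise majority voting, applied to three individually complete and transitive preference orders, yields a cyclic and hence intransitive collective ordering, violating the collective rationality condition above. A minimal sketch (the three preference orders are a standard textbook illustration, not data from this letter):

```python
# Condorcet cycle: transitive individual orders, intransitive majority outcome.

# Each order lists options from most to least preferred; all three orders are
# complete and transitive, so the unrestricted-domain condition is respected.
orders = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x, y, orders):
    """True if a strict majority of the orders ranks x above y."""
    wins = sum(1 for o in orders if o.index(x) < o.index(y))
    return wins > len(orders) / 2

# Pairwise majority verdicts: A beats B, B beats C, and yet C beats A.
cycle = (majority_prefers("A", "B", orders)
         and majority_prefers("B", "C", orders)
         and majority_prefers("C", "A", orders))
```

Since every pairwise contest is decided 2-to-1 and the verdicts chase each other in a circle, no complete and transitive collective ordering can represent them.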
In engineering design, the choice between design concepts is usually a team effort, involving designers, clients, and managers who may all have different opinions on what the "best" design is. The combination of all these various single preferences into a final decision that selects the "best" design among various alternatives is very similar to the type of problem that is considered in social choice theory. Various authors have identified these similarities of selections made by groups and have claimed that Arrow's impossibility theorem affects engineering design decisions in this manner (Hazelrigg 1996; Lowe and Ridgway 2000; Van de Poel 2007).
This is, however, not the only way in which Arrow's impossibility theorem affects engineering design. The aggregation problem of design performances into a global performance structure is also similar in nature to the aggregation problem of Arrow. Nonetheless, there is a difference. The most commonly applied aggregation procedures are indeed quite similar to those studied in social choice theory, but the object of aggregation is not. In the case of social choice theory, as well as in the case of group selection in engineering design, the object of aggregation is individual preferences. In contrast, in the selection of design alternatives, the object of aggregation is performances on design criteria.

To summarize, there are two distinct ways Arrow's impossibility theorem can affect decision-making in engineering design, namely through the aggregation problem of (a) preferences and (b) design performances.
The distinction between preferences and design performances could help to clarify the existing debate. An example is the discussion in this journal about the way Arrow's impossibility theorem affects the Pugh controlled convergence method. Franssen challenges the Pugh controlled convergence method by arguing that the method "explicitly aims to arrive at a global judgment on the relative worth of several design concepts" (Franssen 2005, p 54) and that the "preference order is not explicitly part of Pugh's method, but its existence must be presumed in order to understand how the designer arrives at the comparative judgments" (Franssen 2005, p 55). Franssen, therefore, concludes that the Pugh controlled convergence method will "run into the kind of difficulties associated with Arrow's theorem" (Franssen 2005, p 55). In essence, he claims that the method presumes the formation of preference, even though the method uses a set of design criteria that would indicate the use of design performances. Furthermore, Franssen claims that these presumed preference structures on various criteria are aggregated to achieve a selection, which implies that the method can be challenged with the impossibility theorem.
Hazelrigg and Frey et al. also discuss the validity of the Pugh controlled convergence method. These authors disagree with each other about whether the Pugh method entails voting, because voting would entail the aggregation of preferences of individual persons, and as a result, the method could be challenged along the lines of Arrow's impossibility theorem. Frey and coauthors "note that there is no voting in Pugh's method" (Frey et al. 2009, p 43). Hazelrigg responds by pointing out that "voting is used in the Pugh method, firstly to obtain consensus on the relative merits of candidate designs compared to the datum design and second to aggregate the symbols … in the Pugh matrix" (Hazelrigg 2010, p 143). In a reply, Frey et al. argue against the position of Hazelrigg and hold that for building consensus "there is not voting in the Pugh method" (Frey et al. 2010, p 147). The discussion is about the need for voting in the practical utilization of the Pugh controlled convergence method, since the method does not explicitly state how consensus should be reached (Pugh 1991, 1996). The Pugh controlled convergence method does explicitly prevent any general attempt to aggregate the performance measures in the matrix, because it is stated that "[t]he scores or numbers must not, in any sense, be treated as absolute; they are for guidance only and must not be summed algebraically" (Pugh 1991, p 77). Hence, Arrow's impossibility theorem cannot be used to challenge the method on this specific point.
Another example is the debate among Hazelrigg, Franssen, Scott, and Antonsson about the importance of Arrow's impossibility theorem for engineering design, in which the issue seems to be the switch between the two different measures. Hazelrigg (1996) claims that the theorem "holds great importance to the theory of engineering design" and that the methods "Total Quality Management (TQM) and Quality Function Deployment (QFD) are logically inconsistent and can lead to highly erroneous results" (Hazelrigg 1996, p 161). Hazelrigg claims that "customer satisfaction, taken in the aggregate for the group of customers, does not exist" (Hazelrigg 1996, p 163). Hence, the arguments of Hazelrigg are based on the aggregation of single preferences into a collective preference structure. Scott and Antonsson (1999) have challenged the conclusion of Hazelrigg. In the view of Scott and Antonsson, the "engineering design decision problem is a problem of decision with multiple criteria" (Scott and Antonsson 1999, p 218). The authors claim that in "engineering design, there may be many people involved, but decisions still depend upon
the aggregation of engineering criteria" (Scott and Antonsson 1999, p 220). Hence, the authors are considering design performances instead of preferences. Scott and Antonsson reject the conclusions of Hazelrigg on the grounds that the conditions of Arrow's theorem are unreasonable for design performances. In turn, Franssen (2005) criticizes the conclusion of Scott and Antonsson. Franssen argues that "Arrow's theorem applies to multi-criteria decision problems in engineering design as well" (Franssen 2005, p 43). He claims that the aggregation problem of preferences is structurally identical to that of design performances and that the conditions are reasonable for engineering design, if interpreted in the right way.

This leads us to the following point of clarification. Although many authors have discussed the implications of Arrow's impossibility theorem on engineering design with regard to the aggregation problem of performance measures, no one has, to our knowledge, explicitly adapted the theorem and its conditions from the context of preferences to the context of design performances.
3 Impossibility theorem for performance aggregation
We have adapted the five conditions of Arrow's impossibility theorem on preference aggregation into an Arrowian type of impossibility theorem for performance aggregation in engineering design5 as follows:

• Independence of irrelevant concepts: The global performance structure6 between two design concepts depends only on their performances and not on the performances of other design concepts.
• Non-dominance: The global performance may not be determined by a single performance structure.
• Unrestricted scope: The performance structures on a single criterion as well as the global performance structure are only restricted with respect to transitivity, reflexivity, and completeness. Completeness requires that the performance structure accounts for all design concepts. A performance structure is reflexive if the separate performance measures can be compared to themselves. Transitivity requires that if performance A is related to performance B and performance B in turn is related in the same way to performance C, then performance A is likewise related to performance C.7
• Weak Pareto principle: Provided that one design concept is strictly better than another design concept in all the single performance structures, it must also be strictly better in the resulting global performance structure for these two concepts.
The Arrowian impossibility theorem for performance aggregation in conceptual engineering design then states that if there is a finite number of evaluation criteria and there are at least three alternative design concepts, no aggregation method can simultaneously satisfy independence of irrelevant concepts, non-dominance, unrestricted scope, and the weak Pareto principle. Similarly to the original theorem, this theorem only shows that such an aggregation is unattainable in general; it may still be possible in very specific cases, for instance, when all criteria give single-peaked performances over the considered design concepts. Aggregation is also possible when one design concept has the best performance on all the listed criteria. Such cases will be the exception rather than the rule in engineering design practice, in which most design decisions involve trade-offs and value-based choices. The four Arrowian conditions are thus already enough to prove the general impossibility result, even though we might wish to impose other conditions on decision-aiding tools for engineering design, such as non-manipulability (preventing unfair influence on the results) and separability (allowing reduction of the number of design concepts in several subsequent phases).
Prima facie, the above-described conditions seem quite reasonable for engineering design practices. If one accepts the conditions, the Arrowian impossibility theorem indicates that the decision matrix methods, which aggregate the performance measures p_ij (i = 1, …, n; j = 1, …, m) over n options on m criteria into a global performance structure s_i (i = 1, …, n), can generate misleading conclusions by introducing significant logical errors in the decision process. This point can be illustrated by an example, which is depicted in Tables 1 and 2. Suppose that there are three conceptual designs of which only one will have to be chosen for further development. The design
5 The theorem is formulated in such a way that it is consistent with all three fundamental measures of scale as discussed in Sect. 4.2. The original theorem of Arrow, as presented in Sect. 2, is restricted to ordinal measures of preference.

6 A performance structure refers to the set of relations between different design concepts. The global performance structure (e.g., the scores on the three alternative concepts in Table 2) is the resulting set of aggregated relations of the performance structures on the various criteria (e.g., the ranks on the three alternative concepts on the four criteria in Table 2).

7 In a more formal sense, reflexivity means that for all a that are an element of set A, it holds that aRa, with R representing a binary relation on set A (e.g., a is greater than or equal to a). Transitivity means that for all a, b, and c that are an element of set A, it holds that aRb and bRc imply aRc (e.g., a is equal to b and b is equal to c, implying that a is equal to c). The asymmetric part of R is the binary relation P defined by aPb being equivalent to aRb and not bRa. The symmetric part of R is the relation I defined by aIb being equivalent to aRb and bRa. In social choice theory, the relation is normally taken as binary; nonetheless, the binariness of choice is unimportant for the impossibility result of Arrow's theorem (see Sen 1986).
team has made a list of four evaluation criteria, namely production yield, process safety, controllability of the system, and economic revenues. To facilitate the choice, the design team uses the weighted-sum method (sometimes referred to as the Pahl and Beitz method) and scores the performance measures on a five-point rank. The resulting decision matrix is given in Table 1 and indicates that design concept A should be selected by the design team. Now, suppose that the team uses a three-point rank instead, in which the concept with the best performance on a criterion receives three points, the worst scoring concept only one point, and the in-between option two points (see Table 2). Using the same aggregation procedure, the weighted-sum method now indicates that concept B should be selected by the design team instead. The decision procedure of point ranking in combination with weighted-sum aggregation thus gives rationally inconsistent results. In terms of the Arrowian impossibility theorem, this distortion is the result of violating the "independence of irrelevant concepts" condition. The three-point rank allows a comparison of three design concepts, while two additional concepts can be evaluated on a five-point rank. Hence, the additional concepts that are not shown influence the global performance structure and so violate the Arrowian condition.8
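The reversal in Tables 1 and 2 can be reproduced mechanically. The sketch below is our own illustration of the example above, not code from any of the cited methods: it converts the five-point measures of Table 1 into per-criterion three-point ranks and applies the identical weighted-sum aggregation (all weights 1) to both, upon which the winner flips from concept A to concept B:

```python
# Rank reversal under re-scaling: the example of Tables 1 and 2.

FIVE_POINT = {  # Table 1 measures (1 = inadequate ... 5 = excellent)
    "A": {"Yield": 4, "Safety": 5, "Controllability": 4, "Revenues": 1},
    "B": {"Yield": 5, "Safety": 1, "Controllability": 5, "Revenues": 2},
    "C": {"Yield": 2, "Safety": 3, "Controllability": 2, "Revenues": 4},
}
CRITERIA = ["Yield", "Safety", "Controllability", "Revenues"]

def to_three_point(perf):
    """Rank the concepts per criterion: best = 3, middle = 2, worst = 1."""
    ranks = {concept: {} for concept in perf}
    for crit in CRITERIA:
        ordered = sorted(perf, key=lambda c: perf[c][crit])  # worst .. best
        for points, concept in enumerate(ordered, start=1):
            ranks[concept][crit] = points
    return ranks

def total(perf):
    """Weighted sum with all criterion weights equal to 1."""
    return {concept: sum(measures.values()) for concept, measures in perf.items()}

five_scores = total(FIVE_POINT)                    # Table 1: A=14, B=13, C=11
three_scores = total(to_three_point(FIVE_POINT))   # Table 2: A=8,  B=9,  C=7
winner_5 = max(five_scores, key=five_scores.get)   # concept A
winner_3 = max(three_scores, key=three_scores.get) # concept B
```

The same underlying judgments thus select different concepts depending only on the scale chosen, which is exactly the inconsistency the Arrowian condition rules out.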
4 Information availability
A third point of difficulty for a serious reflective debate is the presumed amount of information that is available for the selection of alternative solutions on the basis of multiple criteria in engineering design. We will discuss the uncertainties that plague engineering design work as well as the information basis of the performance measures in relation to preference and performance aggregation.
4.1 Design for an uncertain future
353The selection methods aim to support the process of
354selecting a single or a few promising design concepts from
355among various alternative concepts. In order to make a
356decision, the performance of the designed artifact has to be
357evaluated. The design concepts under evaluation are only
358abstract embodiments of designs, which are given physical
359shape in detailed design and subsequent construction. The
360decision maker thus needs to predict the final physical form
361of the artifact under design. Furthermore, to evaluate the
362performance of this still to-be created artifact, the decision
363maker needs to envision the mode of the artifact’s appli-
364cation and the effects this application will bring about. In
365other words, the decision process requires the prediction of
366future states. These predictions are made under uncertainty,
367which is the result of various factors: lack of knowledge
368(known unknowns), ignorance (unknown unknowns), sys-
369tem complexity, and ambiguity.
The degree of uncertainty depends on the type of design. Both Vincenti (1992) and Van Gorp (2005) make a distinction between normal and radical design. In normal design, both the operational principle and the configuration are kept similar to already implemented designs, while in radical design, the operational principle and/or configuration deviates from convention or is unknown. In normal design, the degree of uncertainty in predicting future states is smaller than in radical design, because in the former case there is an effective basis of experience on which to base the predictions. The degree of uncertainty faced in the decision process also depends on the lifetime of the artifact and/or the duration of its (potential) effects: forecasting becomes more difficult as it spans larger amounts of time. The degree of uncertainty also depends on the design phase (e.g., conceptual, preliminary, or detailed). As the design project proceeds, more information becomes available and more features become defined. Moreover, the time span to the actual construction and application becomes discernibly smaller, thereby decreasing uncertainty. For the clarity of the debate, it is thus important that authors make explicit what degree of uncertainty the selection method is expected to cope with in relation to the design type, design phase, and kind of artifact.
In the impossibility theorems for preference aggregation, as well as in the one for performance aggregation, it is assumed that adequate predictions can be made so that preferences and performance measures can be obtained without the burden of uncertainty. It is thus clear that the uncertainties, especially in the conceptual phase of radical design, will complicate the selection procedure beyond the difficulties presented by the impossibility theorem. Although normal design work in the detailing phase has far fewer uncertainties about the possible future states that
Table 2 Weighted-sum method with a different performance measure that is instead based on three-point ranks

Criteria         Weight   Concept A   Concept B   Concept C
Yield            1        2           3           1
Safety           1        3           1           2
Controllability  1        2           3           1
Revenues         1        1           2           3
Score            –        8           9           7

Performance measure: 1 = worst, 2 = neutral, 3 = best
8 It should be noted that the point ranking as used in the example can, at best, be interpreted as ordinal scale measures. If the point ranks are measured on an ordinal scale (see Sect. 4.2), the weighted-sum method gives meaningless scores, because it uses an inapplicable arithmetic operation (addition of performance measures) to calculate the scores of the global performance.
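Footnote 8's warning can be made concrete with a small sketch. The Table 2 ranks happen to yield a winner that is stable under any order-preserving recoding, so the ranks below are hypothetical (concepts "A" and "B" and their scores are invented for illustration): two equally legitimate monotone recodings of the same ordinal data crown different weighted-sum winners, which is exactly why addition of ordinal ranks is meaningless.

```python
# Footnote 8 illustrated: ordinal ranks admit any order-preserving
# recoding, so sums over them carry no information. Hypothetical
# ranks (1 = worst, 3 = best) for two concepts on three equally
# weighted criteria; these are NOT the Table 2 data.
ranks = {"A": [3, 1, 1], "B": [2, 2, 2]}

def winner(recode):
    """Concept with the highest equal-weight sum after applying an
    order-preserving recoding to the ordinal ranks."""
    scores = {c: sum(recode[r] for r in rs) for c, rs in ranks.items()}
    return max(scores, key=scores.get)

identity = {1: 1, 2: 2, 3: 3}   # monotone increasing
stretched = {1: 1, 2: 2, 3: 9}  # also monotone increasing

print(winner(identity))   # B (scores: A = 5, B = 6)
print(winner(stretched))  # A (scores: A = 11, B = 6)
```

Both recodings respect the ordinal information equally well; the method, not the data, decides the outcome.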
Res Eng Design
Journal: Large 163   Dispatch: 16-7-2013   Pages: 8   Article No.: 160   MS Code: RIED-D-12-00057
Author Proof — UNCORRECTED PROOF
need to be assessed in selecting design concepts, it does not avoid the clutches of the impossibility theorem. Even in this case, the information basis is too limited for proper aggregation due to problems of measurability and comparability.
4.2 Measurability
In his work, Sen (1977) has pointed out that the impossibility result of Arrow's theorem can be interpreted from an informational perspective. Arrow (1950) did not include the uncertainties surrounding the interpretation of future states, which are necessary for the formation of preferences. He presupposed an ideal situation in which the options of selection are known. This makes his impossibility result so strong, because in practice even more difficulties will arise. Furthermore, Arrow (1950) assumes that certain information is not available, as he presumes the incomparability of preferences and supposes that the measurability of preferences is restricted to ordinal scales. It is possible to relax this restriction by assuming measurements on interval or ratio scales, which would make intensity and ratio information available for the decision. The most important properties of the four fundamental measurement scales are presented in Table 3 (Stevens 1946).
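The admissible transformations in Table 3 determine which statements survive a change of scale. A quick numerical check, using the table's own temperature and weight examples (values chosen only for illustration):

```python
# Admissible transformations per Stevens (1946), as summarized in
# Table 3: interval scales tolerate positive affine maps, ratio
# scales only positive scalar maps. Illustrative values only.
celsius = [0.0, 10.0, 30.0]
fahrenheit = [c * 9 / 5 + 32 for c in celsius]  # positive affine map

# Interval information (comparisons of differences) survives the
# affine map: the ratio of differences is 2.0 on both scales.
d_c = (celsius[2] - celsius[1]) / (celsius[1] - celsius[0])
d_f = (fahrenheit[2] - fahrenheit[1]) / (fahrenheit[1] - fahrenheit[0])
assert d_c == d_f == 2.0

# Ratio information does NOT survive an affine map:
assert celsius[2] / celsius[1] != fahrenheit[2] / fahrenheit[1]

# A similarity (scalar) transform, e.g. kilograms to pounds,
# preserves ratios, which is what a ratio scale requires:
kg = [2.0, 8.0]
lb = [x * 2.20462 for x in kg]
assert kg[1] / kg[0] == lb[1] / lb[0] == 4.0
```

This is why intensity information becomes available on interval scales and ratio information only on ratio scales.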
However, allowing for more information about preference/performance measurements on interval scales does not avoid the clutches of an Arrowian type of theorem. Kalai and Schmeidler (1977) presented a proof of an impossibility theorem that uses interval scale information; however, this proof needs an additional condition. Hylland (1980) extended the proof of Kalai and Schmeidler in such a way that the added condition is not required.
One step further is to consider ratio scale information for preference/performance measurements. However, this too offers no way out of the impossibility result, because Tsui and Weymark (1997) have shown that the theorem also holds when one allows for ratio scale measurements. Furthermore, it seems unfeasible to obtain such scales for the performance/preference measures, because design concepts are abstract embodiments that still need to be given physical form. The performance/preference measures thus refer to mental notions about what counts as a "good" design. The measure represents a (value) judgment, and arguably, this does not allow for ratio scale measurements. Moreover, almost no examples of ratio scale measurement exist in the behavioral sciences. Summarizing, the Arrowian type of theorem for multi-criteria concept decisions in engineering design can be extended to interval and ratio scale measurements of preferences and performances, though the impossibility result is preserved.
4.3 Comparability
Let us turn to the comparability part of the informational restriction. Arrow's theorem in social choice theory is set up "to exclude interpersonal comparison of social utility either by some form of direct measurement or by comparison with other alternative social states" (Arrow 1950, p 342). For the Arrowian impossibility theorem for multi-criteria concept decisions in engineering design, this would mean that it rules out the possibility of making direct comparisons between the performances of the conceptual designs on various design criteria. In the field of social choice theory, the possibility of interpersonal comparisons has been investigated. Hammond (1976) has shown that there are some possible aggregation methods when the information basis is limited to ordinal scale comparability, even when using stronger forms of the Pareto principle and non-dictatorship conditions plus an additional separability condition. D'Aspremont and Gevers (1977) have proved the feasibility of aggregation with interval scales and unit comparability. Deschamps and Gevers (1978) have developed this further for cases of full comparability. Roberts (1980) showed that there are even more possible aggregation methods when allowing for ratio scale comparability, under the conditions of Arrow's impossibility theorem. The mutual effects of comparability and measurement scales on the feasibility of aggregation are presented in Table 4. It thus becomes clear that if Arrow's theorem is relaxed in such a way that it allows for more preference/performance information in the form of comparability, there are admissible aggregation methods. However, the comparability of preference as well as performance measures is not straightforward.9 Comparability means that
Table 3 Four fundamental measurement scales and their properties

Scale        Mathematical group structure   Admissible transformation   Example
Nominal^a    Permutation                    One-to-one substitution     Classification of data
Ordinal      Isotonic                       Monotonic increasing        Mohs scale of mineral hardness
Interval^b   General linear                 Positive affine             Celsius scale of temperature
Ratio        Similarity                     Scalar                      Kilogram scale of weight

^a Also referred to as a categorical scale
^b In economics, the term "cardinal value scale" is used
9 Even if one allows for interpersonal comparability, it is not obvious that the analogue for inter-criteria comparability also holds. In interpersonal comparisons, preferences could be compared with respect to the same value (e.g., human welfare), whereas design criteria may relate to different values.
there is a relationship between the various measurement scales for preferences or performances. For example, assume that the strength of the product, the speed of production, and the needed investment cost are used as performance measures that account for a "good" design. The strength of the product cannot be lowered below a certain level regardless of the possible gains in production speed or reduction in investment cost, because the design will certainly fail and be of no value. This results in a weak form of comparability, because trade-offs can only be made within a limited range.10 So, to escape the impossibility result, all the windows of comparability should match, which would be the exception rather than the rule in engineering design.
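To see what a feasible rule from the "Comparable / Ordinal scale" cell of Table 4 looks like, the leximin rule from that literature (Hammond 1976; Deschamps and Gevers 1978) can be sketched. This is only an illustration of the informational point, applied here to the three-point ranks of Table 2; nothing in it suggests leximin is the right rule for a given design problem.

```python
# Leximin: order alternatives by their worst criterion score, break
# ties by the next-worst score, and so on. It uses only ordinal
# information that is comparable ACROSS criteria, i.e. the
# informational basis of the "Comparable / Ordinal scale" cell of
# Table 4. Scores are the three-point ranks of Table 2 (1 = worst).
performance = {
    "A": [2, 3, 2, 1],
    "B": [3, 1, 3, 2],
    "C": [1, 2, 1, 3],
}

def leximin_best(alternatives):
    """Alternative whose ascending-sorted score vector is
    lexicographically largest (maximin, refined at later positions)."""
    return max(alternatives, key=lambda a: sorted(alternatives[a]))

print(leximin_best(performance))  # B: [1,2,3,3] beats [1,2,2,3] and [1,1,2,3]

# Because only ordering is used, any common monotone recoding of the
# ranks leaves the leximin choice unchanged:
recoded = {c: [{1: 1, 2: 5, 3: 9}[r] for r in rs]
           for c, rs in performance.items()}
assert leximin_best(recoded) == leximin_best(performance)
```

The invariance under common monotone recodings is precisely what makes the rule admissible with ordinal but inter-criteria comparable measures.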
There is a still more pressing argument against the comparability of preference/performance measures in engineering design practice. The reality of modern engineering design is that engineers are increasingly required to factor value-laden criteria (e.g., safety, sustainability, and reliability) into their decision-making process as early as the conceptual design phase. By their very nature, some moral values are considered incomparable. These moral values, which are precluded from trade-offs, are called protected or sacred values (Baron and Spranca 1997; Tetlock 2003), and the inclusion of such value-laden measures will make comparison between them next to impossible. All things considered, it is in principle possible to avoid the kind of difficulties associated with Arrow's theorem by using comparable preference/performance measures; however, in our opinion it would be very exceptional to find that all considered measures are comparable in engineering design practice.
The current discussion would be improved if authors were explicit about the information basis that they assume to be available. This can be illustrated by the seemingly contradictory opinions of Hazelrigg (1999), Franssen (2005) and Keeney (2009). Keeney claims that "Arrow's impossibility theorem has been misinterpreted by many" (Keeney 2009, p 14), and he explicitly mentions Hazelrigg and Franssen as examples. However, this is not a case of misinterpretation, but of different assumptions about the information basis. Keeney provides a framework, very similar to that of Arrow's theorem, in which the preferences are expressed as von Neumann–Morgenstern expected utilities. In this framework, interpersonal comparability of these preferences is assumed, because "the group expected utility Uj of an alternative is calculated using the individual's expected utilities Ujk" (Keeney 2009, p 14). In contrast, Hazelrigg and Franssen do not explicitly deviate from the original comparability assumption of Arrow and thus exclude interpersonal comparability.
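The interpersonal-comparability assumption attributed to Keeney (2009) above can be sketched in a few lines. The quote only says that a group utility Uj is calculated from individual expected utilities Ujk; the equal-weight average below is one common instantiation, chosen purely for illustration, and the alternatives and utility values are invented.

```python
# Hedged sketch of the interpersonal-comparability assumption in a
# Keeney-style framework: once individual expected utilities U_jk
# are taken to live on a common scale, a group utility U_j can be
# computed from them. Equal weighting is an ASSUMPTION made here
# for illustration; alternatives and utilities are hypothetical.
individual_utilities = {  # alternative -> per-person expected utilities
    "alt1": [0.9, 0.2, 0.6],
    "alt2": [0.5, 0.6, 0.7],
}

def group_utility(utilities):
    """Equal-weight average of individual expected utilities."""
    return sum(utilities) / len(utilities)

best = max(individual_utilities,
           key=lambda a: group_utility(individual_utilities[a]))
print(best)  # alt2 (0.60 vs 0.57) under this assumed equal weighting
```

Every arithmetic step here trades utility across persons, which is exactly the information Arrow's original framework withholds; that is the difference in assumed information basis, not a misinterpretation.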
5 Conclusions
In order to achieve the reflective debate Reich (2010) envisioned in his editorial, it is necessary that authors are explicit about their personal goals as well as methodological aims in terms of coherence and/or correspondence (Katsikopoulos 2009, 2012). We have presented here several additional issues that cloud the current debate. First of all, difficulties in the debate result from the fact that Arrow's impossibility theorem can affect decision-making in engineering design in two distinct ways: (a) it impedes methods to aggregate preferences and (b) it obstructs ways to combine various performance criteria. Secondly, misconceptions are sometimes caused by an unclear translation of Arrow's original impossibility theorem to the aggregation problem of design performances. In order to resolve this ambiguity, we have presented the Arrowian kind of impossibility theorem for performance aggregation. Thirdly, clarity is also required about the uncertainties associated with the predictability of future states in engineering design decisions, which are dependent on the considered (a) kind of designed artifact, (b) type of design, and (c) design phase. Finally, the debate would be clarified if the assumed information basis for the decision is made explicit, especially with regard to comparability and measurability. We think the explicit consideration of all these issues will go a long way toward a truly
Table 4 Aggregation of preference/performance structures into a global preference/performance structure in accordance with the Arrowian conditions, depending on measurability and comparability

Information basis   Ordinal scale             Interval scale                           Ratio scale
Incomparable        Impossible (Arrow 1950)   Impossible (Hylland 1980)                Impossible^a (Tsui and Weymark 1997)
Comparable^b        Feasible (Hammond 1976)   Feasible^c (Deschamps and Gevers 1978)   Feasible (Roberts 1980)

^a A feasibility result follows if only nonnegative or positive preference/performance measures are used
^b The notion of comparability varies with the choice of measurement scales
^c See d'Aspremont and Gevers (1977) for proof of a feasibility result under unit comparability instead of full comparability
10 To make performance measures more "comparable," mathematical normalization procedures are sometimes applied. However, such normalization only shifts the comparability issue away from the aggregation method toward the normalization procedure and hence does not resolve it.
reflective debate on decision-making methods for engineering design.
Acknowledgments The authors would like to thank all the participating engineers as well as engineering students for their help, Tanja Klop for the inspirational discussions on measurement theory, and Maarten Franssen for his invaluable comments on several versions of this paper. This research was financially supported by the 3TU Centre for Ethics and Technology (www.ethicsandtechnology.eu) and the Kluyver Centre for Genomics of Industrial Fermentation, which is part of the Netherlands Genomics Initiative/Netherlands Organization for Scientific Research (www.kluyvercentre.nl).
References

Akao Y (2004) Quality function deployment: integrating customer requirements into product design. Productivity Press, New York
Arrow KJ (1950) A difficulty in the concept of social welfare. J Polit Econ 58:328–346. doi:10.1086/256963
Baron J, Spranca M (1997) Protected values. Organ Behav Hum Decis Process 70:1–16. doi:10.1006/obhd.1997.2690
Blackorby C, Donaldson D, Weymark JA (1984) Social choice with interpersonal utility comparisons: a diagrammatic introduction. Int Econ Rev 25:327–356. http://www.jstor.org/pss/2526200
Bouyssou D, Marchant T, Perny P (2009) Social choice theory and multicriteria decision aiding. In: Bouyssou D, Dubois D, Pirlot M, Prade H (eds) Decision-making process: concept and method. ISTE Ltd., London, pp 741–770
D'Aspremont C, Gevers L (1977) Equity and the informational basis of collective choice. Rev Econ Stud 44:199–209. http://www.jstor.org/stable/2297061
Deschamps R, Gevers L (1978) Leximin and utilitarian rules: a joint characterization. J Econ Theory 17:143–163. doi:10.1016/0022-0531(78)90068-6
Eggert RJ (2005) Engineering design. Pearson Education, New Jersey
Franssen M (2005) Arrow's theorem, multi-criteria decision problems and multi-attribute preferences in engineering design. Res Eng Des 16:42–56. doi:10.1007/s00163-004-0057-5
Frey DD, Herder PM, Wijnia Y, Subramanian E, Katsikopoulos K, Clausing DP (2009) The Pugh controlled convergence method: model-based evaluation and implications for design theory. Res Eng Des 20:41–50. doi:10.1007/s00163-008-0056-z
Frey DD, Herder PM, Wijnia Y, Subramanian E, Katsikopoulos K, de Neufville R, Oye K, Clausing DP (2010) Research in engineering design: the role of mathematical theory and empirical evidence. Res Eng Des 21:145–151. doi:10.1007/s00163-010-0085-2
Hammond PJ (1976) Equity, Arrow's conditions, and Rawls' difference principle. Econometrica 44:793–804. http://www.jstor.org/stable/1913445
Hauser J, Clausing D (1988) The house of quality. Harv Bus Rev 66:63–74. doi:10.1225/88307
Hazelrigg GA (1996) The implications of Arrow's impossibility theorem on approaches to optimal engineering design. J Mech Des 118:161–164. doi:10.1115/1.2826864
Hazelrigg GA (1999) An axiomatic framework for engineering design. J Mech Des 121:342–347. doi:10.1115/1.2829466
Hazelrigg GA (2010) The Pugh controlled convergence method: model-based evaluation and implications for design theory. Res Eng Des 21:143–144. doi:10.1007/s00163-010-0087-0
Hylland A (1980) Aggregation procedure for cardinal preferences: a comment. Econometrica 48:539–542. http://www.jstor.org/stable/1911117
Kalai E, Schmeidler D (1977) Aggregation procedure for cardinal preferences: a formulation and proof of Samuelson's impossibility conjecture. Econometrica 45:1431–1438. http://www.jstor.org/stable/1912309
Katsikopoulos KV (2009) Coherence and correspondence in engineering design: informing the conversation and connecting with judgment and decision-making research. Judgm Decis Mak 4:147–153. http://journal.sjdm.org/cck/cck.pdf
Katsikopoulos KV (2012) Decision methods for design: insights from psychology. J Mech Des 134:084504.1–084504.4. doi:10.1115/1.4007001
Keeney RL (2009) The foundations of collaborative group decisions. Int J Collab Eng 1:4–18. doi:10.1504/IJCE.2009.027438
Lowe AJ, Ridgway K (2000) Optimization impossible? The importance of customer segmentation in quality function deployment. Qual Prog 33:59–64. http://www.asq.org/data/subscriptions/qp/2000/0700/qp0700lowe.pdf
May KO (1952) A set of independent necessary and sufficient conditions for simple majority decision. Econometrica 20:680–684. doi:10.2307/1907651
Pahl G, Beitz W (2005) Engineering design: a systematic approach. Springer, London
Pugh S (1991) Total design: integrated methods for successful product engineering. Addison-Wesley, Harlow
Pugh S (1996) Creating innovative products using total design. Addison-Wesley, Reading
Reich Y (2010) Editorial: is my method better? Res Eng Des 21:137–142. doi:10.1007/s00163-010-0092-3
Roberts KWS (1980) Interpersonal comparability and social choice theory. Rev Econ Stud 47:421–446. http://www.jstor.org/stable/2297002
Saaty TL (1980) The analytic hierarchy process: planning, priority setting, resource allocation. McGraw-Hill, New York
Scott MJ, Antonsson EK (1999) Arrow's theorem and engineering design decision making. Res Eng Des 11:218–228. doi:10.1007/s001630050016
Sen A (1977) On weights and measures: informational constraints in social welfare analysis. Econometrica 45:1539–1572. http://www.jstor.org/stable/1913949
Sen A (1986) Chapter 22: social choice theory. In: Arrow KJ, Intriligator MD (eds) Handbook of mathematical economics, vol III. Elsevier, Amsterdam, pp 1073–1181. doi:10.1016/S1573-4382(86)03004-7
Sen A (1995) Rationality and social choice. Am Econ Rev 85:1–24. http://www.jstor.org/stable/2117993
Stevens SS (1946) On the theory of scales of measurement. Science 103:677–680. doi:10.1126/science.103.2684.677
Tetlock PE (2003) Thinking the unthinkable: sacred values and taboo cognitions. Trends Cogn Sci 7:320–324. doi:10.1016/S1364-6613(03)00135-9
Tsui K, Weymark JA (1997) Social welfare orderings for ratio-scale measurable utilities. Econ Theory 10:241–256. doi:10.1007/s001990050156
Ulrich KT, Eppinger SD (2008) Product design and development. McGraw-Hill, New York
Van de Poel I (2007) Methodological problems in QFD and directions for future development. Res Eng Des 18:21–36. doi:10.1007/s00163-007-0029-7
Van Gorp AC (2005) Ethical issues in engineering design: safety and sustainability. Simon Stevin Series in the Philosophy of Technology, Delft
Vincenti WG (1992) Engineering knowledge, type of design, and level of hierarchy: further thoughts about what engineers know. In: Kroes P, Bakker M (eds) Technological development and science in the industrial age. Kluwer, Dordrecht, pp 17–34