10th World Congress on Structural and Multidisciplinary Optimization
May 19-24, 2013, Orlando, Florida, USA
Maximum member sizes and multiple concurrent optimization paths within a binary
material topology optimization method
Christian Brecher¹, Simo Schmidt², Sierk Fiebig³
¹ WZL at RWTH Aachen University, Aachen, Germany, [email protected]
² WZL at RWTH Aachen University, Aachen, Germany, [email protected]
³ Volkswagen AG, Brunswick, Germany, [email protected]
1. Abstract
The constant increase in productivity and manufacturing efficiency drives the industry to incorporate optimization
techniques beyond the capabilities of manual optimization and numerous iterative loops within the product design
phase. As a result, topology optimization is becoming an integral part of the design process. Nevertheless, design
proposals obtained from topology optimization do not always meet all manufacturing constraints, and can be
difficult to interpret. Usually, these proposals have to be modified more or less extensively by experienced
engineers to be viable for prototyping and production. Oftentimes, these manual modifications and redesigns
adversely affect the optimality of the design proposal.
This problem is particularly evident for cast parts. One of the reasons for poor castability is the occurrence of
substructures with large cross-sectional areas connected to thin struts, large pockets of material, and other
discontinuities in wall thicknesses within the part, all of them increasing the risk of casting defect formation. Some
commercial topology optimization tools already allow for the consideration of maximum member sizes. This
restriction aims to decrease the possible range of wall thicknesses that can occur within the design proposal. The
implementation in mathematical optimization methods is fairly unproblematic. For heuristic iterative optimization
methods, which might start with a full design space and huge member sizes, the matter becomes rather difficult.
Using a new binary material topology optimization method developed at the Volkswagen AG as a starting point, a
mechanism was designed to enforce maximum member sizes via repetitive discrete events within the optimization
process. These act as a penalization of substructures with maximum member sizes exceeding a specified limit, but
allow for the structure to be repaired by the optimizer between these events. The modified optimization method
was successfully tested for robustness and convergence on two- and three-dimensional academic test problems and
industrial parts. In direct comparison to a commercial topology optimization product, the obtained results were
promising. To increase the coverage of the possible solution space, a mechanism for optimization branching was
developed. The possibility to simultaneously pursue both an unaltered and modified version of the structure at each
discrete penalization event generates a binary tree of variations which have been shown to converge in several
different local optima with a high degree of optimality, considering the additional member size restriction. This
work was done to provide a framework for further development of both maximum member size restrictions and
branching optimizations embedded in a heuristic iterative optimization process.
2. Keywords: industrial topology optimization, concurrent optimization paths, casting restrictions, discrete
penalization, maximum member size
3. Introduction
The field of topology optimization utilizing the Finite Element Method (FEM) has evolved rapidly in the past
decades. Fundamental work includes the development of mathematical optimization techniques in conjunction
with material models like the microstructure homogenization approach by Bendsoe and Kikuchi (1988) and the
popular Solid Isotropic Material with Penalization (SIMP) model (Bendsoe, Zhou, Rozvany, 1989-1991). The
introduction of a heuristic structural optimization method by Mattheck et al. (1992) proposed an entirely
different approach to the rigorous mathematical techniques, using heuristic growth and decay mechanisms
inspired by nature instead. Since then, a vast array of different optimization techniques has been proposed. A review of
these developments can be found in Eschenauer and Olhoff (2001)[1], for example. Due to the heuristic nature of
some of these alternative optimization methods, concerns were raised as to the efficiency and usefulness of those
novel approaches (Rozvany, 2009, Sigmund, 2012). Nevertheless, some methods were accepted in industrial
applications (e.g. Harzheim and Graf, 1995, and successively TopShape, 2006, by the Adam Opel AG) due to the
ease of implementation, flexibility and low computational effort to achieve good, albeit not necessarily very close
to optimal, results [2].
Even though the acceptance of topology optimization methods within the industrial product development and
optimization cycle is constantly increasing, one of the disadvantages of all basic topology optimization methods is
the lack of consideration for manufacturing constraints, resulting in design proposals that are close to optimal but
more or less useless since they cannot be produced economically [3]. With increasing maturity of topology
optimization methods, manufacturing constraints like casting or extrusion restrictions, prescribed symmetries and
patterns were integrated into leading commercial structural optimization software packages like Tosca® and
OptiStruct™, both of which utilize SIMP-based mathematical optimization techniques [4][5][6].
Due to the complexity of the casting process, casting restrictions in terms of pull directions (the direction, in which
the casting dies are separated) alone do not always suffice to guarantee good castability. To address this problem,
the authors propose the consideration of maximum member sizes within the optimization process to reduce
geometric discontinuities like large lumps of material adjacent to thin connecting struts in the resulting design
proposals, thus improving the overall castability of the optimized structures.
The inclusion of member size restrictions into the optimization problem is in itself not a new concept and was
investigated in a few research papers, for example by Guest (2008) [7], who proposed a radius-based constraint
on local cross-sectional diameters in conjunction with a SIMP-based mathematical optimization approach. The
consideration of maximum member sizes in commercial topology optimization products was not supported up
until recently (e.g. outlook in Zhou et al., 2002) [4], and is still deemed an experimental option within
OptiStruct™, for example [6]. The authors could not find any papers on evolutionary or heuristic iterative
topology optimization methods which consider maximum member sizes.
In this paper, a method to consider maximum member sizes within a heuristic iterative topology optimization
method using a binary material model is presented and compared to a mathematical approach used in OptiStruct™.
Furthermore, a branching technique to allow for the concurrent optimization of multiple structural variants is
proposed.
4. The binary material model approach
The methods and techniques presented in this paper were implemented as an extension of a topology optimization
method developed at the Volkswagen AG [8][9]. This optimization method utilizes a binary material model and
operates on a regular Cartesian grid of FE elements (often called voxels). It contains aspects of heuristic iterative
procedures like BESO (Querin, Steven, Xie, 2000), where FE elements within the design space are activated
and/or deactivated based on their level of stress (or other sensitivities) to successively generate lightweight design
proposals. Additionally, a controller mechanism dynamically adapts the amount of removed and added material in
each iteration to the development of the applied optimization constraints. The optimizer is connected to the
necessary FE analysis and system response postprocessing through a slim interface, Fig. 1(a).
[Figure: (a) overview of the optimizer and its environment: step size controller (basic, reduction and correction rates), element adding/removing heuristics (e.g. deleting unconnected elements), connectivity check, and a slim interface (write out FEA model, read in results) to the FEA simulation and result postprocessing; (b) the casting restriction principle: visible elements (teal) with and without a casting restriction along the pull direction]
Fig. 1: Basics of the binary material method. Source: [9]
In each iteration, the system response in terms of sensitivities and normalized constraint values is read in by the
optimizer. Here, the maximum of all normalized constraints is termed the normalized constraint value φ. It is
defined to be equal to 1 if the limit value is reached. Values above 1 correspond to a constraint violation. The
binary material method is capable of repairing infeasible structures by adding material to highly stressed regions
(called hotspots in engineering terminology) of the structure, as long as the available design space suffices.
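The role of φ can be illustrated with a minimal Python sketch; the response and limit values below are hypothetical, and the actual optimizer reads these from the FEA postprocessing through the interface:

```python
def normalized_constraint_value(responses, limits):
    """Return the normalized constraint value phi: each system response
    is divided by its limit value, and the maximum ratio is taken.
    phi == 1 means a constraint is exactly at its limit; phi > 1
    signals a violation that triggers the repair mechanism."""
    return max(response / limit for response, limit in zip(responses, limits))
```

For instance, a stress response of 210 at a limit of 200 together with a displacement of 0.8 at a limit of 1.0 (hypothetical values) yields φ = 1.05, i.e. an infeasible structure that the optimizer would repair by adding material at hotspots.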
Another important property of the binary material method is the way in which casting restrictions are
implemented. Without restrictions, all elements within the structure are “visible” and can therefore be added and
removed freely. Applying a casting restriction in terms of a (die/mold) pulling direction makes all but the elements
on the front and back surface (w.r.t. the casting direction) “invisible”, Fig. 1(b). This automatically prohibits the
formation of undercuts within the structure.
The methods and techniques presented in this paper are based on the restriction that the consideration of maximum
member sizes has to be used in conjunction with a casting restriction. In industrial applications, this does not
limit the practical usability of the method, since the definition of a pulling direction can be seen as a prerequisite
for any optimization constraints which are in a sense more specialized.
5. A fast approximation of local structural dimensions
Knowledge about the local dimensions of the structure at every iteration step is a necessary prerequisite for any
member size constraints. Since FE models for industrial applications in use today often exceed one million finite
elements, computational performance is essential and effectively limits the affordable complexity of
the approximation technique. Also, the gathered member size information has to be accessible and compatible with
applied casting restrictions, which limit the direction in which the optimizer can affect the structure. Therefore, the
proposed member size approximation is carried out in two phases, which will be briefly explained below.
5.1. Determining the element depth
A straightforward approach for determining local member sizes (i.e. wall thicknesses) would be to determine the
minimal distance between opposing surface Finite Elements. Major drawbacks of this approach include the
nontrivial determination of opposing surface elements within a local region, as well as computing the distances for
a vast amount of element pairs. The idea behind a faster approach is the fact that, for regular voxel-based FE
meshes, local wall thicknesses correlate to the number of elements from the innermost element to its nearest
surface. The local wall thickness can then be roughly estimated as twice the maximum element depth (with an
inherent uncertainty of 1 voxel edge length). The accuracy of this approximation further depends on the search
radius construction needed to identify the nearest surface.
Using element neighbor definitions, one can define general and full neighbor relationships. The former share at
least one node, the latter share one element surface (or edge, for two-dimensional structures), as shown in Fig. 2(a).
If, starting from a source element, a search radius is constructed by alternating between those two neighbor types,
the approximation of a circle/sphere is considerably more accurate than by using only one neighbor type, Fig. 2(b).
[Figure: (a) element neighbor definitions: general neighbors (top) and full neighbors (bottom); (b) search radius construction using only general neighbors, only full neighbors, and alternating between both neighbor definitions]
Fig. 2: Search radius construction using element depth and element neighbor definitions
In an iterative procedure termed the onion shell loop, starting from the surface layer of the structure, each
following layer of elements is assigned incremental element depth values, until the innermost element layer is
reached. Each next layer is determined by alternating between the two above-mentioned neighbor definitions. This
procedure is equivalent to creating a search radius for each finite element within the structure to determine its
individual element depth separately, but is faster by two orders of magnitude. Calculating the element depth
distribution of a structure with half a million Finite Elements takes less than 3 seconds on an average workstation.
Fig. 3 shows a colored representation of the resulting element depth distribution for a three-dimensional cantilever
beam structure.
Fig. 3: Colored representation of the element depth distribution within a cantilever beam model
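The onion shell loop can be sketched as follows for a two-dimensional voxel grid (a minimal Python illustration, not the production implementation, which operates on 3D grids; here "full" neighbors are taken as edge-sharing cells and "general" neighbors as node-sharing cells):

```python
def element_depths(solid):
    """Onion shell loop sketch for a 2D voxel grid (True = solid).

    The surface layer receives element depth 1; each following layer
    is found by alternating between the two neighbor definitions:
    'full' = edge-sharing (4-neighborhood) and 'general' =
    node-sharing (8-neighborhood) cells.
    """
    rows, cols = len(solid), len(solid[0])
    full = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    general = full + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

    def on_surface(r, c):
        # A solid cell is on the surface if a full neighbor is void
        # or lies outside the design space.
        for dr, dc in full:
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols) or not solid[nr][nc]:
                return True
        return False

    depth = [[0] * cols for _ in range(rows)]
    shell = [(r, c) for r in range(rows) for c in range(cols)
             if solid[r][c] and on_surface(r, c)]
    for r, c in shell:
        depth[r][c] = 1
    d = 1
    while shell:
        offsets = general if d % 2 == 1 else full  # alternate definitions
        nxt = []
        for r, c in shell:
            for dr, dc in offsets:
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and solid[nr][nc] and depth[nr][nc] == 0):
                    depth[nr][nc] = d + 1
                    nxt.append((nr, nc))
        shell, d = nxt, d + 1
    return depth
```

The local wall thickness can then be estimated from the result as roughly twice the maximum element depth, within the one-voxel uncertainty stated above.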
5.2. Utilizing the casting projection surface
The maximum member size restriction is assumed to always be used in conjunction with a casting restriction.
Therefore, the information about the element depth distribution within the structure has to be accessible on the
front and back surface of the structure on an imaginary projection plane normal to the pull direction, Fig. 4(a).
Here, a projection ray is defined as a single row of finite elements parallel to the pull direction. By assigning the
highest element depth value within each row to all Finite Elements within the projection ray, the maximum
element depth information is accessible on the front and back surface of the structure. The resulting projected
element depth distributions for different casting directions are shown in Fig. 4(b).
[Figure: (a) nomenclature of the casting projection concept: design space, pull direction, projection ray, projection plane, casting projection, section cut; (b) projected element depth distributions (scale 1-8) for different pull directions]
Fig. 4: The casting projection principle and its application to the cantilever beam shown in Fig. 3
By element depth projection, all Finite Elements within the structure are divided into groups with identical
projected element depths (termed thickness groups), which can then be operated on separately. Due to the fact that
each thickness group is a set of projection rays, which are by definition parallel to the pull direction, the formation
of undercuts can be avoided easily.
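A minimal sketch of this projection step, assuming a two-dimensional element depth grid with a vertical pull direction so that each column is one projection ray (the function name and data layout are illustrative):

```python
from collections import defaultdict

def project_element_depths(depth):
    """Casting projection sketch for a 2D element depth grid.

    The pull direction is assumed vertical, so each column of the grid
    is one projection ray. Every ray is assigned the maximum element
    depth found along it, and rays with equal projected depth are
    collected into thickness groups.
    """
    cols = len(depth[0])
    projected = [max(row[c] for row in depth) for c in range(cols)]
    groups = defaultdict(list)
    for c, d in enumerate(projected):
        if d > 0:  # skip rays without material (depth 0 = void)
            groups[d].append(c)
    return projected, dict(groups)
```

Because each thickness group is a set of whole rays parallel to the pull direction, any operation applied per group automatically respects the casting restriction.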
6. A discrete maximum member size penalization technique
The implementation of member size restrictions in heuristic iterative optimization procedures is difficult. Two
main concerns led to the decision to implement a discrete penalization technique to enforce structures which do
not exceed maximum member sizes.
First, the inclusion of a fixed member size restriction (a “hard limit”) proves to be challenging in a heuristic
environment. More importantly, the authors wanted to ensure the existence of a solution for every optimization
problem that was solvable by the unmodified optimization method without member size restrictions. One could
easily imagine imposing a member size limit that physically prohibits any resulting structure to support the defined
loading. Commercial optimization software like OptiStruct™ would either produce an unviable design proposal or
abort the optimization. By using a penalization scheme (i.e. a “soft limit”), element depths exceeding the defined
maximum could be avoided, as long as the structural integrity allowed for it. In theory, if every bit of material
were necessary and already optimally distributed, the design proposal would converge to the unrestricted solution,
thus ensuring the existence of a solution rather than the fulfillment of the member size restrictions.
Second, the effectiveness of continuous penalization was investigated and turned out to be rather poor, since the
growth rule governing the optimizer (the hotspot correction mechanism HSC) was constantly reversing the
changes from the previous iteration induced by the penalization scheme, effectively slowing the optimization
progress to a halt, Fig. 5(a).
6.1. The element depth reduction event (EDR)
Rather than trying to penalize exceeding member sizes continuously, the authors propose a technique to reduce the
maximum element depth value to the user-defined allowable limit at discrete and recurring points during the
optimization process. The underlying heuristic is based on the idea of modifying the topology noticeably in one
step, thus changing the flux of forces within the structure and driving the optimizer to converge into a different
topology with smaller maximum member sizes, Fig. 5(b).
[Figure: substructure under compressive loading p, solid/void representation; (a) continuous penalization: similar topology, high element depth, as the hotspot correction mechanism (HSC) keeps adding material at notch stresses; (b) discrete reduction event: changed topology, low element depth]
Fig. 5: Continuous penalization of high element depths vs. discrete element depth reduction events
Two subtractive mechanisms are proposed to reduce the maximum element depth to the allowable limit in one
step: shrink and split.
During shrinking, elements of a thickness group are removed from the back, front or both surfaces inwards until the
allowable limit is reached, Fig. 6(a). Shrinking is therefore not applicable for structures that are two-dimensional
w.r.t. the pull direction. Depending on the stress levels of the front and back surface elements, either a one-sided
shrink (removing all elements from the side with the less stressed surface element) or a symmetric shrink is
performed (removing elements from both sides symmetrically).
During splitting, all projection rays within the targeted thickness group are removed, splitting the
structure along said thickness group, Fig. 6(b).
[Figure: cross-sectional views with projection rays parallel to the pull direction; (a) shrink mechanism: removal of elements from the front and back of each projection ray; (b) split mechanism: removal of all projection rays within the targeted thickness group]
Fig. 6: Cross-sectional view of the splitting and shrinking mechanisms for an element depth limit of 2
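In simplified form, the two mechanisms might look as follows (a per-ray Python sketch; the crude ray depth estimate and the omission of the connecting strut heuristic are simplifications of the actual method):

```python
def shrink(ray, limit):
    """Symmetric shrink sketch: remove elements from both ends of a
    projection ray until its estimated element depth is within the
    limit. The depth of a ray of length L is crudely estimated here
    as ceil(L / 2), a stand-in for the real onion shell depth."""
    target_len = 2 * limit                 # longest ray with depth <= limit
    excess = max(0, len(ray) - target_len)
    front = excess // 2                    # a one-sided shrink would remove
    back = excess - front                  # everything from one end instead
    return ray[front:len(ray) - back]

def split(rays, projected_depth, limit):
    """Split sketch: remove every projection ray of a thickness group
    whose projected depth exceeds the limit (the connecting strut
    heuristic of Fig. 7 is omitted in this sketch)."""
    return {i: ray for i, ray in rays.items() if projected_depth[i] <= limit}
```

In the actual method the choice between one-sided and symmetric shrinking is driven by the stress levels of the front and back surface elements, as described above.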
Since the splitting mechanism potentially creates unconnected substructures, as demonstrated in Fig. 7(a), a
heuristic was developed to leave thin struts (adjacent projection rays) at highly stressed regions to connect both
adjacent thickness groups, Fig. 7(b). Through an efficient combination of shrinking and splitting, the reduction of
element depths of two- and three-dimensional structures down to the allowable limit in one EDR can be achieved
with relatively low amounts of damage to the structural integrity. As a trade-off, the reduction is not perfect. Some
localized regions, where connecting struts are kept, can still contain element depths above the allowable limit. This
does not have a negative effect, since the structural integrity has to be maintained. In fact, it was observed that
more material would have been added by the optimizer while repairing the structure, if no connecting struts were
left during the EDR event, leading to higher overall element depths.
[Figure: (a) the “stamp-out-effect” of splitting: an almost fully unconnected region separated from the flux of forces; (b) splitting with applied heuristics for connecting strut placement, shown for a structure with forced displacement and fixed support via Mises stress and element depth distributions; element depths above the limit are shown in red]
Fig. 7: The “stamp-out-effect” of splitting is avoided by keeping connecting elements at highly stressed positions
6.2. The discrete penalization cycle
After an EDR event, the structure is weakened, possibly to the point where it violates some or all of the
optimization constraints (e.g. displacement/stiffness/stress constraints). At this point, the aforementioned repair
capabilities of the optimizer come into play. The structure is repaired until all constraints are again fulfilled. After
the repair phase, the structure is allowed to be optimized without interference for a specifiable number of iterations.
This so-called cooldown phase is necessary to remove excess material that was amassed during the repair phase,
and to “smooth” the structure’s flux of forces after an EDR event. A single EDR event is not sufficient for a
permanent reduction of element depths to the allowable limit, due to the repairs by the optimizer, which partially
reverse the changes induced by the EDR event. Thus, subsequent EDR events are necessary to further reduce the
element depths. The resulting cycle is termed the discrete penalization cycle.
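The cycle can be summarized as a control loop (a Python sketch; all callbacks are hypothetical stand-ins for the optimizer's internals, and the repair time limit from Section 7 is included for completeness):

```python
def penalization_cycle(structure, n_events, cooldown_iters, repair_limit,
                       edr, repair_step, optimize_step, is_feasible):
    """Discrete penalization cycle sketch: each EDR event weakens the
    structure, the repair phase adds material until all constraints
    are fulfilled again, and the cooldown phase lets the optimizer
    run undisturbed. All callbacks are hypothetical stand-ins."""
    for _ in range(n_events):
        structure = edr(structure)            # element depth reduction event
        iterations = 0
        while not is_feasible(structure):     # repair phase
            structure = repair_step(structure)
            iterations += 1
            if iterations > repair_limit:
                raise RuntimeError("repair time limit exceeded")
        for _ in range(cooldown_iters):       # cooldown phase
            structure = optimize_step(structure)
    return structure
```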
For various academic problems the achievable average element depth converges after about 3-5 EDR events. To
test the efficiency of the penalization technique described in this paper, the popular cantilever beam problem was
chosen. The design space consists of 160x30x100 Finite Elements (Young’s modulus 1000, Poisson’s ratio 0.25,
edge length 1 mm). The allowable element depth limit Dmax was chosen to be 3, equating to a maximum member
size of roughly 6 mm. The structure is clamped on one side, and a vertical displacement of 5 mm is prescribed at
the center of the opposing free end. At the point of loading, a minimum reaction force of 4 kN is defined as an
optimization constraint.
[Figure: projected element depth distributions at iterations 15/16, 25/26, 35/36 and 46/47, immediately before and after EDR events 1-4; the final structure (after EDR 4) is the MMS solution]
Fig. 8: Development of the element depth distribution within a 3d cantilever beam example over the course of four
EDR events and subsequent repair phases
Fig. 8 shows the development of the element depth distribution over four EDR events. Here, the projected element
depths are color-coded in four groups of multiples of the allowable limit Dmax. The numbers left of each structure
denote the iteration. The top right structure is termed the maximum member size solution and will hereinafter be
referred to as the MMS solution.
[Figure: normalized constraint values φ and φ_max (left axis) and number of elements (right axis) over 51 iterations; EDR events 1-4 are each followed by a repair phase and a cooldown phase; element counts after the last cooldown phases: 226,925, 226,271 and 222,875]
Fig. 9: Graph representation of the optimization progress and the discrete penalization cycle
The optimization progress is shown in Fig. 9. Here, the different phases within the penalization cycle can be easily
distinguished. After three repair phases, the optimization was stopped, since the changes in element numbers had
become small enough to assess the penalization efficiency. By comparing the MMS solution to a fully converged
reference solution without maximum member size restrictions, the overall element depth reduction can be related to the
solution without maximum member size restrictions, the overall element depth reduction can be related to the
amount of additional material necessary, Fig. 10. In other words, the local optimum with additional constraints is
worse than the less constrained reference solution, which is to be expected. On the other hand, by “investing” less
than 8% more material, the amount of elements above the allowable limit is reduced from 85% to less than 50%.
Additionally, the average element depth of those elements is drastically reduced.
[Figure: reference solution after 100 iterations vs. MMS solution after 46 iterations; elements above the allowable limit: 84.67% vs. 49.82%; additional material needed for the MMS solution: +7.65%; element depth categories D ≤ Dmax, D ≤ 2Dmax, D ≤ 3Dmax, D > 3Dmax]
Fig. 10: Element depth comparison of solutions obtained with and without maximum member size restriction
Since a two-dimensional representation is difficult to visualize, both the reference and MMS solution were
postprocessed by a smoothing algorithm, and are shown side by side in Fig. 11. The MMS solution shows a dual
upper and lower strut design, resembling an I-beam cross-section.
[Figure: reference solution (green) and MMS solution (grey)]
Fig. 11: Comparison of smoothed solutions with and without maximum member size restriction
Finally, the MMS solution was compared to a solution obtained by a mathematical optimization approach, namely
OptiStruct™, and termed the OS solution. Since the maximum member size restriction is a mathematical
constraint, i.e. a hard limit, in OptiStruct™, the allowable maximum member size was set to 8 mm rather than
6 mm (i.e. an element depth limit of 4 rather than 3) so as not to favor the soft limit approach by the authors
too much. The resulting design proposal is shown in Fig. 12. The OS
solution contains 45% more elements than the MMS solution and produces an infeasible design with a reaction
force of 2505 N, 37% less than required. Also, 4.6% of all element depths were still above the allowable limit of 4.
[Figure: MMS solution with a solid upper strut vs. OS solution with multiple scattered upper struts (isometric view); bar chart of the maximum member size restriction efficiency comparing number of elements, additional material, elements with D > Dmax and reaction force for MMS and OS]
Fig. 12: Direct comparison with a solution obtained by OptiStruct™
7. Branching and consecutive optimization paths
The introduction of EDR events allows for a straightforward realization of optimization branching. Before each
event, the structure is duplicated, one copy passes through unaltered, while the other is modified by the EDR event.
Through repetition, a binary tree of variations is generated, which differ in the amount of penalization events
performed upon them, Fig. 13. Each subtree follows its own discrete penalization cycle, resulting in an
asynchronous branching behavior. The specifiable maximum number of EDR events limits the number of possible
concurrent structural variants. Additionally, a repair time limit discards less promising variants: any structural
variant that fails to be repaired within a specifiable number of iterations is eliminated.
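The duplication scheme described above might be expressed as follows (an illustrative Python sketch; `edr` and `evolve` are hypothetical stand-ins for the reduction event and one optimization phase, and the repair time limit is omitted):

```python
def branch_variants(structure, n_branchings, edr, evolve):
    """Sketch of optimization branching. Before each branching point
    the structure is duplicated: one copy continues unaltered, the
    other receives an EDR event; both are then further evolved by the
    optimizer. Repetition yields a binary tree of variants that
    differ in the number of EDR events applied to them."""
    variants = [(structure, 0)]            # (state, EDR events applied)
    for _ in range(n_branchings):
        nxt = []
        for state, n_events in variants:
            nxt.append((evolve(state), n_events))           # unmodified path
            nxt.append((evolve(edr(state)), n_events + 1))  # penalized path
        variants = nxt
    return variants
```

In this simplified form the tree is grown synchronously; in the actual method each subtree follows its own penalization cycle, so the branchings occur asynchronously.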
[Figure: binary tree of structural variants; the reference solution (0x EDR) passes through each branching unmodified, while variant 1 (2x EDR) and variant 2 (1x EDR) follow their own EDR/repair/cooldown cycles; variant 3 exceeded the repair time limit and was discarded]
Fig. 13: Principle of optimization branching, generating a binary tree of structural variants
In a two-dimensional version of the previously shown cantilever beam, a total of 17 variations were created with
the parallel optimization functionality enabled. The reference solution and the three best variants are shown in Fig. 14.
Again, element depth values above the allowable limit are shown in red. The displayed performance value sets the
additionally needed amount of material in relation to the reduction of elements above the allowable limit. Across
all variants, the percentage of elements above the allowable limit was reduced from 41% to 21%, while it was
reduced to 7% for the best variant. Analogously to the three-dimensional example, the necessary amount of
additional material was rather small, around 5% across the board.
[Figure: reference solution (EDR: 0) and the three best variants (EDR: 4, 5 and 4); EDR: number of performed EDR events; the performance value is displayed for each variant (higher is better)]
Fig. 14: Overview of concurrently optimized variants after 80 iterations and up to 5 EDR events
8. Conclusions
By direct comparison with a commercial topology optimization tool it was demonstrated that (for an isolated
example) a discrete maximum member size penalization technique can be combined with a heuristic iterative
optimization method and achieve promising and competitive results. Since the maximum member size limit is not
a hard limit at which casting defects will occur, the use of a soft limit approach can be beneficial. Considerable
reductions of element depths with the use of very little additional material were achieved. As a result, the
castability can be noticeably improved without gaining too much additional weight.
The introduction of an optimization branching mechanism allows for broader coverage of the solution space. The
element depth reduction performance between the generated variants differs noticeably, warranting further
investigation into optimal element depth reduction strategies. Concurrent optimizations will be a useful tool for
future in-depth investigations into the maximum member size penalization approach introduced in this paper.
9. References
[1] H. Eschenauer, N. Olhoff, Topology optimization of continuum structures: A review, Appl. Mech. Rev., 54
(4), 331-390, 2001.
[2] L. Harzheim and G. Graf, A review of optimization of cast parts using topology optimization II-Topology
optimization with manufacturing constraints, Struct. Multidisc. Optim., 31, 388-399, 2006.
[3] H. Thomas, M. Zhou and U. Schramm, Issues of commercial optimization software development, Struct.
Multidisc. Optim., 23, 97-110, 2002.
[4] M. Zhou, R. Fleury, Y.K. Shyy, H. Thomas and J.M. Brennan, Progress in topology optimization with
manufacturing constraints, American Institute of Aeronautics and Astronautics, Inc., 2002.
[5] FE-Design TOSCA.Structure, User’s manual. FE-Design GmbH, Karlsruhe, Germany, www.fe-design.de.
[6] Altair OptiStruct, Optistruct 11 User’s manual. Altair Engineering Inc., Troy, MI, www.altair.com.
[7] J. K. Guest, Imposing maximum length scale in topology optimization, Struct. Multidisc. Optim., 37,
463-473, 2009.
[8] S. Fiebig and J.K. Axmann, Intelligenter Leichtbau durch neue Topologieoptimierung für
Betriebsspannungen und plastisches Materialverhalten, 16. Kongress SIMVEC – Berechnung, Simulation
und Erprobung im Fahrzeugbau 2012, VDI-Berichte 2169, 695-712, November 20-21, Baden-Baden, 2012.
[9] S. Fiebig and J.K. Axmann, Combining nonlinear FEA simulations and manufacturing restrictions in a new
discrete Topology Optimization method, 9th World Congress on Structural and Multidisciplinary
Optimization, June 13 -17, Shizuoka, Japan, 2011.