Instructor: Afshin Firouzi
affirouzi@gmail.com
firouzi@aut.ac.ir
First semester, 1386-87 (2007-2008)
University of Science and Research
Faculty of Engineering
Construction Engineering and Management Group
Genetic Algorithms
Definition:
Optimization is the process of adjusting the inputs to, or characteristics of, a device, mathematical process, or experiment to find the minimum or maximum output or result.
In the mathematical approach, optimization amounts to finding the zeros of the function's derivative; we like the simplicity of root finding. However:
• Many times the derivative does not exist or is very difficult to find.
• Another difficulty with optimization is determining whether a given minimum is the best (global) minimum or merely a suboptimal (local) minimum.
Trial and Error vs. Function: Trial-and-error optimization adjusts the inputs and observes the result, as in adjusting the rabbit ears on a TV to get the best picture and audio reception; experimentalists prefer this approach (the discovery and refinement of penicillin is another example). In function optimization, a mathematical formula describes the objective function; theoreticians favor this approach.
Single Variable vs. Multivariable: If there is only one variable, the optimization is one-dimensional. A problem having more than one variable requires multidimensional optimization. Optimization becomes increasingly difficult as the number of dimensions increases. Many multidimensional optimization approaches generalize to a series of one-dimensional approaches.
Dynamic vs. Static: Dynamic optimization means that the output is a function of time, while static means that the output is independent of time. Finding the best route to work, for example, is dynamic when travel times change with traffic.
Discrete vs. Continuous Variable: Discrete variables take values from a finite pool of possibilities, so the optimum solution is a particular combination of them (combinatorial optimization); continuous variables may take any value within their range.
Constrained vs. Unconstrained:
Constrained optimization incorporates variable equalities and inequalities into the cost function. Unconstrained optimization allows the variables to take any value.
Exhaustive Search
Sampling the interval at a step of 0.1.
A graph works only for one- and two-dimensional equations!
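Exhaustive search at a fixed sampling interval can be sketched as follows; the objective function and its domain are my illustrative choices, not from the lecture.

```python
# Exhaustive (grid) search: sample the domain at a fixed interval (0.1 here)
# and keep the best point seen. The objective is illustrative:
# f(x) = x*sin(4x) on [0, 3], minimized.
import math

def exhaustive_search(f, lo, hi, step=0.1):
    best_x, best_f = lo, f(lo)
    x = lo
    while x <= hi:
        fx = f(x)
        if fx < best_f:          # searching for a minimum
            best_x, best_f = x, fx
        x += step
    return best_x, best_f

x_star, f_star = exhaustive_search(lambda x: x * math.sin(4 * x), 0.0, 3.0)
```

The cost is one function evaluation per grid point, which is why the approach breaks down quickly as the number of dimensions grows.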
Analytical Optimization
Unconstrained Minimization Example
function f = objfun(x)
f = exp(x(1))*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);
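The same objective can be minimized in a few lines of Python; plain gradient descent is my illustrative choice here (MATLAB's fminunc uses a quasi-Newton method). The minimum lies at x = (0.5, -1), where f = 0.

```python
import math

# Same objective as the MATLAB objfun above:
# f(x) = exp(x1) * (4*x1^2 + 2*x2^2 + 4*x1*x2 + 2*x2 + 1)
def f(x1, x2):
    return math.exp(x1) * (4*x1**2 + 2*x2**2 + 4*x1*x2 + 2*x2 + 1)

def grad(x1, x2):
    g = 4*x1**2 + 2*x2**2 + 4*x1*x2 + 2*x2 + 1
    df1 = math.exp(x1) * (g + 8*x1 + 4*x2)   # product rule
    df2 = math.exp(x1) * (4*x2 + 4*x1 + 2)
    return df1, df2

# Plain gradient descent from the origin (step size chosen for stability)
x1, x2 = 0.0, 0.0
lr = 0.02
for _ in range(5000):
    d1, d2 = grad(x1, x2)
    x1, x2 = x1 - lr * d1, x2 - lr * d2
```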
1. A number, or population, of guesses of the solution to the problem (the chromosomes);
2. A way of calculating how good or bad the individual solutions within the population are (the fitness function);
3. A method for selecting the better solutions (selection) and mixing fragments of them (crossover) to form new, on average even better, solutions; and
4. A mutation operator to avoid permanent loss of diversity within the solutions.
Rather than starting from a single point (or guess) within the search space, GAs are initialized with a population of guesses. These are usually random and will be spread throughout the search space. A typical algorithm then uses three operators, selection, crossover and mutation (chosen in part by analogy with the natural world), to direct the population (over a series of time steps or generations) towards convergence at the global optimum.

Typically, this initial population is held as binary encodings (or strings) of the true variables, although an increasing number of GAs use "real-valued" (i.e. base-10) encodings. The initial population is then processed by the three main operators.
Selection attempts to apply pressure upon the population in a manner similar to that of natural selection found in biological systems. Poorer-performing individuals are weeded out, and better-performing, or fitter, individuals have a greater than average chance of promoting the information they contain to the next generation.

Crossover allows solutions to exchange information in a way similar to that used by a natural organism undergoing sexual reproduction. One method (termed single-point crossover) is to choose pairs of individuals promoted by the selection operator, randomly choose a single locus (point) within the binary strings, and swap all the information (digits) to the right of this locus between the two individuals.

Mutation is used to randomly change (flip) the value of single bits within individual strings. Mutation is typically used very sparingly.

After selection, crossover and mutation have been applied to the initial population, a new population will have been formed and the generational counter is increased by one. This process of selection, crossover and mutation is continued until a fixed number of generations have elapsed or some form of convergence criterion has been met.
A trivial problem might be to maximize a function, f(x), where:
f(x) = x^2, for integer x with 0 ≤ x ≤ 4095 (so x fits in a 12-bit binary string).
First Step:
Binary GA
string = chromosome = genotype.
solution vector = genotype expressed within the environment = organism = phenotype.
Mapping of real numbers to binary strings: to carry out the transformation, the binary string (or genotype) is first converted to a base-10 integer, z. This integer is then transformed to a real number, r, using r = r_min + z(r_max − r_min)/(2^l − 1), where l is the string length and [r_min, r_max] is the variable's range.
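The decoding can be sketched in a few lines; the bounds in the usage line are illustrative values of mine.

```python
# Decode a binary string (genotype) to a real number r in [r_min, r_max]:
# first to a base-10 integer z, then r = r_min + z*(r_max - r_min)/(2**l - 1).
def decode(bits, r_min, r_max):
    l = len(bits)
    z = int(bits, 2)                     # base-10 integer
    return r_min + z * (r_max - r_min) / (2**l - 1)

r = decode("101101", 0.0, 10.0)          # z = 45, l = 6, so r = 45*10/63
```

The all-zeros string maps to r_min and the all-ones string to r_max, with 2^l evenly spaced values in between.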
Quantization
Quantization samples a continuous range of values and categorizes the samples into nonoverlapping subranges. Then a unique discrete value is
assigned to each subrange. The difference between the actual function value and the quantization level is known as the quantization error.
A gene with 2 bits has 2^2 = 4 possible values.
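A minimal sketch of quantization and its error, assuming the common convention that an l-bit gene spans the range with 2^l evenly spaced levels (the value 0.30 and the range [0, 1] are illustrative):

```python
# Quantization sketch: an l-bit gene represents 2**l discrete levels across
# [lo, hi]. The gap between the true value and its nearest level is the
# quantization error.
def quantize(x, lo, hi, bits=2):
    levels = 2**bits                     # 2 bits -> 4 values
    step = (hi - lo) / (levels - 1)
    z = round((x - lo) / step)           # nearest level index
    return lo + z * step

x = 0.30
q = quantize(x, 0.0, 1.0, bits=2)        # levels: 0, 1/3, 2/3, 1
error = abs(x - q)
```

Adding one more bit halves the spacing between levels, and with it the worst-case quantization error.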
MULTIPARAMETER PROBLEMS
Extending the representation to problems with more than one unknown proves to be particularly simple.
The M unknowns are each represented as sub-strings of length l. These sub-strings are then concatenated (joined together) to form an individual population member of length L, where L = M × l (more generally, L = l_1 + l_2 + … + l_M if the sub-string lengths differ).
For example, given a problem with two unknowns a and b, then if a = 10110 and b = 11000 for one guess at the solution, then by concatenation the genotype is 1011011000.
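The concatenation, and the reverse split needed when decoding, can be sketched as follows (the helper names are mine; the values match the example above):

```python
# Multiparameter encoding sketch: concatenate per-parameter substrings into
# one chromosome of length L = sum of the substring lengths, then split the
# chromosome back into substrings to decode each unknown.
def concat(substrings):
    return "".join(substrings)

def split(chrom, lengths):
    out, pos = [], 0
    for l in lengths:
        out.append(chrom[pos:pos + l])
        pos += l
    return out

a, b = "10110", "11000"
genotype = concat([a, b])                # "1011011000", L = 10
parts = split(genotype, [5, 5])          # recover ["10110", "11000"]
```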
Selection
Roulette wheel: With this approach the probability of selection is proportional to an individual's fitness.
Sum the fitness of all the population members. Call this sum fsum.
Choose a random number, Rs, between 0 and fsum.
Add together the fitness of the population members (one at a time) stopping immediately when the sum is greater than Rs. The last individual added is the selected individual and a copy is passed to the next generation.
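The three steps just listed can be sketched directly in Python; the population and fitness values are illustrative choices of mine.

```python
import random

# Roulette-wheel selection sketch: sum the fitnesses, draw Rs in [0, fsum),
# then walk through the population accumulating fitness until the running
# sum exceeds Rs; the individual reached is the one selected.
def roulette_select(population, fitnesses, rng=random):
    fsum = sum(fitnesses)
    rs = rng.uniform(0.0, fsum)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running > rs:
            return individual
    return population[-1]                # guard against float round-off

pop = ["A", "B", "C"]
fits = [1.0, 3.0, 6.0]                   # "C" should win ~60% of draws
picks = [roulette_select(pop, fits) for _ in range(10000)]
```

Each individual's chance of selection is its fitness divided by the population total, which is exactly the "probability proportional to fitness" property stated above.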
Algorithm
From the Npop chromosomes of every generation, the Nkeep fittest survive and pass to the next generation.
Rank Selection:
The crossover point is randomly selected.
The Simple Genetic Algorithm uses single-point crossover as the recombination operator (in the natural world, between one and eight crossover points have been reported). The pairs of individuals selected undergo crossover with probability Pc: a random number Rc is generated in the range 0-1, and the individuals undergo crossover if and only if Rc < Pc; otherwise the pair proceed without crossover. Typical values of Pc are 0.4 to 0.9. (If Pc = 0.5 then half the new population will be formed by selection and crossover, and half by selection alone.)
Crossover
Single-point crossover examples:
1100|1011 + 1101|1111 → 11001111 and 11011011 (crossover after bit 4)
11|001011 + 11|011111 → 11011111 and 11001011 (crossover after bit 2)
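A single-point crossover operator with crossover probability Pc can be sketched as follows (Pc = 0.7 is an illustrative value; forcing the cut after bit 4 reproduces the 11001111 example above):

```python
import random

# Single-point crossover sketch: with probability Pc, pick a random locus
# inside the strings and swap everything to the right of it between the
# two parents; otherwise the parents pass through unchanged.
def crossover(p1, p2, pc=0.7, rng=random):
    if rng.random() >= pc:               # no crossover this time
        return p1, p2
    locus = rng.randint(1, len(p1) - 1)  # cut point strictly inside
    return p1[:locus] + p2[locus:], p2[:locus] + p1[locus:]

# Cutting after bit 4 by hand reproduces the example pair:
c1 = "11001011"[:4] + "11011111"[4:]     # "11001111"
c2 = "11011111"[:4] + "11001011"[4:]     # "11011011"
```

Note that crossover only recombines existing bits; taken together, the two offspring contain exactly the same multiset of bits as their parents.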
Elitism
Fitness-proportional selection does not guarantee the selection of any particular individual, including the fittest. Unless the fittest individual is much, much fitter than any other it will occasionally not be selected. To not be selected is to die. Thus with fitness-proportional selection the best solution to the problem discovered so far can be regularly thrown away.
Ensuring the propagation of the elite member is termed elitism and requires that not only is the elite member selected, but a copy of it does not become disrupted by crossover or mutation.
Here, the use of elitism is indicated by ε (which can only take the value 0 or 1); if ε = 1 then elitism is being applied, if ε = 0 then elitism is not applied.
A single-point mutation changes a 1 to a 0, and vice versa. Mutation points are randomly selected from the Npop × Nbits total number of bits in the population matrix. Increasing the number of mutations increases the algorithm's freedom to search outside the current region of variable space.
The probability of mutation, Pm, is typically of the order 0.001, i.e. one bit in every thousand will be mutated.
However, just like everything else about GAs, the correct setting for Pm is problem dependent. (Many have used Pm = 1/L; others Pm = 1/(N·sqrt(L)), where N is the population size and L the string length.)
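Bit-flip mutation can be sketched as follows, using the Pm = 1/L rule of thumb mentioned above as the default:

```python
import random

# Bit-flip mutation sketch: step through the string and flip each bit
# with probability Pm (defaulting to 1/L, one common rule of thumb).
def mutate(bits, pm=None, rng=random):
    if pm is None:
        pm = 1.0 / len(bits)
    return "".join(
        ("1" if b == "0" else "0") if rng.random() < pm else b
        for b in bits
    )

child = mutate("11001011")
```

With Pm = 1/L, on average one bit per string is flipped, which keeps the disruption small while still reintroducing lost bit values.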
Mutation
Algorithm
1- Generate an initial (g = 1) population of random binary strings of length L = Σ_{k=1}^{M} l_k, where M is the number of unknowns and l_k the length of binary string required by unknown k. In general l_k ≠ l_j for k ≠ j.
2- Decode each individual, i, within the population to integers z_{i,k} and then to real numbers r_{i,k} to obtain the unknown parameters.
3- Test each individual in turn on the problem at hand and convert the objective function or performance, Ωi, of each individual to a fitness fi, where a better solution implies a higher fitness.
4- Select, by using fitness proportional selection, pairs of individuals and apply with probability Pc single point crossover. Repeat until a new temporary population of N individuals is formed.
5- Apply the mutation operator to every individual in the temporary population, by stepping bit-wise through each string, occasionally flipping a 0 to a 1 or vice versa. The probability of any bit mutating is given by Pm and is typically very small (for example, 0.001).
6- If elitism is required, and the temporary population does not contain a copy of an individual with at least the fitness of the elite member, replace (at random) one member of the temporary population with the elite member.
7- Replace the old population by the new temporary generation.
8- Increment, by 1, the generational counter (i.e. g = g + 1) and repeat from Step 2 until G generations have elapsed.
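Steps 1-8 can be assembled into a compact binary GA; the sketch below applies it to the trivial f(x) = x^2 problem from earlier (maximizing over 12-bit integers). The parameter values N, Pc, Pm, G and the random seed are illustrative choices of mine, not from the lecture.

```python
import random

# A compact binary GA following steps 1-8 above, maximizing f(x) = x^2
# for 12-bit integers x in [0, 4095]. Parameter values are illustrative.
L, N, PC, G = 12, 20, 0.7, 60
PM = 1.0 / L
rng = random.Random(42)

fitness = lambda bits: int(bits, 2) ** 2          # steps 2-3: decode, evaluate

def select(pop, fits):                            # step 4a: roulette wheel
    rs = rng.uniform(0, sum(fits))
    running = 0.0
    for ind, fit in zip(pop, fits):
        running += fit
        if running > rs:
            return ind
    return pop[-1]

def crossover(p1, p2):                            # step 4b: single point
    if rng.random() < PC:
        cut = rng.randint(1, L - 1)
        return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
    return p1, p2

def mutate(bits):                                 # step 5: bit flips
    return "".join(("1" if b == "0" else "0") if rng.random() < PM else b
                   for b in bits)

pop = ["".join(rng.choice("01") for _ in range(L)) for _ in range(N)]  # step 1
for _ in range(G):                                # step 8: generation loop
    fits = [fitness(ind) for ind in pop]
    elite = max(pop, key=fitness)                 # remember the elite member
    new_pop = []
    while len(new_pop) < N:
        c1, c2 = crossover(select(pop, fits), select(pop, fits))
        new_pop += [mutate(c1), mutate(c2)]
    new_pop = new_pop[:N]
    if fitness(elite) > max(fitness(i) for i in new_pop):   # step 6: elitism
        new_pop[rng.randrange(N)] = elite
    pop = new_pop                                 # step 7

best = max(pop, key=fitness)
```

Because of elitism, the best fitness found never decreases from one generation to the next, so the final population's best member is at least as good as the best initial guess.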
Continuous GA: Trial Example
Initial Population with 8 chromosomes
Chromosomes selected for mating at a 50% selection rate
Blending Method Cross Over
Method I:
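The slide's formula for Method I is not reproduced here; a common blending rule (assumed to be the intended one) forms each offspring gene as p_new = β·p_mother + (1 − β)·p_father with a random β in [0, 1]. A minimal sketch under that assumption:

```python
import random

# Blending crossover sketch for the continuous GA: each offspring gene is
# p_new = beta*p_mother + (1 - beta)*p_father, beta drawn uniformly from
# [0, 1]. The second child uses the complementary weights.
def blend(mother, father, rng=random):
    child1, child2 = [], []
    for pm, pd in zip(mother, father):
        beta = rng.random()
        child1.append(beta * pm + (1 - beta) * pd)
        child2.append(beta * pd + (1 - beta) * pm)
    return child1, child2

c1, c2 = blend([1.0, 4.0], [3.0, 2.0])
```

Each offspring gene lies between the two parent values, and the two children's genes sum to the parents' sum, so blending interpolates rather than extrapolates.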
Mutation
With a mutation rate of 20% and 7 chromosomes subject to mutation (the best chromosome is typically exempt) in a two-variable problem, the number of mutated genes in the population matrix is 0.2 × 7 × 2 ≈ 3.
The locus of the mutation is identified randomly.
Then the chosen gene is replaced by a uniform random number in the domain of variable.
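The continuous-GA mutation just described can be sketched as follows; the variable bounds and starting values are illustrative choices of mine.

```python
import random

# Continuous-GA mutation sketch: pick gene positions at random across the
# population matrix and replace each chosen gene with a uniform random
# value from that variable's domain.
def mutate_population(pop, bounds, rate, rng=random):
    n_rows, n_cols = len(pop), len(pop[0])
    n_mut = round(rate * n_rows * n_cols)    # e.g. 0.2 * 7 * 2 -> 3 genes
    for _ in range(n_mut):
        i = rng.randrange(n_rows)            # random chromosome (row)
        j = rng.randrange(n_cols)            # random gene (variable)
        lo, hi = bounds[j]
        pop[i][j] = rng.uniform(lo, hi)
    return pop

pop = [[0.5, 0.5] for _ in range(7)]
pop = mutate_population(pop, bounds=[(0.0, 1.0), (-2.0, 2.0)], rate=0.2)
```

Unlike the bit-flip operator of the binary GA, this replaces a whole real-valued gene, so a single mutation can jump anywhere in that variable's domain.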
Gaussian Distribution
If D is better than C with respect to both objectives, then C is dominated by D and is called an inferior solution.
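The dominance test can be stated in one line (here for minimization of both objectives; the objective vectors for C and D are illustrative numbers consistent with D beating C in both):

```python
# Pareto-dominance sketch (minimization): solution a dominates b if a is
# no worse in every objective and strictly better in at least one.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

C = (3.0, 4.0)
D = (2.0, 1.0)                       # better than C in both objectives
c_is_inferior = dominates(D, C)      # C is a dominated (inferior) solution
```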
Weighted Sum Method