
User:Tgawa63

From Encyclopedia of Mathematics

GENETIC ALGORITHMS

1. Genetic algorithms (GAs): Basic form

A generic GA (also known as an evolutionary algorithm, EA) assumes a discrete search space $H$ and a function \[ f:H\to\mathbb{R}, \] where $H$ is typically a discrete subset of the Euclidean space ${{\mathbb{R}}^{n}}$.

The general problem is to find \[ \arg\underset{X\in H}{\mathop{\min }}\,f, \] where $X$ is a vector of the decision variables and $f$ is the objective function.

With GAs it is customary to distinguish the genotype (the encoded representation of the variables) from the phenotype (the set of variables themselves). The vector $X$ is represented by a string (or chromosome) $s$ of length $l$, made up of symbols drawn from an alphabet $A$, using the mapping \[c:{{A}^{l}}\to H.\]

The mapping $c$ is not necessarily surjective: the range of $c$ determines the subset of $H$ available for exploration by a GA. In addition, a restriction to a subset of strings \[\Xi\subseteq {{A}^{l}}\] may be needed to account for the fact that some strings in ${{A}^{l}}$ may represent invalid solutions to the original problem.

The string length $l$ depends on the dimensions of both $H$ and $A$, with the elements of the string corresponding to genes and their values to alleles. This correspondence between genes and alleles and the decision variables is often referred to as the genotype-phenotype mapping. Given the statements above, the optimization becomes:

\[\arg\underset{s\in {{A}^{l}}}{\mathop{\min }}\,g,\]

given the function

\[g(s)=f(c(s)).\]

Finally, with GAs it is helpful if $c$ is a bijection. The important property of bijections as they apply to GAs is that bijections have an inverse; i.e., there is a unique vector $X$ for every string and a unique string for each $X$.
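The genotype-phenotype mapping above can be illustrated with a minimal sketch, assuming a binary alphabet $A=\{0,1\}$ and taking $H$ to be a uniform grid of $2^{l}$ points in an interval $[lo, hi]$ (an assumption for illustration, not from the source). With this choice $c$ is a bijection onto the grid, so decoding and encoding are inverses:

```python
# Sketch of a genotype-phenotype mapping c : A^l -> H for A = {0, 1} and
# H = a uniform grid of 2**l points in [lo, hi]. Since c is a bijection
# onto the grid, decode and encode are mutual inverses.

def decode(s: str, lo: float = 0.0, hi: float = 1.0) -> float:
    """c(s): map a bit string to a point of the discretised interval."""
    levels = 2 ** len(s) - 1
    return lo + (hi - lo) * int(s, 2) / levels

def encode(x: float, bits: int = 8, lo: float = 0.0, hi: float = 1.0) -> str:
    """c^{-1}(x): map a grid point back to its unique bit string."""
    levels = 2 ** bits - 1
    k = round((x - lo) / (hi - lo) * levels)
    return format(k, f"0{bits}b")

s = "10000000"
x = decode(s)
assert encode(x, bits=8) == s   # bijection: a unique string for each x
```

The string length here controls the resolution of the grid, which is one way the dimensions of $H$ and $A$ determine $l$.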


2. Genetic algorithms and their operators

The operators by which GAs search for the optimal solution are set out in the following statements from Coello et al. (Coello, 2002); their notation is used with some slight modifications:

Let $H$ be a nonempty set (the individual or search space), $\{\mu^{(i)}\}_{i\in \mathbb{N}}$ a sequence in ${{\mathbb{Z}}^{+}}$ (the parent population sizes), $\{\mu'^{(i)}\}_{i\in \mathbb{N}}$ a sequence in ${{\mathbb{Z}}^{+}}$ (the offspring population sizes), $\phi :H\to \mathbb{R}$ a fitness function, $\iota :\bigcup_{i=1}^{\infty }{{({{H}^{\mu }})}^{(i)}}\to \{\text{true},\text{false}\}$ (the termination criterion), $\chi \in \{\text{true},\text{false}\}$, $r$ a sequence $\{ {{r}^{(i)}} \}$ of recombination operators ${{r}^{(i)}}:X_{r}^{(i)}\to T\bigl(\Omega _{r}^{(i)},T\bigl( {{H}^{{{\mu }^{(i)}}}},{{H}^{{{\mu }'}^{(i)}}} \bigr)\bigr)$, $m$ a sequence $\{{{m}^{(i)}}\}$ of mutation operators ${{m}^{(i)}}:X_{m}^{(i)}\to T\bigl(\Omega _{m}^{(i)},T\bigl( {{H}^{{{\mu }^{(i)}}}},{{H}^{{{\mu }'}^{(i)}}} \bigr)\bigr)$, $s$ a sequence $\{{{s}^{(i)}}\}$ of selection operators ${{s}^{(i)}}:X_{s}^{(i)}\times T(H,\mathbb{R})\to T\bigl(\Omega _{s}^{(i)},T\bigl({{H}^{{{\mu }'}^{(i)}+\chi {{\mu }^{(i)}}}},{{H}^{{{\mu }^{(i+1)}}}}\bigr)\bigr)$, $\Theta _{r}^{(i)}\in X_{r}^{(i)}$ (the recombination parameters), $\Theta _{m}^{(i)}\in X_{m}^{(i)}$ (the mutation parameters), and $\Theta _{s}^{(i)}\in X_{s}^{(i)}$ (the selection parameters).

Coello et al. (2002) define a population of $\mu$ individuals as an element of ${{H}^{\mu }}$. Population transformations are denoted by $T:{{H}^{\mu }}\to {{H}^{\mu }}$, where $\mu \in \mathbb{N}$. However, some GA methods generate populations whose size is not equal to their predecessors'. The more general mapping $T:{{H}^{\mu}}\to {{H}^{{{\mu }'}}}$ can accommodate successive populations that contain the same or a different number of individuals. This framework is able to represent all population sizes, evolutionary operators, and parameters as sequences.

The execution of a GA typically begins by randomly sampling with replacement from ${{A}^{l}}$. The resulting collection is the initial population, denoted by $P(0)$. In general, a population is a collection \[P=({{a}_{1}},{{a}_{2}},\ldots,{{a}_{\mu }})\] of individuals, where ${{a}_{i}}\in {{A}^{l}}$, and populations are treated as $\mu$-tuples of individuals. The number of individuals, $\mu$, is defined as the population size.
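The initialization step can be sketched as follows, again assuming the binary alphabet (the function name and seeding are illustrative, not from the source):

```python
import random

# Sketch of GA initialisation: sample mu individuals with replacement
# from A^l, here with the binary alphabet A = {0, 1}.
def initial_population(mu: int, l: int, alphabet: str = "01", seed: int = 0):
    rng = random.Random(seed)
    return tuple("".join(rng.choice(alphabet) for _ in range(l))
                 for _ in range(mu))

P0 = initial_population(mu=20, l=10)   # a 20-tuple of strings over {0, 1}
assert len(P0) == 20 and all(len(a) == 10 for a in P0)
```

Because sampling is with replacement, the same string may appear more than once in $P(0)$, which is why populations are tuples rather than sets.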

The work of Lamont and Merkle (Lamont, 1997) defines the termination criteria and the other genetic operators (GOs) in more detail.

Since $H$ is a nonempty set, $c:{{A}^{l}}\to H$, and $f:H\to \mathbb{R}$, the fitness scaling function can be defined as ${{T}_{s}}:\mathbb{R}\to \mathbb{R}$ and a related fitness function as $\Phi \triangleq {{T}_{s}}\circ f\circ c$. In this definition it is understood that the objective function $f$ is determined by the application, while the specification of the decoding function $c$[1] and the fitness scaling function ${{T}_{s}}$ are design issues.
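The composition $\Phi = {{T}_{s}}\circ f\circ c$ can be made concrete with a toy sketch; the particular decoding, objective, and scaling functions below are assumptions chosen for illustration, not taken from the source:

```python
# Sketch of the composed fitness Phi = T_s . f . c on binary strings.
def c(s: str) -> float:            # decoding function (a design issue)
    return int(s, 2) / (2 ** len(s) - 1)

def f(x: float) -> float:          # objective function (application-defined)
    return (x - 0.25) ** 2

def T_s(y: float) -> float:        # fitness scaling (a design issue):
    return 1.0 / (1.0 + y)         # turns minimisation into maximisation

def Phi(s: str) -> float:
    return T_s(f(c(s)))

# Strings decoding near x = 0.25 receive the highest fitness.
assert Phi("01000000") > Phi("11111111")
```

Note that only $f$ is fixed by the problem; swapping in a different ${{T}_{s}}$ (e.g. rank-based scaling) changes selection pressure without touching the objective.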


Following initialization, execution proceeds iteratively. Each iteration consists of an application of one or more GOs. The combined effect of the GOs applied in a particular generation $t\in \mathbb{N}$ is to transform the current population $P(t)$ into a new population $P(t+1)$. In the population transformation, $\mu ,{\mu }'\in {{\mathbb{Z}}^{+}}$ (the parent and offspring population sizes, respectively). A mapping $T:{{H}^{\mu }}\to {{H}^{{{\mu }'}}}$ is called a population transformation (PT). If $T(P)={P}'$, then $P$ is a parent population and ${P}'$ is the offspring population. If $\mu ={\mu }'$, then $\mu$ is called simply the population size.
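As a minimal sketch of a PT with $\mu \ne {\mu }'$, the toy transformation below keeps the best half of the population under a given fitness; the function names and the ones-counting fitness are illustrative assumptions, not from the source:

```python
# Sketch of a population transformation T : H^mu -> H^mu', here a toy
# truncation step with mu' = mu // 2 (offspring smaller than parents).
def truncation_pt(P, fitness):
    mu_prime = len(P) // 2
    # sorted() is stable, so ties keep their original order
    return tuple(sorted(P, key=fitness, reverse=True)[:mu_prime])

P = ("1100", "0011", "1111", "0000")           # parent population, mu = 4
P_prime = truncation_pt(P, fitness=lambda s: s.count("1"))
# P_prime is the offspring population, mu' = 2
```

Here $T(P)={P}'$ with $|P|=4$ and $|{P}'|=2$, showing why the general framework must allow offspring sizes different from parent sizes.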

The PT resulting from a GO often depends on the outcome of a random experiment. In Lamont and Merkle (1997) this result is referred to as a random population transformation (RPT or random PT). To define an RPT, let $\mu \in {{\mathbb{Z}}^{+}}$ and let $\Omega $ be a set (the sample space). A random function $R:\Omega \to T({{H}^{\mu }},\bigcup\limits_{{\mu }'\in {{\mathbb{Z}}^{+}}}{{{H}^{{{\mu }'}}}})$ is called an RPT. The distribution of PTs resulting from the application of an EO depends on the operator parameters; in other words, an EO maps its parameters to an RPT. Now that both the fitness function and the RPT have been defined, the EO can be defined in general: let $\mu \in {{\mathbb{Z}}^{+}}$, let $X$ be a set (the parameter space), and let $\Omega $ be a set. The mapping $Z:X\to T\left( \Omega ,T\left( {{H}^{\mu }},\bigcup\limits_{{\mu }'\in {{\mathbb{Z}}^{+}}}{{{H}^{{{\mu }'}}}} \right) \right)$ is an EO. The set of EOs is denoted by $EVOP\left( H,\mu ,X,\Omega \right)$.
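The two-level structure of an EO (parameters $\mapsto$ random outcome $\mapsto$ PT) can be sketched directly; modelling $\Omega$ by a seed and using bit-flip mutation as the operator are illustrative assumptions, not from the source:

```python
import random

# Sketch of an EO as a curried mapping Z : X -> T(Omega, T(H^mu, H^mu')).
# theta (the mutation rate) plays the role of the parameter in X; a seed
# stands in for an element omega of the sample space Omega.
def mutation_eo(theta: float):
    def with_outcome(seed: int):
        rng = random.Random(seed)
        def pt(P):                       # the resulting concrete PT
            return tuple("".join(('1' if b == '0' else '0')
                                 if rng.random() < theta else b
                                 for b in a)
                         for a in P)
        return pt
    return with_outcome

T_pt = mutation_eo(0.0)(42)              # rate 0 => identity transformation
assert T_pt(("101", "010")) == ("101", "010")
```

Fixing $\Theta$ but varying $\omega$ yields the distribution of PTs that the text calls an RPT.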

There are three common GOs: recombination, mutation, and selection. These three operators are roughly analogous to their similarly named counterparts in genetics. Their application in GAs is strictly Darwinian in nature, i.e., "survival of the fittest."

In defining the recombination operator, let $r\in EVOP\left( H,\mu ,X,\Omega \right)$. If there exist $P\in {{H}^{\mu }}$, $\Theta \in X$, and $\omega \in \Omega $ such that one individual in the offspring population ${{r}_{\Theta }}\left( P \right)$ depends on more than one individual of $P$, then $r$ is referred to as a recombination operator. A mutation operator is defined in the following manner: let $m\in EVOP\left( H,\mu ,X,\Omega \right)$. If for every $P\in {{H}^{\mu }}$, every $\Theta \in X$, and every $\omega \in \Omega $, each individual in the offspring population ${{m}_{\Theta }}\left( P \right)$ depends on at most one individual of $P$, then $m$ is called a mutation operator.
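These two definitions can be sketched with the classic binary-string operators; one-point crossover and bit-flip mutation are standard illustrative choices, not prescribed by the source:

```python
import random

# One-point crossover: each child depends on TWO parents, so this is a
# recombination operator in the sense of the definition above.
def one_point_crossover(p1: str, p2: str, rng: random.Random):
    cut = rng.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

# Bit-flip mutation: each child depends on exactly ONE parent, so this
# is a mutation operator in the sense of the definition above.
def bit_flip(s: str, rate: float, rng: random.Random) -> str:
    return "".join(('1' if b == '0' else '0') if rng.random() < rate else b
                   for b in s)

rng = random.Random(0)
c1, c2 = one_point_crossover("11111", "00000", rng)
assert sorted(c1 + c2) == sorted("11111" + "00000")  # alleles conserved
```

Note that crossover only rearranges existing alleles between parents, while mutation is the operator that can introduce allele values absent from the current population.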

Finally, for a selection operator: let $s\in EVOP\left( H,\mu ,X\times T\left( H,\mathbb{R} \right),\Omega \right)$. If $P\in {{H}^{\mu }}$, $\Theta \in X$, and $\Phi :H\to \mathbb{R}$ in all cases, and $s$ satisfies $a\in {{s}_{\left( \Theta ,\Phi \right)}}(P)\Rightarrow a\in P$, then $s$ is a selection operator.
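The defining condition $a\in {{s}_{(\Theta ,\Phi )}}(P)\Rightarrow a\in P$ says selection only copies existing individuals, never creates new ones. A minimal sketch, using binary tournament selection as one concrete instance (an illustrative choice, not prescribed by the source):

```python
import random

# Sketch of a selection operator: binary tournament selection. Every
# selected individual is drawn from P, so a in s(P) implies a in P.
def tournament(P, Phi, mu_next: int, rng: random.Random):
    def pick():
        a, b = rng.choice(P), rng.choice(P)
        return a if Phi(a) >= Phi(b) else b
    return tuple(pick() for _ in range(mu_next))

rng = random.Random(1)
P = ("1100", "0011", "1111", "0000")
P_next = tournament(P, Phi=lambda s: s.count("1"), mu_next=4, rng=rng)
assert all(a in P for a in P_next)   # selection introduces no new individuals
```

Fitter individuals tend to be copied more often, which is the "survival of the fittest" behaviour the operators above are meant to formalize.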


Endnotes

[1] Recall that if the domain of $c$ is all of ${{A}^{l}}$, $c$ is called a decoding function. The mapping $c$ is not necessarily surjective; the range of $c$ determines the subset of $H$ available for exploration by the evolutionary algorithm.

References

Coello Coello, C.A., Van Veldhuizen, D.A., Lamont, G.B.: Evolutionary Algorithms for Solving Multi-Objective Problems. Kluwer Academic (2002).

Lamont, G.B., Merkle, L.D.: A Random Function Based Framework for Evolutionary Algorithms. In: Bäck, T. (ed.), Proceedings of the Seventh International Conference on Genetic Algorithms. Morgan Kaufmann (1997).

How to Cite This Entry:
Tgawa63. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Tgawa63&oldid=27736