
Genetic Algorithms

From Encyclopedia of Mathematics
Revision as of 21:26, 16 October 2012 by Tgawa63 (talk | contribs)

1. Genetic algorithms (GAs): basic form

A generic evolutionary algorithm (EA), also known as a genetic algorithm (GA), assumes a discrete search space H and a function \[f:H\to \mathbb{R}\], where H is a subset of a Euclidean space. The general problem is to find \[\arg \underset{x\in H}{\mathop{\min }}\,f(x)\], where x is a vector of the decision variables and f is the objective function.

With EAs it is customary to distinguish the genotype (the encoded representation of the variables) from the phenotype (the set of variables themselves). The vector x is represented by a string (or chromosome) s of length l, made up of symbols drawn from an alphabet A, using the mapping \[c:{{A}^{l}}\to H\]. In practice the search may need to be restricted to a subset \[\Xi \subseteq {{A}^{l}}\] to reflect the fact that some strings in A^l may represent invalid solutions to the problem. The string length l depends on the dimensions of both H and A; the elements of the string correspond to genes and their values to alleles. The mapping c is often referred to as the genotype-phenotype mapping.

Given the statements above, the optimization becomes one of finding \[\arg \underset{s\in \Xi }{\mathop{\min }}\,g(s)\], where \[g(s)=f(c(s))\]. With EAs it is helpful if c is a bijection. The important property of bijections as they apply to EAs is that bijections have an inverse, i.e., there is a unique vector x for every string and a unique string for each x.

The execution of an EA typically begins by randomly sampling with replacement from A^l. The resulting collection is the initial population, denoted P(0). In general, a population is a tuple \[P=({{a}_{1}},{{a}_{2}},\ldots ,{{a}_{\mu }})\] of individuals, where \[{{a}_{i}}\in {{A}^{l}}\]. The number of individuals μ is called the population size. Following initialization, execution proceeds iteratively.
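The encoding and initialization steps above can be sketched as follows. This is a minimal illustration, not a prescribed implementation: the binary alphabet A = {0, 1}, the 8-bit-per-variable encoding, the bounds [-5, 5], and the names `c` and `initial_population` are all assumptions chosen for the example.

```python
import random

# Assumed setup: binary alphabet A = {0, 1}, two decision variables,
# each encoded with 8 bits, so the string length is l = 16.
BITS_PER_VAR, N_VARS = 8, 2
L = BITS_PER_VAR * N_VARS
LO, HI = -5.0, 5.0  # assumed bounds of the search space H


def c(s):
    """Genotype-phenotype mapping c: A^l -> H.

    Decodes each 8-bit slice of the string s as an integer and scales
    it linearly into the interval [LO, HI].
    """
    x = []
    for i in range(N_VARS):
        bits = s[i * BITS_PER_VAR:(i + 1) * BITS_PER_VAR]
        k = int("".join(map(str, bits)), 2)
        x.append(LO + (HI - LO) * k / (2 ** BITS_PER_VAR - 1))
    return x


def initial_population(mu, rng):
    """P(0): sample mu strings uniformly (with replacement) from A^l."""
    return [tuple(rng.randint(0, 1) for _ in range(L)) for _ in range(mu)]


rng = random.Random(0)
P0 = initial_population(10, rng)  # population size mu = 10
x = c(P0[0])                      # phenotype of the first individual
```

Here c is a bijection between the 2^16 strings and the grid of decoded points, so every string has a unique phenotype and vice versa.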
Each iteration consists of the application of one or more of the evolutionary operators (EOs): recombination, mutation and selection. The combined effect of the EOs applied in a particular generation $t\in \mathbb{N}$ is to transform the current population P(t) into a new population P(t+1).

For the population transformation, let $\mu ,{\mu }'\in {{\mathbb{Z}}^{+}}$ be the parent and offspring population sizes, respectively. A mapping $T:{{H}^{\mu }}\to {{H}^{{{\mu }'}}}$ is called a population transformation (PT). If $T(P)={P}'$, then P is the parent population and ${P}'$ is the offspring population. If $\mu ={\mu }'$, this common value is simply the population size.

The PT resulting from an EO often depends on the outcome of a random experiment. In Lamont and Merkle (Merkle, 1997) this is referred to as a random population transformation (RPT, or random PT). To define an RPT, let $\mu \in {{\mathbb{Z}}^{+}}$ and let $\Omega $ be a set (the sample space). A random function $R:\Omega \to T({{H}^{\mu }},\bigcup\limits_{{\mu }'\in {{\mathbb{Z}}^{+}}}{{{H}^{{{\mu }'}}}})$ is called an RPT. The distribution of PTs resulting from the application of an EO depends on the operator parameters; in other words, an EO maps its parameters to an RPT.

Since H is a nonempty set with \[c:{{A}^{l}}\to H\] and \[f:H\to \mathbb{R}\], a fitness scaling function can be defined as \[{{T}_{s}}:\mathbb{R}\to \mathbb{R}\], and a related fitness function as \[\Phi \triangleq {{T}_{s}}\circ f\circ c\]. In this definition it is understood that the objective function f is determined by the application.

Now that both the fitness function and the RPT have been defined, the EOs can be defined in general: let $\mu \in {{\mathbb{Z}}^{+}}$, let \[\aleph \] be a set (the parameter space) and let $\Omega $ be a set (the sample space). The mapping $X:\aleph \to T\left( \Omega ,T\left[ {{H}^{\mu }},\bigcup\limits_{{\mu }'\in {{\mathbb{Z}}^{+}}}{{{H}^{{{\mu }'}}}} \right] \right)$ is an EO.
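The composition \[\Phi ={{T}_{s}}\circ f\circ c\] can be made concrete with a short sketch. The sphere objective, the sign-flipping scaling, and the 8-bit decoding below are assumed examples chosen only to show the composition; the general definitions do not prescribe them.

```python
def c(s):
    """Assumed decoding c: A^l -> H; the 8-bit string s is read as an
    integer and scaled linearly into [-5, 5]."""
    k = int("".join(map(str, s)), 2)
    return [-5.0 + 10.0 * k / 255.0]


def f(x):
    """Objective f: H -> R (assumed example: sphere function)."""
    return sum(xi * xi for xi in x)


def T_s(y):
    """Fitness scaling T_s: R -> R (here: negation, so that a higher
    fitness value corresponds to a lower objective value)."""
    return -y


def phi(s):
    """Fitness Phi(s) = T_s(f(c(s)))."""
    return T_s(f(c(s)))
```

With this scaling, maximizing Φ over strings is equivalent to minimizing the objective g(s) = f(c(s)); for instance, the all-zeros string decodes to -5.0 and receives the worst fitness under the sphere objective.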
The set of EOs is denoted $EVOP\left( H,\mu ,\aleph ,\Omega \right)$. There are three common EOs: recombination, mutation, and selection. These three operators are roughly analogous to their similarly named counterparts in genetics; their application in EAs is strictly Darwinian in nature, i.e., "survival of the fittest." In defining the EOs we follow Lamont and Merkle.

Let $r\in EVOP\left( H,\mu ,\aleph ,\Omega \right)$. If there exist $P\in {{H}^{\mu }}$, $\Theta \in \aleph $ and $\omega \in \Omega $ such that some individual in the offspring population ${{r}_{\Theta }}\left( P \right)$ depends on more than one individual of P, then r is referred to as a recombination operator.

A mutation operator is defined in the following manner: let $m\in EVOP\left( H,\mu ,\aleph ,\Omega \right)$. If for every $P\in {{H}^{\mu }}$, every $\Theta \in \aleph $ and every $\omega \in \Omega $, each individual in the offspring population ${{m}_{\Theta }}\left( P \right)$ depends on at most one individual of P, then m is called a mutation operator.

Finally, for a selection operator let $s\in EVOP\left( H,\mu ,\aleph \times T\left( H,\mathbb{R} \right),\Omega \right)$. If for $P\in {{H}^{\mu }}$, $\Theta \in \aleph $ and $f:H\to \mathbb{R}$ the operator satisfies $a\in {{s}_{\left( \Theta ,\Phi \right)}}(P)\Rightarrow a\in P$, then s is a selection operator.
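The three operator definitions above can be illustrated on binary strings. One-point crossover, bit-flip mutation, and binary tournament selection are assumed concrete instances (the definitions admit many others); the final assertion checks the defining property of selection, that every selected individual already belongs to P.

```python
import random

rng = random.Random(1)
L = 8  # assumed string length for the example


def recombine(p1, p2):
    """One-point crossover: each child depends on BOTH parents,
    so this is a recombination operator."""
    cut = rng.randrange(1, L)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]


def mutate(s, rate=0.1):
    """Bit-flip mutation: each child depends on AT MOST ONE parent,
    so this is a mutation operator."""
    return tuple(1 - b if rng.random() < rate else b for b in s)


def select(P, phi, k=2):
    """Binary tournament selection: repeatedly pick k individuals at
    random and keep the fittest, so every survivor is a member of P."""
    return [max(rng.sample(P, k), key=phi) for _ in range(len(P))]


P = [tuple(rng.randint(0, 1) for _ in range(L)) for _ in range(6)]
survivors = select(P, phi=lambda s: sum(s))  # assumed fitness: count of ones
assert all(a in P for a in survivors)        # defining property of selection
```

Note that recombination and mutation may create strings not present in P, whereas selection only ever copies existing members, which is exactly the distinction the formal definitions draw.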

How to Cite This Entry:
Genetic Algorithms. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Genetic_Algorithms&oldid=28473