PEXESO library
PEXESO is a C++ Library of Optimizing Methods, created as a side result of about one year of research on these topics at the Faculty of Civil Engineering of the Czech Technical University in Prague. It is intended for a Linux environment and is meant to be compiled and used as a shared library (libpexeso.so).
Index of this document:

1. What are the Optimization Methods
2. Idea of the PEXESO Library
3. Related Links
4. Algorithmic model of PEXESO library
5. Description of Selection Strategies
6. Description of Recombination Operators
7. The most simple example
8. If we want to see some progress
9. Why running so long
10. Non-rectangular domain
11. If you have KRESLITKO...
12. Class px_task function reference
13. Class px_strategy function reference
14. Loading strategies
15. Assembling your own evolutionary method
16. Download latest release
17. How to compile & install PEXESO
18. Release history
19. References to the literature
20. Index of Optimizing Tasks that have been recently solved by PEXESO

1. What are the Optimization Methods

Generally, Optimization is a task whose aim is to find the best value of some function. For example, let's take the function

f(x)=x^2-5x+8. 

Then, let's find the minimum of this function. We can write down the derivative of this function:

f'(x)=2x-5,

and look where this derivative is equal to zero. Obviously, f'(x)=0 for x=2.5. Now, let's evaluate our function at the point x=2.5: we get the value 1.75, which is the global minimum of our function. The task we've just solved is called 'minimization', which is a special case of 'optimization' (because 'optimization' can also mean 'maximization').

The example above is very simple. But the real-world tasks where there is some interest in finding the global minimum or maximum of some function can be very complicated. The first difficulty one can meet is that the function we would like to optimize has no analytic expression, or is too complicated to be differentiated. Another problem arises if the function is not continuous, or has more than one or two parameters (I mean those 'x').

For the more complicated problems, numerical methods are usually used. These methods can be divided into several groups:

  • gradient methods, 
  • simulated annealing, 
  • multilayered global optimizers,
  • evolutionary computation.

    Of course, this document cannot deal with all these methods in detail. I think the best place to obtain lots of information about Global Optimization is this site: http://solon.cma.univie.ac.at/~neum/glopt.html. You can also buy some books in a bookstore.
    2. Idea of the PEXESO library

    The PEXESO library provides functionality that covers several types of already discovered optimization methods of the evolutionary type. The main idea is that whoever tries to solve some optimization problem is able to compose and tune the whole evolutionary algorithm by himself. It is possible to choose among several types of selection strategies ('steady state', 'trial vectors', 'tournament selection' or 'total offspring') and to compose one's own combination of evolutionary recombination operators ('simplified differential', 'hard mutation', 'local mutation' and a set of Rainer Storn's differential operators). Other selection strategies and recombination operators will be added in the near future. It seems that this library can now compose evolution strategies, differential evolution, differential genetic algorithms and many other combinations of operators and selection strategies that have no name until now.

    Besides the possibility of composing the evolutionary method yourself, one can also choose from three pre-composed optimization strategies:

  • Differential Evolution, see this page
  • SADE technology, see this page
  • HADE (Highly Abstracted Differential Evolution).

    PEXESO is a C++ library. This approach, however unpleasant for many developers, I consider the simplest after all. There are two main classes, which model the optimization task and the optimization method.

    Class px_task models the optimized task in a global manner: it contains the optimized function itself, the domain on which the function is defined, as well as the stopping criteria and the way to display information about the progress of the computation and the results. Whoever wants to optimize his own task with the PEXESO library, alas, must inherit his own class 'my_optimization_task' from this one, something like:

    class my_optimization_task : public px_task

    and so on. There are five methods that must be written every time, and many others that can be left as they are or redefined if necessary.

    On the other hand, class px_strategy stands as the ancestor of any selection strategy and also holds pointers to the genetic (recombination) operators. There is no need to inherit anything in this part of using PEXESO; the developer only has to choose between several particular classes inherited from px_strategy:
     

  • px_steady_state,
  • px_trial_vectors,
  • px_tournament,
  • px_total_offspring,

    The easiest way to create one is to use the function px_create_any, for example:

    px_strategy *s=px_create_any( "Tournament",10,t ) ;

    (the parameters will be discussed later). Any of these strategies should then be filled with (up to 8) recombination operators using the functions:
  • px_strategy::set_main_operator(...);
  • px_strategy::set_marginal_operator(...);

    The idea is that we have one operator called the main one (it must always be present) and several operators that are optional. The optional (marginal) operators have their own 'probabilities'. When the algorithm creates a new generation, it performs a random experiment for each chromosome (trial vector) to determine whether one of the marginal operators is to be used; otherwise, the main operator is used. So, if we have a main operator and a marginal operator with a probability of 5%, this means that out of 100 new chromosomes, on average 5 will be created by the marginal operator and 95 by the main operator.

    Important note: the PEXESO library can only operate on real domains.

    3. Related Links

    Let's note some links to sites that concern the Global Optimization, Evolutionary Computation and so on.

  • The site on Global Optimization: http://solon.cma.univie.ac.at/~neum/glopt.html 
  • The homepage of Differential Evolution by Rainer Storn: http://www.icsi.berkeley.edu/~storn/code.html
  • The papers of Zbyszek Michalewicz: http://www.coe.uncc.edu/~zbyszek/papers.html 
  • The homepage of the SADE Technology: http://klobouk.fsv.cvut.cz/~ondra/sade/sade.html
  • Using the Differential Evolution on the Reinforced Concrete Beam Optimization: http://klobouk.fsv.cvut.cz/~ondra/mr.beam/mr.beam.html
  • The empirical comparison of different types of Evolutionary Algorithms: http://klobouk.fsv.cvut.cz/~ondra/about_ga/about_ga.html

    4. Algorithmic model of PEXESO library

    Every evolutionary computation method can be modelled by the same algorithmic scheme, which can be written in the following form:
     

    void EVOLUTIONARY_METHOD ( task_to_optimize *ot )
    {
        double bsf ;
        FIRST_GENERATION( ot ) ;
        bsf=EVALUATE_CHROMOSOMES( ot,0,number_of_selected_chromosomes ) ;
        int stop=0,generation=0 ;
        do
        {
             GENERATE_OFFSPRING( ot ) ;
             bsf=EVALUATE_CHROMOSOMES( ot,number_of_selected_chromosomes,number_of_all_chromosomes ) ;
             PERFORM_SELECTION() ;
             if ( TO_BE_STOPPED( bsf,generation )) stop=1 ;
             generation++ ;
        }
        while ( !stop ) ;
    }

    In this code, bsf is a variable that contains the best result reached so far (Best So Far). Let's now discuss the sub-functions one by one:
     

  • FIRST_GENERATION - generates the first set of chromosomes
  • EVALUATE_CHROMOSOMES - assigns a value of the optimized function to each of the chromosomes
  • GENERATE_OFFSPRING - generates the new (trial) set of chromosomes as the recombinations of the current ones; this function calls all the recombination operators
  • PERFORM_SELECTION - this function selects the survivors from both the current and the new set of chromosomes; it is the function that differs most between the classes inherited from px_strategy
  • TO_BE_STOPPED - checks whether the optimum has been reached or the allowed number of generations has been exceeded (of course, what exactly this function checks can differ between optimization tasks)

    5. Description of Selection Strategies

    Four types of Selection Strategies are implemented. Important functions used by them are create_new_one (which creates a new chromosome using the recombination operators) and select_better (which selects the better of two chromosomes and, if it is the new one, replaces the current one with it).
     

  • Steady state: in each generation only one new chromosome is generated; then another chromosome is chosen from the population and the two are compared; if the new one is better, it replaces the current one.
  • Trial vectors: for each chromosome a potential substitute is generated; if it is better than the current one, it replaces it.
  • Tournament: first, the whole new population is generated, and then the survivors are selected from both the new and the current population this way: each time two chromosomes are selected randomly and the worse one is killed.
  • Total offspring: works in the same manner as the previous one, except that the new generation is twice as big as the current one and the survivors are selected only from the new chromosomes.

    6. Description of Recombination Operators

    Now, let's introduce all the types of recombination operators. Each of them is given by an equation, where:
     

    ch[i] is the i-th chromosome,
    ch[i][j] is the j-th coordinate of the i-th chromosome,
    ch' is always a chromosome of the offspring,
    rand_ch is a completely new random chromosome,
    best_ch is the best chromosome of the current generation,
    rand( a,b ) is a random double between a and b,
    range[j] is the range of the domain for coordinate j,
    n is the number of coordinates,
    Iexp(CR) is a subset of {1,2,...,n} chosen for parameter CR in the exponential sense (a contiguous run of coordinates, as usual in Differential Evolution),
    Ibin(CR) is a subset of {1,2,...,n} chosen for parameter CR in the binomial sense (each coordinate included independently with probability CR),
    p,q,r,s,t are random indices.
  • Simplified Differential (parameter CR): ch'[i]=ch[i]+CR*( ch[p]-ch[q] ),
  • Hard Mutation (parameter MR): ch'[i]=ch[i]+MR*( rand_ch-ch[i] ),
  • Local Mutation (parameter LR): ch'[i][j]=ch[i][j]+LR*rand( -1,1 )*range[j],
  • Best One Exponencial (parameters F,CR): ch'[i][j]=best_ch[j]+F*( ch[p][j]-ch[q][j] ) for each j in Iexp(CR),
  • Rand One Exponencial (parameters F,CR): ch'[i][j]=ch[p][j]+F*( ch[q][j]-ch[r][j] ) for each j in Iexp(CR),
  • Best Two Exponencial (parameters F,CR): ch'[i][j]=best_ch[j]+F*( ch[p][j]-ch[q][j] )+F*( ch[r][j]-ch[s][j] ) for each j in Iexp(CR),
  • Rand Two Exponencial (parameters F,CR): ch'[i][j]=ch[p][j]+F*( ch[q][j]-ch[r][j] )+F*( ch[s][j]-ch[t][j] ) for each j in Iexp(CR),
  • Rand To Best One Exponencial (parameters F,CR): ch'[i][j]=ch[i][j]+F*( best_ch[j]-ch[i][j] )+F*( ch[p][j]-ch[q][j] ) for each j in Iexp(CR),
  • Best One Binomial (parameters F,CR): ch'[i][j]=best_ch[j]+F*( ch[p][j]-ch[q][j] ) for each j in Ibin(CR),
  • Rand One Binomial (parameters F,CR): ch'[i][j]=ch[p][j]+F*( ch[q][j]-ch[r][j] ) for each j in Ibin(CR),
  • Best Two Binomial (parameters F,CR): ch'[i][j]=best_ch[j]+F*( ch[p][j]-ch[q][j] )+F*( ch[r][j]-ch[s][j] ) for each j in Ibin(CR),
  • Rand Two Binomial (parameters F,CR): ch'[i][j]=ch[p][j]+F*( ch[q][j]-ch[r][j] )+F*( ch[s][j]-ch[t][j] ) for each j in Ibin(CR),
  • Rand To Best One Binomial (parameters F,CR): ch'[i][j]=ch[i][j]+F*( best_ch[j]-ch[i][j] )+F*( ch[p][j]-ch[q][j] ) for each j in Ibin(CR).
    7. The most simple example

    Writing an optimization application using the PEXESO library consists of these steps:
    1. create your own optimization task (inherit it from px_task),
    2. use one of PEXESO's precomposed optimizing algorithms or compose your own,
    3. write main(),
    4. compile it using -lpexeso.

    In our example, we'll try to solve the Chebychev trial polynomial problem with the Differential Evolution. It's the same trial task as you can find on Rainer Storn's page on Differential Evolution.

    So, let's define our class chebychev:

    #include <pexeso.h>
    #include <stdlib.h>

    #define Dim 9
    #define M 60

    class chebychev : public px_task
    {
        public:
            double fitness ( double *ox ) ;
            double get_desired_extreme ( void ) ;
            int get_dimension ( void ) ;
            void new_point ( double *ox ) ;
            void get_limits ( int odir , double &omin , double &omax , double &oprec ) ;
    } ;

    Note: in this example, we declare and define only the methods that are absolutely necessary (those that are pure virtual in px_task). The constants Dim and M will be used in the code below.

    So, we need these functions:

  • fitness (the optimized function itself; this method returns the evaluation of the given chromosome - the potential solution vector ox)
  • get_desired_extreme (this method is used by other methods to determine whether it is worth continuing the computation; if you don't know the best value, return the biggest number that could be expected)
  • get_dimension (simply returns the number of parameters of the optimized function)
  • new_point (creates a random new point in the domain and stores it in the vector ox)
  • get_limits (assigns omin and omax, the outermost limits of the domain in the direction odir, and oprec, the desired precision; if there is none, fill in zero)

    Now, let's code these methods:
    double chebychev::fitness ( double *ox )
    {
        int i,j ;
        double px,x=-1,dx=M,result=0 ;

        dx=2.0/dx ;
        for ( i=0 ; i<=M ; i++ )
        {
            px=ox[0] ;
            for ( j=1 ; j<Dim ; j++ )
            {
                px=x*px+ox[j] ;
            }
            if (( px<-1.0 ) || ( px>1.0 )) result+=( 1.0-px )*( 1.0-px ) ;
            x+=dx ;
        }
        px=ox[0] ;
        for ( j=1 ; j<Dim ; j++ ) px=1.2*px+ox[j] ;
        px-=72.661 ;
        if ( px<=0.0 ) result+=px*px ;

        px=ox[0] ;
        for ( j=1 ; j<Dim ; j++ ) px=-1.2*px+ox[j] ;
        px-=72.661 ;
        if ( px<=0.0 ) result+=px*px ;

        return ( -result ) ;
    }

    Note: PEXESO does maximization, so if you minimize something, please return -value.
     
    double chebychev::get_desired_extreme ( void )
    {
        return 0.0 ;
    }

    int chebychev::get_dimension ( void )
    {
        return Dim ;
    }

    void chebychev::new_point ( double *ox )
    {
        int j ;
        for ( j=0 ; j<Dim ; j++ )
        {
            ox[j]=(( double )rand()/RAND_MAX )*( 200.0 )-100.0 ;
        }
    }

    void chebychev::get_limits ( int odir , double &omin , double &omax , double &oprec )
    {
        omin=-100.0 ;
        omax=100.0 ;
        oprec=0.0 ;
    }

    We want to optimize this function using the Differential Evolution, which belongs to PEXESO's precomposed strategies, so we don't need to compose an optimization algorithm ourselves. We'll simply use the function px_setup_as_storn. So, now let's proceed to write main():
    int main ( void )
    {
        px_task *T=new chebychev ;
        px_strategy *S=px_setup_as_storn( "RandToBestOneExponencial",0.85,1.0,T ) ;
        S->run( T ) ;

        delete S ;
        delete T ;

        return 0 ;
    }

    Shall we discuss main() step by step? So,
    1. You have to instantiate the chebychev class you've defined:
     

    px_task *T=new chebychev ;

    2. You have to create an optimization strategy. Because we want the Differential Evolution, we use the function px_setup_as_storn. "RandToBestOneExponencial" is the name of the operator, 0.85 is F and 1.0 is CR.
     

    px_strategy *S=px_setup_as_storn( "RandToBestOneExponencial",0.85,1.0,T ) ;

    3. Let's run the optimization:
     

    S->run( T ) ;

    4. What was allocated must also be deallocated:
     

    delete S ;
    delete T ;
    OK, we've written this. Now, let's compile it:
     
    g++ -o chebychev chebychev.c -lpexeso
    and run it:
     
    ./chebychev

    Well, it's running, even though it isn't writing anything to the screen. If you wait for some time, it will finish and print the results to the screen.
    This example is in full in doc/example1.c.

    8. If we want to see some progress

    The thing we dislike most about the first run is that we can't see what is going on during the computation. If we want to see the progress, we can redefine the method px_task::info_about_status:

    void px_task::info_about_status ( double **osolutions , double *oforces , int obtg , int ogeneration , int opopulation ) ;
    which has these parameters:

    osolutions: all the vectors (chromosomes) in the population
    oforces: the forces (fitness values) of all the chromosomes
    obtg: the index of the best chromosome in the current generation
    ogeneration: the number of the current generation
    opopulation: the number of chromosomes in the population
    So, let's code this:
     
    class chebychev : public px_task
    {
         ...
         void info_about_status ( double **osolutions , double *oforces , int obtg , int ogeneration , int opopulation ) ;
         ...
    } ;

    void chebychev::info_about_status ( double **osolutions , double *oforces , int obtg , int ogeneration , int opopulation )
    {
        printf( "in generation (%d) the best value is (%f)\n",ogeneration,oforces[obtg] ) ;
    }

    This will print the value of the best chromosome in each generation. Everything else can stay the same. See example2.c.
    9. Why running so long

    Now we can see what is going on during the computation, and it is still not good. It runs too long, and for roughly the last 9000 of its 10000 generations it keeps reporting that the best value is zero, which is the desired result. Also, at the end we can see something like this: stagnation by ( 9958 ). It means that nothing has changed for the last 9958 generations. Why is that so? Well, there is a function to_be_stopped in px_task which tells the algorithm when to stop. By default, this function only checks whether the number of generations is bigger than 10240, and if so, it stops the algorithm.

    So, if we are looking for reasonable behaviour of the algorithm, we should redefine this method. Our new one should check whether the best value is close enough (say, within 1.0E-7) to zero and then stop (stop means return 1, continue means return 0):

    class chebychev : public px_task
    {
         ...
         int to_be_stopped ( double obsf , int ogeneration , int oiteration , int ostagnation ) ;
         ...
    } ;

    int chebychev::to_be_stopped ( double obsf , int ogeneration , int oiteration , int ostagnation )
    {
        if ( obsf>-1.0E-7 ) return 1 ;
        return 0 ;
    }

    OK, try it or see example3.c.
    10. Non-rectangular domain

    Well, most optimization tasks that come into focus are defined on some n-dimensional rectangular domain. If that were always the case, we could get by with the simple function get_limits (as described above), but because there are cases where this is not true, we have to deal with it somehow. Let's try to solve the following example: let our function be very simple:
     

    f(x[0],x[1])=x[1],

    but defined on a non-rectangular domain: a thin quarter-arc in the first quadrant, with the radius between 1.0 and 1.05.

    (OK, I know that we could simply solve it in polar coordinates, but I just want to explain something to you, so don't treat me as a stupid cow, please.)
    We will solve it on the whole rectangle (0.0-1.05)x(0.0-1.05), so our chromosomes can fly far from the domain and we have to catch them and return them to it. We can, for example, conserve the angle and reduce the distance from the center...

    We will need a different function new_point, that will generate points only in the desired domain:

    void thin_arc::new_point ( double *ox )
    {
        double ro,fi ;
        ro=(( double )rand()/RAND_MAX )*0.05+1.0 ;
        fi=(( double )rand()/RAND_MAX )*M_PI_2 ;
        ox[0]=ro*cos( fi ) ;
        ox[1]=ro*sin( fi ) ;
    }
    And we must also implement this returning to the domain. There are two redefinable methods in class px_task that are called from inside the algorithm. One of them, check_domain, tells the algorithm whether it should check that the chromosomes are inside the domain; the second, return_to_domain, does the returning itself - it brings chromosomes back into the domain. Normally, as implemented in px_task, check_domain returns 0 and return_to_domain does nothing and is never called, because check_domain returning zero means the algorithm will not call it. We simply have to redefine these two methods:
     
    int thin_arc::check_domain ( void )
    {
        return 1 ;
    }

    int thin_arc::return_to_domain ( double *ox )
    {
        double ro,fi ;
        int changed=0 ;

        fi=atan( ox[1]/ox[0] ) ;
        ro=sqrt( ox[0]*ox[0]+ox[1]*ox[1] ) ;

        if ( fi<0.0 ) { fi=0.0 ; changed=1 ; }
        if ( fi>M_PI_2 ) { fi=M_PI_2 ; changed=1 ; }

        if ( ro<1.0 ) { ro=1.0 ; changed=1 ; }
        if ( ro>1.05 ) { ro=1.05 ; changed=1 ; }

        ox[0]=ro*cos( fi ) ;
        ox[1]=ro*sin( fi ) ;

        /* report whether the chromosome had to be returned */
        return changed ;
    }

    Now, let's see the definition of class thin_arc and the other methods:
    #include <pexeso.h>
    #include <stdlib.h>
    #include <math.h>

    class thin_arc : public px_task
    {
        public:
            double fitness ( double *ox ) ;
            double get_desired_extreme ( void ) ;
            int get_dimension ( void ) ;
            void new_point ( double *ox ) ;
            void get_limits ( int odir , double &omin , double &omax , double &oprec ) ;
            int check_domain ( void ) ;
            int return_to_domain ( double *ox ) ;
            int to_be_stopped ( double obsf , int ogeneration , int oiteration , int ostagnation ) ;
            void info_about_status ( double **osolutions , double *oforces , int obtg , int ogeneration , int opopulation ) ;
    } ;

    double thin_arc::fitness ( double *ox )
    {
        return ox[1] ;
    }

    double thin_arc::get_desired_extreme ( void )
    {
        return 1.05 ;
    }

    int thin_arc::get_dimension ( void )
    {
        return 2 ;
    }

    void thin_arc::get_limits ( int odir , double &omin , double &omax , double &oprec )
    {
        omin=0.0 ;
        omax=1.05 ;
        oprec=0.0 ;
    }

    int thin_arc::to_be_stopped ( double obsf , int ogeneration , int oiteration , int ostagnation )
    {
        if ( obsf>1.0499999 ) return 1 ;
        return 0 ;
    }

    void thin_arc::info_about_status ( double **osolutions , double *oforces , int obtg , int ogeneration , int opopulation )
    {
        printf( "in generation (%d) the best value is (%f)\n",ogeneration,oforces[obtg] ) ;
    }

    int main ( void )
    {
        px_task *T=new thin_arc ;
        px_strategy *S=px_setup_as_storn( "RandToBestOneExponencial",0.85,1.0,T ) ;
        S->run( T ) ;

        delete S ;
        delete T ;

        return 0 ;
    }

    OK, we have example4.c.
    11. If you have KRESLITKO...

    Well, if you have KRESLITKO installed on your Linux box, you can try to compile and run example5.c. You get a graphical view of how the PEXESO library works when optimizing a task like example4 (the previous one):

    Here you can see a red thin arc, which is our domain, blue crosses, which are the chromosomes, and a black rectangle, which marks the average coordinates of all the chromosomes in the population. Have a nice time with that.

    12. Class px_task function reference

    So far, enough with the tutorials. Now let's describe the function reference of the two main classes, starting with px_task.

    px_task ( void ) ; a constructor; it initializes the values of these variables:
  • log (the file where logging is stored)
  • print_level (level of logging to screen - you may interpret this as you want)
  • file_print_level (level of logging to file - you may interpret this as you want)
  • bsf (best so far - the best value ever found)

    ~px_task ( void ) ; a destructor
    double fitness ( double *ox ) ; pure virtual method that must be redefined; returns the evaluation of chromosome ox by the optimized function itself
    double get_desired_extreme ( void ) ; pure virtual method that must be redefined; returns the extreme that should be reached; if you have no idea, simply return some very big number
    int get_dimension ( void ) ; pure virtual method that must be redefined; returns the dimension of the problem (the number of parameters of the optimized function)
    void new_point ( double *ox ) ; pure virtual method that must be redefined; fills the vector ox with a random new point from the domain
    void get_limits ( int odir , double &omin , double &omax , double &oprec ) ; pure virtual method that must be redefined; fills in omin and omax (the outermost limits of the domain for the given coordinate odir) and oprec (the desired precision of the solution; if there is none, fill in zero)
    double get_penalty ( void ) ; if there is some penalization of unfeasible solutions, you may return the penalty value here - it is not used anywhere in this version and exists only for future compatibility
    int check_domain ( void ) ; px_task returns 0 here, which means that no domain checking will be done; if you want this checking to be done, return 1 from this method
    int return_to_domain ( double *ox ) ; if you need to return chromosomes that have fallen outside the domain, do it here for the given chromosome ox; return 0 if the chromosome was not returned and 1 if it was; the current implementation in px_task does nothing and returns 0
    int to_be_stopped ( double obsf , int ogeneration , int oiteration , int ostagnation ) ; in this method you may check whether the computation has reached some stopping criterion; the algorithm sends you the best-so-far value (obsf), the number of generations (ogeneration), the number of fitness calls (oiteration) and the number of generations for which the algorithm has stagnated (ostagnation); decide here and return 1 if you want the computation to be stopped, otherwise return 0; the current implementation in px_task stops the computation after 10240 generations
    void create ( char *ofname ) ; this method does nothing in px_task; you can redefine it, for example, to read some data from the file with the given name ofname
    void new_case ( void ) ; this method is intended for cases where you do benchmark computations on some randomly shaped function; you can use it to randomly re-create your function for a repeated computation
    void setup_logging ( int oprint_level , int ofile_print_level , char *ofname=0 ) ; this method sets the variables print_level and file_print_level and opens the file log (see the constructor); it can be used as is
    void setup_visualisation ( int ovis_type ) ; this method sets the type of visualization if there are several possibilities; currently it does nothing in px_task
    void info_about_yourself ( void ) ; this method may give any possible information about the optimized function to the user who runs the program; it is called before the optimization process starts
    void info_about_status ( double **osolutions , double *oforces , int obtg , int ogeneration , int opopulation ) ; this method is called in each generation and is intended to inform the user about the state of the optimization process; the algorithm gives you all current chromosomes in osolutions, their values in oforces, the index of the best chromosome of this generation in obtg, the number of generations so far in ogeneration and the number of chromosomes in opopulation
    void info_about_results ( double *osolution , double obsf , int ogeneration , int oiteration , int ostagnation ) ; at the end of the computation, the algorithm calls this function, which should inform the user about the found result; it sends the best chromosome - the best solution - in osolution, the best value found in obsf, the number of generations in ogeneration, the number of fitness calls in oiteration and the number of generations of stagnation in ostagnation
    void get_stats ( int &ogenerations , int &oiterations , double &obest_value , int &osuccess ) ; this method is intended for repeated computations in some benchmark test, for example; a meta-algorithm may ask the px_task about the results of the computation that has just ended; this method fills the variables ogenerations with the total number of generations, oiterations with the total number of fitness calls, obest_value with the best value found and osuccess with 1 if the desired optimum was found or with 0 otherwise
    Of course, you don't need to worry about most of these methods. You need to redefine only five of them; the others may stay as they are, and you can redefine them whenever you need to. Most of them are intended rather for the development environment we use (called NAVARRO).
    13. Class px_strategy function reference

    Now let's describe the methods of class px_strategy.

    px_strategy ( px_task *ot , int opop_limit=0 ) ; a constructor; pass it your optimization task as ot and, if you feel the need, you can limit the population size with opop_limit, but unless you are sure what you are doing, please leave it alone
    ~px_strategy ( void ) ; a destructor
    void set_main_operator ( char *otype , double *orate ) ; sets one operator as the main one; pass a name from this set:
    "SimplifiedDifferential",
    "HardMutation",
    "LocalMutation",
    "BestOneExponencial",
    "RandOneExponencial",
    "BestTwoExponencial",
    "RandTwoExponencial",
    "RandToBestOneExponencial",
    "BestOneBinomial",
    "RandOneBinomial",
    "BestTwoBinomial",
    "RandTwoBinomial",
    "RandToBestOneBinomial"
    as otype and the vector of its parameters as orate
    void set_marginal_operator ( char *otype , double oprobability , double *orates ) ; sets one of the marginal operators; the difference from the previous method is that a marginal operator must have a probability, which should be passed as oprobability
    void clear_pool ( void ) ; this method is intended for repeated computations in some kind of benchmark test; it simply clears the values found in the previous run of the computation, so the algorithm can be restarted knowing nothing
    void run ( px_task *ot ) ; this method runs the computation and takes care of everything around it
    14. Loading strategies

    We provide three precomposed strategies:

    px_strategy *px_setup_as_storn ( char *ooperator , double of , double ocr , px_task *ot ) ; this creates the Differential Evolution (see http://www.icsi.berkeley.edu/~storn/code.html for detailed information about this method); pass the name of the operator (see chapter 13) as ooperator, parameter F as of, parameter CR as ocr, and then your optimization task as ot
    px_strategy *px_setup_as_sade ( double ocr , double omr , double olr , double orad , px_task *ot ) ; this creates the SADE technology (see http://klobouk.fsv.cvut.cz/~ondra/sade/sade.html for detailed information about this method); pass the CR parameter as ocr, the hard mutation rate as omr, the local mutation rate as olr and the radioactivity as orad, and then your optimization task as ot
    px_strategy *px_setup_as_hade ( char *ooperator , char *oselection , int opopulation_rate , double of , double ocr , double omr , double olr , double orad , px_task *ot ) ;

    this creates the HADE technology (documentation not available yet); pass the name of the operator (see Chapter 13) as ooperator and the selection type as oselection, chosen from this list:
    "SteadyState",
    "TrialVectors",
    "Tournament",
    "TotalOffspring",
    then the population rate (a number by which the dimension of the problem is multiplied to get the population size) as opopulation_rate, parameter F as of, parameter CR (as in Differential Evolution) as ocr, the hard mutation rate as omr, the local mutation rate as olr, radioactivity as orad and your optimization task as ot
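    To make the calling conventions concrete, here is a hypothetical fragment showing all three factory functions side by side; T stands for an already created px_task, and the numeric values are purely illustrative, not recommended settings:

    ```cpp
    /* illustrative fragment only; T is an already created px_task and
       the parameter values are arbitrary examples, not recommendations */
    px_strategy *S1=px_setup_as_storn( "RandOneBinomial",0.8,0.5,T ) ;
    px_strategy *S2=px_setup_as_sade( 0.3,0.01,0.25,0.1,T ) ;
    px_strategy *S3=px_setup_as_hade( "BestTwoExponencial","Tournament",10,
                                      0.8,0.5,0.01,0.25,0.1,T ) ;
    ```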
    And a function that returns an empty strategy of a certain type (empty means that it still has to be filled with operators):
     
    px_strategy *px_create_any ( char *oselection , int opopulation_rate , px_task *ot ) ; this creates an empty strategy with the selection type given by oselection and the population rate given by opopulation_rate; pass also your optimization task as ot
    15. Assembling your own optimization method

    Let's now try to compose our own optimization method. It should, for example, use the steady_state type of selection, with the local_mutation as the main operator and the hard_mutation with a probability of 10% as a marginal operator. We can then run this method on the thin_arc problem and watch how this poor method tries to solve that trivial task. First, let's write a function that creates our desired method and returns a pointer to it:

    px_strategy *create_our_own ( px_task *ot )
    {
        px_strategy *s ;
        /* empty SteadyState strategy with population rate 20 */
        s=px_create_any( "SteadyState",20,ot ) ;

        double rate[1] ;
        rate[0]=0.0025 ;
        s->set_main_operator( "LocalMutation",rate ) ;
        rate[0]=0.25 ;
        /* marginal operator with probability 10% */
        s->set_marginal_operator( "HardMutation",0.1,rate ) ;

        return s ;
    }

    Now, let's replace the line that loads the method in main:
     
    px_strategy *S=create_our_own( T ) ;

    Proceed as usual; see example6.c. You may start this computation and let it run until you or the computer are bored to death.
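    Put together, a minimal main could look like this sketch; the creation of the thin_arc task is left out here, since the px_task interface is described in Chapter 12:

    ```cpp
    /* hypothetical skeleton; the creation of the thin_arc task is omitted */
    int main ( void )
    {
        px_task *T= /* create the thin_arc task here (see Chapter 12) */ NULL ;

        px_strategy *S=create_our_own( T ) ;
        S->run( T ) ;  /* runs the computation and everything around it */

        return 0 ;
    }
    ```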

    16. Download latest release

    Latest release is: PEXESO-0014 
    Date: May 24 2001
    Download location: http://klobouk.fsv.cvut.cz/~ondra/release/pexeso-0014.tgz.

    17. How to compile & install PEXESO

    The PEXESO library does not depend on anything else in the world. So, after you unpack the pexeso-xxxx.tgz by:

    tar xzvf pexeso-xxxx.tgz
    you can immediately run 'make' and then 'make install' as root; if you don't like it, you may run 'make uninstall', also as root. Compiling your application is standard, you only have to use the library option -lpexeso. 
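    For example, compiling one of the bundled examples against the installed library could look like this (the compiler choice and file names are just an illustration):

    ```shell
    # compile and link against the installed shared library libpexeso.so
    g++ -o example6 example6.c -lpexeso

    # if the library was installed outside the default search path,
    # add -L/path/to/lib when linking and set LD_LIBRARY_PATH when running
    ```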
    18. Release history
     
    Archive name:    Date of release:  What's new:
    pexeso-0014.tgz  May 24, 2001      the first release ever
    19. References to the literature
     
    Storn, R. & Price, K.: Differential Evolution - a Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. Journal of Global Optimization, 11, 341-359
    Michalewicz, Z.: Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag, 1992
    Michalewicz, Z. & Hinterding, R. & Michalewicz, M.: Evolutionary Algorithms. Chapter 2 in Fuzzy Evolutionary Computation, W. Pedrycz (editor), Kluwer Academic, 1997
    Hrstka, O. & Kucerova, A.: Search for optimization method on multidimensional real domains. CTU Reports, Contribution to Mechanics of Materials and Structures, 4, 87-104, 2000
    20. Index of Optimizing Tasks that have been recently solved by PEXESO
     
    Problem description: Where to find documentation:
    Periodic unit cell problem solving none
    Finding the parameters of a retention graph of soil none
    Reinforced concrete beam optimization http://klobouk.fsv.cvut.cz/~ondra/mr.beam/mr.beam.html
    FSWE Type0 (the study of computational complexity with respect to the problem dimension) http://klobouk.fsv.cvut.cz/~ondra/sade/sade.html
    FSWE Type1 (the study of behaviour of evolutionary methods on strongly multi-modal functions) none
    Test set of functions http://klobouk.fsv.cvut.cz/~ondra/about_ga/about_ga.html
    The Chebychev trial polynomial problem none