
An Empirical Analysis of Collaboration Methods in Cooperative Coevolutionary Algorithms

R. Paul Wiegand
George Mason University, Computer Science Department
Fairfax, VA 22030
paul@tesseract.org

William C. Liles
Central Intelligence Agency
Washington, DC 20505
wliles@gmu.edu

Kenneth A. De Jong
George Mason University, Computer Science Department
Fairfax, VA 22030
kdejong@gmu.edu

Abstract

Although a variety of coevolutionary methods have been explored over the years, it has only been recently that a general architecture for cooperative coevolution has been proposed. Since that time, the flexibility and success of this cooperative coevolutionary architecture (CCA) has been shown in an array of different kinds of problems. However, many questions about the dynamics of this model, as well as the efficacy of various CCA-specific choices, remain unanswered. One such choice surrounds the issue of how the algorithm selects collaborators for evaluation. This paper offers an empirical analysis of various types of collaboration mechanisms and presents some basic advice about how to choose a mechanism which is appropriate for a particular problem.

1 Introduction

In recent years there has been a growing interest in coevolutionary algorithms as interesting and useful extensions to the more traditional Evolutionary Algorithms (EAs). The important difference in moving to coevolutionary algorithms is that the fitness of an individual is a function of the other individuals in the population. Two basic classes of coevolutionary algorithms have been developed: competitive coevolution, in which the fitness of an individual is determined by a series of competitions with other individuals (see, for example, Rosin and Belew (1996)), and cooperative coevolution, in which the fitness of an individual is determined by a series of collaborations with other individuals.

2 Background

Although evolutionary algorithms (EAs) have been with us for half a century, significant research into the use of coevolutionary systems did not really begin until the early 1990's. From the start, early research focused on applications to complex tasks, such as competitive approaches using a parasite-host relationship as a model to coevolve sorting networks and problem sets (Hillis, 1991), and cooperative approaches for coevolving job-shop schedules (Husbands and Mill, 1991).

However, although both cooperative and competitive approaches were explored from the beginning, most research since that time has dealt with applications of competitive approaches. Most popularly, competitive coevolution has been applied to game-playing strategies (Rosin and Belew, 1995, 1996; Rosin, 1997; Pollack and Blair, 1998). Additionally, Angeline and Pollack (1993) demonstrate the effectiveness of competition for evolving better solutions by developing a concept of competitive fitness to provide a more robust training environment than independent fitness functions. Competition was also successfully harnessed by Schlierkamp-Voosen and Mühlenbein (1994) to facilitate strategy adaptation in breeder genetic algorithms.

More recently, a variety of coevolutionary methods have been applied to machine learning problems. There has been particular attention to neural network coevolution (Paredis, 1994; Juillé and Pollack, 1996; Mayer, 1999; Potter and De Jong, 2000). Additionally, some work has been done on using coevolutionary algorithms for concept learning (Potter and De Jong, 1999).

Moreover, there have been recent attempts to lay out general frameworks for coevolutionary models (Potter and De Jong, 1994; Paredis, 1996). However, attempts to understand the dynamics of these frameworks have been few and far between. Some basic empirical work is laid out by Potter and De Jong (1994) and Potter (1997), indicating that there is a possible link between variable interactivity and collaboration selection. Even more recently, some basic theoretical work has been done to take ideas from simple genetic algorithm theory provided by Vose (1999) and apply them to coevolution (Ficici and Pollack, 2000). This work explores the mechanics of a simple competitive coevolutionary algorithm from a game-theoretic viewpoint.

These questions of coevolutionary dynamics are not academic. The question of selecting collaborators for evaluation, for instance, has not only been an issue for our own application activities, but has also cropped up for other researchers applying the techniques to problems such as inventory control optimization (Eriksson and Olsson, 1997). Indeed, since this is a key element of the success of applying this cooperative coevolution architecture (CCA), we believe it merits particular attention in order to improve our ability to apply CCAs in the future.

3 Coevolution and Collaboration

When applying a CCA to a particular problem, a standard approach is to decompose the problem into subcomponents and assign each subcomponent to a subpopulation. These subpopulations may or may not be homogeneous with respect to the representation used or the EA being used to evolve a particular subcomponent.

Evolution proceeds independently, except for evaluation. Since any given individual from a subpopulation represents only a subcomponent of the problem, collaborators will need to be selected from the other subpopulations in order to assess fitness. Each generation, all individuals belonging to a particular subpopulation have their fitness evaluated by selecting some set of collaborators from other subpopulations to form complete solutions. Afterward, the CCA proceeds to the next subpopulation, which will in turn draw collaborators from each of the other subpopulations. This process is outlined in Figure 1 below.

for each species do
    initialize population
    evaluate()
while not terminated do
    for each species do
        select()
        recombine()
        evaluate()
        survive()

Figure 1: The structure of a Cooperative Coevolutionary Algorithm (CCA).
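To make this control flow concrete, the following minimal Python sketch mirrors the loop in Figure 1. It is an illustrative sketch only: the real-valued toy objective, the Gaussian-mutation variation step, and names such as evaluate and select_and_reproduce are assumptions for the example, not the authors' implementation (the paper itself uses a binary-coded GA, described in Section 4.2).

import random

# Minimal sketch of the CCA loop in Figure 1 (illustrative names and operators).
# Each species (subpopulation) evolves one subcomponent of a complete solution.

POP_SIZE = 10        # kept small for illustration; the paper uses 100
GENERATIONS = 50

def objective(full_solution):
    # Hypothetical stand-in objective: a sphere function, to be minimized.
    return sum(x * x for x in full_solution)

def evaluate(species_index, population, representatives):
    # Each individual is scored by plugging it into a complete solution built
    # from collaborators (here, the current representative of every other species).
    fitnesses = []
    for individual in population:
        collaboration = list(representatives)
        collaboration[species_index] = individual
        fitnesses.append(objective(collaboration))
    return fitnesses

def select_and_reproduce(population, fitnesses):
    # Placeholder variation step: binary tournaments plus Gaussian mutation.
    offspring = []
    for _ in population:
        a, b = random.sample(range(len(population)), 2)
        parent = population[a] if fitnesses[a] < fitnesses[b] else population[b]
        offspring.append(parent + random.gauss(0.0, 0.1))
    return offspring

def cca(num_species=2):
    # for each species do: initialize population, evaluate()
    populations = [[random.uniform(-5.0, 5.0) for _ in range(POP_SIZE)]
                   for _ in range(num_species)]
    representatives = [pop[0] for pop in populations]

    # while not terminated do: for each species do select/recombine/evaluate/survive
    for _ in range(GENERATIONS):
        for s in range(num_species):
            fitnesses = evaluate(s, populations[s], representatives)
            populations[s] = select_and_reproduce(populations[s], fitnesses)
            fitnesses = evaluate(s, populations[s], representatives)
            best = min(range(POP_SIZE), key=lambda i: fitnesses[i])
            representatives[s] = populations[s][best]
    return representatives

if __name__ == "__main__":
    print(cca())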

Computing the fitness of individuals in a coevolutionary system can be done in a variety of ways. Some competitive coevolutionary algorithms perform bipartite evaluations, applying each individual in one population to each in the other (Hillis, 1991). Additionally, it is not uncommon for single-population competitive fitness models to perform exhaustive pair-wise evaluations (Axelrod, 1987). Such approaches can be computationally expensive in multi-population models, since the number of objective function evaluations used for each assessment of fitness grows exponentially with the number of species. Less expensive approaches, such as single-elimination tournaments, have also been addressed (Angeline and Pollack, 1993). Alternatively, early work on the CCA model suggests two methods for selecting collaborators for the purpose of fitness evaluation:

CCA-1: Choose the best individuals from alternative subpopulations, as defined by the fitness obtained from the last evaluation process of that group.

CCA-2: Select two individuals: the best and a random individual. Evaluate both with the current individual and use the higher objective function value for the current individual's fitness score.
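In code, the two schemes look roughly like the sketch below. The data layout is assumed for illustration: each other subpopulation is given as a list of (individual, last_fitness) pairs from its previous evaluation, fitness values are taken to be higher-is-better (as they are after the linear scaling used with fitness-proportionate selection), and objective scores an assembled complete solution. Subcomponent ordering is ignored here for brevity.

import random

def cca1_fitness(individual, other_subpops, objective):
    # CCA-1: collaborate only with the previous-generation best individual of
    # each other subpopulation.
    best = [max(subpop, key=lambda pair: pair[1])[0] for subpop in other_subpops]
    return objective([individual] + best)

def cca2_fitness(individual, other_subpops, objective):
    # CCA-2: evaluate once with the best collaborators and once with randomly
    # chosen collaborators, then credit the individual with the higher of the
    # two objective function values, as described above.
    best = [max(subpop, key=lambda pair: pair[1])[0] for subpop in other_subpops]
    rand = [random.choice(subpop)[0] for subpop in other_subpops]
    return max(objective([individual] + best), objective([individual] + rand))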

We could of course imagine many more such methods. Rather than speculate about potential collaboration choices, however, it is more useful to define some basic attributes of this choice. In our opinion, there are three such attributes:

The degree of greediness of choosing a collaborator (collaborator selection pressure).

The number of collaborators per subpopulation to use for a given fitness evaluation (collaboration pool size).

The method of assigning a fitness value given multiple collaboration-driven objective function results (collaboration credit assignment).

We will attempt to address all three of these attributes in our research.

3.1 A Clearer Picture of Collaboration

Since an individual represents only a subcomponent of a problem solution, collaborator subcomponents from each of the other subpopulations must be assembled to form a complete solution. We call the process of choosing these collaborator subcomponents collaborator selection. To do this, we can use the last evaluated fitness scores of the individuals in the alternative subpopulations to bias how we choose these collaborators. The degree of bias in this choice is what we are calling collaborator selection pressure.

This newly assembled complete candidate solution (a collaboration) can now be plugged into the objective function and assigned a collaboration score. If only one collaborator from each subpopulation is selected, there will only be a single collaboration score, and this score may be used as the fitness value.

However, we may choose to try different combinations of collaborators from the other subpopulations, in which case evaluation of an individual will consist of multiple collaborations. The number of collaborators selected from each subpopulation is what we call the collaboration pool size. Since each of these collaborations will have its own collaboration score from the objective function, these multiple scores must in some way be resolved to a single fitness value (collaboration credit assignment).
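The three attributes can be expressed together in one small, generic evaluation routine. The sketch below is an assumption-laden illustration: the collaborator-selection policy is left as a plug-in function (the selection-pressure knob), the mapping of the credit-assignment labels to max, mean, and min of higher-is-better fitness scores follows the "Optimistic", "Hedge", and "Pessimistic" labels used with the results later, and the function names are hypothetical.

from statistics import mean

# Generic sketch of fitness evaluation parameterized by the three attributes
# discussed above (assumed names; not the authors' code).
CREDIT = {
    "optimistic": max,    # credit the individual with its best collaboration score
    "hedge": mean,        # credit it with the average collaboration score
    "pessimistic": min,   # credit it with its worst collaboration score
}

def evaluate_individual(individual, other_subpops, fitness_of_solution,
                        select_collaborator,     # collaborator selection pressure
                        pool_size=2,             # collaboration pool size
                        credit="optimistic"):    # collaboration credit assignment
    # Assemble `pool_size` collaborations (one collaborator drawn from each other
    # subpopulation per collaboration) and resolve their scores to a single
    # fitness value.  With more than one other subpopulation, how pool members
    # are combined is itself a design choice, as the text notes.
    scores = [fitness_of_solution([individual] +
                                  [select_collaborator(sub) for sub in other_subpops])
              for _ in range(pool_size)]
    return CREDIT[credit](scores)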

4 Experiment Methods

4.1 Fitness Landscapes

For our initial experimental studies on collaboration methods we elected to study simple function optimization problems. These types of problems are nice for the CCA, since a natural decomposition of the problem is very straightforward: each subpopulation represents a particular variable of the function. In all cases, we chose to examine only two-variable landscapes, since increasing the dimensionality creates a combinatorial problem and raises questions about how multiple collaborators are applied. Future work on this matter is needed.

We use three fitness functions: Rosenbrock, Rastrigin, and a quadratic which is not directly aligned with the axes. The first two were chosen to be consistent with the Potter and De Jong (1994) study. They represent two difficult problems, one of which is not linearly separable (Rosenbrock) and the other of which is linearly separable (Rastrigin). The third function was chosen because, although it is conceptually a simple problem, the lack of axis alignment has been shown to introduce problems for certain kinds of evolutionary algorithms (Salomon, 1996). A table summarizing the functions used, as well as the constraints on those functions, is shown in Table 1. In all cases the functions were to be minimized.
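For reference, the sketch below gives standard two-variable forms of the first two functions and one plausible example of an off-axis quadratic. The Rosenbrock and Rastrigin forms are the usual ones; the rotated quadratic and its constants are only an illustrative assumption showing what "lack of axis alignment" means, not the paper's exact function or bounds.

import math

def rosenbrock(x1, x2):
    # Not linearly separable: the (x2 - x1**2) term couples the two variables.
    return 100.0 * (x2 - x1 ** 2) ** 2 + (1.0 - x1) ** 2

def rastrigin(x1, x2):
    # Linearly separable: a sum of independent one-variable terms.
    return 20.0 + sum(x * x - 10.0 * math.cos(2.0 * math.pi * x) for x in (x1, x2))

def off_axis_quadratic(x1, x2, theta=math.pi / 4):
    # An illustrative quadratic bowl rotated by `theta`, so its principal axes
    # no longer line up with the coordinate axes.
    u = math.cos(theta) * x1 - math.sin(theta) * x2
    v = math.sin(theta) * x1 + math.cos(theta) * x2
    return u * u + 10.0 * v * v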

4.2 EA Characteristics

As previously stated, subpopulations of the CCA are homogeneous in our study. Again, where possible, EA characteristics remain similar to previous studies. In all cases, the details of the evolutionary mechanisms are as follows:

representation: binary (16 bits per function variable)
selection: fitness proportionate, with linear scaling
genetic operators: two-point crossover & bit-flip mutation
mutation probability: 1/chromosome length
crossover probability: 0.6
(sub)population size: 100
termination criteria: 100,000 function evaluations

Table 1: Function Test Suite (columns: Function, Bounds, Name).

[Table: credit assignment comparison for collaboration pool sizes 2 and 3 -- Optimistic, Hedge, and Pessimistic strategies.]
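As a concrete illustration of the representation and operators listed above, the following sketch decodes a 16-bit chromosome to a real value within given bounds, and applies bit-flip mutation at rate 1/chromosome length and two-point crossover at rate 0.6. The bound values in the example call are placeholders, not the bounds from Table 1.

import random

BITS_PER_VARIABLE = 16                       # as listed above
MUTATION_RATE = 1.0 / BITS_PER_VARIABLE      # 1 / chromosome length (one variable per subpopulation)
CROSSOVER_RATE = 0.6

def decode(bits, lower, upper):
    # Map a 16-bit string (list of 0/1) to a real value in [lower, upper].
    value = int("".join(str(b) for b in bits), 2)
    return lower + (upper - lower) * value / (2 ** len(bits) - 1)

def bit_flip_mutation(bits, rate=MUTATION_RATE):
    # Flip each bit independently with the given probability.
    return [1 - b if random.random() < rate else b for b in bits]

def two_point_crossover(parent_a, parent_b, rate=CROSSOVER_RATE):
    # With the given probability, swap the segment between two random cut points.
    if random.random() >= rate:
        return parent_a[:], parent_b[:]
    i, j = sorted(random.sample(range(1, len(parent_a)), 2))
    return (parent_a[:i] + parent_b[i:j] + parent_a[j:],
            parent_b[:i] + parent_a[i:j] + parent_b[j:])

# Example: decode a random chromosome over assumed bounds of [-5.12, 5.12].
chromosome = [random.randint(0, 1) for _ in range(BITS_PER_VARIABLE)]
print(decode(chromosome, -5.12, 5.12))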

CCA-1 uses a very greedy method, selecting the best individual from the previous generation. CCA-2, however, weakens this pressure somewhat by using a two-collaborator mechanism in which the second collaborator is chosen at random. It is still quite greedy, however, since the best individual is still used. We can imagine weakening this pressure even more by allowing for different combinations of selection mechanisms aside from selecting the best. Tables 3, 4, and 5 show results for all three functions using a collaboration pool size of two and various combinations of three selection mechanisms: best, random, and worst.

Notice that no improvements are obtained on the Rosenbrock function regardless of whether the best individual is included as one of the two collaborators. This finding is consistent with an ANOVA at 95% confidence. In the Rastrigin and quadratic functions, however, there is a clear, significant advantage to using the best individual as one of the collaborators. We will return to this later in the paper.

We can weaken selection even further, as well as provide ourselves with a way of controlling the degree of collaborator selection pressure, by using previous-generation evaluation results to bias a non-deterministic choice of current collaborations. We use a method similar to tournament selection, varying tournament sizes from 1 (random selection) to 4, which is a fairly strong bias. More extreme sizes were used, but are not presented in detail in this paper.
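A collaborator choice biased by previous-generation fitness, in the tournament style just described, might look like the following sketch (assumed data layout: a subpopulation list plus a parallel list of its last-evaluated fitness values, higher being better). A tournament of size 1 reduces to purely random collaborator selection; larger tournaments increase the collaborator selection pressure.

import random

def tournament_collaborator(subpopulation, last_fitnesses, tournament_size=2):
    # Draw `tournament_size` candidate indices uniformly at random and return
    # the individual whose previous-generation fitness is best.
    candidates = random.sample(range(len(subpopulation)), tournament_size)
    winner = max(candidates, key=lambda i: last_fitnesses[i])
    return subpopulation[winner]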

Table 3: Rosenbrock minimization results using various combinations of selection choices for each of two collaborators: best, worst, and random. These show averages of 50 trials, as well as 95% confidence intervals.

Table 5: Off-axis quadratic minimization results using various combinations of selection choices for each of two collaborators: best, worst, and random. These show averages of 50 trials, as well as 95% confidence intervals.

Figure 2: Results for the Rosenbrock minimization experiments. The y-axis represents the final reported result from the EA after 100,000 function evaluations. The points plotted are averages of 50 trials, and the whiskers show the 95% confidence intervals.

Figure 3: Results for the Rastrigin minimization experiments. The y-axis represents the final reported result from the EA after 100,000 function evaluations. The points plotted are averages of 50 trials, and the whiskers show the 95% confidence intervals.

Figure 4: Results for the off-axis quadratic minimization experiments. The y-axis represents the final reported result from the EA after 100,000 function evaluations. The points plotted are averages of 50 trials, and the whiskers show the 95% confidence intervals.

Figure 5: The contributions of collaborators were tracked using three different collaboration mechanisms for each function (best, worst, and random; best and worst; and best and random). The number of times each collaborator is responsible for yielding the better fitness score is illustrated as ratios in the above chart. Averages across 50 trials were used.

tion returns CCA performance to at least an insignificant difference from that of the GA.

While our research does not address the issue of how large the collaboration pool size should be for a given problem, it does suggest that a relatively conservative adjustment from one to two collaborators will frequently yield substantial benefit. Indeed, in all three functions this change is statistically significant for the data shown in our figures.

Of course, even such a "conservative" adjustment from one to two collaborators is disappointing news in terms of computational complexity. Whether some sampling technique can be used to reduce the combinatorial problem remains to be seen.

6 Further Analysis

In order to look more closely at what kind of use the CCA is making of its collaborators, we decided to set up some multi-collaborator runs and track the frequency with which the CCA makes use of any particular collaborator for fitness assessment. We did this by setting up three groups for each function, one requiring three collaborators and two requiring just two. In the first group the best, worst, and random collaborators were selected and the function was evaluated with each. The second group selects the best and worst collaborators, and the third selects the best and a random collaborator. Since we are using an optimistic strategy for credit assignment, we kept track of the number of times each collaborator produces the best resulting fitness score. Figure 5 shows these ratios for these groups for all three functions.
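The bookkeeping behind Figure 5 amounts to counting, for each evaluation, which labelled collaborator produced the winning score under optimistic credit assignment. A minimal sketch with assumed names follows; here the objective is one of the raw minimization functions, so the lowest collaboration score wins.

from collections import Counter

def optimistic_credit_with_tracking(individual, labelled_collaborators, objective, usage):
    # `labelled_collaborators` pairs a mechanism label ("best", "worst", "random")
    # with the collaborator it chose; `usage` is a Counter accumulating how often
    # each mechanism supplied the winning (lowest) collaboration score.
    scores = {label: objective(individual, collaborator)
              for label, collaborator in labelled_collaborators}
    winning_label = min(scores, key=scores.get)
    usage[winning_label] += 1
    return scores[winning_label]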

The Rastrigin function always uses the best collaborator. Given the premise that using the best individual for collaboration gives us something like a simple line search, it is not surprising that the best collaborator is always the one used by the algorithm during evolution. Even the off-axis quadratic makes predominant use of the best individual, although it seems evident that its own alignment properties create a need for small use of other collaborators.

What is more interesting is that not only does the Rosenbrock function make use of all three, but it doesn't seem to matter whether we use a random collaborator or the worst individual as a collaborator. Curiously, use of the worst individual seems to overshadow use of the random individual in the three-collaborator case. We believe this is an artifact of the properties of this particular landscape, though clearly more investigation is needed to fully explain it.

7 Conclusions and Future Work

There are several clear lessons to take home from this research. First and foremost, it is evident that using an optimistic approach is generally the best mechanism for collaboration credit assignment. This may not always be true for every type of problem, but it seems that it is a very safe first guess for static objective functions.

The next question a practitioner should ask is how much non-linear interaction there is likely to be among the subcomponents with respect to fitness. If this can be reasonably assessed, it is the key to making the next decisions about collaboration. Clearly, if the problem is a simple one that is linearly separable, a greedy approach to collaborator selection is warranted. Additionally, it may well be that the number of collaborators may be limited (perhaps even to just one).

For more complicated problems with large degrees of variable interactivity, the selection pressure of collaboration is far less important than the number of collaborators. Moreover, if computationally feasible, increasing the number of collaborators seems to benefit CCA performance in general.

However, it is also clear that there is no magic bullet. The simple off-axis quadratic still perplexes the non-greedy CCA. Combining random (or worst, or arbitrary) collaborators with the best collaborators on these problems resulted in no significant degradation of solution quality relative to the greedy CCA-1. So combining these methods (as in CCA-2 by Potter and De Jong (1994)) may be a good first stab at solving a problem when the degree of variable interactivity is unknown.

The fact that collaborator selection pressure seems to be unimportant in the non-linear case may be a clue toward some resolution of the multi-collaborator combinatorial problem. It suggests that how you sample the collaboration space is not very important, which encourages the possibility that some simple sampling methods can give us some relief from this problem. As stated earlier, more work here is needed.

Although we feel this is a good first step toward understanding how collaboration works in the CCA, much work remains. Ideally, it would be nice to have some theory that allows us to find the minimal number of collaborators necessary to solve a given problem, for instance.

Even without such a theory, though, many questions raised by this research deserve attention. First of all, if the CCA makes even use of different collaborators in non-linear problems like Rosenbrock, as it seems to, what kind of run-time behavior does this usage have? Is there some sort of periodicity, with one collaborator selection method dominating the other for some time before the trend reverses? Or is there some punctuation of equilibrium that occurs to cause the method to re-balance? Or perhaps the usage is random and unpredictable. Exploring this question is one of our foremost priorities.

Additionally, while increasing from two to three collaborators in the Rosenbrock problem is clearly superior, CCA runs do not seem to make evenly distributed use of these collaborators. Our observations show that two selection methods can overshadow the third, even though the addition of a third method improves performance. We would like to answer this question in the future as well.

References

P. Angeline and J. Pollack. Competitive environments evolve better solutions for complex tasks. In S. Forrest, editor, Proceedings of the Fifth International Conference on Genetic Algorithms, pages 264-270, San Mateo, CA, 1993. Morgan Kaufmann.

R. Axelrod. Evolution of strategies in the iterated prisoner's dilemma. In L. Davis, editor, Genetic Algorithms and Simulated Annealing. Morgan Kaufmann, 1987.

R. Eriksson and B. Olsson. Cooperative coevolution in inventory control optimisation. In G. Smith, N. Steele, and R. Albrecht, editors, Proceedings of the Third International Conference on Artificial Neural Networks and Genetic Algorithms, University of East Anglia, Norwich, UK, 1997. Springer.

S. Ficici and J. Pollack. A game-theoretic approach to the simple coevolutionary algorithm. In Proceedings of the Sixth Parallel Problem Solving from Nature, pages 467-476. Springer-Verlag, 2000.

D. Hillis. Co-evolving parasites improve simulated evolution as an optimization procedure. Artificial Life II, SFI Studies in the Sciences of Complexity, 10:313-324, 1991.

P. Husbands and F. Mill. Simulated coevolution as the mechanism for emergent planning and scheduling. In R. Belew and L. Booker, editors, Proceedings of the Fourth International Conference on Genetic Algorithms, pages 264-270. Morgan Kaufmann, 1991.

H. Juillé and J. Pollack. Co-evolving intertwined spirals. In L. Fogel, P. Angeline, and T. Bäck, editors, Proceedings of the Fifth Annual Conference on Evolutionary Programming, pages 461-468. MIT Press, 1996.

H. Mayer. Symbiotic coevolution of artificial neural networks and training data sets. In Proceedings of the Fifth Parallel Problem Solving from Nature, pages 511-520. Springer-Verlag, 1999.

J. Paredis. Steps towards co-evolutionary classification networks. In R. A. Brooks and P. Maes, editors, Artificial Life IV, Proceedings of the Fourth International Workshop on the Synthesis and Simulation of Living Systems, pages 359-365. MIT Press, 1994.

J. Paredis. Coevolutionary computation. Artificial Life Journal, 2(3), 1996.

J. Pollack and A. Blair. Coevolution in the successful learning of backgammon strategy. Machine Learning, 32(3):225-240, 1998.

M. Potter. The Design and Analysis of a Computational Model of Cooperative Coevolution. PhD thesis, George Mason University, Fairfax, Virginia, 1997.

M. Potter and K. De Jong. A cooperative coevolutionary approach to function optimization. In Proceedings of the Third Parallel Problem Solving from Nature, pages 249-257. Springer-Verlag, 1994.

M. Potter and K. De Jong. The coevolution of antibodies for concept learning. In Proceedings of the Fifth Parallel Problem Solving from Nature, pages 530-539. Springer-Verlag, 1999.

M. Potter and K. De Jong. Cooperative coevolution: An architecture for evolving coadapted subcomponents. Evolutionary Computation, 8(1):1-29, 2000.

C. Rosin. Coevolutionary Search Among Adversaries. PhD thesis, University of California, San Diego, 1997.

C. Rosin and R. Belew. Methods for competitive co-evolution: Finding opponents worth beating. In L. Eshelman, editor, Genetic Algorithms: Proceedings of the Sixth International Conference, pages 373-380. Morgan Kaufmann, 1995.

C. Rosin and R. Belew. New methods for competitive co-evolution. Evolutionary Computation, 5(1):1-29, 1996.

R. Salomon. Performance degradation of genetic algorithms under coordinate rotation. In L. Fogel, P. Angeline, and T. Bäck, editors, Proceedings of the Fifth Annual Conference on Evolutionary Programming, pages 153-161. MIT Press, 1996.

D. Schlierkamp-Voosen and H. Mühlenbein. Strategy adaptation by competing subpopulations. In Proceedings of the Third Parallel Problem Solving from Nature, pages 199-208. Springer-Verlag, 1994.

M. Vose. The Simple Genetic Algorithm. MIT Press, 1999.
