  • A Python Implementation of the Differential Evolution (DE) Algorithm
    2021-06-07 17:05:16

    【Algorithm Overview】
    Differential Evolution, or DE for short, is a stochastic global search optimization algorithm.
    It is a type of evolutionary algorithm and is related to other evolutionary algorithms such as the genetic algorithm.

    【Code Implementation】

    import numpy as np
    import random

    class Population:
        def __init__(self, min_range, max_range, dim, factor, rounds, size, object_func, CR=0.75):
            self.min_range = min_range          # lower bound of the search box
            self.max_range = max_range          # upper bound of the search box
            self.dimension = dim                # number of decision variables
            self.factor = factor                # scaling factor F
            self.rounds = rounds                # number of generations
            self.size = size                    # population size
            self.cur_round = 1
            self.CR = CR                        # crossover probability
            self.get_object_function_value = object_func
            # initialize the population uniformly at random inside the box
            self.individuality = [np.array([random.uniform(self.min_range, self.max_range)
                                            for s in range(self.dimension)])
                                  for tmp in range(size)]
            self.object_function_values = [self.get_object_function_value(v) for v in self.individuality]
            self.mutant = None

        def mutate(self):
            # DE/rand/1 mutation: v = x_r0 + F * (x_r1 - x_r2)
            self.mutant = []
            for i in range(self.size):
                r0, r1, r2 = 0, 0, 0
                # r0, r1, r2 must be three distinct indices, all different from i
                while r0 == r1 or r1 == r2 or r0 == r2 or r0 == i or r1 == i or r2 == i:
                    r0 = random.randint(0, self.size - 1)
                    r1 = random.randint(0, self.size - 1)
                    r2 = random.randint(0, self.size - 1)
                tmp = self.individuality[r0] + (self.individuality[r1] - self.individuality[r2]) * self.factor
                # re-sample any component that left the search box
                for t in range(self.dimension):
                    if tmp[t] > self.max_range or tmp[t] < self.min_range:
                        tmp[t] = random.uniform(self.min_range, self.max_range)
                self.mutant.append(tmp)

        def crossover_and_select(self):
            for i in range(self.size):
                # Jrand guarantees at least one component comes from the mutant
                Jrand = random.randint(0, self.dimension - 1)
                for j in range(self.dimension):
                    if random.random() > self.CR and j != Jrand:
                        self.mutant[i][j] = self.individuality[i][j]
                # greedy selection, once per trial vector (not per gene):
                # keep the trial vector only if it improves the objective
                tmp = self.get_object_function_value(self.mutant[i])
                if tmp < self.object_function_values[i]:
                    self.individuality[i] = self.mutant[i]
                    self.object_function_values[i] = tmp

        def print_best(self):
            m = min(self.object_function_values)
            i = self.object_function_values.index(m)
            print("Round: " + str(self.cur_round))
            print("Best individual: " + str(self.individuality[i]))
            print("Objective value: " + str(m))

        def evolution(self):
            # run exactly `rounds` generations
            while self.cur_round <= self.rounds:
                self.mutate()
                self.crossover_and_select()
                self.print_best()
                self.cur_round = self.cur_round + 1

    # test driver: the Eggholder function
    if __name__ == "__main__":
        def f(v):
            return -(v[1] + 47) * np.sin(np.sqrt(np.abs(v[1] + (v[0] / 2) + 47))) \
                   - v[0] * np.sin(np.sqrt(np.abs(v[0] - v[1] - 47)))
        p = Population(min_range=-513, max_range=513, dim=2, factor=0.8, rounds=100, size=100, object_func=f)
        p.evolution()
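    For a cross-check, SciPy ships its own differential evolution optimizer; its mutation and recombination arguments play the roles of F and CR above. A minimal sketch on the same Eggholder objective, over its usual (-512, 512) box (the global minimum is roughly -959.64 at (512, 404.23), which both runs should approach):

    from scipy.optimize import differential_evolution
    import numpy as np

    def eggholder(v):
        # same objective as in the test driver above
        return (-(v[1] + 47) * np.sin(np.sqrt(np.abs(v[1] + v[0] / 2 + 47)))
                - v[0] * np.sin(np.sqrt(np.abs(v[0] - v[1] - 47))))

    result = differential_evolution(eggholder, bounds=[(-512, 512)] * 2,
                                    mutation=0.8, recombination=0.75, seed=1)
    print(result.x, result.fun)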



    【References】
    https://machinelearningmastery.com/differential-evolution-global-optimization-with-python/
    https://blog.csdn.net/qq_36503002/article/details/104972459
    https://github.com/Light-V/algorithm-implement/blob/master/de/de.py


  • Python code is everywhere, and small hands-on experiments like this one are essential practice for every Python beginner. This resource applies the relevant Python knowledge in a simple experiment; hopefully it is helpful.
  • A differential evolution optimizer built on scitbx.array_family.flex (part of the cctbx toolkit); the full source follows:

    from scitbx.array_family import flex
    import random

    class differential_evolution_optimizer(object):
        def __init__(self,
                     evaluator,
                     population_size=50,
                     f=None,
                     cr=0.9,
                     eps=1e-2,
                     n_cross=1,
                     max_iter=10000,
                     monitor_cycle=200,
                     out=None,
                     show_progress=False,
                     show_progress_nth_cycle=1,
                     insert_solution_vector=None,
                     dither_constant=0.4):
            self.dither = dither_constant
            self.show_progress = show_progress
            self.show_progress_nth_cycle = show_progress_nth_cycle
            self.evaluator = evaluator
            self.population_size = population_size
            self.f = f
            self.cr = cr
            self.n_cross = n_cross
            self.max_iter = max_iter
            self.monitor_cycle = monitor_cycle
            self.vector_length = evaluator.n
            self.eps = eps
            self.population = []
            self.seeded = False
            if insert_solution_vector is not None:
                assert len(insert_solution_vector) == self.vector_length
                self.seeded = insert_solution_vector
            for ii in range(self.population_size):
                self.population.append(flex.double(self.vector_length, 0))
            self.scores = flex.double(self.population_size, 1000)
            self.optimize()
            self.best_score = flex.min(self.scores)
            self.best_vector = self.population[flex.min_index(self.scores)]
            self.evaluator.x = self.best_vector
            if self.show_progress:
                self.evaluator.print_status(
                    flex.min(self.scores),
                    flex.mean(self.scores),
                    self.population[flex.min_index(self.scores)],
                    'Final')

        def optimize(self):
            # initialise and score the population
            self.make_random_population()
            self.score_population()
            converged = False
            monitor_score = flex.min(self.scores)
            self.count = 0
            while not converged:
                self.evolve()
                if self.show_progress:
                    if self.count % self.show_progress_nth_cycle == 0:
                        # call a custom print_status function on the evaluator;
                        # the signature is (min_target, mean_target, best_vector, count)
                        self.evaluator.print_status(
                            flex.min(self.scores),
                            flex.mean(self.scores),
                            self.population[flex.min_index(self.scores)],
                            self.count)
                self.count += 1
                if self.count % self.monitor_cycle == 0:
                    # converge when the best score stops improving over a monitor cycle
                    if (monitor_score - flex.min(self.scores)) < self.eps:
                        converged = True
                    else:
                        monitor_score = flex.min(self.scores)
                # also converge when the population has collapsed onto one score
                rd = (flex.mean(self.scores) - flex.min(self.scores))
                rd = rd * rd / (flex.min(self.scores) * flex.min(self.scores) + self.eps)
                if rd < self.eps:
                    converged = True
                if self.count >= self.max_iter:
                    converged = True

        def make_random_population(self):
            for ii in range(self.vector_length):
                delta = self.evaluator.domain[ii][1] - self.evaluator.domain[ii][0]
                offset = self.evaluator.domain[ii][0]
                random_values = flex.random_double(self.population_size)
                random_values = random_values * delta + offset
                # place these values in the proper slots of the population vectors
                for vector, item in zip(self.population, random_values):
                    vector[ii] = item
            # insert the seed solution, if one was given
            if self.seeded is not False:
                self.population[0] = self.seeded

        def score_population(self):
            for vector, ii in zip(self.population, range(self.population_size)):
                tmp_score = self.evaluator.target(vector)
                self.scores[ii] = tmp_score

        def evolve(self):
            for ii in range(self.population_size):
                rnd = flex.random_double(self.population_size - 1)
                permut = flex.sort_permutation(rnd)
                # pick three distinct parent indices, skipping ii itself
                i1 = permut[0]
                if i1 >= ii:
                    i1 += 1
                i2 = permut[1]
                if i2 >= ii:
                    i2 += 1
                i3 = permut[2]
                if i3 >= ii:
                    i3 += 1
                x1 = self.population[i1]
                x2 = self.population[i2]
                x3 = self.population[i3]
                if self.f is None:
                    # dither the scale factor in [0.5, 1.0) when f is not fixed
                    use_f = random.random() / 2.0 + 0.5
                else:
                    use_f = self.f
                vi = x1 + use_f * (x2 - x3)
                # prepare the offspring vector
                rnd = flex.random_double(self.vector_length)
                permut = flex.sort_permutation(rnd)
                test_vector = self.population[ii].deep_copy()
                # the first n_cross parameters always cross over;
                # the rest cross over with probability 1 - cr
                for jj in range(self.vector_length):
                    if jj < self.n_cross:
                        test_vector[permut[jj]] = vi[permut[jj]]
                    else:
                        if rnd[jj] > self.cr:
                            test_vector[permut[jj]] = vi[permut[jj]]
                # greedy selection: keep the trial vector if it scores lower
                test_score = self.evaluator.target(test_vector)
                if test_score < self.scores[ii]:
                    self.scores[ii] = test_score
                    self.population[ii] = test_vector

        def show_population(self):
            print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
            for vec in self.population:
                print(list(vec))
            print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++")

    class test_function(object):
        def __init__(self):
            self.x = None
            self.n = 9
            self.domain = [(-100, 100)] * self.n
            self.optimizer = differential_evolution_optimizer(self, population_size=100, n_cross=5)
            assert flex.sum(self.x * self.x) < 1e-5

        def target(self, vector):
            tmp = vector.deep_copy()
            result = (flex.sum(flex.cos(tmp * 10)) + self.n + 1) * flex.sum(tmp * tmp)
            return result

    class test_rosenbrock_function(object):
        def __init__(self, dim=5):
            self.x = None
            self.n = 2 * dim
            self.dim = dim
            self.domain = [(1, 3)] * self.n
            self.optimizer = differential_evolution_optimizer(
                self, population_size=min(self.n * 10, 40), n_cross=self.n,
                cr=0.9, eps=1e-8, show_progress=True)
            print(list(self.x))
            for x in self.x:
                assert abs(x - 1.0) < 1e-2

        def target(self, vector):
            x_vec = vector[0:self.dim]
            y_vec = vector[self.dim:]
            result = 0
            for x, y in zip(x_vec, y_vec):
                result += 100.0 * ((y - x * x) ** 2.0) + (1 - x) ** 2.0
            return result

        def print_status(self, mins, means, vector, txt):
            print(txt, mins, means, list(vector))

    def run():
        random.seed(0)
        flex.set_random_seed(0)
        test_rosenbrock_function(1)
        print("OK")

    if __name__ == "__main__":
        run()
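    The optimizer above only assumes an evaluator object exposing n (the vector length), domain (per-variable bounds), an x attribute that receives the best vector, a target(vector) method, and print_status() when show_progress is set. A minimal sketch for a sphere objective, assuming a working scitbx installation (the class name is illustrative):

    class sphere_evaluator(object):
        def __init__(self):
            self.x = None
            self.n = 3
            self.domain = [(-5, 5)] * self.n
            # the optimizer runs inside the constructor, like the tests above
            self.optimizer = differential_evolution_optimizer(
                self, population_size=30, n_cross=1, eps=1e-8)

        def target(self, vector):
            # sum of squares; the minimum is 0 at the origin
            return flex.sum(vector * vector)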

  • # Using differential evolution in Python to compute a function's maximum (see the sketch below) import random import math import numpy as np import random cr = 0.6 Population = np.random.rand(100,2) cycle = 500 hig , low = math.pi , 0 def eval(x): y =...
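    That snippet goes after a maximum; DE conventionally minimizes (as every implementation on this page does), so a maximum is usually found by minimizing the negated objective. A minimal sketch (f here is a stand-in objective):

    import numpy as np

    def f(x):
        return 2 + x * np.sin(10 * np.pi * x)   # example objective to maximize

    def neg_f(x):
        return -f(x)                            # minimize this instead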
  • A Python implementation of differential evolution. Beyond that, a collection of these algorithms is included: differential evolution, genetic algorithm, particle swarm optimization, simulated annealing, ant colony optimization, immune optimization, and artificial fish swarm.
  • Differential evolution on the Rastrigin function (Python):

    For the theory behind the algorithm, see https://blog.csdn.net/ztf312/article/details/78432711

    Below is the Python implementation:

    # -*- coding: utf-8 -*-
    import numpy as np
    import matplotlib.pyplot as plt
    import math
    import random

    # Rastrigin function (global minimum 0 at the origin)
    def object_function(x):
        f = 0
        for i in range(0, len(x)):
            f = f + (x[i] ** 2 - (10 * math.cos(2 * np.pi * x[i])) + 10)
        return f

    # parameters
    def initpara():
        NP = 100                  # population size
        F = 0.6                   # scaling factor
        CR = 0.7                  # crossover probability
        generation = 2000         # number of generations
        len_x = 10                # dimension
        value_up_range = 5.12     # upper bound
        value_down_range = -5.12  # lower bound
        return NP, F, CR, generation, len_x, value_up_range, value_down_range

    # population initialization
    def initialization(NP):
        np_list = []  # population (list of chromosomes)
        for i in range(0, NP):
            x_list = []  # individual (list of genes)
            for j in range(0, len_x):
                x_list.append(value_down_range + random.random() * (value_up_range - value_down_range))
            np_list.append(x_list)
        return np_list

    # element-wise list subtraction
    def substract(a_list, b_list):
        return [a - b for a, b in zip(a_list, b_list)]

    # element-wise list addition
    def add(a_list, b_list):
        return [a + b for a, b in zip(a_list, b_list)]

    # scalar multiplication of a list
    def multiply(a, b_list):
        return [a * b for b in b_list]

    # mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct and != i
    def mutation(np_list):
        v_list = []
        for i in range(0, NP):
            r1 = random.randint(0, NP - 1)
            while r1 == i:
                r1 = random.randint(0, NP - 1)
            r2 = random.randint(0, NP - 1)
            while r2 == r1 or r2 == i:
                r2 = random.randint(0, NP - 1)
            r3 = random.randint(0, NP - 1)
            while r3 == r2 or r3 == r1 or r3 == i:
                r3 = random.randint(0, NP - 1)
            v_list.append(add(np_list[r1], multiply(F, substract(np_list[r2], np_list[r3]))))
        return v_list

    # binomial crossover, with one forced index jrand per individual
    def crossover(np_list, v_list):
        u_list = []
        for i in range(0, NP):
            vv_list = []
            jrand = random.randint(0, len_x - 1)  # drawn once per individual
            for j in range(0, len_x):
                if random.random() <= CR or j == jrand:
                    vv_list.append(v_list[i][j])
                else:
                    vv_list.append(np_list[i][j])
            u_list.append(vv_list)
        return u_list

    # selection: keep the trial vector if it is no worse
    def selection(u_list, np_list):
        for i in range(0, NP):
            if object_function(u_list[i]) <= object_function(np_list[i]):
                np_list[i] = u_list[i]
        return np_list

    # main
    NP, F, CR, generation, len_x, value_up_range, value_down_range = initpara()
    np_list = initialization(NP)
    min_x = []
    min_f = []
    xx = []
    for i in range(0, NP):
        xx.append(object_function(np_list[i]))
    min_f.append(min(xx))
    min_x.append(np_list[xx.index(min(xx))])

    for i in range(0, generation):
        v_list = mutation(np_list)
        u_list = crossover(np_list, v_list)
        np_list = selection(u_list, np_list)
        xx = []
        for j in range(0, NP):
            xx.append(object_function(np_list[j]))
        min_f.append(min(xx))
        min_x.append(np_list[xx.index(min(xx))])

    # output
    min_ff = min(min_f)
    min_xx = min_x[min_f.index(min_ff)]
    print('the minimum point is x ')
    print(min_xx)
    print('the minimum value is y ')
    print(min_ff)

    # plot the best objective value per generation
    x_label = np.arange(0, generation + 1, 1)
    plt.plot(x_label, min_f, color='blue')
    plt.xlabel('iteration')
    plt.ylabel('fx')
    plt.savefig('./iteration-f.png')
    plt.show()

    Sample output:

    the minimum point is x
    [6.571458044073247e-10, -1.6132099500206647e-09, 5.479208933390588e-10, 1.4316602937984918e-10, -1.0300811905852257e-09, -9.4552825214611e-11, 7.820675729557053e-10, -1.0017881437355374e-09, 7.502751928050157e-11, 8.595546094079207e-10]
    the minimum value is y
    0.0

    [Figure: best objective value per iteration]

  • A class-based implementation split into DEIndividual.py and DE.py:
    DEIndividual.py
    
    import numpy as np
    import ObjFunction


    class DEIndividual:

        '''
        individual of the differential evolution algorithm
        '''

        def __init__(self, vardim, bound):
            '''
            vardim: dimension of variables
            bound: boundaries of variables
            '''
            self.vardim = vardim
            self.bound = bound
            self.fitness = 0.

        def generate(self):
            '''
            generate a random chromosome for the differential evolution algorithm
            '''
            dim = self.vardim
            rnd = np.random.random(size=dim)
            self.chrom = np.zeros(dim)
            for i in range(0, dim):
                self.chrom[i] = self.bound[0, i] + \
                    (self.bound[1, i] - self.bound[0, i]) * rnd[i]

        def calculateFitness(self):
            '''
            calculate the fitness of the chromosome
            '''
            self.fitness = ObjFunction.GrieFunc(
                self.vardim, self.chrom, self.bound)
    
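    ObjFunction is imported above but not reproduced in the post. A plausible sketch, assuming the Griewank function wrapped as fitness = 1 / (1 + f) so that larger fitness is better; this would match the maximization and the (1 - fitness) / fitness traces in DE.py below, but the real module may differ:

    # ObjFunction.py (hypothetical reconstruction)
    import math

    def GrieFunc(vardim, x, bound):
        # Griewank function; its global optimum maps to fitness 1
        s1 = 0.0
        s2 = 1.0
        for i in range(1, vardim + 1):
            s1 += x[i - 1] ** 2
            s2 *= math.cos(x[i - 1] / math.sqrt(i))
        f = s1 / 4000.0 - s2 + 1.0
        return 1.0 / (1.0 + f)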
    DE.py
    
    import numpy as np
    from DEIndividual import DEIndividual
    import random
    import copy
    import matplotlib.pyplot as plt


    class DifferentialEvolutionAlgorithm:

        '''
        the class for the differential evolution algorithm
        '''

        def __init__(self, sizepop, vardim, bound, MAXGEN, params):
            '''
            sizepop: population size
            vardim: dimension of variables
            bound: boundaries of variables
            MAXGEN: termination condition
            params: algorithm parameters, a list [crossover rate CR, scaling factor F]
            '''
            self.sizepop = sizepop
            self.MAXGEN = MAXGEN
            self.vardim = vardim
            self.bound = bound
            self.population = []
            self.fitness = np.zeros((self.sizepop, 1))
            self.trace = np.zeros((self.MAXGEN, 2))
            self.params = params

        def initialize(self):
            '''
            initialize the population
            '''
            for i in range(0, self.sizepop):
                ind = DEIndividual(self.vardim, self.bound)
                ind.generate()
                self.population.append(ind)

        def evaluate(self, x):
            '''
            evaluate the fitness of one individual
            '''
            x.calculateFitness()

        def solve(self):
            '''
            evolution process of the differential evolution algorithm
            '''
            self.t = 0
            self.initialize()
            for i in range(0, self.sizepop):
                self.evaluate(self.population[i])
                self.fitness[i] = self.population[i].fitness
            best = np.max(self.fitness)
            bestIndex = np.argmax(self.fitness)
            self.best = copy.deepcopy(self.population[bestIndex])
            self.avefitness = np.mean(self.fitness)
            # with fitness of the form 1/(1+f), (1 - fitness) / fitness
            # recovers the raw function value f for the trace
            self.trace[self.t, 0] = (1 - self.best.fitness) / self.best.fitness
            self.trace[self.t, 1] = (1 - self.avefitness) / self.avefitness
            print("Generation %d: optimal function value is: %f; average function value is %f" % (
                self.t, self.trace[self.t, 0], self.trace[self.t, 1]))
            while (self.t < self.MAXGEN - 1):
                self.t += 1
                for i in range(0, self.sizepop):
                    vi = self.mutationOperation(i)
                    ui = self.crossoverOperation(i, vi)
                    xi_next = self.selectionOperation(i, ui)
                    self.population[i] = xi_next
                for i in range(0, self.sizepop):
                    self.evaluate(self.population[i])
                    self.fitness[i] = self.population[i].fitness
                best = np.max(self.fitness)
                bestIndex = np.argmax(self.fitness)
                if best > self.best.fitness:
                    self.best = copy.deepcopy(self.population[bestIndex])
                self.avefitness = np.mean(self.fitness)
                self.trace[self.t, 0] = (1 - self.best.fitness) / self.best.fitness
                self.trace[self.t, 1] = (1 - self.avefitness) / self.avefitness
                print("Generation %d: optimal function value is: %f; average function value is %f" % (
                    self.t, self.trace[self.t, 0], self.trace[self.t, 1]))

            print("Optimal function value is: %f; " %
                  self.trace[self.t, 0])
            print("Optimal solution is:")
            print(self.best.chrom)
            self.printResult()

        def selectionOperation(self, i, ui):
            '''
            selection operation: keep the trial vector if its fitness is higher
            '''
            xi_next = copy.deepcopy(self.population[i])
            xi_next.chrom = ui
            self.evaluate(xi_next)
            if xi_next.fitness > self.population[i].fitness:
                return xi_next
            else:
                return self.population[i]

        def crossoverOperation(self, i, vi):
            '''
            binomial crossover with one forced index k
            '''
            k = np.random.randint(0, self.vardim)
            ui = np.zeros(self.vardim)
            for j in range(0, self.vardim):
                pick = random.random()
                if pick < self.params[0] or j == k:
                    ui[j] = vi[j]
                else:
                    ui[j] = self.population[i].chrom[j]
            return ui

        def mutationOperation(self, i):
            '''
            DE/rand/1 mutation with parents a, b, c distinct from i,
            clipped to the variable bounds
            '''
            a = np.random.randint(0, self.sizepop)
            while a == i:
                a = np.random.randint(0, self.sizepop)
            b = np.random.randint(0, self.sizepop)
            while b == i or b == a:
                b = np.random.randint(0, self.sizepop)
            c = np.random.randint(0, self.sizepop)
            while c == i or c == b or c == a:
                c = np.random.randint(0, self.sizepop)
            vi = self.population[c].chrom + self.params[1] * \
                (self.population[a].chrom - self.population[b].chrom)
            for j in range(0, self.vardim):
                if vi[j] < self.bound[0, j]:
                    vi[j] = self.bound[0, j]
                if vi[j] > self.bound[1, j]:
                    vi[j] = self.bound[1, j]
            return vi

        def printResult(self):
            '''
            plot the result of the differential evolution algorithm
            '''
            x = np.arange(0, self.MAXGEN)
            y1 = self.trace[:, 0]
            y2 = self.trace[:, 1]
            plt.plot(x, y1, 'r', label='optimal value')
            plt.plot(x, y2, 'g', label='average value')
            plt.xlabel("Iteration")
            plt.ylabel("function value")
            plt.title("Differential Evolution Algorithm for function optimization")
            plt.legend()
            plt.show()

    Driver code:

    if __name__ == "__main__":

        bound = np.tile([[-600], [600]], 25)
        dea = DifferentialEvolutionAlgorithm(60, 25, bound, 1000, [0.8, 0.6])
        dea.solve()

  • Provides code for a self-adaptive differential evolution algorithm, together with a set of test functions. %Reference: A. K. Qin, V. L. Huang, and P. N. Suganthan, "Differential evolution algorithm with strategy adaptation for global numerical ...
  • Basic differential evolution (DE), with Python code attached

    2020-05-20 22:58:35
    DE shares much the same idea as the GA (simple genetic algorithm); in my view the main difference is that DE exploits cooperation between individuals by introducing the notion of a difference vector. Only one difference vector is used here; two or three can be used as well, though in my tests the results barely differ (see the sketch below). V[i, ...
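    For reference, the two-difference-vector variant mentioned above is usually written DE/rand/2. A minimal NumPy sketch of just that mutation step (pop is an (NP, dim) array; the function name is illustrative):

    import numpy as np

    def mutate_rand2(pop, i, F):
        # v = x_r1 + F * (x_r2 - x_r3) + F * (x_r4 - x_r5),
        # with r1..r5 distinct and all different from i
        idx = [j for j in range(len(pop)) if j != i]
        r1, r2, r3, r4, r5 = np.random.choice(idx, 5, replace=False)
        return pop[r1] + F * (pop[r2] - pop[r3]) + F * (pop[r4] - pop[r5])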
  • Differential evolution is, at present, a rather good global optimizer and performs well on optimization problems. Its Python implementation is given below, packaged as a callable module. Program: ''' Written from the principles of differential evolution. Date: 2022.3.22 ''' import numpy as ...
  • Evolutionary algorithms are classed as a family of global optimization methods modeled on biological evolution and based on metaheuristic search. Candidate solutions typically span an n-dimensional vector space over the problem domain, and several population particles are simulated to reach the global optimum. In its basic form, the optimization problem consists of ...
  • Below is a differential evolution solver for single-objective optimization written in Python. Readers can rewrite the objective to whatever function they want to test. The code is easy to follow, written strictly according to the principles of differential evolution, which makes it a good aid to understanding the algorithm. The tests provided here ...
  • This share provides not only differential evolution code but also a 10-dimensional test-function example.
  • A differential evolution implementation in Python that relies on NumPy and SciPy and supports multi-core and cluster parallelism. To use it, subclass DESolver and define def error_func(self, indiv, *args) as the objective function. The code begins: import numpy import scipy...
  • 1 Introduction: Through heredity, selection, and mutation, organisms in nature undergo survival of the fittest, continually evolving from lower to higher forms. It has been noted that survival of the fittest ... Differential evolution is an optimization algorithm grounded in swarm intelligence theory, where intelligent opt... emerges from cooperation and competition among the individuals of the population ...
  • This code applies differential evolution to find the minimum of an objective function, here y = x*sin(10*PI*x) + 2. Readers can substitute their own objective as needed, and the code can likewise be modified to search for a maximum.
  • Differential evolution (DE) is an efficient global optimization algorithm. It is also a population-based heuristic search in which each individual in the swarm corresponds to a solution vector. Its evolutionary flow closely resembles that of the genetic algorithm, likewise comprising mutation, crossover, and selection, but the concrete definitions of these operations differ from the genetic ...
  • Differential evolution algorithm

    2017-05-12 17:10:06
  • DE differential evolution in Python

    2019-05-27 13:42:18
    DE differential evolution (Python), a first attempt at writing it up; I'm a programming novice just learning Python, so bear with me. import numpy as np import random as rd import matplotlib.pyplot as plt import copy from math import * class DE: def __init__(self,size...
  • Differential Evolution (DE) in Python

    2020-03-21 02:24:11
    Differential evolution (Differential Evolution Algorithm, DE) is an efficient global optimization algorithm. It is also a population-based heuristic search in which each individual in the swarm corresponds to a solution vector. Simply put, DE can be used to find the extreme points of a function, for example: the function ...
  • Differential evolution (DE) in Python 3

    2018-05-19 16:35:13
    Differential evolution (Differential Evolution Algorithm, DE) is an efficient global optimization algorithm. It is also a population-based heuristic search in which each individual in the swarm corresponds to a solution vector. Its evolutionary flow closely resembles the genetic algorithm's, likewise comprising mutation, crossover, and ...
