## RSOME for Distributionally Robust Optimization Modeling

The dro module of the RSOME package is a modeling environment for solving distributionally robust optimization problems. A distinctive feature of this environment is the collection of convenient tools it provides for defining dynamic decision variables, ambiguity sets, and worst-case expectations over these ambiguity sets...
 RSOME: Robust Stochastic Optimization Made Easy

Contents

- The General Formulation for Distributionally Robust Optimization Models
- Introduction to the rsome.dro Environment
- Models
- Decision Variables
- Random Variables
- Event-wise Ambiguity Set
- Create an Ambiguity Set
- $\mathcal{Q}_{km}$ as the Support of Conditional Expectations
- $\mathcal{Z}_{sm}$ as the Support of Random Variables
- $\mathcal{P}_m$ as the Support of Scenario Probabilities
- Event-wise Recourse Adaptations
- The Worst-Case Expectations
- Application Examples
- Reference

The General Formulation for Distributionally Robust Optimization Models

The RSOME package supports optimization models that fit the general formulation below

$$
\begin{aligned} \min ~~ &\sup\limits_{\mathbb{P}\in\mathcal{F}_0} \mathbb{E}_{\mathbb{P}} \left\{\pmb{a}_0^{\top}(\tilde{s}, \tilde{\pmb{z}})\pmb{x}(\tilde{s}) + \pmb{b}_0^{\top}\pmb{y}(\tilde{s}, \tilde{\pmb{z}}) + c_0(\tilde{s}, \tilde{\pmb{z}})\right\} &&\\ \text{s.t.} ~~ &\max\limits_{s\in[S], \pmb{z}\in\mathcal{Z}_{ms}}\left\{\pmb{a}_m^{\top}(s, \pmb{z})\pmb{x}(s) + \pmb{b}_m^{\top}\pmb{y}(s, \pmb{z}) + c_m(s, \pmb{z})\right\} \leq 0, && \forall m\in \mathcal{M}_1 \\ & \sup\limits_{\mathbb{P}\in\mathcal{F}_m}\mathbb{E}_{\mathbb{P}}\left\{\pmb{a}_m^{\top}(\tilde{s}, \tilde{\pmb{z}})\pmb{x}(\tilde{s}) + \pmb{b}_m^{\top}\pmb{y}(\tilde{s}, \tilde{\pmb{z}}) + c_m(\tilde{s}, \tilde{\pmb{z}})\right\} \leq 0, && \forall m\in \mathcal{M}_2 \\ & x_i \in \mathcal{A}\left(\mathcal{C}_x^i\right) && \forall i \in [I_x] \\ & y_n \in \overline{\mathcal{A}}\left(\mathcal{C}_y^n, \mathcal{J}_y^n\right) && \forall n \in [I_y] \\ & \pmb{x}(s) \in \mathcal{X}_s && s \in [S]. \\ \end{aligned}
$$

Here, parameters of proper dimensions,

$$
\begin{aligned} &\pmb{a}_m(s,\pmb{z}) := \pmb{a}_{ms}^0 + \sum\limits_{j\in[J]}\pmb{a}_{ms}^jz_j \\ &\pmb{b}_m(s) := \pmb{b}_{ms}\\ &c_m(s, \pmb{z}) := c_{ms}^0 + \sum\limits_{j\in[J]}c_{ms}^jz_j \end{aligned}
$$

are defined similarly as in the case of RSOME for Robust Optimization, and $\mathcal{X}_s$ is a second-order cone (SOC) representable feasible set of the decision $\pmb{x}(s)$ in scenario $s$. Constraints indexed by $m\in\mathcal{M}_1$ are satisfied under the worst-case realization, just like those introduced in RSOME for robust optimization. Another set of constraints, indexed by $m\in\mathcal{M}_2$, are satisfied with regard to the worst-case expectation over all possible distributions defined by an ambiguity set $\mathcal{F}_m$ in the general form introduced in Chen et al. (2020):

$$
\begin{aligned} \mathcal{F}_m = \left\{ \mathbb{P}\in\mathcal{P}_0\left(\mathbb{R}^{J}\times[S]\right) \left| \begin{array}{ll} ~\left(\tilde{\pmb{z}}, \tilde{s}\right) \sim \mathbb{P} & \\ \mathbb{E}_{\mathbb{P}}[\tilde{\pmb{z}}|\tilde{s}\in\mathcal{E}_{km}] \in \mathcal{Q}_{km} & \forall k \in [K] \\ \mathbb{P}[\tilde{\pmb{z}}\in \mathcal{Z}_{sm}| \tilde{s}=s]=1 & \forall s \in [S] \\ \mathbb{P}[\tilde{s}=s] = p_s & \forall s \in [S] \\ \text{for some } \pmb{p} \in \mathcal{P}_m & \end{array} \right. \right\},~~~ \forall m \in \mathcal{M}_2. \end{aligned}
$$

Here, for each constraint indexed by $m\in\mathcal{M}_2$:

- The conditional expectation of $\tilde{\pmb{z}}$ over events (defined as subsets of scenarios and denoted by $\mathcal{E}_{km}$) is known to reside in an SOC representable set $\mathcal{Q}_{km}$;
- The support of $\tilde{\pmb{z}}$ in each scenario $s\in[S]$ is specified to be another SOC representable set $\mathcal{Z}_{sm}$;
- Probabilities of scenarios, collectively denoted by a vector $\pmb{p}$, are constrained by a third SOC representable subset $\mathcal{P}_m\subseteq\left\{\pmb{p}\in \mathbb{R}_{++}^S:~ \pmb{e}^{\top}\pmb{p}=1 \right\}$ of the probability simplex.

Dynamics of decision-making are captured by the event-wise recourse adaptation for wait-and-see decisions of two types: the event-wise static adaptation, denoted by $\mathcal{A}(\mathcal{C})$, and the event-wise affine adaptation, denoted by $\overline{\mathcal{A}}(\mathcal{C}, \mathcal{J})$. In particular, given a fixed number of $S$ scenarios and a partition $\mathcal{C}$ of these scenarios (i.e., a collection of mutually exclusive and collectively exhaustive events), the event-wise recourse adaptations are formally defined as follows:

$$
\begin{aligned} &\mathcal{A}\left(\mathcal{C}\right) = \left\{ x: [S] \mapsto \mathbb{R} \left| \begin{array}{l} ~x(s)=x^{\mathcal{E}}, \mathcal{E}=\mathcal{H}_{\mathcal{C}}(s) \\ \text{for some } x^{\mathcal{E}} \in \mathbb{R} \end{array} \right. \right\}, \\ &\overline{\mathcal{A}}\left(\mathcal{C}, \mathcal{J}\right) = \left\{ y: [S]\times\mathbb{R}^{J} \mapsto \mathbb{R} \left| \begin{array}{l} ~y(s, \pmb{z}) = y^0(s) + \sum\limits_{j\in\mathcal{J}}y^j(s)z_j \\ \text{for some }y^0(s), y^j(s) \in \mathcal{A}\left(\mathcal{C}\right), j\in\mathcal{J} \end{array} \right. \right\}. \end{aligned}
$$

Here, $\mathcal{H}_{\mathcal{C}}: [S] \mapsto \mathcal{C}$ is the function such that $\mathcal{H}_{\mathcal{C}}(s) = \mathcal{E}$ maps the scenario $s$ to the unique event $\mathcal{E}$ in $\mathcal{C}$ that contains $s$, and $\mathcal{J} \subseteq [J]$ is an index subset of the random components $\tilde{z}_1, \dots, \tilde{z}_J$ that the affine adaptation depends on. In the remaining part of the guide, we will introduce the RSOME code for specifying the event-wise ambiguity set and recourse adaptation rules.
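To make the partition notation concrete, the following standalone sketch (plain Python, independent of RSOME; the partition and values are made up) implements the mapping $\mathcal{H}_{\mathcal{C}}$ for a hypothetical partition of five scenarios and shows that an event-wise static decision takes one value per event:

```python
def H(partition, s):
    """Return the unique event (a frozenset of scenarios) containing scenario s."""
    matches = [event for event in partition if s in event]
    assert len(matches) == 1, "partition must be mutually exclusive and exhaustive"
    return matches[0]

# A hypothetical partition C of the scenario set [S] = {0, 1, 2, 3, 4}
C = [frozenset({0}), frozenset({2, 3}), frozenset({1, 4})]

# An event-wise static decision x(s) = x^E with E = H_C(s):
# one scalar per event, shared by all scenarios in that event
x_E = {frozenset({0}): 1.0, frozenset({2, 3}): 2.5, frozenset({1, 4}): -0.5}
x = lambda s: x_E[H(C, s)]

print([x(s) for s in range(5)])  # scenarios 2 and 3 share the same value
```

The event-wise affine adaptation works the same way, except that each event carries a vector of affine coefficients rather than a single value.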

Introduction to the rsome.dro Environment
In general, the rsome.dro modeling environment is very similar to rsome.ro discussed in the section Introduction to the rsome.ro Environment, so almost all array operations and indexing and slicing syntax can be applied to dro models. The unique features of the dro model mainly come from the scenario representation of uncertainties and a different way of specifying the event-wise adaptation of decision variables.
Models
Similar to the rsome.ro modeling environment, the dro models are all defined upon Model type objects, which are created by the constructor Model() imported from the sub-package rsome.dro.
class Model(builtins.object)
|  Model(scens=1, name=None)
|
|  Returns a model object with the given number of scenarios.
|
|  Parameters
|  ----------
|  scens : int or array-like objects
|      The number of scenarios, if it is an integer. It could also be
|      an array of scenario indices.
|  name : str
|      Name of the model
|
|  Returns
|  -------
|  model : rsome.dro.Model
|      A model object

It can be seen that the dro models are different from the ro models in the first argument scens, which specifies the scenario-representation of uncertainties. The scens argument can be specified as an integer, indicating the number of scenarios, or an array of scenario indices, as shown by the following sample code.
from rsome import dro

model1 = dro.Model(5)       # a DRO model with 5 scenarios

labels = ['sunny', 'cloudy', 'rainy', 'windy', 'snowy']
model2 = dro.Model(labels)  # a DRO model with 5 scenarios given by a list.

Decision Variables
In the general formulation, decision variables $\pmb{x}(s)$ are event-wise static and $\pmb{y}(s, \pmb{z})$ are event-wise affinely adaptive. The dro modeling environment does not differentiate between the two in creating these variables. Both types of decision variables are created by the dvar() method of the model object, with the same syntax as the rsome.ro models.
dvar(shape=(1,), vtype='C', name=None) method of rsome.dro.Model instance
    Returns an array of decision variables with the given shape
    and variable type.

    Parameters
    ----------
    shape : int or tuple
        Shape of the variable array.
    vtype : {'C', 'B', 'I'}
        Type of the decision variables. 'C' means continuous; 'B'
        means binary, and 'I' means integer.
    name : str
        Name of the variable array

    Returns
    -------
    new_var : rsome.lp.DecVar
        An array of new decision variables

All decision variables are created to be non-adaptive at first. The event-wise and affine adaptations can then be specified by the adapt() method of the variable object. Details of specifying the adaptive decisions are provided in the section Event-wise Recourse Adaptations.
Random Variables
The syntax for creating random variables in a dro model is exactly the same as in ro models. You may refer to the chapter RSOME for Robust Optimization for more details.
The dro model supports the expectation of random variables, so that we can define the expectation sets $\mathcal{Q}_{km}$ in the ambiguity set $\mathcal{F}_m$. This is different from ro models. The expectation is indicated by the E() function imported from rsome, as demonstrated by the sample code below.
from rsome import dro
from rsome import E     # import E as the expectation operator

model = dro.Model()
z = model.rvar((3, 5))  # 3x5 random variables as a 2D array

E(z) <= 1               # E(z) is smaller than or equal to one
E(z) >= -1              # E(z) is larger than or equal to -1
E(z).sum() == 0         # sum of E(z) is zero
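These expectation constraints have a simple numerical reading: any admissible distribution must have a mean array satisfying them. The sketch below (plain numpy, independent of RSOME; the sample distribution is made up) checks them against an empirical mean:

```python
import numpy as np

# empirical stand-in for the expectation: a made-up sample of the 3x5 array z
rng = np.random.default_rng(0)
samples = rng.uniform(-0.5, 0.5, size=(1000, 3, 5))
Ez = samples.mean(axis=0)

# the three constraints imposed on E(z) in the snippet above
print(np.all(Ez <= 1))     # elementwise upper bound
print(np.all(Ez >= -1))    # elementwise lower bound
print(abs(Ez.sum()))       # near zero for this symmetric sample
```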


Event-wise Ambiguity Set
Create an Ambiguity Set
Ambiguity sets $\mathcal{F}_m$ of a dro model can be created by the ambiguity() method. The associated scenario indices of the ambiguity set can be accessed by the s attribute of the ambiguity set object, as shown by the following sample code.
from rsome import dro

labels = ['sunny', 'cloudy', 'rainy', 'windy', 'snowy']
model = dro.Model(labels)     # create a model with 5 scenarios

fset = model.ambiguity()      # create an ambiguity set of the model
print(fset)                   # print scenarios of the model (ambiguity set)

Scenario indices:
sunny     0
cloudy    1
rainy     2
windy     3
snowy     4
dtype: int64

In the example above, strings in the list labels become the labels of the scenario indices. If an integer instead of an array is used to specify the scens argument of the Model() constructor, then the labels will be the same as the integer values. Similar to the pandas.Series data structure, labels of the scenario indices could be any hashable data types.
RSOME supports indexing and slicing of the scenarios via either the labels or the integer-positions, as shown by the following code segments.
print(fset[2])          # the third scenario

Scenario index:
2

print(fset['rainy'])    # the third scenario

Scenario index:
2

print(fset[:3])         # the first three scenarios

Scenario indices:
sunny     0
cloudy    1
rainy     2
dtype: int64

print(fset[:'rainy'])   # the first three scenarios

Scenario indices:
sunny     0
cloudy    1
rainy     2
dtype: int64

RSOME is also consistent with the pandas.Series data type in using the iloc and loc indexers for accessing integer-position-based or label-based indices.
print(fset.iloc[:2])            # the first two scenarios via the iloc[] indexer

Scenario indices:
sunny     0
cloudy    1
dtype: int64

print(fset.loc[:'cloudy'])      # the first two scenarios via the loc[] indexer

Scenario indices:
sunny     0
cloudy    1
dtype: int64

The indices of the scenarios are crucial in defining components of the ambiguity set, such as the sets $\mathcal{Q}_{km}$ and $\mathcal{Z}_{sm}$, which will be discussed next.

$\mathcal{Q}_{km}$ as the Support of Conditional Expectations
According to the formulation of the ambiguity set $\mathcal{F}_m$ presented in the section The General Formulation for Distributionally Robust Optimization Models, the SOC representable set $\mathcal{Q}_{km}$ is defined as the support of the conditional expectation of random variables under the event $\mathcal{E}_{km}$, which is a collection of selected scenarios. In the RSOME package, such a collection of scenarios can be specified by indexing or slicing the ambiguity set object, and constraints of $\mathcal{Q}_{km}$ are defined by the exptset() method of the ambiguity set object. Take the supports of conditional expectations below, for example,

$$
\begin{aligned} &\mathbb{E}\left[\tilde{\pmb{z}}|s\in\mathcal{E}_{1}\right] \in \left\{\pmb{z}: \mathbb{R}^3 \left| \begin{array}{l} ~|\pmb{z}| \leq 1 \\ \|\pmb{z}\|_1 \leq 1.5 \end{array} \right. \right\}, && \mathcal{E}_1 = \{\text{sunny}, \text{cloudy}, \text{rainy}, \text{windy}, \text{snowy}\} \\ &\mathbb{E}\left[\tilde{\pmb{z}}|s\in\mathcal{E}_{2}\right] \in \left\{\pmb{z}: \mathbb{R}^3 \left| \begin{array}{l} ~\pmb{z} = 0 \end{array} \right. \right\}, && \mathcal{E}_2 = \{\text{sunny}, \text{rainy}, \text{snowy}\}, \end{aligned}
$$

where the first event $\mathcal{E}_1$ is the collection of all scenarios, and the second event $\mathcal{E}_2$ includes the scenarios "sunny", "rainy", and "snowy". The conditional expectation information of the ambiguity set can be specified by the code below.
from rsome import dro
from rsome import E
from rsome import norm

labels = ['sunny', 'cloudy', 'rainy', 'windy', 'snowy']
model = dro.Model(labels)
z = model.rvar(3)

fset = model.ambiguity()             # create an ambiguity set
fset.exptset(abs(E(z)) <= 1,
             norm(E(z), 1) <= 1.5)   # the 1st support of conditional expectations
fset.loc[::2].exptset(E(z) == 0)     # the 2nd support of conditional expectations

The ambiguity set fset itself represents the event $\mathcal{E}_1$ of all scenarios, and the loc indexer is used to form the event $\mathcal{E}_2$ with three scenarios included. Besides loc, the other indexing and slicing expressions described in the previous section can also be used to construct the events for the support sets of the expectations.

$\mathcal{Z}_{sm}$ as the Support of Random Variables
The support $\mathcal{Z}_{sm}$ of random variables can be specified by the suppset() method, and the scenario information of the support can also be specified by the indexing and slicing expressions of the ambiguity set object. Take the following supports of the random variables $\tilde{\pmb{z}}\in\mathbb{R}^3$, for example,

$$
\begin{aligned} &\mathbb{P}\left[\tilde{\pmb{z}}\in \left\{ \left. \pmb{z}: \mathbb{R}^3 \left| \begin{array}{l} ~|\pmb{z}| \leq 1 \\ \|\pmb{z}\| \leq 1.5 \end{array} \right. \right\} \right| \tilde{s}=s \right]=1, &\forall s \in \{\text{sunny}, \text{rainy}, \text{snowy}\} \\ &\mathbb{P}\left[\tilde{\pmb{z}}\in \left\{ \left. \pmb{z}: \mathbb{R}^3 \left| \begin{array}{l} ~\sum\limits_{j=1}^3z_j = 0 \end{array} \right. \right\} \right| \tilde{s}=s \right]=1, &\forall s \in \{\text{cloudy}, \text{windy}\}, \end{aligned}
$$

the RSOME code for specifying the supports can be written as follows.
from rsome import dro
from rsome import E
from rsome import norm

labels = ['sunny', 'cloudy', 'rainy', 'windy', 'snowy']
model = dro.Model(labels)
z = model.rvar(3)

fset = model.ambiguity()
fset.iloc[::2].suppset(abs(z) <= 1,
                       norm(z, 1) <= 1.5)  # the support of z in scenarios 0, 2, 4
fset.iloc[1::2].suppset(z.sum() == 0)      # the support of z in scenarios 1, 3
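Independent of RSOME, the two support sets above are easy to test numerically. The sketch below (plain numpy, with made-up sample points) checks membership in each support:

```python
import numpy as np

def in_support_even(z):
    """Support for scenarios 0, 2, 4: |z| <= 1 elementwise and ||z||_1 <= 1.5."""
    z = np.asarray(z, dtype=float)
    return bool(np.all(np.abs(z) <= 1) and np.abs(z).sum() <= 1.5)

def in_support_odd(z):
    """Support for scenarios 1, 3: the components of z sum to zero."""
    return bool(abs(np.asarray(z, dtype=float).sum()) < 1e-9)

print(in_support_even([0.5, -0.5, 0.5]))   # True: within both norm bounds
print(in_support_even([0.9, 0.9, 0.9]))    # False: the 1-norm is 2.7 > 1.5
print(in_support_odd([1.0, -0.4, -0.6]))   # True: components sum to zero
```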

Note that a valid ambiguity set must have support sets specified for all scenarios. An error message will be given in solving the model if any of the supports are unspecified. RSOME provides a method called showevents() to display the specified supports of the random variables and their expectations in a data frame, helping users check their ambiguity set.
from rsome import dro
from rsome import E
from rsome import norm

labels = ['sunny', 'cloudy', 'rainy', 'windy', 'snowy']
model = dro.Model(labels)
z = model.rvar(3)

fset = model.ambiguity()
fset.iloc[::2].suppset(abs(z) <= 1, norm(z, 1) <= 1.5)
fset.iloc[1::2].suppset(z.sum() == 0)
fset.exptset(abs(E(z)) <= 1, norm(E(z), 1) <= 1.5)
fset.loc[::2].exptset(E(z) == 0)

fset.showevents()            # display how the ambiguity set is specified


        support  expectation 0  expectation 1
sunny   defined  True           True
cloudy  defined  True           False
rainy   defined  True           True
windy   defined  True           False
snowy   defined  True           True

As the example above shows, supports for all scenarios have been defined, and there are two supports defined for the conditional expectations.

$\mathcal{P}_m$ as the Support of Scenario Probabilities
In the event-wise ambiguity set, the support of scenario probabilities can also be specified by calling the probset() method, as demonstrated by the following sample code.
from rsome import dro
from rsome import norm

labels = ['sunny', 'cloudy', 'rainy', 'windy', 'snowy']
model = dro.Model(labels)
z = model.rvar(3)

fset = model.ambiguity()
p = model.p                         # p is the array of scenario probabilities
fset.probset(norm(p-0.2) <= 0.05)   # define the support of the array p

The scenario probabilities are formatted as a one-dimensional array, which can be accessed via the attribute p of the model object. Notice that two underlying constraints on the probabilities, $\pmb{p}\geq \pmb{0}$ and $\pmb{e}^{\top}\pmb{p}=1$, are already integrated in the ambiguity set, so there is no need to specify them in defining the support of scenario probabilities.
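A quick numerical reading of what this support admits (plain numpy, independent of RSOME): a candidate probability vector must satisfy the built-in simplex conditions together with the user-specified norm ball.

```python
import numpy as np

def in_probset(p, center=0.2, radius=0.05):
    """Membership test mirroring the support above:
    p > 0, sum(p) = 1, and ||p - center||_2 <= radius."""
    p = np.asarray(p, dtype=float)
    simplex = np.all(p > 0) and abs(p.sum() - 1) < 1e-9
    ball = np.linalg.norm(p - center) <= radius
    return bool(simplex and ball)

print(in_probset([0.2, 0.2, 0.2, 0.2, 0.2]))          # True: the center itself
print(in_probset([0.3, 0.175, 0.175, 0.175, 0.175]))  # False: deviation norm > 0.05
```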

Event-wise Recourse Adaptations
Here we introduce how the event-wise static adaptation $\mathcal{A}(\mathcal{C})$ and the event-wise affine adaptation $\overline{\mathcal{A}}(\mathcal{C}, \mathcal{J})$ are specified in RSOME. Note that decision variables created in the rsome.dro modeling environment are initially non-adaptive, in the sense that the event set is $\mathcal{C} = \{[S]\}$ and the dependent set is $\mathcal{J}=\varnothing$. These two sets can be modified by the adapt() method of the decision variable object for specifying the decision adaptation, as demonstrated by the following sample code.
from rsome import dro

labels = ['sunny', 'cloudy', 'rainy', 'windy', 'snowy']
model = dro.Model(labels)
z = model.rvar(3)
x = model.dvar(3)

x.adapt([0])            # x adapts to the event of sunny
x.adapt([2, 3])         # x adapts to the event of rainy and windy
x[:2].adapt(z[:2])      # x[:2] affinely adapts to z[:2]


In the code segment above, the scenario partition $\mathcal{C}$ is specified to have three events: $\{\text{sunny}\}$, $\{\text{rainy}, \text{windy}\}$, and, collectively, the remaining scenarios $\{\text{cloudy}, \text{snowy}\}$. The decision variable x[:2] is set to be affinely adaptive to the random variable z[:2].
Similar to the linear decision rule defined in the ro module, coefficients of an event-wise recourse adaptation can be accessed by the get() method after solving the model, where

- y.get() returns the constant coefficients of the recourse adaptation. In cases of multiple scenarios, the returned object is a pandas.Series whose length is the same as the number of scenarios. Each element of the series is an array that has the same shape as y.
- y.get(z) returns the linear coefficients of the recourse adaptation. In cases of multiple scenarios, the returned object is a pandas.Series whose length is the same as the number of scenarios. Each element of the series is an array, and the shape of the array is y.shape + z.shape, i.e., the combination of the dimensions of y and z.
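To illustrate what these coefficient arrays encode, the sketch below (plain numpy, with made-up coefficients for a single scenario) evaluates an affine rule $y(z) = y^0 + \sum_j y^j z_j$ from a constant array shaped like y and a linear array shaped y.shape + z.shape:

```python
import numpy as np

y_shape, z_shape = (2,), (3,)

# hypothetical coefficients for one scenario, in the layout described above
y0 = np.array([1.0, -1.0])                      # constant term, shape y.shape
yz = np.arange(6.0).reshape(y_shape + z_shape)  # linear term, shape y.shape + z.shape

def decision(z):
    """Evaluate y(z) = y0 + sum_j yz[..., j] * z[j] for a realization z."""
    z = np.asarray(z, dtype=float)
    return y0 + yz @ z

print(decision([0.0, 0.0, 0.0]))  # at z = 0 the rule returns the constant coefficients
print(decision([1.0, 0.0, 0.0]))  # the first column of yz shifts the constant term
```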

The Worst-Case Expectations
Once the ambiguity sets $\mathcal{F}_m$ and the recourse adaptation of decision variables are defined, the worst-case expectations in the objective function or constraints can then be specified. The ambiguity set of the objective function can be specified by the minsup() method, for minimizing, or the maxinf() method, for maximizing, an objective function involving worst-case expectations. The documentation of the minsup() method is given below.
minsup(obj, ambset) method of rsome.dro.Model instance
    Minimize the worst-case expected objective value over the given
    ambiguity set.

    Parameters
    ----------
    obj
        Objective function involving random variables
    ambset : Ambiguity
        The ambiguity set defined for the worst-case expectation

    Notes
    -----
    The ambiguity set defined for the objective function is considered
    the default ambiguity set for the distributionally robust model.

Similar to ro models, ambiguity sets of the constraints indexed by $m\in\mathcal{M}_2$, concerning the worst-case expectations, can be specified by the forall() method; if the ambiguity set of such a constraint is unspecified, then by default the underlying ambiguity set is the same as the $\mathcal{F}_0$ defined for the objective function. As for the worst-case constraints indexed by $m\in\mathcal{M}_1$, the uncertainty set can also be specified by the forall() method, and if the uncertainty set is unspecified, such worst-case constraints are defined upon the support sets $\mathcal{Z}_{0s}$ of the default ambiguity set $\mathcal{F}_0$, for each scenario $s\in[S]$. The sample code below demonstrates how the worst-case expectations can be formulated in the objective function and constraints.
from rsome import dro
from rsome import norm
from rsome import E

model = dro.Model()             # create a model with one scenario
z = model.rvar(3)
u = model.rvar(3)
x = model.dvar(3)

fset = model.ambiguity()        # create an ambiguity set
fset.suppset(abs(z) <= u, u <= 1,
             norm(z, 1) <= 1.5)
fset.exptset(E(z) == 0, E(u) == 0.5)

model.maxinf(E(x @ z), fset)    # maximize the worst-case expectation over fset

model.st(E(x - z) <= 0)         # worst-case expectation over fset
model.st(2*x >= z)              # worst-case over the support of fset


Application Examples

- Distributionally Robust Portfolio
- Distributionally Robust Medical Appointment
- Multi-Item Newsvendor Problem with Wasserstein Ambiguity Sets
- Adaptive Distributionally Robust Lot-Sizing
- Robust Vehicle Pre-Allocation with Uncertain Covariates
- Multi-Stage Inventory Control
- Multi-Stage Stochastic Financial Planning

Reference

Bertsimas, Dimitris, Melvyn Sim, and Meilin Zhang. 2019. Adaptive distributionally robust optimization. Management Science 65(2) 604-618.

Chen, Zhi, Melvyn Sim, Peng Xiong. 2020. Robust stochastic optimization made easy with RSOME. Management Science 66(8) 3329–3339.

Wiesemann, Wolfram, Daniel Kuhn, Melvyn Sim. 2014. Distributionally robust convex optimization. Operations Research 62(6) 1358–1376.
XiongPengJohnny 2021-07-14 21:35:04
## RSOME Case Study: Two-Stage Distributionally Robust Optimization for Lot-Sizing

In this case study, we use RSOME to model and solve a two-stage lot-sizing problem. We demonstrate how to implement the corresponding distributionally robust optimization models by defining ambiguity sets of different forms.
 RSOME: Robust Stochastic Optimization Made Easy

In this section, we use RSOME to replicate the numerical case studies presented in Bertsimas et al. (2021). A capacitated network with $n$ locations is considered, where each location $i$ has an unknown demand $d_i$, and the demand can be satisfied by the existing local stock $x_i$ or by transporting an amount $y_{ij}$ of units from another location $j$, which is determined after the demand is realized. Such a lot-sizing problem can be written as the following two-stage formulation,

$$
\begin{aligned} \min~& \sum\limits_{i=1}^na_ix_i + \mathbb{E}\left[Q(\pmb{x}, \tilde{\pmb{d}})\right], \\ \text{s.t.}~ &0 \leq x_i \leq K, & i = 1, 2, ..., n \end{aligned}
$$

where the recourse problem $Q(\pmb{x}, \pmb{d})$ is written as

$$
\begin{aligned} Q(\pmb{x}, \pmb{d}) = \min~& \sum\limits_{i=1}^n\sum\limits_{j=1}^nc_{ij}y_{ij} &\\ \text{s.t.}~&x_i - \sum\limits_{j=1}^ny_{ji} + \sum\limits_{j=1}^ny_{ij} \geq d_i & i = 1, 2, ..., n \\ &\pmb{0} \leq \pmb{y} \leq \pmb{b}. \end{aligned}
$$
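For a fixed first-stage decision $\pmb{x}$ and a realized demand $\pmb{d}$, the recourse problem $Q(\pmb{x}, \pmb{d})$ is just a linear program. The sketch below evaluates it with scipy.optimize.linprog on a tiny made-up two-location instance (an illustration independent of RSOME, assuming scipy is available):

```python
import numpy as np
from scipy.optimize import linprog

def Q(x, d, c, b):
    """Evaluate the recourse cost: min sum(c*y) subject to
    x_i - sum_j y_ji + sum_j y_ij >= d_i and 0 <= y <= b."""
    n = len(x)
    A, rhs = [], []
    for i in range(n):
        row = np.zeros((n, n))
        row[:, i] += 1.0   # + y_ji terms, moved to the <= left-hand side
        row[i, :] -= 1.0   # - y_ij terms
        A.append(row.ravel())
        rhs.append(x[i] - d[i])
    bounds = [(0.0, bij) for bij in np.asarray(b, dtype=float).ravel()]
    res = linprog(np.asarray(c, dtype=float).ravel(),
                  A_ub=np.array(A), b_ub=rhs, bounds=bounds)
    return res.fun if res.status == 0 else None

# a tiny two-location example: location 2 is short by 3 units
c = [[0.0, 1.0], [1.0, 0.0]]            # unit transportation costs
b = [[10.0] * 2] * 2                    # transportation capacities
print(Q([5.0, 0.0], [1.0, 3.0], c, b))  # -> 3.0: ship 3 units at unit cost 1
```

Sketching the recourse this way also makes clear why the two-stage model is hard: the first stage must hedge against the worst expected value of this LP over all demands in the ambiguity set.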

Parameters of the case studies are given below:

- The number of locations: $n=10$;
- Cost of local stock: $a_i=1$, $i=1, 2, ..., n$;
- Cost of shipping units: $c_{ij}$ is the Euclidean distance between locations $i$ and $j$;
- Stock capacity: $K=20$;
- Transportation capacity: $b_{ij}=K/(n-1)u_{ij}$, where each $u_{ij}$ is a random variable generated from a standard uniform distribution;
- Sample size of the training set: $N=20$;
- Robustness parameter: $\epsilon=10N^{-1/10}$.
from numpy import random as rd   # assumed import for the random generator used below

n = 10
a = 1
K = 20
N = 20
epsilon = 10 * N**(-1/10)

b = K / (n-1) / rd.rand(n, n)

xy = 10*rd.rand(2, n)
c = ((xy[[0]] - xy[[0]].T) ** 2
     + (xy[[1]] - xy[[1]].T) ** 2) ** 0.5
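The generated cost matrix can be sanity-checked against properties that follow from the Euclidean-distance construction above (plain numpy; a local generator replaces the rd alias):

```python
import numpy as np

rng = np.random.default_rng(1)
xy = 10 * rng.random((2, 10))      # the same construction with a local generator
c = ((xy[[0]] - xy[[0]].T) ** 2
     + (xy[[1]] - xy[[1]].T) ** 2) ** 0.5

print(np.allclose(c, c.T))         # symmetric: distance is order-independent
print(np.allclose(np.diag(c), 0))  # zero diagonal: distance from a location to itself
```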

Here we generate a training dataset ds assuming the demand at each location follows a uniform distribution. The average sample demand at each location is shown by the following figure. The lot-sizing problem is then solved by approaches described in Bertsimas et al. (2021): sample average approximation (SAA), single-policy affine approximation (SP affine), multi-policy affine approximation (MP affine), and the Wasserstein scenario-wise adaptation (Wass SW).

SAA
A direct means of implementing the SAA model is to define the recourse decision $\pmb{y}$ as a three-dimensional array, where the first dimension represents the sample size, and the remaining two dimensions indicate the from and to locations.
from rsome import dro
from rsome import E             # expectation operator, used in later snippets
import rsome.grb_solver as grb
import rsome as rso

model = dro.Model()             # define a model
x = model.dvar(n)               # here-and-now location decisions
y = model.dvar((N, n, n))       # wait-and-see transportation decisions

# Define model objective and constraints
model.min((a*x).sum() + (1/N)*(c*y.sum(axis=0)).sum())
model.st(ds <= y.sum(axis=1) - y.sum(axis=2) + x)
model.st(y >= 0, y <= b)
model.st(x >= 0, x <= K)

model.solve(grb)

Being solved by Gurobi...
Solution status: 2
Running time: 0.0119s

Alternatively, the SAA method could be implemented by defining a scenario-wise ambiguity set $\mathbb{F}$ with a singleton support for each scenario,

$$
\begin{aligned} \mathbb{F} = \left\{ \mathbb{P} \in \mathcal{P}(\mathbb{R}^n\times [N])\left| \begin{array}{ll} ~(\tilde{\pmb{d}}, \tilde{s}) \sim \mathbb{P} & \\ \mathbb{P}[\tilde{\pmb{d}} = \hat{\pmb{d}}_s|\tilde{s}=s]=1, &\forall s\in [N] \\ \mathbb{P}[\tilde{s}=s] = p_s, &\forall s \in [N] \\ \text{for some }\pmb{p} \in \mathcal{P} & \end{array} \right. \right\} \end{aligned}
$$

with $\hat{\pmb{d}}_s$ being each sample record of demands. A model with such an ambiguity set can be implemented by the code below.
model = dro.Model(N)    # define a model with N scenarios
x = model.dvar(n)       # here-and-now location decisions
y = model.dvar((n, n))  # wait-and-see transportation decisions
d = model.rvar(n)       # define random variables as the demand

# define the ambiguity set
fset = model.ambiguity()
for s in range(N):
    fset[s].suppset(d == ds[s])
pr = model.p
fset.probset(pr == 1/N)

# define model objective and constraints
model.minsup((a*x).sum() + E((c*y).sum()), fset)
model.st(d <= y.sum(axis=0) - y.sum(axis=1) + x)
model.st(y >= 0, y <= b)
model.st(x >= 0, x <= K)

model.solve(grb)

Being solved by Gurobi...
Solution status: 2
Running time: 0.0193s

The optimal stock decision provided by the SAA model is shown by the following figure.

SP affine
The second data-driven approach is referred to as sample robust optimization, where an uncertainty set around each data sample is considered. Here the recourse decisions are approximated by a single-policy linear decision rule, hence the name SP affine. The ambiguity set of the sample robust model is presented below.

$$
\begin{aligned} \mathbb{F} = \left\{ \mathbb{P} \in \mathcal{P}(\mathbb{R}^n\times [N])\left| \begin{array}{ll} ~(\tilde{\pmb{d}}, \tilde{s}) \sim \mathbb{P} & \\ \mathbb{P}[\|\tilde{\pmb{d}} - \hat{\pmb{d}}_s\|\leq \epsilon|\tilde{s}=s]=1, &\forall s\in [N] \\ \mathbb{P}[\tilde{s}=s] = p_s, &\forall s \in [N] \\ \text{for some }\pmb{p} \in \mathcal{P} & \end{array} \right. \right\} \end{aligned}
$$

The code for implementing such an ambiguity set is given below.
model = dro.Model(N)    # define a model with N scenarios
x = model.dvar(n)       # here-and-now location decisions
y = model.dvar((n, n))  # wait-and-see transportation decisions
d = model.rvar(n)       # define random variables as the demand

y.adapt(d)              # the decision rule y affinely depends on d

# define the ambiguity set
fset = model.ambiguity()
for s in range(N):
    fset[s].suppset(d >= 0, d <= K, rso.norm(d - ds[s]) <= epsilon)
pr = model.p
fset.probset(pr == 1/N)

# define model objective and constraints
model.minsup((a*x).sum() + E((c*y).sum()), fset)
model.st(d <= y.sum(axis=0) - y.sum(axis=1) + x)
model.st(y >= 0, y <= b)
model.st(x >= 0, x <= K)

model.solve(grb)

Being solved by Gurobi...
Solution status: 2
Running time: 38.8390s

The optimal lot-sizing decisions of the SP affine sample robust optimization method are more conservative compared with those of the SAA method, as shown by the following figure.

MP affine
The sample robust optimization model above can be extended to the MP affine version, where the recourse decision $\pmb{y}$ not only affinely depends on $\pmb{d}$, but also adapts to the various scenarios. The code, together with the optimal allocation of stocks, is presented below. It can be seen that the solution is less conservative compared with the SP affine case, because the recourse decision has a higher level of flexibility in approximating the actual recourse decisions.
model = dro.Model(N)    # define a model with N scenarios
x = model.dvar(n)       # here-and-now location decisions
y = model.dvar((n, n))  # wait-and-see transportation decisions
d = model.rvar(n)       # define random variables as the demand

y.adapt(d)              # the decision rule y affinely depends on d
for s in range(N):
    y.adapt(s)          # the decision rule y also adapts to each scenario s

# define the ambiguity set
fset = model.ambiguity()
for s in range(N):
    fset[s].suppset(d >= 0, d <= K, rso.norm(d - ds[s]) <= epsilon)
pr = model.p
fset.probset(pr == 1/N)

# define model objective and constraints
model.minsup((a*x).sum() + E((c*y).sum()), fset)
model.st(d <= y.sum(axis=0) - y.sum(axis=1) + x)
model.st(y >= 0, y <= b)
model.st(x >= 0, x <= K)

model.solve(grb)

Being solved by Gurobi...
Solution status: 2
Running time: 19.4076s


Wass SW
As pointed out by Bertsimas et al. (2021), the distributionally robust optimization model with the Wasserstein ambiguity set proposed in Chen et al. (2020) yields only one feasible first-stage decision, \pmb{x}=(K, K, ..., K), because this is the only solution that guarantees the feasibility of the second-stage problem over the entire support of \pmb{d}. To make the numerical implementation more meaningful, we introduce an emergency order w_i that fills up the shortage at each location i. The cost of the emergency order is assumed to be five times the stock cost a_i. The code for implementing this distributionally robust optimization model is given below.
model = dro.Model(N)    # define a model with N scenarios
x = model.dvar(n)       # define location decisions
y = model.dvar((n, n))  # define decision rule as the shifted quantities
w = model.dvar(n)       # define decision rule as the emergency orders
d = model.rvar(n)       # define random variables as the demand
u = model.rvar()        # define an auxiliary random variable

y.adapt(d)              # the decision rules y and w affinely depend on d
w.adapt(d)
for s in range(N):
    y.adapt(s)          # y and w also adapt to each scenario s
    w.adapt(s)

# define the ambiguity set
fset = model.ambiguity()
for s in range(N):
    fset[s].suppset(d >= 0, d <= K, rso.norm(d - ds[s], 1) <= u)
fset.exptset(E(u) <= epsilon)
pr = model.p
fset.probset(pr == 1/N)

# define model objective and constraints
model.minsup((a*x).sum() + E((c*y).sum() + (5*a*w).sum()), fset)
model.st(d <= y.sum(axis=0) - y.sum(axis=1) + x + w)
model.st(y >= 0, y <= b)
model.st(x >= 0, x <= K)
model.st(w >= 0)

model.solve(grb)

Being solved by Gurobi...
Solution status: 2
Running time: 19.1652s

The optimal lot-sizing decision is illustrated by the following figure.

Reference

Bertsimas, Dimitris, Shimrit Shtern, and Bradley Sturt. 2021. Two-stage sample robust optimization. Operations Research.

Chen, Zhi, Melvyn Sim, and Peng Xiong. 2020. Robust stochastic optimization made easy with RSOME. Management Science 66(8) 3329–3339.
XiongPengJohnny 2021-07-15 14:40:57
• ## RSOME Case Study: Distributionally Robust Optimization for Outpatient Appointment Scheduling (python, linear programming, mathematical modeling)

In this case study, we use RSOME to build a distributionally robust optimization model for an outpatient appointment scheduling problem.
 RSOME: Robust Stochastic Optimization Made Easy

Distributionally Robust Medical Appointment
In this example, we consider a medical appointment scheduling problem described in Bertsimas et al. (2019), where N patients arrive at their stipulated schedule and may have to wait in a queue to be served by a physician. The patients' consultation times are uncertain, and their arrival schedules, determined at the first stage, influence the waiting times of the patients and the overtime of the physician.
A distributionally robust optimization model, presented below, is used to minimize the worst-case expected total cost of patient waiting and physician overtime over a partial cross moment ambiguity set.

\begin{aligned} \min~&\sup\limits_{\mathbb{P}\in\mathcal{F}}~\mathbb{E}_{\mathbb{P}}\left[\sum\limits_{j=1}^N y_j(\tilde{\pmb{z}}, \tilde{\pmb{u}}) + \gamma y_{N+1}(\tilde{\pmb{z}}, \tilde{\pmb{u}})\right] && \\ \text{s.t.}~&y_j(\pmb{z}, \pmb{u}) - y_{j-1}(\pmb{z}, \pmb{u}) + x_{j-1} \geq z_{j-1} && \forall (\pmb{z}, \pmb{u})\in \mathcal{Z}, ~\forall j \in \{2, 3, ..., N+1\} \\ &\pmb{y}(\pmb{z}, \pmb{u}) \geq \pmb{0}, && \forall (\pmb{z}, \pmb{u})\in \mathcal{Z} \\ &\pmb{x} \geq \pmb{0},~\sum\limits_{j=1}^N x_j \leq T && \\ \end{aligned}

with the random variable \tilde{z}_j representing the uncertain consultation time of each patient. The first-stage decision x_j denotes the inter-arrival time between patient j and patient j+1, for j\in[N-1], while x_N is the time between the arrival of the last patient and the scheduled completion time for the physician, after which overtime commences. The second-stage decision y_j denotes the waiting time of patient j, with j \in [N], and y_{N+1} represents the overtime of the physician. In order to achieve a tractable formulation, the second-stage decision is approximated by the decision rule \pmb{y}(\pmb{z}, \pmb{u}), which affinely adapts to the random variables \pmb{z} and the auxiliary variables \pmb{u}.
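To make "affinely adapts" concrete, a linear decision rule takes the form y(z, u) = y^0 + Y^z z + Y^u u. A minimal sketch follows; the coefficient values here are placeholders, since in the actual model RSOME determines them by optimization:

```python
import numpy as np

N = 8                          # number of patients (matching the experiment below)
y0 = np.zeros(N + 1)           # constant term of the decision rule (placeholder)
Yz = np.zeros((N + 1, N))      # coefficients on the random variables z (placeholder)
Yu = np.zeros((N + 1, N + 1))  # coefficients on the auxiliary variables u (placeholder)

def y(z, u):
    """Affine decision rule y(z, u) = y0 + Yz @ z + Yu @ u."""
    return y0 + Yz @ z + Yu @ u
```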
In this numerical experiment, we consider the lifted form of the partial cross moment ambiguity set \mathcal{F} given below.

\begin{aligned} \mathcal{F} = \left\{ \mathbb{P}\in\mathcal{P}(\mathbb{R}^N\times\mathbb{R}^{N+1}) \left| \begin{array}{ll} ~(\tilde{\pmb{z}}, \tilde{\pmb{u}}) \sim \mathbb{P} \\ \pmb{z} \geq \pmb{0} & \\ (z_j - \mu_j)^2 \leq u_j, & \forall j \in [N] \\ (\pmb{1}^{\top}(\pmb{z} - \pmb{\mu}))^2 \leq u_{N+1} \\ \mathbb{E}_{\mathbb{P}}(\tilde{\pmb{z}}) = \pmb{\mu} & \\ \mathbb{E}_{\mathbb{P}}(\tilde{u}_j) \leq \sigma_j^2, & \forall j \in [N] \\ \mathbb{E}_{\mathbb{P}}(\tilde{u}_{N+1}) \leq \pmb{e}^{\top}\pmb{\Sigma}\pmb{e} \end{array} \right. \right\}. \end{aligned}

Values of model and ambiguity set parameters are specified as follows:

- Number of patients: N=8
- Unit cost of physician overtime: \gamma=2
- Correlation parameter: \alpha=0.25
- Expected consultation times: \pmb{\mu} are random numbers uniformly distributed over [30, 60]
- Standard deviations of the consultation times: \sigma_j=\mu_j\epsilon_j, where \epsilon_j is a random number uniformly distributed over [0, 0.3]
- Scheduled completion time for the physician before overtime commences: T=\sum\limits_{j=1}^N\mu_j + 0.5\sqrt{\sum\limits_{j=1}^N\sigma_j^2}
- Covariance matrix \pmb{\Sigma}, with elements \left(\pmb{\Sigma}\right)_{ij} = \begin{cases} \alpha \sigma_i\sigma_j & \text{if }i\not=j \\ \sigma_j^2 & \text{otherwise}. \end{cases}

The RSOME code for implementing this distributionally robust optimization model is given below.
from rsome import dro
from rsome import square
from rsome import E
from rsome import grb_solver as grb
import numpy as np
import numpy.random as rd

# model and ambiguity set parameters
N = 8
gamma = 2
alpha = 0.25
mu = 30 + 30*rd.rand(N)
sigma = mu * 0.3*rd.rand(1, N)
T = mu.sum() + 0.5*((sigma**2).sum())**0.5

mul = np.diag(np.ones(N))*(1-alpha) + np.ones((N, N))*alpha
Sigma = (sigma.T @ sigma) * mul

# modeling with RSOME
model = dro.Model()                                  # define a DRO model
z = model.rvar(N)                                    # random variable z
u = model.rvar(N+1)                                  # auxiliary variable u

fset = model.ambiguity()
fset.suppset(z >= 0, square(z - mu) <= u[:-1],
             square((z-mu).sum()) <= u[-1])          # support of random variables
fset.exptset(E(z) == mu, E(u[:-1]) <= sigma**2,
             E(u[-1]) <= Sigma.sum())                # bounds on expectations

x = model.dvar(N)                                    # the here-and-now decision
y = model.dvar(N+1)                                  # the wait-and-see decision
y.adapt(z)                                           # y affinely adapts to z
y.adapt(u)                                           # y affinely adapts to u

model.minsup(E(y[:-1].sum() + gamma*y[-1]), fset)    # worst-case expected cost
model.st(y[1:] - y[:-1] + x >= z)                    # constraints
model.st(y >= 0)                                     # constraints
model.st(x >= 0)                                     # constraints
model.st(x.sum() <= T)                               # constraints

model.solve(grb)                                     # solve the model by Gurobi

Being solved by Gurobi...
Solution status: 2
Running time: 0.0516s

Reference

Bertsimas, Dimitris, Melvyn Sim, and Meilin Zhang. 2019. Adaptive distributionally robust optimization. Management Science 65(2) 604-618.
XiongPengJohnny 2021-07-15 13:34:57
• ## Solving Robust Optimization with yalmip (MATLAB, integer programming)

Solving Robust Optimization with yalmip
Foreword
A Brief Introduction to Robust Optimization
yalmip
Hands-on Solution
The Optimization Problem
Example Code
Results
Summary
Foreword
This post records an early research idea of mine that never took off: using robust optimization (RO) to solve a resource allocation problem with integer variables. The objective function and constraints are simple, but they contain random variables. Since the distributions of the random variables are unknown, robust optimization is applied to the resource allocation, offering better robustness, i.e., better tolerance of variation. If the mean and variance of the random variables were available, robust optimization would suit this allocation problem perfectly.
However, I had no real data for the random variables, so their mean and variance could not be estimated, nor could a distribution be fitted and tested (e.g., with a chi-square test). The idea was therefore abandoned. Still, even an abandoned idea may have some value, so I record it here in the hope that it inspires someone.
A Brief Introduction to Robust Optimization
Robust optimization addresses optimization problems containing uncertain parameters [1], and differs from stochastic optimization. In stochastic optimization, the uncertain parameters are random variables with known probability distributions; in robust optimization, the uncertain parameters lie within a given set and have no specified distribution. These parameters may appear as coefficients of the objective function or of the constraints. Robust optimization seeks solutions that satisfy the constraints even in the worst case; no violation is allowed.
For readers with a solid mathematical background, robust optimization makes for smooth paper writing, because a robust model can generally be converted into a deterministic equivalent (the robust counterpart) and then solved with standard deterministic methods.
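To illustrate the robust-counterpart idea on the box-uncertain example below, here is a minimal sketch (in Python rather than MATLAB, with dt, Ta, and the box bounds taken from the MATLAB code that follows): enforcing x >= w*dt + T for every w in a box is equivalent to enforcing it at the worst realization of w.

```python
# Worst-case reduction of a box-uncertain lower-bound constraint.
dt = 1
T = 150                        # parameter Ta from the example below
w_lo, w_hi = -20, 20           # box uncertainty set for w
robust_bound = w_hi * dt + T   # deterministic robust counterpart: x >= max(w)*dt + T
print(robust_bound)            # 170
```

This matches the robust solution reported below, where x(1), x(5), and x(9) are pushed up to 170 instead of the plain lower bound of 50.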
yalmip
yalmip is, roughly, MATLAB's counterpart of a third-party Python library: it runs in the MATLAB environment and supports many optimization solvers, making optimization problems very convenient to formulate and solve. See [2] for solving robust optimization with yalmip, and [3] for general yalmip usage.
Hands-on Solution
The Optimization Problem

The example problem, shown in the figure above, has 12 integer decision variables and 3 uncertain parameters. In this example, the uncertainty set is assumed to be of box type (each uncertain parameter has known lower and upper bounds). The example can readily be converted into a deterministic linear program.
Example Code
%% 2019-7-1 by WDL
% Robust optimization modeling and analysis
% Define integer variables, including decision variables and random variables
clc; clear all
% Step 1: variable definitions
x = intvar(12, 1);  % decision variables: intvar defines a 12-by-1 integer vector
w = intvar(3, 1);   % random (uncertain) variables
% Step 2: parameter settings
Ta = 150; Tb = 150; Tc = 150;
Na = 400; Nb = 400; Nc = 400;
dt = 1;             % time interval
% Add the constraints
F = [x(1)+x(2)+x(3)+x(4) <= Na,...
     x(5)+x(6)+x(7)+x(8) <= Nb,...
     x(9)+x(10)+x(11)+x(12) <= Nc,...  % constraints (1)-(3)
     x(1) >= w(1)*dt+Ta >= 0,...
     x(5) >= w(2)*dt+Tb >= 0,...
     x(9) >= w(3)*dt+Tc >= 0,...       % constraints (4)-(6)
     x >= 50];                         % lower-bound constraints
% Define the uncertainty set; a box in this example
% W = [0<=w(1)<=40, 0<=w(2)<=40, 0<=w(3)<=40, uncertain(w)];
W = [-20 <= w <= 20, uncertain(w)];    % uncertain(w) marks w as uncertain
options = sdpsettings('solver', 'cplex');  % use cplex as the solver
objective = sum(x);  % objective: minimize the sum of the decision variables
sol = optimize(F + W, objective, options)  % solve the robust problem
Xc = value(x)        % solution
Oc = value(objective)% objective value
%% Without robust optimization
x = intvar(12, 1);
F1 = [x(1)+x(2)+x(3)+x(4) <= Na,...
      x(5)+x(6)+x(7)+x(8) <= Nb,...
      x(9)+x(10)+x(11)+x(12) <= Nc,... % constraints (1)-(3)
      x(1) >= w(1)*dt+Ta >= 0,...
      x(5) >= w(2)*dt+Tb >= 0,...
      x(9) >= w(3)*dt+Tc >= 0,...      % constraints (4)-(6)
      x >= 50];                        % lower-bound constraints
objective1 = sum(x);
options = sdpsettings('solver', 'cplex');  % use cplex as the solver
sol1 = optimize(F1, objective1, options)
Xc1 = value(x)
Oc1 = value(objective1)
figure  % plot the two solutions
X = [Xc, Xc1]
bar3(X)
Results
(1) Robust optimization:
Oc = 960
Xc = [170 50 50 50 170 50 50 50 170 50 50 50]'
(2) Deterministic optimization:
Oc1 = 600
Xc1 = [50 50 50 50 50 50 50 50 50 50 50 50]'
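The reported objective values can be checked by simple arithmetic, since the objective is just the sum of the 12 allocation variables:

```python
# Sum the reported solutions to verify the objective values above.
robust_x = [170, 50, 50, 50, 170, 50, 50, 50, 170, 50, 50, 50]
deterministic_x = [50] * 12
print(sum(robust_x))         # 960
print(sum(deterministic_x))  # 600
```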

Summary
When using yalmip, it is best to install a dedicated solver such as cplex or gurobi.
References
[1] Gorissen, Bram L., et al. "A Practical Guide to Robust Optimization." Omega-International Journal of Management Science, vol. 53, 2015, pp. 124-137.
[2] Robust optimization. https://yalmip.github.io/tutorial/robustoptimization/
[3] yalmip + lpsolve + matlab for solving mixed-integer linear programs (MIP/MILP). https://www.cnblogs.com/kane1990/p/3428129.html

weixin_42347048 2021-04-22 19:20:52